#mongodb logs for Tuesday the 11th of August, 2015

[00:16:49] <grandy> hello, i'm using mongoose and one of my models has this schema def: {type: mongoose.Schema.Types.ObjectId, ref: "User" },
[00:17:09] <grandy> wondering if there is a way to populate based on that, or if it would have been necessary to define the ref in the schema
[00:19:48] <joannac> grandy: you might want to also try the mongoose channel
[00:20:12] <grandy> joannac: is it just #mongoose ?
[00:20:42] <joannac> grandy: https://gitter.im/Automattic/mongoose
[00:20:57] <grandy> joannac: thanks :)
[00:20:59] <joannac> grandy: as linked to here http://mongoosejs.com/ under "support"
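
A minimal sketch of the populate() call grandy is asking about, assuming hypothetical Post/User models; none of these model or field names are grandy's actual code. Because the schema declares ref: "User", populate() can resolve the stored ObjectId:

    // Sketch only: model and field names are assumptions, not grandy's schema.
    var mongoose = require('mongoose');

    var userSchema = new mongoose.Schema({ name: String });
    var postSchema = new mongoose.Schema({
      title: String,
      // the ref is what makes populate('author') possible
      author: { type: mongoose.Schema.Types.ObjectId, ref: 'User' }
    });

    var User = mongoose.model('User', userSchema);
    var Post = mongoose.model('Post', postSchema);

    mongoose.connect('mongodb://localhost/test');

    // populate() follows the ref and swaps the stored ObjectId for the User document
    Post.findOne({}).populate('author').exec(function (err, post) {
      if (err) return console.error(err);
      console.log(post && post.author && post.author.name);
      mongoose.disconnect();
    });
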
[01:14:23] <ThePrimeMedian> i have a large dataset, and I need to regex search on the name field. However, I need to sort on the status, and then name field. If I do not sort, the query is really quick. But if I put the sort in, it takes > 1m -- any help would be appreciated.
[01:22:07] <joshua> You might want to have the sort fields indexed
[01:22:45] <joshua> Maybe even a compound index based on the whole query if you do it that often.
[01:29:51] <ThePrimeMedian> joshua: i have the sort fields indexed.
[01:30:27] <joshua> did you run the query with .explain() to get some more info as to whats going on?
[01:31:00] <ThePrimeMedian> will do it now.
[01:32:38] <joshua> It has been a while since I did that kind of thing so might not be able to explain (har har) what all the results mean, but I think its a good idea in general to see what is going on.
[01:35:52] <ThePrimeMedian> the first cursor of the explain says it used the "sort" index -- to be honest, i dont know how to read it.
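
What joshua's suggestion looks like in the mongo shell; the collection and field names below are assumptions, not ThePrimeMedian's schema. A compound index whose keys match the sort lets the server walk the index instead of sorting a large result set in memory:

    // Assumed names; adjust to the real collection and fields.
    db.items.createIndex({ status: 1, name: 1 })

    // Re-run with explain(); in 3.0 output, a SORT stage in winningPlan
    // means the sort is still happening in memory rather than via the index.
    db.items.find({ name: /^foo/ }).sort({ status: 1, name: 1 }).explain()
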
[01:44:00] <acidjazz> is there a simple way in mongo to search an object in a documents keys?
[01:48:08] <cheeser> what?
[01:49:58] <joannac> acidjazz: pastebin your example please
[01:50:50] <acidjazz> well its simple, instead of an array and $in, i have an object and its keys
[01:51:32] <acidjazz> so say ['joannac','cheeser','acidjazz'] instead is joannac: {stuff}, cheeser: {stuff}, acidjazz: {stuff}
[01:51:59] <acidjazz> seems like re-structuring is the ultimate answer
[01:52:03] <acidjazz> http://stackoverflow.com/questions/7794914/indexing-of-object-keys-in-mongodb
[01:53:50] <joannac> acidjazz: and what do your queries look like?
[01:54:25] <acidjazz> I don't have a query, I'm asking if this is possible before I build one
[01:54:38] <joannac> if what is possible?
[01:55:12] <joannac> having a document like that? sure, but it's probably going to perform badly
[01:55:14] <acidjazz> 9 lines up, search a document's object's keys
[01:55:20] <joannac> because you're using values as keys
[01:56:01] <joannac> so you want to do a query like "document that has the key 'joanna'"
[01:56:08] <joannac> ?
[01:56:40] <acidjazz> yea but that stackoverflow link is making me re-think my schema
[01:56:44] <joannac> no
[01:56:48] <cheeser> $exists
[01:56:57] <acidjazz> so maybe my question wouldnt exist if i re-structured it properly
[01:57:38] <joannac> correct
[01:58:01] <joannac> next time, it would be appreciated if you formulated an actual example
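
Concretely, cheeser's hint and joannac's restructuring advice look something like the following in the shell; the collection and field names are assumptions:

    // Value-as-key layout: "does this document have a key named 'joannac'?"
    db.things.find({ 'members.joannac': { $exists: true } })
    // Works, but an index can't cover arbitrary key names, so this scans.

    // The restructuring being suggested: keep the name as a value inside an array element,
    // e.g. members: [ { name: 'joannac', stuff: {...} }, ... ]
    db.things.find({ 'members.name': 'joannac' })
    // 'members.name' can be indexed and queried like any other field.
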
[03:27:59] <louie_louiie> has anyone used R with MongoDB?
[06:18:33] <sigfoo> Excuse me if this is a little off-topic. I have a question about working with Mongo with the node.js native driver
[06:18:36] <sigfoo> Hello all. I'm working on a web app in node and I'm a little new to the paradigm. My app has a MongoDB backend and I'm working with the native Mongo node driver. Should I be instantiating a single connection to the DB and serving all requests from that connection? Or should I be calling MongoClient.connect once for every HTTP request?
[06:39:07] <Boomtime> hi sigfoo
[06:40:01] <Boomtime> you should call .connect only once and use that object for everything
[06:40:38] <Boomtime> note that if you use the nodejs clustering module, you will end up with lots of these anyway because the cluster module instantiates multiple copies of the runtime on your behalf
[06:41:38] <sigfoo> Boomtime: I see, thank you
[06:41:52] <sigfoo> Guess I have to do a little refactoring
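
A minimal sketch of Boomtime's advice with the native driver of the time (2.x API); the URL, database, and collection names are placeholders:

    var MongoClient = require('mongodb').MongoClient;
    var http = require('http');

    // Connect once at startup; the driver maintains a connection pool internally.
    MongoClient.connect('mongodb://localhost:27017/myapp', function (err, db) {
      if (err) throw err;

      // Reuse the same db object for every request instead of reconnecting.
      http.createServer(function (req, res) {
        db.collection('hits').insertOne({ path: req.url, at: new Date() }, function (err) {
          res.end(err ? 'error' : 'ok');
        });
      }).listen(3000);
    });
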
[06:42:11] <mskalick> Hello everybody, is there any option how to run rs.remove() on the server-side before the server is terminated? Thank you very much, Marek
[06:43:18] <joannac> mskalick: um, no?
[06:43:31] <joannac> for one thing, you would need to run it on the current primary
[06:43:39] <joannac> not the server that's about to be removed
[06:44:10] <mskalick> joannac: But could it be done automatically? (without connecting with mongo shell)
[06:44:51] <joannac> mskalick: that doesn't make sense.
[06:45:12] <Boomtime> mskalick: do you mean you want to run a custom command whenever the server is shutdown?
[06:45:29] <mskalick> Boomtime: yes
[06:46:35] <Boomtime> it sounds like you need a stop script (not mongodb), but perhaps if you explain why you need this we'll understand and be able to consider alternatives
[06:49:09] <mskalick> joannac Boomtime: I want to have working replication in a Docker container - each container runs only one process (mongod). So there are, for example, 5 containers ~ 5 mongods. And I would like to know if it is possible to remove one mongod server when its container is going to be killed, without any other control script (which would have to run in a new separate container)
[06:50:08] <mskalick> So it would be nice to be able to run rs.remove() when the mongod is going to be killed...
[06:56:10] <Boomtime> mskalick: i don't think you understand the point of replica-sets
[06:56:46] <Boomtime> if a node needed to pre-announce its own failure then that isn't a very redundant system
[06:57:46] <mskalick> Boomtime: why?... not failure! It is part of scaling ~ to be able to "automatically" add or remove shards...
[06:57:48] <Boomtime> if you have a correctly configured replica-set, you can happily lose a member through any cause, and the rest will plow on without that one
[06:58:30] <Boomtime> rs.remove = replica-set member control, nothing to do with sharding
[06:59:12] <mskalick> Boomtime: sry, not "shards". I thought members of replica-set...
[07:00:22] <Boomtime> ok, you also used the word "scaling".. replica-sets are intended to provide high-availability, sharding is the preferred method to scale up
[07:00:45] <Boomtime> regardless, the need for any given member to pre-announce its own demise is fundamentally the wrong way around
[07:02:18] <Boomtime> what is controlling the addition of new members?
[07:02:26] <mskalick> Boomtime: "you can happily lose a member through any cause" -> And it is not a problem when the loss is intended? (I know that the mongod will never come online again) ~ so after some time of running there could be a lot of never-active members...
[07:02:51] <Boomtime> fine, you remove those members via the remaining members
[07:04:06] <Boomtime> in any case, you clearly have some control system that is creating/destroying these members - why can't that process perform the maintenance?
[07:04:27] <mskalick> Boomtime: Adding is not the problem, because in the container you run mongod in the background, connect with the mongo shell + add it to the replica set. Then mongod is stopped and there is "exec mongod" (to properly receive signals,...). So with an exec'd mongod it is not possible to run rs.remove()
[07:05:25] <joannac> okay, so in the container, connect with mongo shell and run rs.remove()
[07:05:25] <Boomtime> : "bacause in the container you run mongod in the background" <- what is doing this?
[07:05:38] <mskalick> remaining members also have only one process mongod. Without any shell,...
[07:06:10] <mskalick> in the linux shell you run "mongod <config options> &"
[07:06:17] <joannac> mskalick: what do you mean, without any shell?
[07:07:08] <mskalick> joannac: sry, without any linux shell,... There is only one process (mongod)
[07:09:12] <Boomtime> mskalick: mongodb is a database, not a linux process control system - if you need to do tasks related to the starting and stopping of containers, you need a process control system
[07:11:59] <mskalick> Boomtime: I know. I only wanted to know if there could be some hooks to run on system terminating...
[07:12:40] <joannac> If you want to write a script that will run on system termination, sure
[07:13:15] <joannac> there's nothing stopping you
[07:13:40] <mskalick> And script running in mongod?
[07:16:36] <joannac> mskalick: define "in mongod"
[07:17:02] <joannac> when mongod gets the "shutdown signal", you want something special to happen?
[07:17:51] <mskalick> joannac: yes
[07:18:17] <joannac> mskalick: and the something special is "connect to the current primary and remove the member that just got "shutdown""?
[07:19:04] <joannac> if so, no
[07:19:21] <joannac> it is completely inappropriate for the mongod server to do that
[07:20:12] <joannac> if you are so certain that you are shutting down a server and never bringing it back up, you can engineer your own way to call rs.remove()
[07:20:17] <mskalick> yes. I thought I'd be able to register a function which will be executed on mongod shutdown, and in this function connect to the primary and remove the mongod
[07:20:36] <joannac> the server cannot tell the difference between "I'm shutting down for some maintenance" and "I'm shutting down and never coming back"
[07:22:15] <mskalick> and without trying to distinguish these two things, is it possible?
[07:23:24] <joannac> No
[07:23:53] <joannac> I'm saying it's not a reasonable request
[07:34:13] <mskalick> Thank you guys for help
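
The upshot of the exchange: the removal has to be driven from outside mongod, by whatever tears the container down. What such a stop script would run against the current primary is just the shell helper, with a placeholder hostname:

    // In a mongo shell connected to the current primary; the hostname is a placeholder.
    // mongod itself has no shutdown hook for this.
    rs.remove('mongo-3.example.net:27017')
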
[09:42:42] <benjick> Hi. I'm trying to update an object nested in an array like this: https://gist.github.com/benjick/14ce6f8ea68a648ee649
[09:42:56] <benjick> I can't figure out the correct syntax for it
[09:43:25] <benjick> omg..
[09:43:54] <benjick> I hate when you paste something, look at it again and find out the problem. So sorry
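
benjick's gist isn't in the log, but updating one object inside an array is usually done with the positional $ operator; the collection and field names below are invented purely for illustration:

    // Invented names, for illustration only.
    db.orders.update(
      { 'items.sku': 'abc' },          // match the array element...
      { $set: { 'items.$.qty': 5 } }   // ...and update that same element via the positional $
    )
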
[11:08:01] <_MMM_> hi :)
[11:09:07] <_MMM_> I have a collection of documents with a "data" field and I want a query result to only return the contents of the documents' data fields. Is that possible?
[11:09:48] <_MMM_> I tried the fields parameter of find() but that includes the "data" field name itself for every document
[11:12:04] <bartzy> Hello
[11:12:04] <dddh> _MMM: var c = db.table.find( { } ).sort( { data : 1 } ); c.forEach( function( doc ) { print( doc.data ) } );
[11:12:57] <bartzy> I have a collection of posts by users. The user id field is called uid. I want to get a list of all the post count per user, like this: Count of all users that posted 1 time or more, 2 times or more, 3 times or more.. etc …
[11:13:13] <bartzy> db.posts.aggregate([ {$match: { created_at : { $gte: ISODate("2015-07-11T00:00:00.000Z") } } }, {$group: { _id:"$uid", count: { $sum: 1 } } }, {$group: {_id:"$count", count: {$sum : 1} } }, {$sort: {_id: 1} }]);
[11:13:55] <bartzy> This gives me only exact counts - meaning the _id would be the number of posts, and the count field would be the number of users that posted this number of posts. But I don’t want exact equality, I want to get greater or equal.
[11:14:11] <bartzy> I looked at the docs, didn’t get how can I do that
[11:14:45] <_MMM_> dddh: well, yes but I was wondering if mongodb doesnt have some option to retrieve only the contents of subdocuments in the first place since that would probably be more elegant and better performing
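
find() projections keep the enclosing "data" key, but an aggregation $project can at least lift the subdocument's fields to the top level, which is what _MMM_ ends up doing later in this log; the field names here are assumptions:

    // Assumed field names; this mirrors what _MMM_ does later in the log.
    // $project lifts the subdocument's fields to the top level, so results
    // are no longer wrapped in "data".
    db.coll.aggregate([
      { $project: { _id: 0,
                    timestamp: '$data.timestamp',
                    open: '$data.open',
                    close: '$data.close' } }
    ])
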
[11:15:11] <ring3> hi
[11:16:11] <ring3> how can i specify WHERE to insert?
[11:16:25] <ring3> insert() command doesn't have a where parameter
[11:16:55] <_MMM_> ring3: I dont think collections provide an order for storage
[11:19:09] <_MMM_> ring3: but if you want to have an order on your documents you can use a key to order by and put an index on that key
[11:19:42] <ring3> i need to insert a new contact for an existing customer
[11:19:55] <ring3> every catalog.insert() is a new customer
[11:20:39] <_MMM_> then it's an update, not an insert, right?
[11:20:41] <ring3> i need to do catalog.find({customer:'acme'}, function(cus) { cus.contact.append ...
[11:20:59] <ring3> but not overwrite all the customers array
[11:21:02] <_MMM_> there's also a findAndModify
[11:21:08] <ring3> hum
[11:21:58] <ring3> looks fine :)
[11:22:03] <ring3> for append is $add ?
[11:23:23] <_MMM_> $push
[11:23:25] <ring3> $push
[11:23:27] <ring3> ok
[11:23:29] <ring3> :)
[11:23:32] <bartzy> anyone? :)
[11:24:41] <ring3> findAndModify({query: {customer:'acme6'}, update: { $push: 'c1' }});
[11:25:10] <ring3> i have an error: $push invalid identifier
[11:26:03] <ring3> ah ok: findAndModify({query: {customer:'acme6'}, update: { $push: {contacts: 'c1'} }});
[11:28:48] <joannac> why are you using findandmodify?
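
joannac's question suggests a plain update would do here, since nothing from the old document is needed back; the collection and field names are taken from ring3's own snippets:

    // Same effect as the findAndModify above, using a plain update.
    db.catalog.update(
      { customer: 'acme6' },
      { $push: { contacts: 'c1' } }
    )
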
[11:33:35] <_MMM_> bartzy: wouldn't it be more efficient to do the >= grouping after retrieving the query, since many documents would end up in a lot of groups if the posters usually have more than 1-2 posts only?
[11:33:59] <bartzy> _MMM_: But how can I achieve this… if we ignore efficiency
[11:36:55] <_MMM_> not sure how to do it exactly but I bet you need to make use of the pipelining here
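
One way to get bartzy's ">= N" numbers, building on his own pipeline: keep the per-user $group, then turn the exact-count histogram into a cumulative distribution in shell JavaScript (a sketch, not the only approach):

    var exact = db.posts.aggregate([
      { $match: { created_at: { $gte: ISODate('2015-07-11T00:00:00.000Z') } } },
      { $group: { _id: '$uid', count: { $sum: 1 } } },   // posts per user
      { $group: { _id: '$count', users: { $sum: 1 } } }, // users per exact post count
      { $sort: { _id: -1 } }                             // highest post count first
    ]).toArray();

    // Users with >= N posts = sum of the buckets for counts >= N.
    var running = 0;
    exact.forEach(function (bucket) {
      running += bucket.users;
      print('posted at least ' + bucket._id + ' times: ' + running + ' users');
    });
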
[11:46:12] <aps> Hi all. Is it true that if my data and journal files are on the same volume, I don't need to fsyncLock( ) the secondary for taking a snapshot backup on Amazon EBS?
[11:51:19] <deathanchor> aps: I always stop mongod on my secondary and do a snapshot
[11:51:46] <deathanchor> I think otherwise you might have to do a repairDatabase if it is in a bad state.
[11:51:53] <deathanchor> I'm not sure
[11:57:44] <aps> deathanchor: thanks, do you do this manually or using scripts? I'm asking 'cause I'm not sure if I should rely on scripts
[11:58:45] <aps> deathanchor: one more thing, do you just db.fsyncLock( ) or do you stop the mongo service itself?
[12:01:43] <deathanchor> I use scripts, they stop mongod, snapshot, start mongod
[12:02:02] <deathanchor> in-house written
[12:02:14] <deathanchor> sorry can't share it.
[12:05:23] <aps> deathanchor: that's fine :)
[12:18:59] <deathanchor> I believe MMS might have some functionality for managing that
[12:19:02] <deathanchor> I'm not sure though
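
deathanchor's approach is to stop mongod entirely around the snapshot. If the secondary is kept running instead, the lock/unlock pair aps is asking about brackets the snapshot like this (the snapshot itself happens outside MongoDB):

    // Run against the secondary being snapshotted.
    db.fsyncLock()     // flush to disk and block further writes
    // ... take the EBS volume snapshot here, outside the mongo shell ...
    db.fsyncUnlock()   // resume writes
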
[12:19:06] <_MMM_> hmm, in node.js col.aggregate is returning undefined for me oO
[12:21:04] <StephenLynx> mmm
[12:21:12] <StephenLynx> show me your code
[12:22:17] <StephenLynx> _MMM_
[12:23:08] <_MMM_> var cursor = OHLC.aggregate([{ $sort: { 'data.timestamp': 1 } }, { $project: { timestamp: '$data.timestamp', open: '$data.open', high: '$data.high', low: '$data.low', close: '$data.close', volume: '$data.volume', _id: 0 } }]);
[12:26:36] <_MMM_> however, if I give it a callback that callback gets err=null and a valid result
[12:27:03] <_MMM_> if it was a syntax error, I would expect the callback to receive an error
[12:42:07] <StephenLynx> I always use callbacks
[12:42:11] <StephenLynx> afaik promises are slow.
[12:42:24] <StephenLynx> what is the callback you are trying to provide?
[12:42:31] <StephenLynx> aaah
[12:42:37] <StephenLynx> so the callback is fine?
[12:42:41] <StephenLynx> use a callback then.
[12:45:47] <_MMM_> StephenLynx: does the callback give the same result as cursor.toArray's callback?
[12:46:04] <StephenLynx> yes.
[12:46:10] <_MMM_> ah, nice
[12:46:24] <StephenLynx> afaik, aggregate returns an aggregate cursor.
[12:46:38] <StephenLynx> which behaves similarly to a find cursor.
[12:46:49] <StephenLynx> a find cursor would also return the result of toArray
[12:47:01] <StephenLynx> if you were to pass a callback directly
[12:47:45] <_MMM_> ok, thanks :)
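
A sketch of the two call styles being compared, against the native-driver behavior StephenLynx describes; "OHLC" and the pipeline are abbreviated from _MMM_'s snippet above:

    var pipeline = [
      { $sort: { 'data.timestamp': 1 } },
      { $project: { _id: 0, timestamp: '$data.timestamp', close: '$data.close' } }
    ];

    // 1) No callback: aggregate() hands back an aggregation cursor.
    OHLC.aggregate(pipeline).toArray(function (err, docs) {
      if (err) return console.error(err);
      console.log(docs.length);
    });

    // 2) Callback form: the driver runs the pipeline and calls back with the documents.
    OHLC.aggregate(pipeline, function (err, docs) {
      if (err) return console.error(err);
      console.log(docs.length);
    });
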
[13:07:54] <deathanchor> how do I filter for a sub doc? { some : [ { want : "this", dont : "want" } ] } so that I only see the want field of the first array subdoc?
[13:08:25] <StephenLynx> 'some.want':1
[13:08:30] <deathanchor> crap
[13:08:35] <deathanchor> I kept adding [0] :D
[13:08:41] <StephenLynx> oh
[13:08:43] <StephenLynx> hold on
[13:08:50] <StephenLynx> its an array
[13:08:59] <StephenLynx> I think it might work, though
[13:10:25] <deathanchor> it does work without the [0]
[13:10:36] <sterichards> I’m trying to add a field ‘code’ with the value of ‘NA’ where ‘code’ doesn’t exist. I’m using the following, but it just returns 3 dots (…) via CLI - db.object.category.update({"code" : { "$exists" : false }, {$set: {"code":"NA"}}, false, true)
[13:10:40] <deathanchor> some.want : 1 works... some.[0].want doesn't
[13:11:39] <sterichards> Missing }
[13:11:42] <sterichards> Doh!
[13:13:18] <StephenLynx> yeah, if it "returns" ... its because the command is incomplete.
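
For reference, sterichards' command with the missing brace closed:

    // Add code: "NA" to every document that has no code field;
    // the trailing false/true are the upsert and multi flags.
    db.object.category.update(
      { code: { $exists: false } },
      { $set: { code: 'NA' } },
      false,   // upsert
      true     // multi
    )
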
[13:20:12] <jhertz> Hi, I am really new to MongoDB and working on a project where it is suggested that we use it as a (query) cache for an oracle database. Is this a good use of MongoDB, does someone have a link to an article about this approach?
[13:21:08] <StephenLynx> I wouldn't do that, personally.
[13:21:36] <StephenLynx> if you just want a cache on top of the main database, there are tools meant specifically to be used as a cache, such as
[13:21:42] <StephenLynx> that one that imgur uses
[13:21:46] <StephenLynx> redis
[13:22:27] <StephenLynx> word of advice with redis: I heard its performance degrades drastically when your dataset is larger than your RAM
[13:23:06] <jhertz> I will read up on redis, seems worthwhile
[13:24:31] <StephenLynx> yeah, I heard it makes an excellent cache, much more than mongo.
[13:25:12] <jhertz> to me it sounds like we are trying to use the wrong tool for the job; my hunch, which might be completely wrong, is that MongoDB is not at all designed for this task, and therefore it probably does not make sense to use it to solve this problem.
[13:26:26] <StephenLynx> mongo is not bad as a cache, it's just not the best thing out there for it.
[13:30:12] <jhertz> then I think you can argue that even if it is not the best thing, it might be good enough, and since there is knowledge on how to use it inside the team, it makes sense to do so. Anyway thanks for the insight, it helps me understand this a bit better and now I just have to do my homework :)
[13:34:55] <StephenLynx> also
[13:35:06] <StephenLynx> see if you can replace oracle entirely by mongo.
[13:35:20] <StephenLynx> if you don't really depend on the relational capabilities of it.
[13:35:40] <StephenLynx> that would solve the issue in a more elegant manner.
[13:40:12] <deathanchor> StephenLynx: yep did that in house here. RDB was so slow for some operations we started cloning the data over to mongodb for speedy lookups
[13:47:10] <kaseano> hi, I was wondering if it was possible to update all the documents using a different value/column from each document
[13:47:19] <kaseano> ex like a lowercase version of "name"
[13:47:26] <StephenLynx> yes.
[13:47:31] <StephenLynx> with bulkwrite.
[13:47:41] <kaseano> perfect thanks StephenLynx I'll look that up!
[13:50:52] <kaseano> StephenLynx: even with bulk write it looks like you have to use the same/static value for every document, instead of using the values on the document being updated
[13:51:11] <StephenLynx> yes.
[13:51:20] <StephenLynx> afaik you can't make a relative update.
[13:51:30] <kaseano> oh ok bummer
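
What kaseano is after ("set name to its own lowercase value") can still be done by reading each document and writing the derived value back, batched through the bulk API; a mongo shell sketch with assumed collection and field names:

    var bulk = db.users.initializeUnorderedBulkOp();
    var pending = 0;

    db.users.find({}, { name: 1 }).forEach(function (doc) {
      bulk.find({ _id: doc._id }).updateOne({ $set: { name: doc.name.toLowerCase() } });
      if (++pending === 1000) {   // flush in batches
        bulk.execute();
        bulk = db.users.initializeUnorderedBulkOp();
        pending = 0;
      }
    });

    if (pending > 0) bulk.execute();
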
[14:18:29] <frankydp> Can anyone point me in the right direction to fix an index that needs to be unique, but that is not? Or is find update my best bet?
[14:28:04] <cheeser> so your data contains dups?
[14:42:19] <StephenLynx> run an aggregate and get the duplicates, then run a bulkwrite and make them unique
[15:16:55] <ring3> can you recommend a http & https wrapper like python requests?
[15:29:59] <frankydp> Thanks @stephenlynx
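
The aggregate StephenLynx describes, sketched with an assumed collection and field: group on the value that should be unique and keep only the groups that collide:

    db.items.aggregate([
      { $group: { _id: '$email', ids: { $push: '$_id' }, n: { $sum: 1 } } },
      { $match: { n: { $gt: 1 } } }
    ])
    // Each result lists the _ids sharing one value; clean those up,
    // then the unique index can be created.
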
[16:27:45] <dijack> I have a question, I want to use mongo --eval from the command line to setup a cluster
[16:27:56] <dijack> well I'm trying to get access to the rs.conf
[16:28:03] <dijack> but, i'm not sure if --eval can do parameters
[16:28:06] <dijack> is this possible?
[16:33:51] <dijack> i'm trying to do this from the command line ->
[16:33:52] <dijack> function mongocolocation { echo "Executing mongo cluster setup" ~/applications/mongo/bin/mongo commandCenter --eval "cfg=rs.conf() && cfg.members[$1].priority=0 && rs.reconfig(cfg)" }
[16:34:08] <dijack> as you can see, I"m doing it with &&
[16:34:12] <dijack> but, doesn't seem to work
[16:34:51] <ehershey> do you get an error?
[16:35:08] <ehershey> I don't think your problem is with the shell or javascript
[16:35:21] <ehershey> or maybe it is and it's not with parameters
[16:35:23] <ehershey> that $1 looks ok
[16:49:04] <dijack> ehershey: I think i'm missing the ";"
[16:49:06] <dijack> instead of &&
[16:49:16] <dijack> using &&, gives me cfg is not defined
[16:49:22] <dijack> with ";"...it works
[16:55:46] <dijack> is there anyway I can assign mongo eval javascript value to a command line variable?
[16:58:35] <StephenLynx> create a script that does that.
[16:58:55] <StephenLynx> oh hold on
[16:58:59] <StephenLynx> I misunderstood that
[17:01:12] <dijack> I want to get cfg.members.length and do a comparison between my bash command line argument and that value
[17:03:08] <StephenLynx> IMO, you will have to parse the content returned.
[17:03:29] <StephenLynx> because the terminal driver returns only strings, afaik
[17:04:18] <ehershey> oh dijack right on
[17:04:26] <ehershey> re ;/&&
[17:24:06] <dijack> StephenLynx: how could I do that?
[17:24:22] <StephenLynx> I don't know.
[17:24:29] <StephenLynx> you would have to get the string back in your script
[17:24:48] <StephenLynx> and then parse it using whatever tool you have to transform the string into an object.
[17:25:15] <StephenLynx> unless you just want the string.
[17:31:41] <dijack> yeah I'm just saying how could i get that value to my bash script
[17:31:46] <dijack> that's the point i'm stuck @ StephenLynx
[17:31:54] <StephenLynx> ah
[17:32:04] <StephenLynx> variable = command I guess
[17:32:09] <StephenLynx> not that sharp in bash
[17:32:51] <StephenLynx> bash is funny. some stuff that you deem simple are actually complicated and vice-versa in bash.
[17:33:51] <happiness_color> @dijack - are you sure whatever you are doing in bash can't be done in JavaScript and run against mongo?...
[17:34:04] <dijack> it's bash
[19:25:22] <liviud> quick question, can you have a collection thats not sharded and one that is sharded in the same database ?
[19:25:56] <liviud> if so what happens when your app talks to a router and asks for a specific collection which is not sharded
[19:26:20] <cheeser> yes
[19:26:30] <cheeser> it lives solely on the shard primary
[19:26:57] <liviud> cheeser: that's possible only on the shard primary server ?
[19:33:02] <cheeser> hrm?
[19:42:53] <windsurf_> Does anyone know a good GUI for copying my local database or selected collections to a staging server? drag and drop perhaps?
[19:43:59] <windsurf_> robomongo
[19:44:00] <windsurf_> ?
[19:45:20] <StephenLynx> I never heard of a good GUI tool for mongo.
[19:58:41] <ozadari> hi
[19:59:15] <ozadari> Does someone know how I can edit the email validation in mongo ops manager?
[19:59:41] <windsurf_> StephenLynx I like MongoHub but it doesn't do copy.
[20:00:02] <windsurf_> What's the easiest way to copy collections from my local server to a remote server? i'm open to CLI
[20:00:26] <StephenLynx> afaik, you will have to do that on application code.
[20:00:38] <StephenLynx> I don't think mongo will be able to open a connection to a second database.
[20:01:18] <windsurf_> copyDatabase maybe
[20:02:22] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.copyTo/
[20:02:25] <cheeser> http://docs.mongodb.org/manual/reference/method/db.cloneCollection/
[20:06:25] <windsurf_> thx
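
The shell helpers behind cheeser's links, plus the copyDatabase helper windsurf_ mentioned; host and database names are placeholders, run from a shell on the destination server:

    // Copy one collection from another server:
    db.cloneCollection('source.example.com:27017', 'mycollection')

    // Or copy a whole database:
    db.copyDatabase('mydb', 'mydb', 'source.example.com:27017')
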
[20:25:46] <atomicb0mb> Hi!! I'm a newbie. I did db.createCollection('coll', { name: "test" });
[20:26:02] <atomicb0mb> and then i did db.collection.findOne('coll'); and i got null =( Why is that
[20:26:25] <StephenLynx> you don't need to create collections
[20:26:36] <StephenLynx> just use db.coll.insert({stuff})
[20:26:47] <StephenLynx> and your find is wrong
[20:27:05] <StephenLynx> it should be db.coll.findOne()
[20:27:12] <StephenLynx> or find()
[20:27:55] <atomicb0mb> StephenLynx thank you so much!!! Do u know the mongodb university? I'm trying to learn from there
[20:28:19] <StephenLynx> never heard about
[20:28:24] <StephenLynx> I just use the official documentation
[20:28:39] <StephenLynx> and random bits of information I see
[20:30:41] <atomicb0mb> it worked just fine ! Thank you. :)
[21:31:03] <Astral303> hi, does anyone have any insight as to how stable or safe 3.0.6-rc0 is vs 3.0.5?
[21:31:13] <blizzow> We're inserting ~100mil records a day via 3 servers connecting to 3 mongos instances in front of a sharded cluster with 3 replica sets. Our mongos instances are melting their own faces off. Assuming an average 100kb document size with usually 20 fields, does anyone have recommendations on the number of connections and batch size for the inserts into our routers??
[21:31:24] <blizzow> routers == mongos
[21:37:06] <blizzow> our mongos instances are typically running with 8GB RAM x 8CPU cores.
[21:58:21] <joannac> Astral303: it's a release candidate. check the changelog and see if there's anything in there that you need?
[21:58:49] <Astral303> It has critical fixes for 3.0.5 bugs
[21:58:51] <joannac> blizzow: get more mongoS processes, maybe?
[21:59:24] <Astral303> I am faced with upgrading from 3.0.2 right now, and i am debating whether to do 3.0.4 or 3.0.6-rc0
[21:59:33] <ehershey> 3.0.6-rc0
[21:59:42] <ehershey> but wait a day or two if you can
[21:59:49] <Astral303> I'm typically hesitant to go with rc0 releases, as sometimes there are banal regressions in WT that don't get caught until rc0 has been around in a few hands
[22:00:21] <Astral303> ok, thanks
[22:00:25] <ehershey> we're running 3.0.6-rc0 with WT on a medium-sized internal system
[22:00:31] <ehershey> if it makes a difference
[22:00:39] <Astral303> that's a good data point. all okay so far?
[22:00:47] <ehershey> yeah
[22:00:53] <ehershey> but it's only been a few hours
[22:00:59] <ehershey> :)
[22:01:05] <Astral303> haha yeah, that's fairly green still
[22:01:10] <Astral303> thanks for the data points though
[22:01:17] <ehershey> no problem
[22:07:02] <Astral303> blizzow, what does "melting their own faces off" mean for mongos instances? are you maxing out CPU or network?
[22:07:26] <blizzow> joannac: I'm currently deploying some mongos processes on a few servers to try and reduce the load but the mongos only started face-melting after upgrading to 3.0.x. Astral303: Load and CPU.
[22:07:32] <blizzow> Astral303: you in CO?
[22:07:41] <Astral303> Astral303, nope, just an electronic music fan
[22:07:50] <Astral303> oops i meant, blizzow with that last reply
[22:07:55] <blizzow> ahh.
[22:09:51] <Astral303> blizzow, if you are pegging CPU after upgrading to 3.0 on an otherwise identical setup with 2.6 (load-wise and cluster-topology-wise), then you're likely hitting an issue of some kind or a performance regression. i would imagine the storage backend would have no difference on mongos performance. it might be worth doing some stack traces or whatever the techniques are for starting performance diagnostics
[22:10:53] <Astral303> it's also possible that if you changed your storage engine to WT, then you might have considerably faster insert rates that are stressing mongos simply due to increased throughput
[22:11:10] <Astral303> so i guess, how many variables changed with the 3.0 upgrade?
[22:28:05] <blizzow> Astral303: We didn't move to WT. I'm wondering if the older java and spark drivers we use to communicate to mongo are bashing our mongos instances.
[22:53:17] <Astral303> blizzow, did you notice any decrease in your insertion throughput or latency? are the mongos processes pegged or the java drivers talking to mongo?
[23:31:48] <EllisTAA> hello. im trying to insert a record into my mongodb, but when i added this line: https://github.com/ellismarte/ellis.io/blob/master/server.js#L71-L75, i got this error message: https://gist.github.com/ellismarte/5a1be11878d3db879a39. it says db is not defined but according to this documentation that’s how you write it: http://docs.mongodb.org/manual/tutorial/insert-documents/
[23:41:21] <Boomtime> EllisTAA: the link you quote from is for the mongo shell, but you are clearly using nodejs
[23:41:32] <EllisTAA> boomtime: yes i am using node
[23:41:49] <EllisTAA> so maybe i should have looked up the mongoose documentation
[23:42:57] <Boomtime> if that is what you're using, then you'll need to use that
[23:43:10] <Boomtime> (hooray, for tautologies)
[23:43:21] <EllisTAA> thanks. ha what tautology?
[23:46:17] <Boomtime> nevermind, you should just refer to the mongoose docs for the methods you need
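
A sketch of the same insert through the native node.js driver, since the manual page EllisTAA linked assumes the mongo shell's implicit db, which doesn't exist in node; the URL, database, collection, and fields are placeholders (if mongoose is in use, the Model API applies instead):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/ellisio', function (err, db) {
      if (err) throw err;
      db.collection('messages').insertOne({ name: 'test', at: new Date() }, function (err, result) {
        if (err) return console.error(err);
        console.log('inserted', result.insertedCount, 'document');
        db.close();
      });
    });
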