[00:00:21] <seiyria> yes, but here's the thing... I have something in the callback for every single one of my mongo calls to print out errors if they show up, but there are none. I just get this. I have no idea what causes it or where to even look, and my database calls, while isolated, are numerous
[00:00:48] <Boomtime> this stack trace shows your error printing is at fault
[00:01:01] <Boomtime> an error occurred, and when you printed it you caused another error
[00:12:25] <seiyria> ^1.4.7 is what I have it set to
[00:13:58] <ferno> Hi! I'm having performance issues with MongoID (the ruby driver), and was wondering if you guys could help :D
[00:14:18] <ferno> I'm fetching all documents in a collection (7 of them), which comes out to around 2MB of JSON when delivered to a browser on the other side.
[00:14:22] <ferno> It's taking around 10 seconds :/
[00:14:46] <ferno> I realise the documents are massive, but the same query (db.model.find({})) is nearly instant in the console...
[00:16:39] <joannac> ferno: could you test a query that returns nothing? also a query that returns only 1 document?
[00:16:52] <joannac> that would at least let you know how much of that is actual latency
[00:17:09] <ferno> It's running on my local machine, and performance with a single document is bearable.
[00:17:21] <ferno> I only noticed the problem after adding many documents, and fetching all of them at once.
[00:28:32] <imachuchu> given that I have a document like: {"_id": ObjectID(A), "arr1": [{"_id": ObjectID(B), "arr2": [{"_id": ObjectId(C)}, {"_id": ObjectId(D)}]}]} how would I delete the subdocument with ObjectID(D)? I think it's with the $pull argument to update but I'm having problems with it getting down to arr2
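A sketch of the $pull update imachuchu is describing, using the positional operator on the outer array (the collection name and the B/D placeholders are invented, and this assumes the classic update API of the era, before arrayFilters existed):

```javascript
// Placeholders standing in for the ObjectIds in the question.
var B = "ObjectId(B)";
var D = "ObjectId(D)";

// Match the arr1 element by its _id; `$` then refers to that matched
// element, so $pull can reach into its nested arr2 array.
var filter = { "arr1._id": B };
var update = { $pull: { "arr1.$.arr2": { _id: D } } };

// In the mongo shell: db.coll.update(filter, update);
```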
[00:35:56] <vacho> how do I update a document without replacing it? I want to add another key and value to an existing document without erasing the current content?
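What vacho is after is a $set update, which adds or overwrites only the named fields instead of replacing the whole document (collection and field names below are made up):

```javascript
// A bare replacement like db.users.update({ _id: 1 }, { name: "vacho" })
// would erase every other field. $set touches only the listed keys:
var doc = { _id: 1, name: "vacho", role: "dev" };   // existing document
var update = { $set: { email: "v@example.com" } };

// In the mongo shell: db.users.update({ _id: 1 }, update);
// The stored document keeps name and role and gains email.
```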
[00:36:31] <jiffe> so my sh.shardCollection("mail.message_data", { "message_identifier": 1 } ) failed
[00:36:46] <jiffe> "errmsg" : "exception: splitVector command failed: { timeMillis: 141975845, errmsg: \"exception: BufBuilder attempted to grow() to 134217728 bytes, past the 64MB limit.\", code: 13548, ok: 0.0 }"
[03:16:29] <seiyria> hey Boomtime I'm looking at the error again, what do you mean there was an error printing the error? all I'm doing is "console.error e.stack if e"
[03:16:38] <seiyria> or could it be happening if there's an exception in the callback maybe?
[03:18:37] <seiyria> also if there was an error printing the error wouldn't there be a stack trace point in my code at least?
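A more defensive version of the logging seiyria describes (a sketch; `console.error e.stack if e` is CoffeeScript, and driver errors are sometimes plain objects with no .stack, in which case that line prints `undefined` and hides the real message):

```javascript
// Log real Error objects via their stack; fall back to JSON for plain
// objects like { code: 13548, errmsg: "..." } that drivers sometimes pass.
function logError(e) {
  if (!e) return null;
  var msg = (e instanceof Error && e.stack) ? e.stack : JSON.stringify(e);
  console.error(msg);
  return msg;
}
```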
[04:44:20] <morenoh149> anyone know how to import data from mysql to mongodb using mongohub?
[07:48:19] <Rogach> Hello! I'm seeing some very strange behavior in my app: in short, it seems like the query optimizer "bails out" on big queries and decides to use a full table scan.
[07:48:54] <Rogach> When I send ~9000 character query, it takes 9s (collection has ~60k documents)
[07:49:49] <Rogach> When I remove any part of the query and get a slightly smaller query (~8000 chars), it takes about 10ms - since there is an index on the "active" field, and that index has 40 documents with "true" value.
[07:50:42] <Rogach> What is even more confusing, is that .explain returns proper results - small times, proper index and stuff - but calling .find() on the cursor right after that hangs for seconds.
[07:51:18] <Rogach> Directly hinting on the index doesn't help either.
[07:52:24] <kali> Rogach: show us the query, the explain, one typical document
[07:54:19] <joannac> Rogach: and please pastebin it, don't paste into the channel directly
[07:54:37] <Rogach> joannac: Sure, doing that right now.
[07:58:25] <Guest44126> hi, how to fix this problem: http://pastie.org/9781623 ?
[08:09:13] <Rogach> kali: That's generated by my simple wrapper - it uses $and when combining two queries, to avoid fields being overwritten if I simply merged two DBObjects.
[08:09:35] <kali> yeah, it should not be the cause of the issue
[08:09:56] <kali> basically, "active", "createdBy" and "contractType" are indexable
[08:16:31] <joannac> I need to head off, might be back later
[08:18:13] <Guest44126> joannac, not authorized on config to execute command { dbhash: 1.0 }
[08:18:25] <Guest44126> should i log as the same user like for mongos?
[08:19:51] <Rogach> joannac: Here is the log: http://pastie.org/9785724
[08:20:31] <Rogach> joannac: Hm, at the end of query line it says "planSummary: IXSCAN { contractType: 1 } ntoreturn:0 ntoskip:0 nscanned:58128 nscannedObjects:58128 keyUpdates:0 numYields:5"
[08:20:53] <Rogach> joannac: But the explain for the very same query said "cursor" : "BtreeCursor active_1"!
[08:22:40] <Guest44126> joannac, http://pastie.org/9785726 here is output from each config
[08:25:31] <Rogach> Now this is wonderful - I did a hard restart of mongod, and now the problem went away. What could that be?
[08:26:43] <kali> Rogach: i can't see anything obvious
[11:30:17] <hansmeier> i'm trying to perform a map reduce in a separate script which i load via: mongo script.js
[11:30:47] <hansmeier> i want to use underscore.js in this script. so i load it with "load('underscore-min.js');" which works perfectly fine
[11:31:22] <hansmeier> however, in my reduce function i use _.uniq(). running the script gives me "errmsg" : "exception: invoke failed: JS Error: TypeError: _.uniq is not a function nofile_b:3",
[11:31:50] <hansmeier> how can i make the loaded _ function available in the map-reduce scope?
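The underlying issue is that load() only affects the shell's own scope, while map and reduce run in the server's JS engine where `_` was never defined. One workaround (a sketch) is to inline a dependency-free helper instead of underscore; passing it through mapReduce's `scope` option may also work, but verify that on your server version:

```javascript
// A minimal uniq to embed in (or pass to) the reduce function in place
// of _.uniq. Uses JSON keys so objects and primitives both deduplicate.
function uniq(arr) {
  var seen = {};
  var out = [];
  for (var i = 0; i < arr.length; i++) {
    var key = JSON.stringify(arr[i]);
    if (!seen.hasOwnProperty(key)) {
      seen[key] = true;
      out.push(arr[i]);
    }
  }
  return out;
}

// e.g. db.coll.mapReduce(mapFn, reduceFn, { out: { inline: 1 }, scope: { uniq: uniq } })
```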
[12:04:35] <jonasliljestrand> hi, how would mongodb handle a single update that sets a boolean to true for (10-500 000) documents
[12:05:15] <jonasliljestrand> im thinking of locks, execution time, better solutions?
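For jonasliljestrand's case the usual shape is a multi-update (a sketch; collection and field names are invented). On the MMAPv1 engine of that era a multi-update holds the write lock but yields periodically, so other operations can interleave during a long run:

```javascript
var filter = { processed: false };
var update = { $set: { processed: true } };
var options = { multi: true };   // without this, only the first match is updated

// In the mongo shell: db.items.update(filter, update, options);
```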
[12:34:09] <remonvv> Hi guys. Does anyone know where I can find the specific consequences of disabling the _secondaryThrottle flag in balancing? It says what it stops doing but I'm having trouble translating that to consequences on durability/availability.
[12:51:50] <KekSi> hi -- is there a way to verify a mongodump was successful?
[12:53:09] <KekSi> a box just died during the dump and i've removed the bson file of the collection which had a different size than the rest (and had no .metadata.bson)
[12:53:22] <KekSi> and dumped the rest explicitly (all collections have the same size)
[13:32:07] <Rickky> I'm very curious if this is normal behaviour 2014-12-17T13:23:57.903+0000 [FileAllocator] done allocating datafile /data/mongo1/mongodb/<db>.52, size: 2047MB, took 32.715 secs
[13:32:35] <Rickky> Pre-allocating 2 gigs takes like 4 seconds when doing it manually through dd
[13:32:40] <KekSi> yes (although that seems to take very long)
[13:33:00] <Rickky> But mongo takes 30+ seconds, which seems to cause timeouts for clients that are connected
[13:33:46] <Rickky> KekSi: I'm not sure, this is our first mongo cluster we've set up
[13:33:56] <Rickky> it's not in production yet, we're running tests now to verify the cluster
[13:34:06] <Rickky> but 30+ seconds doesn't seem normal to me
[13:36:30] <KekSi> just checked through some logs and that doesn't seem all that extraordinary -> Thu Mar 13 06:09:00.509 [FileAllocator] done allocating datafile C:\mongodb\data\db\weather_forecast_longterm.5, size: 2047MB, took 69.226 secs
[13:43:04] <seiyria> hey guys, I'm getting a weird Incorrect Arguments from the node mongo driver. I've posted here: https://jira.mongodb.org/browse/NODE-334 but the long and short of it is I have no idea how to debug where this could be coming from, or why it's happening. Anyone have any ideas what I can do? The only stackoverflow post I can find is here: http://stackoverflow.com/questions/21832923/mongoerror-incorrect-arguments but this is pretty unhelpful since I'm not
[13:43:04] <seiyria> sure what DB call is "incorrect"
[14:00:31] <JohnnyMetroi> Hi all. I have a replicaset with 10 members. I have a slave that is reported as being secondary with optime within 3sec of primary. The issue is that it has 6M fewer documents when I do a db.<name>.count() on the primary and secondary. I also managed to find some of the documents that were on the primary and not on the secondary. Anyone interested in helping me debug?
[14:09:48] <cammellos> Hi , I am having some trouble running mongodb, can I ask for help here?
[14:47:01] <winem_> cammellos: sure, just ask your Q
[14:48:52] <Guest44126> winem_, maybe do you know how to fix this problem: http://pastie.org/9781623 ?
[14:51:58] <winem_> looks like you want to do something on a non-existent collection
[14:52:39] <winem_> do you want to move a chunk to a new shard?
[15:42:11] <Guest44126> winem_, is there any way to fix it?
[16:09:44] <Sticky> hey, so I have had an issue of running out of disk space on a replset member, and the log has lots of stuff like "Can't take a write lock while out of disk space" however the optime of the member looked fine and up to date. I have fixed the disk space issue, but can I really be sure those writes were performed?
[16:10:30] <Sticky> seems odd that the optime did not start lagging when we ran out of disk space
[16:19:47] <agenteo> is there a CLI mongo client with some syntax highlighting?
[16:27:10] <jonasliljestrand> agenteo: have you tested mongohacker?
[16:27:50] <jonasliljestrand> it does not have syntax highlighting but it's really helpful
[16:38:46] <agenteo> yeah I found it, but all I’d really like to have is a vim like highlight of my parenthesis
[16:39:06] <agenteo> I found someone suggesting using rlwrap. Has anyone tried that?
[18:51:33] <vacho> this might sound crazy stupid..but I just installed mongo on my ubuntu web server and i have no idea what my username/pass is? I can get onto the mongo CLI by running the command "mongo" and it's not asking me to authenticate
[18:53:39] <ehershey> auth is off initially until you turn it on
[18:54:25] <vacho> ehershey: that explains a lot! :) btw..I tried mongohub and it kept on crashing and didn't work that well. RoboMongo seems more solid.
[18:56:16] <kali> vacho: https://github.com/jeromelebel/MongoHub-Mac this one ? this is weird.
[18:56:37] <kali> vacho: you may want to submit an issue, jerome is very dedicated to the project
[19:00:57] <vacho> kali: http://screencast.com/t/xxGqfX6ifp3 I get those popups and then it crashed the first 3 times I used it..but it didn't crash this time
[19:04:09] <vacho> kali: what do you think about auto-generated IDs? with an RDBMS I am used to auto-increment, e.g. 1,2,3,4.. are there any shortcomings to having random hashes generated?
[20:01:25] <Dinq> silly n00b question: is there a serious performance hit for my mongodb server using swap space?
[20:01:49] <Dinq> 16gb physical ram, all used of course (mem mapping), and looking like 550MB of swap being used...
[20:02:40] <Dinq> 32core server with load avg of only 5-6.00, but a few basic queries seem to take longer than I hope
[20:07:17] <krisfremen> Dinq: is it mongodb that's swapping?
[20:08:15] <Dinq> good question. the server itself is not doing anything else specifically...
[20:08:28] <Dinq> let me see how to find out what's swapping...
[20:12:10] <Dinq> looks like mongod is using 240MB of the 550MB swap...
[20:17:57] <krisfremen> Dinq: I've seen my mongodb swap from time to time even when there's lots of ram available
[20:18:13] <agenteo> is there a way to run an order by FIELD in mongodb? ie. order by array field cars with this weight [‘bmw’, ‘audi’, ‘fiat’, ‘ford’]
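MongoDB has no direct ORDER BY FIELD; one approach for agenteo's case (a sketch, sorting client-side after the fetch; the field name `car` is invented, and newer servers could instead compute the rank inside an aggregation pipeline):

```javascript
var weight = ["bmw", "audi", "fiat", "ford"];

// Rank a value by its position in the weight list; unknown makes sort last.
function rank(car) {
  var i = weight.indexOf(car);
  return i === -1 ? weight.length : i;
}

// `docs` stands in for the result of a find():
var docs = [{ car: "fiat" }, { car: "bmw" }, { car: "audi" }];
docs.sort(function (a, b) { return rank(a.car) - rank(b.car); });
// docs is now ordered bmw, audi, fiat
```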
[20:18:24] <krisfremen> and that swap usage stays the same until mongodb is restarted, so I'm not quite sure
[21:44:53] <Mmike> Hi, lads. Just a confirmation - I have a 3-node replicaset. One node dies. I'm now running on a degraded 2-node replica set. If another node fails (regardless of which one), my remaining node is read-only?
[21:55:11] <Mmike> joannac, if I have degraded cluster (2 units, pri and sec), and I do 'rs.stepDown()' on pri, I end up with two secondaries. How do I recover from that?
[21:55:50] <Naeblis> I populate an embedded field and provide { sort: { creationtime: -1 }, limit: 3 } as options. I later filter items out based on the values in this embedded field collection. Only, the filter's result is non-deterministic when I keep limit option, and works fine if I comment limit out.
[21:56:28] <joannac> Mmike: be more patient? or look at the logs to figure out why no primary is elected
[21:56:48] <Mmike> joannac, ack! :) patience is a virtue :)
[21:57:05] <Mmike> I got new primary, was just... hm, dunno the word :)
[21:58:33] <joannac> Naeblis: so what you're saying is, if you don't have any new writes, and you run the same query with .sort({ creationtime: -1 }).limit(3), you get different results?
[22:02:34] <Naeblis> hm the query you provided works fine joannac.
[22:02:42] <vacho> hoping someone can take a look at this and tell me why this does not update? http://pastie.org/9786880
[22:02:59] <Naeblis> I am however, providing these options to a .populate() call (mongoose). Maybe that makes a difference...
[22:03:09] <Mmike> joannac, a quick one: let's say that I ended up with only one mongodb unit in my replicaset, which is now 'secondary'. The rest of the units are disintegrated. I'm adding new node, but I can't join it to secondary. How do I convert existing secondary to become primary?
[22:07:04] <Mmike> I have healthy 3unit replicaset. Then I experience severe hw failure on two nodes, and I'm left with only one, which is 'secondary'. As I can't bring any of the dead nodes up, I need to bring new hardware. How do I convert the remaining secondary to become primary so that I can add fresh nodes to it?
[22:07:27] <Naeblis> joannac: some code, what I'm trying to do: https://bpaste.net/show/e92d7b37e22d
[22:07:33] <joannac> Mmike: what setting? there is no setting. arbiters in the config just have "arbiterOnly: true", you don't need to set priority or votes
[22:07:56] <joannac> Mmike: reconfig and remove the 2 dead nodes. with force:true
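A sketch of the forced reconfig joannac describes (host names are invented; check rs.conf() for the real ones): on the surviving member, take the current config, drop the dead members, and apply it with force so the lone node can elect itself primary again.

```javascript
var cfg = {                       // stand-in for the output of rs.conf()
  _id: "rs0",
  version: 3,
  members: [
    { _id: 0, host: "dead1:27017" },
    { _id: 1, host: "dead2:27017" },
    { _id: 2, host: "survivor:27017" }
  ]
};

// Drop the unreachable members from the config.
var dead = { "dead1:27017": true, "dead2:27017": true };
cfg.members = cfg.members.filter(function (m) { return !dead[m.host]; });

// In the mongo shell, on the surviving node: rs.reconfig(cfg, { force: true });
```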
[22:58:31] <modulus^> of course i have a compound index with 5 keys
[22:59:22] <modulus^> so my find()'s are a lot more narrowed down
[23:16:06] <vacho> I want to store data about TV Shows and their seasons, episodes and talents....... e.g. TV Show -> Season -> Episode -> Talent ......... I feel nesting the data would be the obvious choice? what do you guys think?
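A sketch of what vacho's embedded shape could look like (all field names invented; embedding keeps reads simple, but the 16MB per-document cap and ever-growing talent lists can argue for referencing seasons or episodes in their own collection instead):

```javascript
var show = {
  title: "Example Show",
  seasons: [
    {
      number: 1,
      episodes: [
        {
          number: 1,
          title: "Pilot",
          talent: [{ name: "Actor A", role: "lead" }]
        }
      ]
    }
  ]
};

// In the mongo shell: db.shows.insert(show);
```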
[23:22:26] <slajax> Has anyone had success with mongoose + wired tiger? AFAICT, mongoose doesn't support the new auth hash yet. Can anyone confirm?
[23:23:52] <mod^> what's the default if not wired tiger?
[23:31:07] <joannac> sokr: i think expecting mongoose to support a version that isn't even GA yet is asking a bit much. You might get a better answer in #mongoosejs though