PMXBOT Log file Viewer


#mongodb logs for Wednesday the 17th of December, 2014

[00:00:21] <seiyria> yes, but here's the thing... I have something in the callback for every single one of my mongo calls to print out errors if they show up, but there are none. I just get this. I have no idea what causes it or where to even look, and my database calls, while isolated, are numerous
[00:00:48] <Boomtime> this stack trace shows your error printing is at fault
[00:01:01] <Boomtime> an error occurred, and when you printed it you caused another error
[00:01:10] <seiyria> what
[00:02:13] <seiyria> how does that even happen lol
[00:04:54] <Boomtime> what node driver version is this?
[00:05:25] <Boomtime> i can't find any of that stuff in the current version, but i don't know node.js very well so i might just be blind
[00:12:10] <seiyria> not sure, one sec
[00:12:25] <seiyria> ^1.4.7 is what I have it set to
[00:13:58] <ferno> Hi! I'm having performance issues with MongoID (the ruby driver), and was wondering if you guys could help :D
[00:14:18] <ferno> I'm fetching all documents in a collection (7 of them), which comes out to around 2MB of JSON when delivered to a browser on the other side.
[00:14:22] <ferno> It's taking around 10 seconds :/
[00:14:46] <ferno> I realise the documents are massive, but the same query (db.model.find({})) is nearly instant in the console...
[00:16:39] <joannac> ferno: could you test a query that returns nothing? also a query that returns only 1 document?
[00:16:52] <joannac> that would at least let you know how much of that is actual latency
[00:17:09] <ferno> It's running on my local machine, and performance with a single document is bearable.
[00:17:21] <ferno> I only noticed the problem after adding many documents, and fetching all of them at once.
[00:28:32] <imachuchu> given that I have a document like: {"_id": ObjectID(A), "arr1": [{"_id": ObjectID(B), "arr2": [{"_id": ObjectId(C)}, {"_id": ObjectId(D)}]}]} how would I delete the subdocument with ObjectID(D)? I think it's with the $pull argument to update but I'm having problems with it getting down to arr2
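A sketch of the $pull imachuchu describes (collection name and ObjectId values are placeholders). The positional $ resolves to the arr1 element matched by the query; in 2.x the positional operator only reaches one array level deep, so arr2 is addressed through it:

```javascript
// match the arr1 element containing the target, then $pull the
// subdocument with _id D out of that element's arr2
db.coll.update(
  { "arr1._id": ObjectId("...B...") },
  { $pull: { "arr1.$.arr2": { "_id": ObjectId("...D...") } } }
)
```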
[00:35:56] <vacho> how do I update a document without replacing it? I want to add another key and value to an existing document without erasing the current content
[00:36:31] <jiffe> so my sh.shardCollection("mail.message_data", { "message_identifier": 1 } ) failed
[00:36:46] <jiffe> "errmsg" : "exception: splitVector command failed: { timeMillis: 141975845, errmsg: \"exception: BufBuilder attempted to grow() to 134217728 bytes, past the 64MB limit.\", code: 13548, ok: 0.0 }"
[03:16:29] <seiyria> hey Boomtime I'm looking at the error again, what do you mean there was an error printing the error? all I'm doing is "console.error e.stack if e"
[03:16:38] <seiyria> or could it be happening if there's an exception in the callback maybe?
[03:18:37] <seiyria> also if there was an error printing the error wouldn't there be a stack trace point in my code at least?
[04:44:20] <morenoh149> anyone know how to import data from mysql to mongodb using mongohub?
[04:55:27] <morenoh149> anyone?
[07:48:19] <Rogach> Hello! I'm seeing some very strange behavior in my app: in short, it seems like the query optimizer "bails out" on big queries and decides to use a full table scan.
[07:48:54] <Rogach> When I send ~9000 character query, it takes 9s (collection has ~60k documents)
[07:49:49] <Rogach> When I turn off any part of the query and get a slightly smaller query (~8000 chars), the query takes about 10ms - since there is an index on the "active" field, and that index has 40 documents with "true" value.
[07:50:42] <Rogach> What is even more confusing, is that .explain returns proper results - small times, proper index and stuff - but calling .find() on the cursor right after that hangs for seconds.
[07:51:18] <Rogach> Directly hinting on the index doesn't help either.
[07:52:24] <kali> Rogach: show us the query, the explain, one typical document
[07:54:19] <joannac> Rogach: and please pastebin it, don't paste into the channel directly
[07:54:37] <Rogach> joannac: Sure, doing that right now.
[07:58:25] <Guest44126> hi, how to fix this problem: http://pastie.org/9781623 ?
[07:58:50] <joannac> what
[07:59:12] <joannac> what version?
[08:00:35] <Rogach> kali: query: http://pastie.org/9785710 explain: http://pastie.org/9785703 example document: http://pastie.org/9785702
[08:06:27] <kali> pfiou, that's an ugly query
[08:07:20] <kali> you're aware you're abusing $and ?
[08:07:29] <kali> should not be a problem, though
[08:09:13] <Rogach> kali: That's generated from my simple wrapper - it tries to use $and when adding two queries together, so to escape the problem with fields being overwritten if I simply add together two DBObjects.
[08:09:35] <kali> yeah, it should not be the cause of the issue
[08:09:56] <kali> basically, "active", "createdBy" and "contractType" are indexable
[08:09:58] <kali> the rest is not
[08:11:03] <kali> you have an index on active and createdBy, it may help to have a composite index on the three fields
[08:11:47] <Rogach> kali: I can live even with an index on `active` - it nicely trims down those 60k objects to just 40.
[08:12:01] <kali> really ? ha.
[08:12:10] <kali> I assumed most docs were active
[08:12:12] <Rogach> kali: Yes :)
[08:12:32] <Rogach> kali: The problem is, they don't. And the query optimizer plays tricks on me :(
[08:13:56] <Guest44126> joannac, 2.6.5
[08:14:10] <joannac> Rogach: pastebin the logline for this?
[08:14:30] <Rogach> joannac: Sure, let me find the logs.
[08:14:44] <joannac> Guest44126: use config; db.runCommand("dbhash")
[08:14:50] <joannac> oh all 3 config servers?
[08:14:54] <joannac> on*
[08:15:03] <joannac> (then pastebin the output)
[08:15:16] <joannac> (may be resource intensive)
[08:15:21] <Guest44126> not from mongos?
[08:15:50] <joannac> nope, config servers directly
[08:16:08] <Guest44126> ok i'll try
[08:16:31] <joannac> I need to head off, might be back later
[08:18:13] <Guest44126> joannac, not authorized on config to execute command { dbhash: 1.0 }
[08:18:25] <Guest44126> should i log as the same user like for mongos?
[08:19:51] <Rogach> joannac: Here is the log: http://pastie.org/9785724
[08:20:31] <Rogach> joannac: Hm, at the end of query line it says "planSummary: IXSCAN { contractType: 1 } ntoreturn:0 ntoskip:0 nscanned:58128 nscannedObjects:58128 keyUpdates:0 numYields:5"
[08:20:53] <Rogach> joannac: But the explain for the very same query said "cursor" : "BtreeCursor active_1"!
[08:22:40] <Guest44126> joannac, http://pastie.org/9785726 here is output from each config
[08:25:31] <Rogach> Now this is wonderful - I did a hard restart of mongod, and now the problem went away. What could that be?
[08:26:43] <kali> Rogach: i can't see anything obvious
[08:26:49] <kali> ha.
[08:27:58] <Rogach> kali: Looking at that log, I see something about "7 connections now open"
[08:28:21] <kali> you can ignore that
[08:28:26] <kali> i'm sorry, i have to go
[08:28:30] <Rogach> kali: Nothing is using mongodb on my machine (except for the app), and the app was down.
[08:28:36] <Rogach> kali: Thanks for the help!
[08:29:13] <Rogach> joannac: Could something "lock" the index and disallow its usage? Maybe some hanging connection?
[09:00:27] <chetandhembre> what is this file do 90-nproc.conf ?
[09:00:44] <chetandhembre> can any one explain me purpose of 90-nproc.conf ?
[09:04:44] <Rogach> joannac: It's back again :( queries again take unreasonable time, and wrong index is selected (I looked at the logs)
[09:08:18] <krion> hello
[09:08:27] <krion> i got a question between mongodb and php :)
[09:08:39] <krion> the thing is i have created a user for a specific database
[09:09:03] <krion> my customer tries to connect using php, but the stuff does a "connection" then selects a database
[09:09:38] <krion> this fails since the user doesn't have access to the admin database
[09:10:03] <krion> anyone know how to do it? (tried to read the doc; not a dev, didn't get it)
[09:10:26] <krion> either php way or mongo way
[09:30:41] <krion> should i allow the user to have read access on admin database ?
[09:53:39] <Rogach> joannac: Seems I found the problem. .count() doesn't care about my index hint - thus the awful query times.
[11:29:54] <hansmeier> hi folks
[11:30:17] <hansmeier> i'm trying to perform a map reduce in a separate script which i load via: mongo script.js
[11:30:47] <hansmeier> i want to use underscore.js in this script. so i load it with "load('underscore-min.js');" which works perfectly fine
[11:31:22] <hansmeier> however, in my reduce function i use _.uniq(). running the script gives me "errmsg" : "exception: invoke failed: JS Error: TypeError: _.uniq is not a function nofile_b:3",
[11:31:50] <hansmeier> how can i make the loaded _ function available in the map-reduce scope?
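As far as I can tell, load() only loads the file into the shell's own interpreter, while map/reduce functions execute inside the server's JS engine, so helpers have to travel inside the function bodies themselves. A sketch with a minimal stand-in for _.uniq (the map function and collection name are assumed defined elsewhere):

```javascript
var reduce = function (key, values) {
  // a helper defined inside the function body is serialized along with it,
  // so it exists server-side; minimal stand-in for _.uniq
  var uniq = function (arr) {
    var seen = {}, out = [];
    arr.forEach(function (v) {
      if (!seen.hasOwnProperty(v)) { seen[v] = true; out.push(v); }
    });
    return out;
  };
  var merged = [];
  values.forEach(function (v) { merged = merged.concat(v.tags); });
  return { tags: uniq(merged) };
};

db.events.mapReduce(map, reduce, { out: { inline: 1 } });
```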
[12:04:35] <jonasliljestrand> hi, how would mongodb handle a single update that sets a boolean to true for (10-500 000) documents
[12:05:15] <jonasliljestrand> im thinking of locks, execution time, better solutions?
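The flag flip jonasliljestrand describes is a single multi-update; without multi:true only the first matching document changes. A sketch (collection and field names are made up). On 2.x with mmapv1 the update holds the write lock but yields periodically, so other operations can interleave; an index on the query field keeps the matching cheap:

```javascript
// set the boolean on every matching document in one statement
db.items.update(
  { flagged: false },
  { $set: { flagged: true } },
  { multi: true }
)
```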
[12:34:09] <remonvv> Hi guys. Does anyone know where I can find the specific consequences of disabling the _secondaryThrottle flag in balancing? It says what it stops doing but I'm having trouble translating that to consequences on durability/availability.
[12:51:50] <KekSi> hi -- is there a way to verify a mongodump was successful?
[12:53:09] <KekSi> a box just died during the dump and i've removed the bson file of the collection which had a different size than the rest (and had no .metadata.bson)
[12:53:22] <KekSi> and dumped the rest explicitly (all collections have the same size)
[12:53:51] <KekSi> with mongodump --collection
[13:30:12] <Rickky> Morning
[13:32:07] <Rickky> I'm very curious if this is normal behaviour 2014-12-17T13:23:57.903+0000 [FileAllocator] done allocating datafile /data/mongo1/mongodb/<db>.52, size: 2047MB, took 32.715 secs
[13:32:35] <Rickky> Pre-allocating 2 gigs takes like 4 seconds when doing it manually through dd
[13:32:40] <KekSi> yes (although that seems to take very long)
[13:33:00] <Rickky> But mongo takes 30+ seconds, which seems to cause timeouts for clients that are connected
[13:33:04] <KekSi> does it always take this long?
[13:33:46] <Rickky> KekSi: I'm not sure, this is our first mongo cluster we've set up
[13:33:56] <Rickky> it's not in production yet, we're running tests now to verify the cluster
[13:34:06] <Rickky> but 30+ seconds doesn't seem normal to me
[13:36:30] <KekSi> just checked through some logs and that doesn't seem all that extraordinary -> Thu Mar 13 06:09:00.509 [FileAllocator] done allocating datafile C:\mongodb\data\db\weather_forecast_longterm.5, size: 2047MB, took 69.226 secs
[13:43:04] <seiyria> hey guys, I'm getting a weird Incorrect Arguments from the node mongo driver. I've posted here: https://jira.mongodb.org/browse/NODE-334 but the long and short of it is I have no idea how to debug where this could be coming from, or why it's happening. Anyone have any ideas what I can do? The only stackoverflow post I can find is here: http://stackoverflow.com/questions/21832923/mongoerror-incorrect-arguments but this is pretty unhelpful since I'm not sure what DB call is "incorrect"
[14:00:31] <JohnnyMetroi> Hi all. I have a replicaset with 10 members. I have a slave that is reported as being secondary with optime within 3sec of primary. The issue is that it has 6M fewer documents if I do a db.<name>.count() on the primary and secondary. I also managed to find some of the documents that were on the primary and not on the secondary. Anyone interested in helping me debug?
[14:09:48] <cammellos> Hi , I am having some trouble running mongodb, can I ask for help here?
[14:47:01] <winem_> cammellos: sure, just ask your Q
[14:48:52] <Guest44126> winem_, maybe do you know how to fix this problem: http://pastie.org/9781623 ?
[14:51:58] <winem_> looks like you want to do something on a non existing collection
[14:52:39] <winem_> do you want to move a chunk to a new shard?
[15:05:38] <mister_solo> join #akka
[15:23:57] <Guest44126> winem_, yes i want to move a chunk, i want to remove a shard
[15:24:09] <Guest44126> but i still have this error in the log
[15:24:56] <Guest44126> the shard removal process can't finish
[15:25:21] <Guest44126> winem_, do you know how to fix it?
[15:29:07] <winem_> did you try to move the chunk manually or did you just remove the shard?
[15:29:32] <Guest44126> just remove the shard
[15:31:49] <Guest44126> winem_, here is more information http://pastie.org/9781758
[15:33:29] <Guest44126> i want to remove shard0001
[15:42:11] <Guest44126> winem_, is there any way to fix it?
[16:09:44] <Sticky> hey, so I have had an issue of running out of disk space on a replset member, and the log has lots of stuff like "Can't take a write lock while out of disk space", however the optime of the member looked fine and up to date. I have fixed the disk space issue, but can I really be sure those writes were performed?
[16:10:30] <Sticky> seems odd that the optime did not start lagging when it ran out of disk space
[16:19:47] <agenteo> is there a CLI mongo client with some syntax highlighting?
[16:27:10] <jonasliljestrand> agenteo: have you tested mongohacker?
[16:27:50] <jonasliljestrand> it does not have syntax highlighting but it's really helpful
[16:38:46] <agenteo> yeah I found it, but all I’d really like to have is a vim-like highlight of my parentheses
[16:39:06] <agenteo> I found someone suggesting using rlwrap. Has anyone tried that?
[16:51:07] <cheeser> /1
[18:05:40] <Mia> Hey all
[18:05:48] <Mia> I've been following some tutorials, I'm very new to mongo
[18:06:05] <Mia> and I've seen that in tutorials people can remove all items from a database by .remove()
[18:06:16] <Mia> however when I try that, I get the "needs query" error message.
[18:06:27] <Mia> .remove({}) works okay, but still, wanted to ask
[18:06:43] <Mia> is it a recent change? or am I missing a point
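Not a point Mia is missing, as far as I can tell: the 2.6-era shell started requiring an explicit argument to remove(), precisely to prevent accidental collection-wide deletes. Roughly:

```javascript
db.items.remove()     // 2.6+ shell: error, remove needs a query
db.items.remove({})   // empty query matches everything: deletes all documents,
                      // but keeps the collection and its indexes
db.items.drop()       // removes the collection (and its indexes) entirely
```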
[18:27:22] <agenteo> is there a way to get a list of ids affected by an update (either during or just after)?
[18:31:29] <kali> only with findAndModify (which will update only one doc)
[18:42:17] <vacho> when I run update queries on mongo, it overwrites all current fields..how can I overcome this?
[18:44:54] <cheeser> pastebin your stuff
[18:45:00] <kali> vacho: look for $set
[18:45:54] <vacho> kali: set or upsert?
[18:47:15] <kali> vacho: $set. this is how you alter only specific fields on update. if you don't use it you replace the whole document
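kali's point as a sketch (collection and field names are made up):

```javascript
// replacement update: the stored document becomes exactly the second argument
db.users.update({ name: "alice" }, { name: "alice", city: "Oslo" })

// $set update: only the named fields change; everything else is preserved
db.users.update({ name: "alice" }, { $set: { city: "Oslo" } })
```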
[18:47:39] <vacho> kali: ok thanks..also can you please recommend me a good GUI for mac os x?
[18:47:52] <kali> vacho: mongohub
[18:48:31] <vacho> thx kali
[18:51:33] <vacho> this might sound crazy stupid..but I just installed mongo on my ubuntu web server and i have no idea what my username/pass is? I can get onto the mongo CLI by running command "mongo" and it's not asking me to authenticate
[18:53:39] <ehershey> auth is off initially until you turn it on
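A minimal sketch of turning auth on (user name and password are placeholders):

```javascript
// create an administrative user first, then restart mongod with --auth
db.getSiblingDB("admin").createUser({
  user: "admin",
  pwd: "changeme",
  roles: [ "userAdminAnyDatabase" ]
})
```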
[18:54:25] <vacho> ehershey: that explains a lot! :) btw..I tried mongohub and it kept on crashing and didn't work that well. RoboMongo seems more solid.
[18:56:16] <kali> vacho: https://github.com/jeromelebel/MongoHub-Mac this one ? this is weird.
[18:56:37] <kali> vacho: you may want to submit an issue, jerome is very dedicated to the project
[18:59:17] <vacho> kali: yes that one.
[19:00:57] <vacho> kali: http://screencast.com/t/xxGqfX6ifp3 I get those popups and then it crashed the first 3 times I used it..but it didn't crash this time
[19:04:09] <vacho> kali: what do you feel about auto-generated IDs? with an RDBMS I am used to 1,2,3,4 e.g. auto increment.. are there any shortcomings to having random hashes generated?
[19:29:34] <vacho> can someone help out: http://pastie.org/9786858
[19:34:21] <vacho> anyone please? :)
[19:37:33] <vacho> updated version: http://pastie.org/9786880
[19:45:35] <Guest44126> hi, how to fix this problem: http://pastie.org/9781623 ?
[19:52:36] <agenteo> @kali thanks
[20:01:25] <Dinq> silly n00b question: is there a serious performance hit for my mongodb server using swap space?
[20:01:49] <Dinq> 16gb physical ram, all used of course (mem mapping), and looking like 550MB of swap being used...
[20:02:40] <Dinq> 32core server with load avg of only 5-6.00, but a few basic queries seem to take longer than I hope
[20:07:17] <krisfremen> Dinq: is it mongodb that's swapping?
[20:08:15] <Dinq> good question. the server itself is not doing anything else specifically...
[20:08:28] <Dinq> let me see how to find out what's swapping...
[20:12:10] <Dinq> looks like mongod is using 240MB of the 550MB swap...
[20:17:57] <krisfremen> Dinq: I've sometimes seen my mongodb swap from time to time even when there's lots of ram available
[20:18:13] <agenteo> is there a way to run an order by FIELD in mongodb? ie. order by array field cars with this weight [‘bmw’, ‘audi’, ‘fiat’, ‘ford’]
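Servers of this era have no ORDER BY FIELD equivalent (later releases, 3.4+, can do it server-side with $indexOfArray in an aggregation), so one option is sorting client-side on the value's position in the weight list. A sketch with made-up documents:

```javascript
// sort documents by the position of their field value in a weight list
const weights = ['bmw', 'audi', 'fiat', 'ford'];
const docs = [{ car: 'fiat' }, { car: 'ford' }, { car: 'bmw' }, { car: 'audi' }];

docs.sort((a, b) => weights.indexOf(a.car) - weights.indexOf(b.car));

console.log(docs.map(d => d.car)); // → [ 'bmw', 'audi', 'fiat', 'ford' ]
```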
[20:18:24] <krisfremen> and that swap usage stays the same until mongodb is restarted, so I'm not quite sure
[21:44:53] <Mmike> Hi, lads. Just a confirmation - I have a 3-node replicaset. One node dies. I'm now running on a degraded 2-node replica set. If another node fails (regardless of which one), my remaining node is read-only?
[21:47:52] <joannac> yes
[21:53:08] <Naeblis> Is the order of query result that has sort by creationtime + limit guaranteed? Because I'm getting weird results.
[21:53:59] <joannac> Naeblis: yes. define "weird results"
[21:55:11] <Mmike> joannac, if I have degraded cluster (2 units, pri and sec), and I do 'rs.stepDown()' on pri, I end up with two secondaries. How do I recover from that?
[21:55:50] <Naeblis> I populate an embedded field and provide { sort: { creationtime: -1 }, limit: 3 } as options. I later filter items out based on the values in this embedded field collection. Only, the filter's result is non-deterministic when I keep limit option, and works fine if I comment limit out.
[21:56:28] <joannac> Mmike: be more patient? or look at the logs to figure out why no primary is elected
[21:56:48] <Mmike> joannac, ack! :) patience is a virtue :)
[21:57:05] <Mmike> I got new primary, was just... hm, dunno the word :)
[21:57:07] <Mmike> joannac, thnx
[21:58:33] <joannac> Naeblis: so what you're saying is, if you don't have any new writes, and you run the same query with .sort({ creationtime: -1 }).limit(3), you get different results?
[21:58:58] <Naeblis> Yes.
[21:59:15] <Naeblis> no writes
[21:59:26] <joannac> hrm.
[21:59:45] <joannac> demonstration?
[22:00:00] <Naeblis> lemme paste some code
[22:00:23] <joannac> db.blah.find({}, {creationtime:1}).sort({ creationtime: -1 }).limit(3).toArray()
[22:02:34] <Naeblis> hm the query you provided works fine joannac.
[22:02:42] <vacho> hoping someone can take a look at this and tell me why this does not update? http://pastie.org/9786880
[22:02:59] <Naeblis> I am however, providing these options to a .populate() call (mongoose). Maybe that makes a difference...
[22:03:09] <Mmike> joannac, a quick one: let's say that I ended up with only one mongodb unit in my replicaset, which is now 'secondary'. The rest of the units are disintegrated. I'm adding new node, but I can't join it to secondary. How do I convert existing secondary to become primary?
[22:05:03] <joannac> Mmike: um, what?
[22:05:18] <joannac> Mmike: you add another node, making 4. 2/4 is not a majority. No primary.
[22:05:46] <Mmike> Hm, sorry, was clumsy in explaining. Trying again:
[22:05:52] <joannac> Mmike: do you want to get rid of the other 2 nodes?
[22:06:02] <modulus^> joannac: what is priority/votecount for arbiter only?
[22:06:03] <joannac> if so, reconfig them out with force:true
[22:06:32] <joannac> modulus^: what? priority for an arbiter is irrelevant, it can't be primary. votecount is 1
[22:06:45] <modulus^> joannac: what is the setting?
[22:07:02] <joannac> Naeblis: hrm not sure.
[22:07:04] <Mmike> I have healthy 3unit replicaset. Then I experience severe hw failure on two nodes, and I'm left with only one, which is 'secondary'. As I can't bring any of the dead nodes up, I need to bring new hardware. How do I convert the remaining secondary to become primary so that I can add fresh nodes to it?
[22:07:27] <Naeblis> joannac: some code, what I'm trying to do: https://bpaste.net/show/e92d7b37e22d
[22:07:33] <joannac> Mmike: what setting? there is no setting. arbiters in the config just have "arbiterOnly: true", you don't need to set priority or votes
[22:07:56] <joannac> Mmike: reconfig and remove the 2 dead nodes. with force:true
[22:08:04] <joannac> crap
[22:08:08] <modulus^> joannac: is that a 2.6.x config setting? i've never seen arbiterOnly for 2.4
[22:08:49] <joannac> modulus^: http://docs.mongodb.org/v2.4/reference/replica-configuration/#example-configuration-document
[22:08:53] <joannac> third option down
[22:09:57] <modulus^> no wonder i never saw that
[22:10:04] <modulus^> options are not explicitly shown
[22:11:15] <modulus^> joannac: is there any difference between using arbiterOnly:1 and settings priority:0, hidden: true ?
[22:11:51] <modulus^> besides the actual data being replicated?
[22:12:54] <SpNg> any suggestions on a hosted solution for mongodb?
[22:14:35] <jsonperl> SpNg try compose
[22:15:06] <jsonperl> I'm trying to index a collection so I can grab docs that have elements in an array
[22:15:12] <jsonperl> db.players.find({zone_id : ObjectId('535ec392c6ff45fb0ba8b67c'), 'messages.0' : { $exists : true } }).hint({zone_id : 1, 'messages.0' : 1}).explain()
[22:15:42] <jsonperl> no matter what I do, it winds up scanning way too many items, even though I've indexed on zone_id and messages.0
[22:15:46] <jsonperl> any ideas?
[22:16:14] <jsonperl> "n" : 28, "nscannedObjects" : 43390
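One common workaround for this pattern (not necessarily what jsonperl ended up doing; playerId, msg and zid are placeholder variables) is to maintain an explicitly indexed flag instead of testing 'messages.0' with $exists:

```javascript
// keep a boolean in sync with the array on every write...
db.players.update(
  { _id: playerId },
  { $push: { messages: msg }, $set: { has_messages: true } }
)

// ...index it alongside zone_id...
db.players.ensureIndex({ zone_id: 1, has_messages: 1 })

// ...and query the flag rather than the array's 0th element
db.players.find({ zone_id: zid, has_messages: true })
```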
[22:18:11] <joannac> modulus^: no replication
[22:18:47] <modulus^> joannac: arbiter does not replicate?
[22:18:50] <joannac> jsonperl: tsk tsk. we had a discussion similar to this a couple days ago. look in the logs
[22:18:54] <joannac> modulus^: correct
[22:19:07] <modulus^> so hidden secondary is a replicating arbiter
[22:19:17] <modulus^> i'm just pure logic here
[22:19:21] <modulus^> i have no clue the internals
[22:21:32] <jsonperl> @joannac can you point me in the right direction?
[22:24:16] <joannac> modulus^: not the right way to think about it, really. but if it helps you, sure.
[22:26:54] <Ymesio> Hi all, I have a file which I load like this: mongo file.js
[22:27:02] <modulus^> joannac: is it because my thought is too abstract?
[22:27:14] <Ymesio> How do I get json file contents which is in the same folder?
[22:27:17] <jsonperl> allright, sweet thanks
[22:36:30] <vacho> why am I getting a syntax error here? http://pastie.org/9787133
[22:37:22] <vacho> kali: are you around?
[22:39:05] <jsonperl> does anyone know the proper way to index for an empty OR non-existent array
[22:39:41] <vacho> not that much user activity here.
[22:39:43] <modulus^> jsonperl: that sounds superfluous
[22:40:09] <jsonperl> I attempted to index with the 0th element, ensureIndex( {other_thing : 1, 'item.0' : 1} )
[22:41:06] <Ymesio> Anybody, can I read a json file from a mongo script?
[22:41:17] <jsonperl> but 'item.0' : { $exists : true }, while it does work, does not hit the index
[22:41:40] <modulus^> jsonperl: what happens if you remove the $exists: true ??
[22:42:17] <jsonperl> and just skip the field entirely?
[22:42:53] <modulus^> i'm curious what the nreturned is if you exclude $exists: true
[22:43:21] <jsonperl> 82869
[22:43:23] <vacho> would love some help on my paste: http://pastie.org/9787133
[22:43:30] <jsonperl> vs 41
[22:44:00] <modulus^> have you tried excluding other fields?
[22:44:21] <modulus^> like otherfield: 0
[22:44:38] <modulus^> i'm no json expert
[22:44:43] <modulus^> or mongo for that matter
[22:44:47] <jsonperl> ha
[22:45:20] <modulus^> but i know a thing or two about data
[22:45:29] <modulus^> seems to me you could narrow your search down
[22:45:58] <jsonperl> there's only 41 items
[22:46:00] <jsonperl> that's pretty narrow
[22:46:13] <jsonperl> my issue isn't filtering... it's that it doesn't use the index
[22:46:17] <modulus^> oh you want to reduce scanned?
[22:46:20] <jsonperl> right
[22:46:54] <jsonperl> like what SHOULD my index be for that query
[22:47:04] <modulus^> are you sure you indexed
[22:47:05] <modulus^> ?
[22:47:06] <jsonperl> ensureIndex( {other_thing : 1, 'item.0' : 1} seemed correct to me
[22:47:07] <jsonperl> yes
[22:47:35] <modulus^> show me db.collection.getIndexes()
[22:48:07] <jsonperl> http://pastie.org/9787150
[22:48:24] <modulus^> jsonperl: what's the nscanned if you reverse the item.0:1 other_thing:1 ?
[22:51:48] <jsonperl> modulus^ same nscanned
[22:51:55] <modulus^> your keys look good to me
[22:52:12] <modulus^> something odd though
[22:52:24] <modulus^> you don't have a key: { "_id": 1 }
[22:52:28] <modulus^> why is that?
[22:52:36] <jsonperl> on the index?
[22:52:53] <modulus^> on for your deepworld.players collection
[22:53:06] <modulus^> all my collections have a "_id" key
[22:53:08] <jsonperl> i didn't give you the whole getIndex() dump
[22:53:13] <modulus^> oh
[22:53:17] <jsonperl> just the bits we care about here
[22:53:19] <modulus^> ok
[22:54:00] <modulus^> try unique: true
[22:54:06] <modulus^> for your compound key
[22:54:06] <jsonperl> ha
[22:54:14] <modulus^> that'll do something
[22:54:18] <jsonperl> thanks modulus^ i think we're barking up the wrong tree
[22:55:14] <modulus^> my compound index keys are unique
[22:57:22] <modulus^> jsonperl: all the docs say sharding is good
[22:57:34] <jsonperl> sound good, I'll go shard
[22:57:48] <jsonperl> having fun?
[22:58:31] <modulus^> of course i have a compound index with 5 keys
[22:59:22] <modulus^> so my find()'s are a lot more narrowed down
[23:16:06] <vacho> I want to store data about TV Shows and their seasons, episodes and talents... e.g. TV Show -> Season -> Episode -> Talent... I feel nesting the data would be the obvious choice? what do you guys think?
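A sketch of the embedded shape vacho describes (field names invented). Embedding seasons and episodes is reasonable while a show stays well under the 16MB document limit; talent is probably better referenced by id, since the same people recur across shows:

```javascript
// one document per show; seasons and episodes embedded,
// talent stored by reference so it can be shared between shows
const show = {
  title: 'Example Show',
  seasons: [
    {
      number: 1,
      episodes: [
        { number: 1, title: 'Pilot', talentIds: ['talent-1', 'talent-2'] },
        { number: 2, title: 'Part Two', talentIds: ['talent-1'] }
      ]
    }
  ]
};

console.log(show.seasons[0].episodes.length); // → 2
```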
[23:22:26] <slajax> Has anyone had success with mongoose + wired tiger? AFAICT, mongoose doesn't support the new auth hash yet. Can anyone confirm?
[23:23:52] <mod^> what's the default if not wired tiger?
[23:23:56] <mod^> plain' ol' b-tree?
[23:24:17] <slajax> I think in 2.6 auth was generated with MONGO-CR hashes
[23:24:34] <slajax> I forget what I got as a default when switching storage engines, but it was SHA something or other.
[23:24:55] <slajax> Mongoose couldn't figure it out and I couldn't grep a way to force wired tiger to use MONGO-CR
[23:25:07] <slajax> happy to be wrong though if anyone has any info on it.
[23:26:19] <slajax> I'm talking about db.addUser explicitly and the auth hash it generates, not the entire storageEngine itself btw
[23:27:07] <slajax> err db.createUser rather
[23:31:07] <joannac> sokr: i think expecting mongoose to support a version that isn't even GA yet is asking a bit much. You might get a better answer in #mongoosejs though
[23:31:17] <sokr> O_o
[23:31:23] <joannac> ops, not you sokr
[23:31:27] <sokr> :)
[23:31:28] <joannac> that was for slajax
[23:31:31] <joannac> >.<
[23:31:48] <mod^> will wired tiger support be GA before mongoworld?
[23:32:34] <mod^> cause i'd like to hear more about wired tiger after it's GA
[23:37:06] <slajax> @joannac - I'm totally fine if they don't support it yet, I get that.
[23:37:28] <slajax> I'm just trying to confirm I'm seeing things correctly.
[23:37:57] <slajax> Sounds like I am, so thank you!
[23:49:48] <vacho> someone care to take a stab at this? http://stackoverflow.com/questions/27536820/whats-a-good-data-model-in-mongo-for-my-data