#mongodb logs for Tuesday the 24th of March, 2015

[01:04:08] <Kota> hey i was in kind of late this morning and no one was around, but i'm having an issue with a duplicate index on a collection. http://puu.sh/gN3tr/7ff013951f.png i've got two _id_ indexes ascending on _id and I can't get rid of either of them
[01:10:42] <Boomtime> Kota: can you check that via the mongo shell please?
[01:33:11] <Kota> Boomtime: confirmed via shell http://puu.sh/gN5xP/3a971f544c.png
[01:33:30] <Kota> field order variance, but otherwise identical
[01:43:15] <Kota> okay that's just messed up... i attempted to make another index, with a different name and different field and it duplicated my _id_ index yet again
[01:43:43] <Kota> i give up. indexes just hate me today http://puu.sh/gN6ik/b828469d0b.png
[01:53:50] <Kota> { "nIndexesWas" : 3, "ok" : 0, "errmsg" : "may not delete _id index" }
[01:53:52] <Kota> that's hitler
[01:59:35] <Boomtime> Kota: can you do a db.version()
[01:59:56] <Kota> 2.4.10
[02:01:16] <Kota> Boomtime: ^
[02:02:33] <Boomtime> try db.system.indexes.find()
[02:02:48] <Boomtime> i assume before you were doing db.showIndexes()
[02:06:25] <Kota> { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "yhr.chat_pms" } { "v" : 1, "name" : "_id_", "key" : { "_id" : 1 }, "ns" : "yhr.chat_pms" } { "v" : 1, "name" : "_id_", "key" : { "_id" : 1 }, "ns" : "yhr.chat_pms" }
[02:06:53] <Kota> snipped for brevity, since there are around 60 indexes
[02:08:20] <Kota> well, 75 to be exact lol
[02:12:06] <Boomtime> Kota: good now please try (copy this exactly): db.system.indexes.find({name:"_id_",ns:"yhr.chat_pms"})
[02:12:31] <Boomtime> I know this looks pointless, please try it anyway
[02:13:29] <Kota> Boomtime: http://hastebin.com/atitavakar.json
[02:14:00] <Boomtime> that is special..
[02:14:15] <Kota> oh?
[02:15:51] <Kota> i'm inclined to just move it to another collection and drop this one
[02:16:14] <Kota> ive tried all manner of dropping, rebuilding to no avail
[02:17:13] <Boomtime> would you be amenable to supplying a copy of the bson metadata file after a dump?
[02:18:20] <joannac> yup, just mongodump the database, and supply the system.indexes.bson file
[02:18:45] <Kota> please hold
[02:18:46] <joannac> it will only show indexes, no data
[02:18:58] <joannac> index definitions*
[02:19:10] <Kota> i figured, unless you want the 1.9 million documents that are in there
[02:19:25] <joannac> no thanks :)
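A minimal sketch of the dump step joannac describes, using the database name from the conversation (on 2.4, dumping the whole database produces the index-definitions file; flags are standard mongodump options):

    # dumping the whole database yields dump/yhr/system.indexes.bson
    mongodump --db yhr
    # dumping a single collection only yields chat_pms.bson plus a JSON metadata file
    mongodump --db yhr --collection chat_pms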
[02:23:23] <Kota> joannac, Boomtime: http://hastebin.com/raw/erujusocay
[02:27:58] <joannac> Kota: you should get a bson file, that you need to upload somewhere (or I can PM you an email address)
[02:29:00] <Kota> the actual data is a bson file but the metadata is json
[02:29:04] <Kota> that was the content of
[02:29:30] <Kota> Mon Mar 23 19:21:42.132 DATABASE: yhr to dump/yhr Mon Mar 23 19:21:42.133 yhr.chat_pms to dump/yhr/chat_pms.bson Mon Mar 23 19:21:44.302 2064704 objects Mon Mar 23 19:21:44.302 Metadata for yhr.chat_pms to dump/yhr/chat_pms.metadata.json
[02:29:57] <joannac> you don't have a system.indexes.bson file?
[02:30:16] <Kota> oh i only exported that collection. one moment
[02:35:10] <Kota> joannac: i do not have an upload site handy, however I can send via email
[02:35:55] <joannac> sure, let me PM you my email
[02:48:20] <CyanoIntrant> I'm pretty new to NoSQL, anyone have any comments on how https://www.reddit.com/r/SQL/comments/302g9z/ would work in it?
[04:03:39] <Boomtime> Kota: FYI, all three _id indexes are missing the unique flag - that collection is trouble - you should stop using it asap
[04:08:09] <Kota> "get outta my sight" lol
[04:08:24] <Kota> anywho.
[04:09:20] <joannac> Kota: if you could reproduce, that would be great; i haven't been able to reproduce so far
[04:09:41] <Kota> as i said, after i restarted mongod i wasn't able to make any more duplicates
[04:09:47] <joannac> hrm
[04:10:20] <Kota> looks like we have a heisenbug
[04:30:57] <a|3x> i get an error 'Error parsing INI config file: unknown option sslPEMKeyFile'
[04:31:40] <Boomtime> what version?
[04:31:53] <a|3x> 2.6.3
[04:32:01] <Boomtime> right, so SSL is enterprise only
[04:32:42] <Boomtime> if you have a support contract you should have been given a way to download the enterprise version
[04:33:29] <a|3x> hm, i don't believe i have a support contract
[04:34:09] <Boomtime> well, you can upgrade to 3.0 instead
[04:34:16] <joannac> or you can try and compile in ssl yourself
[04:34:48] <a|3x> 3.0 comes with ssl in the main repository?
[04:35:15] <Boomtime> i think so, i could be wrong
[04:36:11] <Boomtime> http://docs.mongodb.org/manual/release-notes/3.0/#distributions
[04:36:31] <Boomtime> pretty sure that means 3.0 comes with SSL out of the box
[04:36:53] <joannac> if you go to http://www.mongodb.org/downloads and pick your distro, it'll tell you whether it does or not
[04:43:47] <a|3x> ok, thanks for your help
[04:51:15] <Kota> joannac: what is the permission for using copyTo on 2.4?
[04:51:42] <Kota> the 3.0 docs say to grant anyAction on anyResource, but i don't know if 2.4 supports grants on stuff
[04:53:14] <Kota> the 2.4 docs don't say anything about perms
[04:54:23] <joannac> Kota: http://docs.mongodb.org/v2.4/reference/method/db.collection.copyTo/
[04:54:29] <joannac> Kota: http://docs.mongodb.org/v2.4/reference/user-privileges/
[04:55:15] <Kota> yes i am at the first one, and the second one has no listings for copyTo
[04:55:16] <joannac> copyTo uses eval. eval requires a bunch of stuff as documented in the second link
[04:55:35] <Kota> oic
[04:57:40] <Kota> this will be so much easier when i update to 3.0 @_@
[04:58:09] <Kota> could have sworn the daemon was updated when we moved datacenters
[04:58:12] <Kota> meh
[04:58:44] <joannac> erm, for a start, robomongo doesn't support 3.0 yet
[04:58:48] <joannac> (afaik)
[04:59:19] <Kota> i can live with the shell lool
[04:59:22] <joannac> in general, don't upgrade blindly
[04:59:35] <Kota> i assure you this will not be taken lightly
[04:59:46] <Kota> ill probably work on it with my two sysadmins all night
[05:00:51] <Kota> "i don't always test new updates, but when I do it's done in production"
[05:00:55] <Kota> hehe
[05:03:15] <Kota> inb4 everyone gets the idea i'm a nutter, i'm only joking.
[05:16:24] <preaction> inb4 OP can't inb4
[05:17:03] <Kota> inb4 my inb4 was impeccable
[05:36:30] <GothAlice> Kota: We're all nutters here. Care for some tea? CHANGE PLACES!
[05:36:47] <Kota> :D
[05:37:30] <GothAlice> … or would that be … collection? *rimshot*
[05:40:16] <Kota> well. i guess querying and bulk inserting in python is faster than copyTo
[05:40:23] <GothAlice> Kota: Can be.
[05:40:26] <GothAlice> Let me gist something for you.
[05:40:30] <Kota> no, like WAY faster
[05:41:06] <Kota> i did a test copyTo but stopped it mid copy after about a minute, it had copied half of my collection
[05:41:21] <Kota> i just did a bulk insert with python and the entire database copied in around 30 seconds
[05:41:49] <Kota> 2065649 documents according to count()
[05:41:58] <GothAlice> What's your write concern level?
[05:42:08] <Kota> default
[05:42:17] <Kota> python also didn't block the database ;)
[05:42:25] <GothAlice> Weeeel.
[05:42:43] <GothAlice> It would have blocked for short bursts: specifically, if you truly were using bulk operations, at the bulk operation size limit.
[05:43:59] <GothAlice> Kota: https://gist.github.com/amcgregor/ec1f041ea33b1b4d7ea9
[05:44:59] <Kota> GothAlice: https://gist.github.com/dabaer/e21f44ea7a5152525fd9 was what i used
[05:45:20] <GothAlice> Using futures, have some easy multiprocess parallelization. What isn't covered by the timings at the top is the resource utilization. Single-process was using ~33% of one core, ~11% on the mongod core. Four-process was using ~80% of each of four cores, 60% of one mongod core. (Much better utilization.)
[05:45:44] <GothAlice> Hmm, that'd be why you're not blocking at all. XD Each operation is quick, and you're not doing any batching at all.
[05:46:23] <GothAlice> (I really ought to be doing proper batching, but, eh… I'm lazy, and this is a rarely-run benchmark I use to test WiredTiger.)
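For comparison, a rough sketch of a properly batched copy in the mongo shell (collection names are illustrative; passing an array to insert() requires a 2.6+ shell):

    var batch = [];
    db.chat_pms.find().forEach(function (doc) {
        batch.push(doc);
        if (batch.length === 1000) {    // flush periodically to stay well under the 16 MB message limit
            db.chat_pms_copy.insert(batch);
            batch = [];
        }
    });
    if (batch.length > 0) db.chat_pms_copy.insert(batch);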
[05:46:43] <Kota> oh dear.
[05:47:04] <Kota> my indexes are even more broken
[05:47:33] <Kota> does renaming a collection move its indexes...?
[05:47:46] <GothAlice> Yes.
[05:48:34] <Kota> i copied the data to the new collection, renamed the broken-index collection, renamed the new collection to what the broken one was called before, and now the renamed broken collection has two duplicate indexes for _id and the new collection has all of the broken collection's old indexes...
[05:48:34] <GothAlice> If the process is aborted part-way through for any reason, though, you'll understandably need to repair the indexes.
[05:49:02] <Kota> no i deleted the collection that resulted from copyTo()
[05:49:05] <GothAlice> Sounds like something somewhere is adding indexes automatically. Any ensure_index calls as import-time side effects?
[05:49:23] <Kota> nope
[05:49:26] <Kota> you've seen the code
[05:49:29] <Kota> that's all there is
[05:49:42] <GothAlice> Ah, so you weren't spinning up a higher-level application to examine the result, then.
[05:49:42] <Kota> i brought the application down before copying
[05:50:05] <Kota> http://puu.sh/gNkFf/21b83c8ca3.png
[05:50:12] <Kota> well the broken collection is slightly less broken i guess
[05:50:18] <Kota> \o.o/
[05:50:35] <GothAlice> …
[05:50:37] <GothAlice> wau.
[05:50:52] <Kota> i already sent joannac the bson of my indexes to confirm that they are indeed duplcates...
[05:51:24] <GothAlice> That is a curious problem indeed. ^_^
[05:51:53] <Kota> joannac: none of my copy operations were done through mongo, so its something up with the daemon? or possibly the database itself
[05:51:58] <Kota> robomongo rather
[05:52:30] <GothAlice> Kota: Could you repeat the drop/re-copy and only check the result of db.collection.getIndexes() from the mongo shell?
[05:52:47] <Kota> yer one moment
[05:53:16] <GothAlice> (Heh, checking some of mine to make sure duplicate _id_ isn't actually normal, I notice that none of mine are named anything useful: t_p_1_t_o_1__id_1. wut.)
[05:53:44] <Kota> i'm going to close mongo and just use shell for the entire process
[05:53:49] <Kota> robomongo even
[05:54:00] <GothAlice> Autocorrect? XP
[05:54:06] <Kota> no, brain
[05:54:21] <GothAlice> ^_^ First-party autocorrect. Even more diabolical.
[05:59:28] <Kota> alright. http://hastebin.com/ixiwajahah.json - copied the collection via python. mid-process index snapshot
[06:00:21] <GothAlice> Yeah, the Python you've got certainly wouldn't be adding that index.
[06:00:28] <GothAlice> … silly GUI tools.
[06:02:14] <Kota> http://hastebin.com/isijuyuqif.json - broken collection pre-rename snapshot
[06:02:43] <GothAlice> I don't even. XP
[06:02:47] <Kota> http://hastebin.com/atihuyexad.json - post rename snapshot...
[06:03:09] <GothAlice> You lost one just renaming the collection?
[06:03:26] <winem_> hi all. I guess some of you will already have passed the m202 course (advanced deployments and operations). can you tell me if MMS is only part of the material for the first week, or for all weeks?
[06:03:27] <Kota> yup
[06:03:29] <Kota> ALSO
[06:04:03] <GothAlice> Kota: MongoDB version?
[06:04:14] <Kota> i thought this was just a normal okay message, but renaming the new collection did not produce it; that copy on the broken collection returned this error: http://hastebin.com/ucamevudef
[06:04:18] <Kota> 2.4.10
[06:04:23] <GothAlice> T_T
[06:04:32] <Kota> the copy was successful however
[06:04:45] <GothAlice> Are you absolutely, like, hashing the BSON documents, certain of that?
[06:05:04] <Kota> ?
[06:05:20] <Kota> the data is all intact
[06:05:25] <GothAlice> Number of records alone isn't sufficient to say, this data is the same as this other data. ;)
[06:06:05] <Kota> i haven't verified every record obviously but at first glance the data is what should be there
[06:06:08] <GothAlice> That type of namespace error is a "bad sign" and warrants more scrutiny, and/or a --repair.
[06:07:06] <Kota> what should my next course of action be?
[06:07:47] <winem_> GothAlice, please tell me more about hashing the BSON documents. I would expect issues with timestamps or the id objects. I will wait until Kota is happy and knows his issues are fixed :)
[06:08:12] <Boomtime> Kota: what version did you say this was?
[06:08:19] <Kota> 2.4.10
[06:08:21] <GothAlice> MongoDB out of the box doesn't validate the wholesale integrity of every BSON value it's handed. (Some things, such as the well-formedness of records to insert, it takes on faith unless an option is given to explicitly check.) This means it's fully possible to do some _really_ crazy things with your BSON documents, either on purpose, or accidentally.
[06:08:21] <Boomtime> oh, 2.4.10 nm
[06:08:32] <Boomtime> yep, thanks, i thought 2.4.6 for some reason
[06:09:13] <GothAlice> For example, one could construct a single document with multiple _id values. (Due to the way the document is stored, repeated keys are technically allowable.)
[06:09:13] <Kota> is there a validator i can run it through?
[06:10:21] <GothAlice> Ah! Never mind, it _was_ 2.4 where they fixed that.
[06:10:30] <GothAlice> Thought it was 2.6. (They changed the default, apparently in 2.4, to check-by-default.)
[06:10:39] <GothAlice> Ref: http://docs.mongodb.org/v2.4/reference/configuration-options/#objcheck
[06:10:41] <Kota> well for now i'm going to update the application component that uses that table, since the new collection is fine, only when i rename it to that old name does it screw up
[06:11:02] <Kota> somethings sticky with the indexes
[06:12:43] <Boomtime> indeed, that is what that assertion is checking
[06:12:48] <Boomtime> https://github.com/mongodb/mongo/blob/r2.4.10/src/mongo/db/namespace_details.cpp#L924
[06:13:08] <GothAlice> Huzzah. I assume the failing assertion means a record wasn't returned?
[06:13:21] <GothAlice> Oh, never mind, that's just on rename.
[06:13:30] <Boomtime> somehow, despite the collection not being there, the index assertion of 'not present in destination' is failing.. meaning it trips over something that shouldn't exist
[06:13:58] <Boomtime> when you delete a collection, apparently the _id index is not being deleted with it
[06:14:10] <Boomtime> this is most creepy
[06:14:51] <GothAlice> If there's multiple and an unhandled "multiple documents returned" (or equivalent) exception in the loop (by name?) to clear the indexes…? Fair number of assumptions in that hypothesis.
[06:17:52] <Kota> so how do i get rid of 'em?
[06:18:01] <Kota> this is seriously disrupting my mojo
[06:18:25] <GothAlice> Kota: Would now be an opportune time to upgrade to a (supported) version of the DB? :D
[06:18:30] <Boomtime> i don't think we know what will work at this point
[06:18:45] <GothAlice> (I.e. attempt to export, then import, sans indexes, into a newer cluster. Then re-create the "good" indexes.)
[06:18:47] <Boomtime> you are in an impossible state, i would certainly advise dumping out your data
[06:18:53] <GothAlice> Er, dump/restore.
[06:18:54] <Kota> i was attempting to fix that issue *before* upgrading lol
[06:18:59] <Tusker> heya guys, was wondering if pymongo 2.7.2 can authenticate on mongo 3.0.1 ?
[06:19:13] <Tusker> so far I am getting auth failed
[06:19:37] <GothAlice> Tusker: http://docs.mongodb.org/manual/release-notes/3.0-compatibility/#driver-compatibility-changes
[06:19:52] <GothAlice> Tusker: Also, see the rest of the page, notably the authentication section. (It's all useful to know, though.)
[06:20:01] <Kota> what would you advise so that i don't risk importing the corrupted indexes
[06:20:10] <GothAlice> Kota: Don't dump the indexes. :)
[06:20:16] <Tusker> GothAlice: thanks, much appreciated
[06:20:39] <Kota> woo this is gonna be fun.
[06:20:49] <Kota> did i mention my database has 75b indexes?
[06:20:57] <GothAlice> wat
[06:21:06] <Kota> 75 even
[06:21:11] <GothAlice> Phew.
[06:21:20] <GothAlice> Okay, thought you meant 'b' with a 'billion'. XP
[06:21:32] <Kota> lol
[06:21:36] <GothAlice> (That would have required some rather interesting changes to mongod to achieve. ;)
[06:22:16] <Kota> i was gonna say
[06:29:08] <Tusker> GothAlice: OK, great, upgrading to 2.8 fixed it
[06:43:24] <Kota> so, in preparing for mongodump, how should i go about excluding the db indexes? system, System, or system.indexes?
[07:12:18] <girb1> Hi .. is there any way I can restrict IP based access to mongo router without IP tables
[07:12:18] <girb1> please help
[07:50:59] <joannac> girb1: bindIP option?
[07:54:04] <girb1> joannac: it won't work .. i need to allow only a certain set of IPs to access
[07:54:10] <girb1> I got this https://github.com/bakks/mongo-proxy
[07:54:21] <girb1> but its buggy .. need fix it
[08:02:57] <ZorgHCS> girb1: This is exactly why iptables exist, I don't understand why that's not an option?
[08:06:13] <girb1> ZorgHCS: I need to give only read access for a mongo router; all writes should be forbidden even though I have implemented user auth
[08:06:28] <girb1> same in what we do with postgres
[11:52:27] <lxsameer> Is there any work around to store version numbers as keys in the mongodb Hash ?
[11:53:08] <Derick> add an "a" in front of them
[11:53:23] <Derick> top level fields really need to be strings
[11:53:35] <Derick> so you need to make it a string
[11:53:44] <Derick> but then, "1.4.2" is a string too
[11:54:38] <lxsameer> Derick: but it seems dots are not allowed in key names
[11:54:52] <Derick> oh, yes, that's true
[11:55:06] <Derick> then again, you are using a *value* as a *key* there - which is not a good thing to do
[11:55:11] <Derick> it's better to store it like:
[11:55:20] <Derick> { "version": "1.4.2" }
[11:55:30] <Derick> or, if you have other properties:
[11:55:58] <Derick> { "version" : "1.4.2", meta: { "field1" : "foo" } }
[11:56:18] <lxsameer> Derick: that won't help my case. which one is faster to query, hashes or embedded documents?
[11:56:34] <Derick> lxsameer: you need to show your structure a bit better
[11:56:50] <Derick> a hash is an embedded document though, there is no difference...
[11:57:04] <lxsameer> Derick: Ah thanks my friend
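A small sketch of the shape Derick suggests, plus the embedded-document index lxsameer asks about just below (collection and field names are illustrative):

    db.packages.insert({ "version" : "1.4.2", "meta" : { "field1" : "foo" } })
    // indexes on embedded fields use dot notation
    db.packages.ensureIndex({ "meta.field1" : 1 })
    db.packages.find({ "version" : "1.4.2" })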
[11:59:28] <Stiffler> hello, could somebody help me to push array of tags to mongodb?
[11:59:31] <Stiffler> http://pastebin.com/JhZpyniM
[11:59:36] <Stiffler> this doesnt work
[11:59:46] <Stiffler> req.body.tag is array
[11:59:50] <Stiffler> and isnt empty
[12:03:32] <lxsameer> Derick: can I create indexes for embedded docs ?
[12:07:35] <StephenLynx> Stiffler
[12:07:41] <StephenLynx> remove that part "values"
[12:08:20] <StephenLynx> and you may have to put req.body.tags as '$req.body.tags'
[12:08:39] <StephenLynx> nevermind about that last part
[12:09:27] <Stiffler> i have removed
[12:09:39] <Stiffler> 'tags' : {$pushAll : '$req.body.tags'},
[12:09:40] <Derick> lxsameer: for embedded keys, yes
[12:09:43] <Stiffler> still doesnt work
[12:09:50] <lxsameer> Derick: thanks
[12:09:57] <StephenLynx> take out the '$'
[12:10:04] <StephenLynx> I was wrong about it
[12:10:16] <StephenLynx> you got an actual object called req, dont you?
[12:10:46] <StephenLynx> and you may need the $each
[12:13:12] <Stiffler> StephenLynx: yes I have got
[12:13:25] <Stiffler> ive checked it by console.log and its correct
[12:13:35] <Stiffler> its an array and its contain values
[12:13:36] <StephenLynx> if you use '$' you are saying that it is a field in the document you are working with
[12:13:40] <StephenLynx> so in your case, you don't use it.
[12:14:09] <Stiffler> hmm.. so how does this line 'tags' : {$pushAll : req.body.tag}, should look like?
[12:15:19] <StephenLynx> {$push:{values:{$each:req.body.tags}}}
[12:15:22] <StephenLynx> http://docs.mongodb.org/manual/reference/operator/update/push/#example-push-each
[12:15:30] <StephenLynx> you are updating, right?
[12:15:43] <StephenLynx> if you are inserting, just use values:req.body.tags
[12:16:41] <Stiffler> im inserting
[12:17:02] <Stiffler> so 'tags' : {$push:{values:req.body.tags}}, ?
[12:18:07] <Stiffler> still does not work
[12:18:58] <Stiffler> both versions doesnt work
[12:19:02] <Stiffler> with each and without
[12:19:53] <Stiffler> .constructor is array
[12:20:03] <Stiffler> I dont know whats going on
[12:24:07] <joannac> Stiffler: pastebin the entire line you have
[12:25:50] <joannac> Stiffler: like StephenLynx said, if you're doing an insert you don't need $push
[12:28:30] <Stiffler> hmm... let me see
[12:29:46] <Stiffler> http://pastebin.com/gR5ECHX2
[12:30:01] <Stiffler> first console.log says Array, second shows array with values
[12:30:06] <Stiffler> but still doesnt work
[12:34:11] <StephenLynx> read my last line.
[12:34:19] <StephenLynx> Stiffler
[12:34:26] <StephenLynx> and compare to what you just pasted
[12:35:50] <Stiffler> 'tags' : {$push:{values:req.body.tags}}, like this?
[12:40:04] <Stiffler> heh
[12:40:10] <Stiffler> it was too simple ;/
[12:40:22] <Stiffler> thanks a lot for brainstorm :)
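The distinction that resolved this, as a shell sketch (collection and field names are illustrative; someId is a placeholder):

    // inserting a new document: assign the array directly, no update operators
    db.posts.insert({ "tags" : ["foo", "bar"] })
    // appending to an existing document: $push with $each ($pushAll is deprecated)
    db.posts.update({ "_id" : someId }, { "$push" : { "tags" : { "$each" : ["baz"] } } })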
[13:22:59] <bbclover> hey guys, what gui tool do you recommend?
[13:59:15] <StephenLynx> none bbclover
[13:59:37] <StephenLynx> unlike relational db's, you don't have to keep track of much stuff in mongo.
[13:59:37] <bbclover> everything is from the console?
[13:59:46] <StephenLynx> if anything
[13:59:58] <StephenLynx> adding pretty() at the end of the query should provide enough visual aid.
[14:12:08] <ggoodman> Just had my production 3.0.1 primary crash with an OOM error. Anyone around that can help me diagnose the root cause and help me prevent this from happening again?
[14:35:02] <latestbot> I have referenced an id in an array in another document, how can I remove it in mongodb?
[14:35:38] <cheeser> http://docs.mongodb.org/manual/reference/operator/update/unset/
[14:41:24] <latestbot> Unset seems to remove the field completely
[14:42:37] <latestbot> Suppose I have a field like this in a document, “tags” : [ "550afb5971634aa331d010f3", "550afb6071634aa331d010f4", "550afb6a71634aa331d010f5" ], I just want to remove the first item say. What would I use?
[14:43:33] <Derick> '$unset' : { 'tags.0' : true } — ought to work
[14:52:20] <Alittlemurkling> Hello #mongodb, I'm attempting to construct a hierarchy of distinct medical license specialties. Doctors can have multiple licenses, so I store them in an array of subdocuments. It basically looks like this: {"_id": "1170", "licenses": [ { "specialtyCategory": "Internal Medicine", "specialtyName": "Sleep Medicine"}, {"specialtyCategory": "Pediatrics", "specialtyName": "Sleep Medicine"}] }
[14:53:40] <Alittlemurkling> I would like something that gives the same result as the distinct command, but have been unable to reproduce it with $aggregate.
[14:55:14] <latestbot> @Derick, will try this.
[14:55:45] <Alittlemurkling> So far, I have come up with this: db.provider.aggregate({$match: {"licenses.specialtyCategory": "A"}}, {$unwind: "$licenses"}, {$match: {"licenses.specialtyCategory": "A"}}, {$project: {"licenses.specialtyName": 1}})
[14:57:11] <d0x> Any Idea why I get a java.io.IOException: cannot find class com.mongodb.hadoop.mapred.BSONFileInputFormat with Hive on EMR using the Mongo-Hadoop Connector? This Bootstrap script i used: https://s3.eu-central-1.amazonaws.com/christian.emr/bootstrap.sh
[14:58:17] <Alittlemurkling> This is how I tried to use distinct: db.provider.distinct("licenses.specialtyName", {"licenses.specialtyCategory": "Pediatrics"})
[14:59:03] <Alittlemurkling> But that would return both specialtyNames (ah, if they were different. Pardon the simplicity of my example).
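The piece missing from the pipeline above is a $group stage, which is what gives the aggregation the same de-duplication as distinct; a sketch using Alittlemurkling's field names:

    db.provider.aggregate(
        { "$match"  : { "licenses.specialtyCategory" : "Pediatrics" } },
        { "$unwind" : "$licenses" },
        { "$match"  : { "licenses.specialtyCategory" : "Pediatrics" } },
        { "$group"  : { "_id" : "$licenses.specialtyName" } }
    )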
[15:08:59] <latestbot> Looks like in my case $pull is much better
[15:09:14] <latestbot> Since I am removing the items by their value
[15:11:09] <Derick> latestbot: yes, but that's not what you asked ;-)
[15:11:22] <latestbot> hehehe, sorry for the confusion
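What latestbot landed on, sketched out (the value comes from the earlier example; someId is a placeholder):

    // $pull removes matching values outright
    db.things.update({ "_id" : someId }, { "$pull" : { "tags" : "550afb5971634aa331d010f3" } })
    // note: $unset on "tags.0" would only null the slot, leaving [null, ...] behind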
[15:14:15] <StephenLynx> Alittlemurkling I think that mongo might not be the tool you need.
[15:15:36] <StephenLynx> on the first example, what collection is that?
[15:15:39] <Alittlemurkling> StephenLynx. Okay. I will just construct the relationship when I import the data.
[15:15:53] <Alittlemurkling> provider
[15:15:57] <StephenLynx> what is a provider?
[15:16:15] <Alittlemurkling> A doctor or a healthcare organization.
[15:16:53] <StephenLynx> and these specialties, their hierarchy is completely recursive or do they have a limit?
[15:17:48] <Alittlemurkling> The maximum depth is three levels, but I'm working with a client that only has two currently.
[15:18:26] <StephenLynx> in your model, do you have these three levels defined?
[15:19:02] <Alittlemurkling> No, just two right now. specialtyCategory and specialtyName.
[15:19:28] <StephenLynx> is the level of a specialty defined somewhere?
[15:19:43] <Alittlemurkling> No.
[15:19:54] <StephenLynx> so in your model it doesn't have a limit.
[15:19:59] <StephenLynx> only in the final business logic.
[15:20:01] <StephenLynx> right?
[15:20:18] <Alittlemurkling> Right. They're just two fields in the license object.
[15:20:40] <StephenLynx> yeah, this is a highly relational model. Mongo will be a very bad tool for this.
[15:20:58] <StephenLynx> I strongly suggest you to consider using a relational db.
[15:21:35] <Alittlemurkling> Nah. This is a cache for a search engine, and I can map this relation when I parse the data.
[15:22:59] <Alittlemurkling> And a relation of two things is not highly relational.
[15:23:32] <StephenLynx> hm
[15:23:36] <StephenLynx> ok, so its a cache
[15:23:50] <StephenLynx> and the problem is not the relation of two things
[15:23:55] <StephenLynx> but the relation of n things.
[15:24:14] <Alittlemurkling> 0 < n < 3 things.
[15:24:28] <StephenLynx> but your model doesn't define this limit.
[15:24:53] <Alittlemurkling> I do not dynamically add fields to this model.
[15:25:02] <Alittlemurkling> And the UI will only support three levels.
[15:25:04] <StephenLynx> in your model it could be a chain of 5 billion specialties.
[15:25:10] <Alittlemurkling> No.
[15:25:13] <Alittlemurkling> There is no chain.
[15:25:25] <StephenLynx> doesn't a specialty have a parent specialty?
[15:25:40] <StephenLynx> and this parent specialty is the same as the specialty and could have a parent of its own?
[15:25:52] <Alittlemurkling> No.
[15:26:42] <StephenLynx> so its two different objects, category and specialty?
[15:28:57] <StephenLynx> then the levels ARE defined.
[15:29:18] <StephenLynx> you told me they weren't :v
[15:29:43] <Alittlemurkling> There is a license document. It looks like this: { "specialtyCategory": "A", "specialtyName": "B", "subSpecialty": "C", "licenseNumber": "1", "licenseState": "CO" }
[15:30:07] <StephenLynx> ok
[15:30:12] <StephenLynx> and what do you need to query?
[15:32:36] <Alittlemurkling> Given a specialtyCategory, I want all of the distinct specialtyNames associated.
[15:33:09] <StephenLynx> It would be better to have a collection for these.
[15:33:26] <Alittlemurkling> But then that would be relational.
[15:33:28] <StephenLynx> a collection for categories, another one for specialties and another one for sub specialties
[15:33:45] <StephenLynx> a little, yes. but you would have to query only once for what you need anyway.
[15:34:01] <pamp> k
[15:34:04] <StephenLynx> using fake relations as long as you don't have to perform multiple queries is not an issue.
[15:34:18] <StephenLynx> and since your relations are n - 1
[15:34:30] <StephenLynx> n - n relations are bad though
[15:34:36] <StephenLynx> in some cases
[15:34:55] <Alittlemurkling> Okay. I guess this just means I will construct the relation at import.
[15:35:27] <StephenLynx> on second thought, any relation can be bad depending on the query.
[15:35:45] <Alittlemurkling> Yeah. Which is why I have been embedding everything.
[15:35:56] <pamp> Hi, I have a compound index {MoName: 1, "P.k": 1}. can I use the index prefix {MoName:1} in a query like db.test.find({MoName:"UtranCell"}).hint({MoName:1})
[15:36:11] <Derick> yup
[15:36:12] <pamp> ?
[15:36:19] <cheeser> what happened when you tried?
[15:36:30] <StephenLynx> yeah, if you would have to perform a join, then its bad.
[15:36:48] <StephenLynx> because then you would have to query for every relation in addition to your original query
[15:38:07] <Alittlemurkling> Indeed. Well, thanks for your help.
[15:38:20] <StephenLynx> np
[15:42:41] <pamp> bad hint
[15:43:35] <pamp> I cant use only the prefix of a compound index?
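A likely explanation of the "bad hint" error: the query planner can use a compound index for a query on its prefix automatically, but hint() must name an index that actually exists, by full key pattern or by name. A sketch using pamp's index:

    db.test.find({ "MoName" : "UtranCell" }).hint({ "MoName" : 1, "P.k" : 1 })   // full key pattern
    db.test.find({ "MoName" : "UtranCell" }).hint("MoName_1_P.k_1")              // or the index name
    // or omit the hint entirely and let the planner use the prefix on its own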
[16:39:33] <imachuchu> I have an older mongodb instance (~2.0) and am wondering if there is a way to backup/restore it to a more modern version keeping objectid relationships valid?
[16:40:46] <cheeser> why would they be otherwise?
[16:41:10] <cheeser> there's no real relationship as such. just common values that your app correlates.
[16:43:11] <imachuchu> cheeser: the dump format changed in 2.2 so importing from an older dump kind of works, but not really. I'm more wondering if there is a conversion script somewhere I'm missing
[16:43:40] <cheeser> you could just walk your way up through the upgrades...
[16:43:51] <cheeser> you don't need to dump/restore each time.
[16:44:45] <imachuchu> cheeser: so like install 2.1, then 2.2, then 2.3, etc?
[16:45:07] <cheeser> 2.2, 2.4, 2.6, 3.0
[16:47:01] <imachuchu> cheeser: IMO that seems a bit ugly, but I don't know what's actually changed internally so I guess that's what I can do
[16:47:04] <imachuchu> cheeser: thank you
[16:47:30] <cheeser> ugly? not really. it's safe.
[16:47:40] <StephenLynx> what I would do, install a VM with 3.0, migrate directly and fix whatever broke
[16:47:41] <cheeser> each step has upgrade checks and such.
[16:48:01] <StephenLynx> when you get everything that broke you can migrate your server
[17:51:23] <sellout> 3.0 Java driver is writing log messages … any way I can control that?
[17:53:20] <cheeser> configure logging in your app
[17:58:23] <sellout> cheeser: Sorry, I haven’t done much Java – I assumed I would need to get a reference to the Logger to do that.
[18:02:47] <cheeser> you'd configure in the logging config file for whatever logging framework your app uses.
[18:24:27] <god_phantom> love mongo
[18:27:41] <StephenLynx> :3
[18:28:02] <StephenLynx> yeah, its good, but don't board the hype train of fanboyism.
[18:28:26] <StephenLynx> it isn't a panacea
[18:38:28] <d4rklit3> hello
[18:39:07] <d4rklit3> I have a mongodatabase on compose.io and another one ij ust made on a dedicated server somewhere. Is it possible to clone the one from compose.io to my new one via the mongoshell on my dedicated host?
[18:40:15] <cheeser> mongodump/restore ?
[18:45:52] <d4rklit3> yeah so i can connect to a remote server and mongodump locally?
[18:46:12] <cheeser> you have --host and --port
[18:46:23] <d4rklit3> ok cool
[18:46:24] <d4rklit3> let me try this
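A sketch of the remote dump plus local restore cheeser describes (host, port, credentials, and database name are placeholders):

    mongodump --host example.compose.io --port 10123 -u user -p pass --db mydb
    mongorestore --db mydb dump/mydb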
[18:48:54] <elux> hello
[18:50:01] <elux> just wondering if i can safely upgrade from mongodb 2.6.9 to 3.0.1 ?
[18:52:29] <medmr1> only once you've determined which changes affect you and got out ahead of them
[18:53:59] <cheeser> yes, you can.
[20:06:31] <afroradiohead> Hey guys, if I'm reading the document correctly, upsert is a mixture of insert and update right?
[20:06:37] <afroradiohead> documentation*
[20:10:43] <cheeser> yes
[20:10:53] <cheeser> update if it's there, insert otherwise
[20:11:25] <afroradiohead> that makes sense
[20:11:41] <afroradiohead> is it based on unique indexes?
[20:12:47] <cheeser> no
[20:13:08] <cheeser> it's better if you use indexes, yes. but it's not predicated upon their existence
[20:15:00] <afroradiohead> ahh gotchya. it's based on _id?
[20:15:44] <cheeser> no. it's based on a query.
[20:15:51] <afroradiohead> I guess I was wondering if I can do an upsert request like "oh these indexed values are duplicates, let me just update this document"
[20:15:58] <afroradiohead> gotchya
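In other words, the query document doubles as the match criterion; a minimal upsert sketch (collection and field names are illustrative):

    // updates the matching document if one exists, inserts a new one otherwise
    db.counters.update({ "name" : "hits" }, { "$inc" : { "value" : 1 } }, { upsert: true })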
[20:20:49] <afroradiohead> one more question: is it possible to index based on a nested object?
[20:21:05] <afroradiohead> i mean, index a nested object's key?
[20:23:28] <StephenLynx> I believe so, but it will count as an index for the document that contains the nested object.
[20:24:15] <afroradiohead> yeah, that's cool with me
[20:24:22] <afroradiohead> cool thank you
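A quick sketch of that: an index on a nested object's key is declared with dot notation on the parent collection (names are illustrative):

    db.users.ensureIndex({ "profile.email" : 1 })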
[20:24:54] <afroradiohead> ahh it's so nice to be able to alter schemas so quickly
[20:24:56] <StephenLynx> np
[20:25:05] <StephenLynx> thats because theres no schema :v
[20:25:10] <afroradiohead> heh
[20:25:25] <StephenLynx> I would suggest not using an ODM or anything.
[20:25:35] <StephenLynx> they usually cause more issues than solve.
[20:25:49] <afroradiohead> ODM?
[20:25:55] <StephenLynx> mongoose and the likes
[20:26:41] <afroradiohead> oh definitely not, I've built my own structure. Mapping classes to collections
[20:28:21] <cheeser> ODMs are fine.
[20:28:29] <afroradiohead> working very well. But yeah that rest of the info was icing on the cake
[20:29:35] <btwotch1> Hi, I am trying to retrieve an object from the mongodb in go via a nested property: http://pastebin.com/tcn4tvtG - does anybody have some helpful comments? thx in advance
[20:30:51] <StephenLynx> use dot notation.
[20:31:11] <StephenLynx> find('something.otherthing' : value)
[20:31:19] <StephenLynx> forgot the {}
[20:33:04] <btwotch1> I am not sure what you mean as 'c.Find('tok.token': t)' does not compile :(
[20:36:27] <latestbot> Is it possible to store strings in Mongodb without double quotes
[20:36:28] <latestbot> ?
[20:36:33] <StephenLynx> latestbot no.
[20:36:51] <latestbot> okay
[20:36:59] <latestbot> Was just wondering
[20:37:06] <StephenLynx> btwotch1 try that in the terminal to see if it works.
[20:37:16] <StephenLynx> I don't know how you will have to query at your driver.
[20:37:37] <btwotch1> StephenLynx: the query from the cli is included in the pastebin
[20:37:53] <StephenLynx> latestbot mongo uses BSON that is very similar to JSON. most rules of javascript apply.
[20:38:19] <StephenLynx> btwotch1 I see. then you only have to figure your driver.
[20:38:22] <StephenLynx> nvm
[20:38:33] <StephenLynx> most rules of JSON apply.
[20:39:41] <btwotch1> I am able to retrieve the structure when it is non-nested - I just can't get it to make the query nested
[20:40:21] <btwotch1> e.g. I tried: c.Find(bson.M{"tok": bson.M{"token": t}});
[20:42:26] <StephenLynx> you need to use a string for the field when using dot notation.
[20:45:39] <btwotch1> ah, you mean: c.Find(bson.M{"tok.token": t}); - but then I cannot even find the query in strace
[20:45:57] <btwotch1> and it does not work either
[20:48:13] <btwotch1> found it: http://pastebin.com/7FwFyMUC
[20:49:26] <StephenLynx> can't you just read your driver documentation?
[20:51:33] <btwotch1> it has no documentation about nested queries: https://www.google.de/?gfe_rd=cr&ei=7c0RVdOGA6mF8QfbkIDICA&gws_rd=ssl#q=site:labix.org%2Fmgo+nest
[20:52:03] <cheeser> oh, mgo...
[20:53:46] <StephenLynx> nested queries?
[20:53:48] <StephenLynx> wait, wait
[20:53:52] <StephenLynx> that is not possible.
[20:53:55] <StephenLynx> afaik.
[20:54:05] <StephenLynx> you are looking for sub-document queries
[20:54:08] <StephenLynx> not nested queries
[20:56:38] <btwotch1> Ooooh
[20:57:27] <cheeser> correct
[20:58:07] <btwotch1> strace helped, I just stared a bit longer and saw that in the mgo query the '+' is missing: "iN+ZWEE"
[20:58:22] <btwotch1> Thank you guys!
[21:06:49] <Progz> Hello, is it safe to upgrade from 2.6.8 to 3.0.1 ? Everything is compatible ?
[21:21:47] <imachuchu> Progz: I personally have no idea, but a bit of googling led to this: http://docs.mongodb.org/v3.0/release-notes/3.0-upgrade/
[21:23:26] <Progz> thanks imachuchu, already read this :)
[21:24:59] <imachuchu> Progz: ahh, well, that's about all I can do to help, sorry
[21:28:34] <Progz> I thought MongoDB put indexes into RAM. I have one db with 23 indexes, for an index size of 8 GB, but my RAM consumed is 500 MB.
[21:28:43] <Progz> is it normal ?
[21:29:10] <Progz> I have no insert in this DB for the moment.
[21:34:32] <imachuchu> Progz: no, I think it's only best if the indexes fit into ram. If the db hasn't been queried for a while/at all since starting the server it's completely normal to have them not loaded yet
[21:35:38] <Progz> I have more than 1000 websites online. and I have a lot of request in my db. but nothing in ram
[21:38:31] <GothAlice> Progz: You might want to look at your "page fault" counts. If the number is high (which it very likely will be) then MongoDB can't efficiently utilize RAM because the kernel is constantly swapping out chunks behind-the-scenes. (This makes things very, very slow.)
[21:39:43] <GothAlice> Note that the majority of the memory MongoDB uses isn't allocated, it's memory-mapped on-disk files. These will typically show up not as "real" usage, but as "caches", because the kernel can freely move things around to free up the RAM if something else needs it.
[21:45:43] <Progz> ok GothAlice, so when I look at mongostat, the faults column shows numbers around 132. is that a lot?
[21:53:43] <GothAlice> Progz: What's the vsize being reported for you?
[21:54:02] <GothAlice> (res size would be the "working set" memory used to process queries, map/reduce, and manage connections, etc.)
[21:54:22] <GothAlice> Sorry, mapped, not vsize.
[21:56:56] <Progz> GothAlice: by mongostat ?
[21:57:10] <GothAlice> Aye.
[21:57:31] <Progz> GothAlice: http://pastebin.com/7RZyuBbD
[21:57:50] <GothAlice> :| How much RAM does that machine have?
[21:59:05] <Progz> http://pastebin.com/HKafqsue
[21:59:26] <GothAlice> Well, at these scales, it doesn't really matter how much RAM you have, it won't be enough. ;)
[21:59:26] <Progz> 30 GB of ram.
[21:59:40] <Progz> xd GothAlice
[22:00:02] <Progz> I need 186 GB ?
[22:00:36] <GothAlice> 30GB of RAM, assuming 10% overhead in general, means you have 14% as much RAM as you need for that dataset to fit entirely. What's the aggregate index size? (The size of all indexes across all collections?) Not sure the easiest way to get this information other than by getting db.stats() on each DB.
[22:01:06] <Progz> I only have 1 db (not counting the admin db)
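Getting those numbers from the shell, for reference (the collection name is illustrative):

    db.stats().indexSize                 // total index size for the current database, in bytes
    db.mycollection.stats().indexSizes   // per-index breakdown for one collection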
[22:01:27] <d4rklit3> hey
[22:01:36] <d4rklit3> on ubuntu how do i start the mongo service with --auth
[22:02:22] <GothAlice> Progz: Well then. The page faults are the kernel going out to disk to answer a query by MongoDB. This means it'll suffer amazingly bad performance, since you're going back to disk for potentially large chunks of data, in some cases, more than 200 times per second. You really, really, really want to invest in sharding at this point.
[22:02:51] <GothAlice> Ref: http://docs.mongodb.org/manual/core/sharding-introduction/
[22:03:19] <GothAlice> Notably also http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/ and http://docs.mongodb.org/manual/tutorial/choose-a-shard-key/ to ensure your data gets "evenly" spread.
[22:04:11] <GothAlice> This is MongoDB's official way to split data between multiple hosts in order to bring the per-host data set size below the RAM size of the individual hosts and regain optimal performance.
[22:04:32] <d4rklit3> nvm, i got it, it was in the conf file. however I can't seem to be able to connect to the server with auth.
[22:04:41] <d4rklit3> i set up a user on the db in question and those credentials don't work
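For the record, enabling auth in the config file looks like this (a sketch; the INI form is the older package style, the YAML form is what newer packages ship):

    # INI style (e.g. /etc/mongodb.conf)
    auth = true

    # YAML style (e.g. /etc/mongod.conf)
    security:
      authorization: enabled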
[22:04:45] <Progz> ok GothAlice I am following a mooc in the mongodb university. I was thinking about sharding but never try do configure one ^^
[22:05:07] <Progz> Thanks GothAlice ! I have some reading to do
[22:05:43] <Progz> GothAlice: result of db.stats => http://pastebin.com/LaGrdyZS
[22:06:00] <GothAlice> d4rklit3: Several potential issues: did you upgrade to 3.0 from any prior version? If your DB host is 3.0, are your client drivers up-to-date and compatible? (Both ref: http://docs.mongodb.org/manual/release-notes/3.0-compatibility/) Have you already added users? If not, you'll need to use the localhost exception to add the first user, which should be an admin. For this, ref: http://docs.mongodb.org/manual/tutorial/enable-authentication/
[22:06:16] <d4rklit3> i can connect fine if auth = false
[22:06:19] <d4rklit3> if i set it to true
[22:06:23] <d4rklit3> i can't use any of the users i created
[22:06:32] <Progz> I misunderstood the mongo documentation, I was thinking we could only put indexes in ram, not all data.
[22:06:37] <GothAlice> d4rklit3: Please review the links I have provided to ensure you aren't running into compatibility issues.
[22:06:46] <d4rklit3> its a fresh install of 3.0
[22:07:29] <GothAlice> Progz: Alas, that doesn't quite work. Think of it like a "most recently used" cache of file pages in RAM. Some of those pages will be indexes, sure, but to the kernel, it has no way other than frequency of access to determine if a page is "hot or not", so in some cases your indexes might get paged out to make room for other data being paged in.
[22:07:43] <GothAlice> Progz: Thus, low memory situations are potentially catastrophically bad.
[22:08:04] <GothAlice> d4rklit3: And what is your client driver and version?
[22:08:13] <d4rklit3> that im not sure actually
[22:08:13] <Progz> ok ok
[22:08:17] <d4rklit3> i think i know the issue
[22:08:22] <d4rklit3> reading the second link
[22:09:07] <GothAlice> d4rklit3: The other hint of mine was going to be "are you authenticating against the right database?" Since MongoDB users are local to a database, such as the "admin" DB, you may have to specify if you created them in the wrong one.
[22:09:29] <GothAlice> s/specify/specify when authenticating/
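A sketch of authenticating against the database the user was actually created in (host and credentials are placeholders):

    mongo db.example.com:27017/admin -u admin -p
    // or, from inside an already-open shell:
    db.getSiblingDB("admin").auth("admin", "secret")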
[22:10:06] <d4rklit3> i used robomongo with an ssh tunnel to create a user on my db
[22:10:19] <GothAlice> Please, for now, don't use RoboMongo with 3.0.
[22:10:47] <Progz> GothAlice: Arf... do you know another GUI client ?
[22:10:47] <d4rklit3> ruh roh!
[22:10:51] <GothAlice> In the last few days I've seen it doing terrible, terrible things, like creating duplicate indexes on _id, failing 3.0-style authentication, etc.
[22:11:36] <Progz> GothAlice: as of today robomongo doesn't display my indexes anymore xD
[22:12:19] <GothAlice> Seriously, what does everyone find so attractive about robomongo over a syntax highlighting shell? I find the "(0) {…}" list items to be purposefully obstructing, not helpful. :/ It hides information from you, and doesn't empower you in terms of querying any more than a bare shell. :/
[22:12:37] <d4rklit3> getting auth failed through terminal
[22:12:59] <d4rklit3> my local mongo is 2.4
[22:12:59] <GothAlice> d4rklit3: What's in your mongod logs?
[22:13:00] <Progz> GothAlice: I am not an expert of mongo so robomongo help me find right command :)
[22:13:06] <d4rklit3> on the server?
[22:13:07] <GothAlice> d4rklit3: That would be your problem, nevermind.
[22:13:21] <d4rklit3> well i need to connect to it regardless
[22:13:22] <GothAlice> Stop trying to use ancient (no longer supported, two major versions behind) tools against the latest and greatest server.
[22:13:34] <GothAlice> It simply won't work.
[22:13:48] <d4rklit3> i just need a driver
[22:13:58] <GothAlice> If you want authentication in 3.0, you need _all_ of your tools to be 3.0-aware and 3.0-compatible.
[22:14:03] <GothAlice> This includes command-line tools.
[22:14:26] <d4rklit3> ugh
[22:14:26] <d4rklit3> ok
[22:14:50] <d4rklit3> so server 1 is node app, server 2 is mongo. server 1 app needs mongo. it uses some adaptor for it
[22:14:55] <d4rklit3> whatever keystone.js uses
[22:15:32] <d4rklit3> so robomongo would be able to connect authless fine but not with auth?
[22:15:41] <GothAlice> Progz: In terms of finding commands, nothing beats the documentation, which also explains the output of those commands. ;) Even I refer back to the documentation a lot. (There's a reason I'm so quick on the docs.mongodb.org links…)
[22:15:43] <GothAlice> ^_^
[22:16:17] <GothAlice> Indeed. AFAIK it hasn't been updated to the 3.0 authentication scheme yet.
[22:16:29] <d4rklit3> son of a bitch
[22:17:04] <d4rklit3> they are working on it lol
[22:17:05] <GothAlice> However, I'll repeat my warning from earlier: I've seen robomongo do *destructively* terrible things to people's data in the last week.
[22:17:19] <d4rklit3> i usually just use it to visualize stuff
[22:17:26] <GothAlice> {…}!
[22:17:27] <d4rklit3> just read ops
[22:17:29] <GothAlice> T_T
[22:17:30] <d4rklit3> no writes
[22:17:32] <d4rklit3> :((((
[22:18:31] <GothAlice> That's one of the issues encountered: just connecting and fetching a collection created a duplicate _id index for one user. (There may or may not have been other things going on, but _not_ connecting robomongo after cloning the collection from one to another in Python didn't exhibit the duplicate index; upon firing robomongo up, it spontaneously appeared.)
[22:18:44] <d4rklit3> keystone.js uses mongoose
[22:19:09] <d4rklit3> i want to see if the damn framework even supports 3.0
[22:19:24] <d4rklit3> the IT guys installed 3.0 on this server and left me with the config!! im just a programmer!
[22:19:28] <d4rklit3> i never said I do sysadmin shit
[22:19:29] <GothAlice> At least mongoose issues have historically been "it just does things weirdly and isn't particularly well documented when they choose to be wacky".
[22:19:38] <d4rklit3> mongoose is great
[22:19:42] <d4rklit3> don't be bad mouthing mongoose
[22:20:05] <d4rklit3> it uses mongodb-native
[22:20:06] <GothAlice> d4rklit3: After the 16th user storing ObjectIds as strings came through the channel, mongoose got added to my blacklist. ;)
[22:20:28] <d4rklit3> user error
[22:20:29] <d4rklit3> dude
[22:20:38] <d4rklit3> people don't read the docs
[22:20:42] <d4rklit3> thats not mongoose's fault
[22:20:43] <GothAlice> Additionally, failure of documentation, which at several points effectively suggests a broken approach.
[22:21:02] <d4rklit3> i mean ObjectID is standard for mongo right?
[22:21:09] <d4rklit3> for PKs
[22:21:19] <d4rklit3> at least thats how i understand it
[22:21:29] <d4rklit3> but im nowhere close to being a DBA
[22:21:38] <GothAlice> Indeed. Efficient, meaningful 12-byte storage of creation timestamp, application server ID, process ID, and auto-increment (per server process) counter.
[22:21:53] <d4rklit3> yeah
[22:21:56] <GothAlice> (That's sortable and range queryable, treatable as "insertion time".)
[22:21:57] <d4rklit3> ^
[22:22:11] <d4rklit3> anyway
[22:22:17] <d4rklit3> let me just test it in my app
[22:22:30] <d4rklit3> because why would that occur to me : /
[22:22:34] <Progz> I am going to try "mongovue"
[22:22:34] <GothAlice> Turn it into a string (as many mongoose users do), or worse, mix real ObjectIds and strings, and the support incidents begin. ;)
[22:23:50] <GothAlice> d4rklit3: http://showterm.io/12677ca6c6020f694119b :)
[22:24:49] <GothAlice> (They're not internally comparable.)
[22:25:22] <GothAlice> (Let alone the string taking _more_ than twice the space.)
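The mismatch GothAlice is demonstrating, as a shell sketch (the collection name is illustrative):

    var oid = ObjectId()
    db.things.insert({ "_id" : oid })
    db.things.find({ "_id" : oid.str }).count()   // 0 -- the hex string never matches the real ObjectId
    Object.bsonsize({ x : oid.str }) > Object.bsonsize({ x : oid })   // true -- the string form takes more space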
[22:26:22] <d4rklit3> wtf os tjos
[22:26:24] <d4rklit3> err
[22:26:27] <d4rklit3> wtf did you pase
[22:26:27] <d4rklit3> lol
[22:26:36] <d4rklit3> showterm.io
[22:26:44] <d4rklit3> neat
[22:29:13] <d4rklit3> doesn't work in the app
[22:30:18] <joannac> what doesn't work in the app?
[22:32:12] <GothAlice> d4rklit3: Yeah, pardon my Python-esque call syntax in the term. Pretend those are "{_id: " instead of "_id=". ;)
[22:58:04] <ThisIsDog> I'm using PyMongo 2.8 in my application. When I add a document to the database and it already exists, a DuplicateKeyError exception is thrown. When I do e.details on the exception, I'm always getting None. Under what circumstances is details not None?
[22:58:58] <ThisIsDog> I'm trying to get the document existing in the database that caused the DuplicateKeyError
[23:13:03] <afroradiohead> _id is supposed to be unique right?
[23:13:20] <afroradiohead> i mean, does _id in collections come default with a unique index?
[23:13:39] <ehershey> yes
[23:14:48] <ehershey> you can tell by running in a shell: db.SomeRandomMadeUpNewCollection.insert({}); db.SomeRandomMadeUpNewCollection.getIndexes()
[23:15:01] <ehershey> and it will show the new _id index on the new collection
[23:15:25] <afroradiohead> oh gotchya, oops then i had it storing as a string
[23:23:19] <afroradiohead> upsert is just an amazing feature