[01:32:31] <qswz> if I want to update a subobject of a given document that has 10 fields, but I don't want to lose them, just do an upsert (replacing the one that matches), I guess I should do db.coll.update({_id: theId}, {_id: theId, foo: 1, bar: 12}, {upsert: true}) and not db.coll.save({_id: theId, foo: 1, bar: 12})
[01:33:48] <joannac> that won't work. You need a $set I think
[01:34:04] <qswz> but $set is for a single field I thought
[01:34:20] <retran> a field can have fields within
[01:35:35] <qswz> "The method updates specific fields if the <update> parameter contains only update operator expressions, such as a $set operator expression. Otherwise the method replaces the existing document."
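The doc text qswz quotes is the crux: a plain update body replaces the whole document, while an update containing only operator expressions such as $set changes just the named paths, so the shell form would be db.coll.update({_id: theId}, {$set: {"sub.foo": 1}}, {upsert: true}) with dot notation into the subobject. A plain-JavaScript simulation of the two semantics (no server needed; the helper names are mine):

```javascript
// Replacement semantics: the stored document becomes the update document
// (plus _id), so sibling fields are lost.
function replaceUpdate(doc, update) {
  return { _id: doc._id, ...update };
}

// $set semantics: only the named paths change; dot notation reaches into
// subobjects without discarding their siblings.
function setUpdate(doc, sets) {
  const out = JSON.parse(JSON.stringify(doc)); // cheap deep copy
  for (const [path, value] of Object.entries(sets)) {
    const keys = path.split(".");
    let cur = out;
    for (const k of keys.slice(0, -1)) {
      if (typeof cur[k] !== "object" || cur[k] === null) cur[k] = {};
      cur = cur[k];
    }
    cur[keys[keys.length - 1]] = value;
  }
  return out;
}

const doc = { _id: 1, keep: "me", sub: { foo: 1, bar: 2 } };

// Replacement loses `keep` and `sub.bar`:
const replaced = replaceUpdate(doc, { sub: { foo: 99 } });
// $set with dot notation preserves them:
const set = setUpdate(doc, { "sub.foo": 99 });
```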
[05:24:33] <Fandekasp> LouisT: I just tried this http://sprunge.us/WhAQ but it fails. Will try your idea :)
[05:31:09] <Fandekasp> LouisT: so your example looks great, I updated my code as follows http://sprunge.us/WSEC. But when I do "mongo db script.js", then "mongo db" and find a user to examine it, I don't see the changes done :S
[05:31:42] <LouisT> Fandekasp: ah, you're not actually updating the objects
[05:32:08] <LouisT> you still have to do db.users.update
[05:32:38] <Fandekasp> in the function(key) got it
[05:37:59] <Fandekasp> LouisT: sorry :( I updated with a db.users.update (http://sprunge.us/WSEC), but I still don't see any changes in the db. Do you see what I'm doing wrong ?
[05:42:09] <LouisT> Fandekasp: i think it's like this http://thepb.in/529585ca74d92
[05:44:41] <igors> hello. I have a question about a query I want to perform but I don't know how. I have this document (http://dpaste.com/1484607/) and given an _id (let's say "52957741140c1f62aa39744e") I want to know if this _id is inside one of the nested docs.
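The dpaste with igors's document shape is no longer available, so this is only a sketch. If the nesting path were known, a dot-notation query such as db.coll.find({"children._id": theId}) (field name "children" assumed) would answer it server-side; when the path is not known, a client-side walk over the fetched document works:

```javascript
// Hypothetical helper: report whether any nested subdocument (at any
// depth, including inside arrays) carries the given _id.
function containsId(value, id) {
  if (value === null || typeof value !== "object") return false;
  if (value._id === id) return true;
  return Object.values(value).some((v) => containsId(v, id));
}

// Assumed document shape, since the original paste is gone:
const doc = {
  _id: "root",
  children: [
    { _id: "52957741140c1f62aa39744e", name: "nested" },
    { _id: "other", name: "sibling" },
  ],
};
```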
[06:00:08] <Fandekasp> so you were right, first I need to build the dict outside of the update, dunno why
[06:00:47] <Fandekasp> other problem was the initial find exists request. Needed to search 'items.pets' instead of 'pets', so the script was never returning any user ^^
[06:00:57] <Fandekasp> Thank you very much for your help !
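Both sprunge pastes are dead, but Fandekasp's second fix is a general point worth pinning down: matching a nested field needs the full dot-notation path, so the find was db.users.find({"items.pets": {$exists: true}}), not {"pets": {$exists: true}} (field names as described in the log). A plain-JS simulation of that matching rule:

```javascript
// Does the document have a value at the given dot-notation path?
// (Mimics how a {path: {$exists: true}} filter resolves field names.)
function existsAtPath(doc, path) {
  let cur = doc;
  for (const key of path.split(".")) {
    if (cur === null || typeof cur !== "object" || !(key in cur)) return false;
    cur = cur[key];
  }
  return true;
}

const user = { _id: 1, items: { pets: ["cat"] } };
// "items.pets" matches; top-level "pets" does not, which is why the
// original script never found any users.
```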
[09:41:55] <jwr> trying to understand how to use the aggregation pipeline for a small problem: http://pastebin.com/LTXpBCc1
[10:23:34] <KamZou> Hi, i've got a ReplicaSet setup in production and i'm wondering if i could upgrade the MongoDB version without stopping the production ? (from 2.2.3 to 2.4.8)
[10:56:01] <tek> is there any way I can restrict read operations over collections? What I mean is, for example, one person can see these collections and another person those..
[12:40:38] <robothands> hi. im restoring a secondary from file backup. the secondary is already added to my replica set config, but the mongod process is stopped
[12:41:27] <robothands> question: when the file restore is completed, is it better to just start the mongod process and let mongod figure it out, or remove the node from replica set config and add it back in with rs.add?
[12:50:41] <robothands> on the fsyncLock page it says: "You may continue to perform read operations on a database that has a fsync lock. However, following the first write operation all subsequent read operations wait until you unlock the database"
[12:50:58] <robothands> i do not understand this, how can there be a first write operation if the database is locked?
[12:53:50] <cheeser> the lock might be lazy. why lock until you're ready to write?
[12:57:15] <robothands> surely, i lock, transfer files to secondary, unlock?
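To unpack the doc sentence robothands quotes: fsyncLock blocks writes immediately, but reads keep succeeding until some client issues a write; that write queues on the lock, and reads arriving after it queue behind the blocked write. So the sequence proposed above is the usual one. A sketch of it in the mongo shell (requires a live server, so not runnable here):

```javascript
db.fsyncLock();    // flush dirty data to disk and block writes
// ... copy the data files at the filesystem level ...
db.fsyncUnlock();  // release the lock; queued operations proceed
```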
[14:02:40] <qswz> there isn't much difference between a findOne(query1, query2) with a projection and a find($and: {query1, query2})?
[14:03:38] <qswz> in the first case in case return something not null, but you can manage it yourself
[14:31:34] <tiller> Does anyone have an idea of how efficiently MongoDB stores things? I want to store a field "type" in each document of a collection. Values are taken from {"organisation", "project", "service", "user"}. If I use the string itself, does MongoDB store the same value again and again?
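Short answer to tiller: each document is its own independent BSON blob, so a repeated string value (and the field name "type" itself) is written out in full in every document; there is no cross-document deduplication in that era's storage engine. A common space-saving trick, sketched here, is to store a short code and map it in the application:

```javascript
// Illustrative one-character codes for the four known type values.
const TYPE_CODES = { o: "organisation", p: "project", s: "service", u: "user" };
const CODE_FOR = Object.fromEntries(
  Object.entries(TYPE_CODES).map(([code, name]) => [name, code])
);

// Store encodeType(name) in the document; decode when reading back.
function encodeType(name) { return CODE_FOR[name]; }
function decodeType(code) { return TYPE_CODES[code]; }
```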
[16:33:01] <rand0m87> I have a problem with the Java driver for Mongo. I think it's not a Java issue, but still: if we convert a JsonNode or String to a DBObject, it can contain a date, often in ISO 8601 format. Can we parse it correctly with the standard library, or do we always need to write our own parser? Thanks
[16:35:16] <tiller> rand0m87> Maybe I'm totally wrong but I've always been using Date myDate = (Date) theDBObject.get("theField");
[16:36:55] <rand0m87> If you take the value from a Mongo object, the type is right, I think. But I'm asking about conversion from JSON, where we don't know the value's type
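rand0m87's underlying problem is language-independent: plain JSON has no date type, so ISO 8601 dates arrive as strings and have to be promoted by convention during conversion. A sketch of that convention in JavaScript terms (the regex and reviver are mine), using a JSON.parse reviver:

```javascript
// Strings matching this pattern are treated as dates by convention.
const ISO8601 = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;

// Reviver: promote matching strings to Date while parsing.
function reviveDates(key, value) {
  return typeof value === "string" && ISO8601.test(value)
    ? new Date(value)
    : value;
}

const obj = JSON.parse('{"when":"2013-11-27T16:33:01Z","name":"x"}', reviveDates);
```

The same idea in the Java driver would mean walking the converted object and replacing matching strings, since the JSON itself carries no type marker.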
[16:37:30] <Nomikos> I have some fields cat1, cat2, cat3, .. and have decided they should start at cat0. Can I rename them all at the same time without them overwriting one another?
[16:38:22] <cheeser> you could do it with an aggregation pipeline.
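One way to do Nomikos's shift without overwrites, besides cheeser's pipeline suggestion, is successive $rename updates in ascending order, so each target name has already been vacated (cat1→cat0 before cat2→cat1, and so on). A plain-JS simulation of why that ordering is safe:

```javascript
// Shift catN down to cat(N-1) for N = 1..count, lowest first, so the
// destination field is always empty when we write into it.
function shiftDown(doc, base, count) {
  const out = { ...doc };
  for (let i = 1; i <= count; i++) {
    out[base + (i - 1)] = out[base + i]; // e.g. cat1 -> cat0
    delete out[base + i];
  }
  return out;
}

const doc = { cat1: "a", cat2: "b", cat3: "c", other: 1 };
const shifted = shiftDown(doc, "cat", 3);
```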
[16:47:16] <jwr> I'm trying to understand how to use the aggregation pipeline when I need to count and group by string occurence in an array field: http://pastebin.com/LTXpBCc1
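jwr's pastebin is gone, so the field names below are assumed, but the usual shape for "count occurrences of each string across an array field" is $unwind on the array followed by $group with {$sum: 1}, e.g. db.coll.aggregate([{$unwind: "$tags"}, {$group: {_id: "$tags", n: {$sum: 1}}}]). A plain-JS simulation of that two-stage pipeline:

```javascript
// Emulate $unwind (one row per array element) followed by
// $group with {$sum: 1} (count rows per distinct value).
function unwindAndCount(docs, field) {
  const counts = {};
  for (const doc of docs) {
    for (const value of doc[field] || []) {     // $unwind
      counts[value] = (counts[value] || 0) + 1; // $group + $sum
    }
  }
  return counts;
}

const docs = [
  { _id: 1, tags: ["red", "blue"] },
  { _id: 2, tags: ["red"] },
];
const counts = unwindAndCount(docs, "tags");
```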
[16:49:00] <KamZou> Hi, I've an issue with 3 mongodb nodes in a ReplicaSet architecture (not sharded), 3 RSM (1 master, 2 secondaries). Each node sees the others in a sync state, but considers itself up to date
[16:49:01] <bobbytek> What does command denied: { getLog: "startupWarnings" } mean?
[17:40:11] <jordanappleson> In the mongodb docs when using map-reduce it specifically states that at no point should the map, reduce or finalise method access the database
[20:43:55] <scrdhrt> Can I create an index on a field whose name is unique (a hash), but whose "depth" (i.e. {yes: {myHash: {...}}}) I know?
[21:07:37] <angasulino> scrdhrt, I don't think that's possible (just guessing), but you can always have a named field with the hash in it... also, why do you have a hash like that anyway?
[21:11:16] <scrdhrt> angasulino: I'm hashing a file name, and storing information about the file below:{ files: { fileHash: {filePath: ..., size: ..., otherStuff: ...}}}
[21:15:20] <angasulino> ok, why are you hashing the file name?
[21:17:02] <scrdhrt> At the time it sounded like a good idea, but even if I enter myFile.txt instead of the hash, the question stands since
[21:22:49] <angasulino> scrdhrt, heh well, add a new field that is equal to the hash, index that
[21:26:22] <scrdhrt> But then I will lose out on fast queries to the properties of the file, which reside under the hash, won't I?
[21:26:32] <scrdhrt> I guess I have to rethink the structure
[21:28:37] <angasulino> scrdhrt, no, because the new field would be in the same document
[21:28:52] <angasulino> scrdhrt, you're getting the document as a result
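angasulino's point, sketched with illustrative field names: a value used as a key cannot be indexed generically, but moved into the subdocument as a normal field it can (e.g. ensureIndex({"files.hash": 1}) in that era's shell), and the file's other properties stay in the same returned document, so nothing about the lookup gets slower:

```javascript
// Before: the hash is a key, so no single index covers all files.
const before = {
  files: { "9f2c0a11": { filePath: "/tmp/myFile.txt", size: 120 } },
};

// After: the hash is a value inside an array element; an index on
// "files.hash" now covers every file, and the properties ride along.
const after = {
  files: [{ hash: "9f2c0a11", filePath: "/tmp/myFile.txt", size: 120 }],
};

// Picking out one file's subdocument by hash is still a single lookup
// within the returned document:
const hit = after.files.find((f) => f.hash === "9f2c0a11");
```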