PMXBOT Log file Viewer


#mongodb logs for Wednesday the 27th of November, 2013

[00:05:02] <qswz> [java] how do you know if a save or update worked well, writeResult.getError().equals("") ??
[00:09:40] <cheeser> it'll be null
[00:38:19] <qswz> ok thanks
[01:32:31] <qswz> if I want to update a subobject of a given document that has 10 fields, but I don't want to lose them by just doing an upsert (replacing the ones that match), I guess I should do db.coll.update({_id: theId}, {_id: theId, foo: 1, bar: 12}, {upsert: true}) and not db.coll.save({_id: theId, foo: 1, bar: 12})
[01:33:48] <joannac> that won't work. You need a $set I think
[01:34:04] <qswz> but $set is for a single field I thought
[01:34:20] <retran> a field can have fields within
[01:34:27] <retran> (object)
[01:34:46] <qswz> ok thanks gonna test
[01:35:35] <qswz> "The method updates specific fields if the <update> parameter contains only update operator expressions, such as a $set operator expression. Otherwise the method replaces the existing document."
[01:35:43] <qswz> yea what you said
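The behaviour qswz quotes can be sketched in the shell (collection and field names are illustrative):

```javascript
// Full-document form: REPLACES the whole document, dropping the other fields
db.coll.update({_id: theId}, {_id: theId, foo: 1, bar: 12}, {upsert: true})

// Operator form: updates only the named fields, keeping the rest intact
db.coll.update({_id: theId}, {$set: {foo: 1, bar: 12}}, {upsert: true})

// Dot notation lets $set reach into a subobject without touching its siblings
db.coll.update({_id: theId}, {$set: {"sub.foo": 1}}, {upsert: true})
```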
[02:02:22] <decimal_> :w
[05:09:55] <Fandekasp> hi there
[05:12:07] <Fandekasp> How can I update all users so that user.items changes from {'item1': 5, 'item2': 5} to
[05:12:24] <Fandekasp> {'item1': [5, ''], 'item2': [5, '']}
[05:13:02] <Fandekasp> I can do a forEach to iterate over each users, but I don't know how to iterate over each item of each user
[05:13:13] <Fandekasp> any idea ?
[05:17:30] <LouisT> I'm not sure if Object.keys() works but you could do that with forEach if it does
[05:22:52] <LouisT> Fandekasp: something like: var obj = {'item1': 5, 'item2': 5}; Object.keys(obj).forEach(function(key) { obj[key] = [obj[key],'']; });
[05:24:33] <Fandekasp> LouisT: I just tried this http://sprunge.us/WhAQ but it fails. Will try your idea :)
[05:31:09] <Fandekasp> LouisT: so your example looks great, I updated my code as follows http://sprunge.us/WSEC. But when I do "mongo db script.js", then "mongo db" and find a user to examine it, I don't see the changes :S
[05:31:42] <LouisT> Fandekasp: ah, you're not actually updating the objects
[05:31:47] <LouisT> you have to do that
[05:31:52] <LouisT> my example was just so you can get the idea
[05:32:00] <Fandekasp> lol
[05:32:04] <Fandekasp> ok
[05:32:08] <LouisT> you still have to do db.users.update
[05:32:38] <Fandekasp> in the function(key) got it
[05:37:59] <Fandekasp> LouisT: sorry :( I updated with a db.users.update (http://sprunge.us/WSEC), but I still don't see any changes in the db. Do you see what I'm doing wrong ?
[05:38:22] <Fandekasp> http://sprunge.us/eZie
[05:39:35] <LouisT> Fandekasp: yes, you're setting the key incorrectly
[05:39:45] <Fandekasp> oh
[05:39:47] <Fandekasp> -user.
[05:39:49] <LouisT> so with this
[05:39:51] <LouisT> {key: {'growth': user.items.pets[pet], 'name': ''}
[05:40:00] <LouisT> "key" should be whatever key, correct?
[05:40:45] <Fandekasp> LouisT: I changed to var key = 'items.pets.' + pet, but..
[05:41:02] <Fandekasp> normally the key is a dotted string starting from user
[05:41:29] <LouisT> Fandekasp: just a moment
[05:42:09] <LouisT> Fandekasp: i think it's like this http://thepb.in/529585ca74d92
[05:44:41] <igors> hello. I have a question about a query I want to perform but I don't know how. I have this document (http://dpaste.com/1484607/) and given an _id (let's say "52957741140c1f62aa39744e") I want to know if this _id is inside one of the nested docs.
[05:59:49] <Fandekasp> LouisT: lol I found it
[05:59:58] <LouisT> Fandekasp: what was it?
[06:00:08] <Fandekasp> so you were right, first I need to build the dict outside of the update, dunno why
[06:00:47] <Fandekasp> the other problem was the initial find $exists request. Needed to search 'items.pets' instead of 'pets', so the script was never returning any user ^^
[06:00:57] <Fandekasp> Thank you very much for your help !
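A sketch of the pattern Fandekasp ended up with, under assumed field names (the pastebins aren't visible here): build the $set document first, then hand it to update.

```javascript
// Turn each scalar item value into a [value, ""] pair,
// keyed by its dotted path from the document root.
function buildSetDoc(items) {
  var setDoc = {};
  Object.keys(items).forEach(function (key) {
    setDoc["items." + key] = [items[key], ""];   // e.g. 5 -> [5, ""]
  });
  return setDoc;
}

// In the shell (note the find matches on 'items', not a bare subfield):
// db.users.find({items: {$exists: true}}).forEach(function (user) {
//   db.users.update({_id: user._id}, {$set: buildSetDoc(user.items)});
// });
```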
[09:41:55] <jwr> trying to understand how to use the aggregation pipeline for a small problem: http://pastebin.com/LTXpBCc1
[09:42:13] <jwr> help appreciated
[10:23:34] <KamZou> Hi, i've got a ReplicaSet setup in production and i'm wondering if i could upgrade the MongoDB version without stopping the production ? (from 2.2.3 to 2.4.8)
[10:31:08] <cheeser> KamZou: http://docs.mongodb.org/manual/tutorial/upgrade-revision/#upgrade-replica-set
[10:33:14] <KamZou> thanks cheeser
[10:39:49] <robothands> hi, can anyone give me an example of the correct syntax of db.fsynclock() ?
[10:40:03] <robothands> db.fsyncLock() ^^
[10:40:29] <robothands> i need to set lock to true
[10:43:46] <robothands> j ##boroughdungeon
[10:55:31] <tek> hey guys
[10:55:40] <ron> hey jude
[10:56:01] <tek> is there any way I can restrict read operations over collections? What I mean is, for example, one person can see these collections and another those ..
[10:57:15] <tek> oi
[10:58:09] <joannac> tek: Not yet http://docs.mongodb.org/master/release-notes/2.6/#collection-level-access-control
[10:58:21] <Derick> tek: afaik you can only do that per database, and what joannac says
[10:58:35] <tek> yeah i know about the database ..
[10:58:38] <tek> cheers guys
[10:59:35] <Derick> guy and girl
[11:01:08] <tek> well guys is not related only to males it's like peeps / friends etc..
[11:01:16] <tek> mates
[11:26:46] <qswz> Could anyone have a look at http://vpaste.net/NQePo please? this sums up what I've been trying to do for hours
[11:27:29] <qswz> damn it works with just $set
[11:28:24] <qswz> ok nvm then, now going to do it with java driver..
[11:28:47] <Nodex> $set
[11:30:17] <qswz> yep
[11:35:41] <qswz> ahhh, also mod on _id is not allowed, (java driver wasn't giving me back that error)
[11:35:52] <qswz> fine
[11:36:03] <cheeser> you can't change the ID of a document
[11:37:14] <qswz> sounds logic
[11:39:01] <srajbr> hello I am getting an error: DBException 13636: file /var/lib/mongodb/site.6 open/create failed in createPrivateMap
[11:39:16] <srajbr> please help me to solve this problem
[11:39:31] <Nodex> disk space?
[11:39:32] <cheeser> i'm heading out to lunch but maybe disk space issues?
[11:39:35] <Derick> are you on a 32bit platform?
[11:39:42] <Derick> it's that, or memory mapping failed
[11:41:31] <srajbr> 32bit
[11:43:33] <Derick> srajbr: can you pastebin an "ls /var/lib/mongodb/site.*" ?
[11:43:44] <Derick> and add: uname -a
[11:45:01] <srajbr> Derick i have site.0-6 and site.ns
[11:45:06] <srajbr> do you need all of them
[11:47:43] <srajbr> Derick: here is it http://pastebin.com/pjXzKhUj
[11:48:38] <srajbr> forgot to add uname -a
[11:48:43] <srajbr> its: Linux madroids 3.8.0-25-generic #37-Ubuntu SMP Thu Jun 6 20:47:30 UTC 2013 i686 i686 i686 GNU/Linux
[11:49:42] <Derick> srajbr: sorry, ls -l
[11:49:54] <Derick> and how do you know it's 32bit?
[11:51:59] <srajbr> Derick: http://pastebin.com/aqhc7x9b
[11:54:11] <srajbr> I have installed 32 bit ubuntu version
[11:54:37] <Derick> right, so you're just running out of memory space.
[11:54:57] <srajbr> uhmm.. any solution
[11:54:58] <Derick> You need a 64bit OS if you want to use more than 2GB of memory.
[11:55:55] <srajbr> I cant install 64bit OS now... but need mongodb running
[11:56:16] <Derick> well, it runs
[11:56:17] <Zelest> Then I pity your future problems.
[11:56:26] <Derick> you're just running into limitations because your 32bit OS
[11:57:26] <srajbr> I can't even get show dbs :( ;-(
[11:57:41] <jordz> I'm having issues attempting to pull an array element out of a subdocument
[11:58:10] <jordz> Field : { ArrayField: [ "value", "value2"] }
[11:58:19] <jordz> I can't pull value out
[11:58:23] <jordz> of the ArrayField
[11:59:47] <jordz> db.col.update({ "Field.ArrayField" : "value"}, { $pull : {"Field.ArrayField": "value"}}, false, true)
[11:59:55] <jordz> should that work?
[12:00:34] <Derick> that looks ok
[12:00:51] <jordz> It doesn't pull the string out of the array :/
[12:01:13] <jordz> Hmm
[12:01:26] <jordz> Would it be any different if the schema was this
[12:02:00] <Derick> your example works just fine
[12:02:01] <jordz> { Field: [ { ArrayField: [ "value" ] } ] }
[12:02:12] <jordz> So the array is in an array of objects
[12:02:27] <Derick> http://pastebin.com/y5xC7ecJ
[12:02:30] <Derick> jordz: ^^
[12:05:13] <jordz> Derick, what about this? http://pastebin.com/fhF71yn9
[12:05:24] <jordz> That update does't work
[12:05:34] <jordz> doesn't*
[12:06:02] <jordz> Do I need to use a $?
[12:06:24] <jordz> Ahh yes
[12:06:25] <jordz> I do!
[12:06:31] <jordz> Figured it out, many thanks!
[12:06:36] <Derick> yes
[12:06:43] <Derick> jordz: it's easier to test than to ask ;-)
[12:06:58] <jordz> I know I know, I actually tried this yesterday
[12:07:01] <jordz> couldn't figure it out
[12:07:05] <Derick> :-)
[12:07:06] <jordz> hence the asking
[12:07:09] <jordz> but thanks :)
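For the record, the two shapes discussed above (the $ jordz found is the positional operator; names are taken from his examples):

```javascript
// Subdocument holding an array: works as written
db.col.update({"Field.ArrayField": "value"},
              {$pull: {"Field.ArrayField": "value"}},
              false, true)                     // upsert = false, multi = true

// Array of subdocuments: the positional $ addresses the matched element
db.col.update({"Field.ArrayField": "value"},
              {$pull: {"Field.$.ArrayField": "value"}})
```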
[12:40:38] <robothands> hi. im restoring a secondary from file backup. the secondary is already added to my replica set config, but the mongod process is stopped
[12:41:27] <robothands> question: when the file restore is completed, is it better to just start the mongod process and let mongod figure it out, or remove the node from replica set config and add it back in with rs.add?
[12:47:18] <kali> robothands: just start mongod
[12:50:09] <robothands> thanks
[12:50:17] <robothands> one more thing
[12:50:41] <robothands> on the fsyncLock page it says: "You may continue to perform read operations on a database that has a fsync lock. However, following the first write operation all subsequent read operations wait until you unlock the database"
[12:50:58] <robothands> i do not understand this, how can there be a first write operation if the database is locked?
[12:53:50] <cheeser> the lock might be lazy. why lock until you're ready to write?
[12:57:15] <robothands> surely, i lock, transfer files to secondary, unlock?
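The backup sequence robothands describes, sketched in the shell (run on the member whose files are being copied):

```javascript
db.fsyncLock()     // flush pending writes to disk and block new ones
// ... copy the data files to the new secondary's dbpath ...
db.fsyncUnlock()   // release the lock; any queued reads/writes proceed
```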
[14:02:40] <qswz> there isn't much difference between a findOne(query1, query2) with a projection and a find($and: {query1, query2})?
[14:03:38] <qswz> in the first case in case return something not null, but you can manage it yourself
[14:03:39] <Derick> you don't need $and there
[14:03:47] <Derick> also, findOne() doesn't work like that
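What Derick means: findOne's second argument is a projection, not a second query. A sketch with illustrative fields:

```javascript
// Returns at most one matching document, restricted to the projected fields
db.coll.findOne(
  {foo: "bar", stuff: {$ne: 0}},   // query (the comma is an implicit AND)
  {foo: 1, _id: 0}                 // projection
)
```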
[14:28:47] <tiller> hi there
[14:31:34] <tiller> Does anyone have an idea of how efficiently mongodb stores things? I want to store a field "type" in each document of a collection. Values are taken from {"organisation", "project", "service", "user"}. If I use the string itself, does MongoDB store the same value again and again?
[14:31:51] <Derick> yes
[14:31:53] <tiller> or should I just use number as for example 0= "organisation", 1="project", etc.
[14:33:33] <tiller> Hmm, so it stores the value again and again =/ And as far as I can see mongodb doesn't have any ENUM structure
[14:41:38] <Nodex> yup
[14:44:58] <qswz> Derick so you're saying ..find({foo: "bar", stuff: {$ne:0} } ) is the same as ..find($and: [{foo: "bar"}, {stuff: {$ne:0}} ] ) ?
[14:45:35] <qswz> or I misunderstood
[14:46:03] <Nodex> mongo is AND by default
[14:46:27] <Nodex> i.e. both parameters HAVE to be satisfied to return results
[14:47:09] <qswz> ok, am I right in saying $and does the same as putting the raw object?
[14:47:09] <Derick> qswz: yes
[14:47:12] <qswz> ok
[14:47:49] <Nodex> that's also NOT how $and works
[14:48:08] <Nodex> all calls to find() / findOne() etc MUST have a valid object, i.e. start with {
[14:48:41] <qswz> yes forgot the { before $and
[14:48:52] <qswz> thanks right
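The equivalence being agreed on, spelled out (fields are illustrative):

```javascript
// These two queries match exactly the same documents;
// the top-level comma form is an implicit AND:
db.coll.find({foo: "bar", stuff: {$ne: 0}})
db.coll.find({$and: [{foo: "bar"}, {stuff: {$ne: 0}}]})
```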
[15:56:55] <mikejw> is it possible to do a combination of a 'findone' type query with an update with setoninsert and an upsert?
[15:57:04] <mikejw> ..I'm using PHP
[16:02:14] <Derick> mikejw: uh, what are you trying to do?
[16:03:08] <mikejw> I want to call update with the upsert option...
[16:03:58] <mikejw> and the query needs to only return the first matching document (the query will look for timestamps within the last 24 hours)
[16:04:21] <mikejw> am I missing something.. will it do that by default anyway?
[16:04:44] <cheeser> findOne() will do that.
[16:04:55] <Derick> cheeser: but it won't update anything
[16:05:02] <mikejw> yeah...
[16:05:06] <cheeser> no, it won't.
[16:05:22] <mikejw> I'm not saying I know this is possible.. someone else has put me onto this mission :)
[16:05:26] <tiller> He wants to update things, and retrieve what has been updated, if I'm not wrong
[16:05:27] <cheeser> but update by default only updates the first document
[16:05:30] <Derick> mikejw: can you assemble: 1. a starting document, 2. what you want changed, and 3. your expected output in a pastebin?
[16:05:47] <mikejw> no not really
[16:06:00] <mikejw> cheeser: is that true?
[16:06:13] <Derick> mikejw: it is
[16:06:18] <mikejw> :)
[16:06:40] <mikejw> awesome
[16:07:23] <cheeser> it doesn't upsert by default, though.
[16:07:40] <cheeser> so, db.collection.update({}, {}, true, false)
[16:08:24] <mikejw> no that's ok :)
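A shell sketch of what mikejw describes (field names are hypothetical; the PHP driver takes the same update document):

```javascript
// Match a document stamped within the last 24 hours; update the first
// match only, or insert a fresh one with the $setOnInsert fields
// applied solely on the insert path.
var dayAgo = new Date(Date.now() - 24 * 60 * 60 * 1000);
db.coll.update(
  {ts: {$gte: dayAgo}},
  {$inc: {hits: 1}, $setOnInsert: {ts: new Date()}},
  {upsert: true}        // non-multi, so only the first match is updated
)
```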
[16:29:45] <rand0m87> Hi there
[16:30:15] <Derick> hi...
[16:33:01] <rand0m87> I have a problem with the java driver for mongo. I think it's not a Java issue but still. If we try to convert a JsonNode or String to a DBObject, it can contain a date, often in ISO8601 format. How can we parse it correctly with the standard library, or do we always need to write our own specific parser? Thanks
[16:35:16] <tiller> rand0m87> Maybe I'm totally wrong but I've always been using Date myDate = (Date) theDBObject.get("theField");
[16:36:55] <rand0m87> If you take the value from a mongo object that works, I think. But I'm asking about conversion, where we don't know the value type
[16:37:18] <rand0m87> we know, but not about date
[16:37:29] <rand0m87> it's more correct
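The underlying problem — spotting ISO 8601 strings while converting plain JSON into documents — can be sketched in JavaScript (a hypothetical helper; the Java side would need an equivalent date-format check, if the driver doesn't do it for you):

```javascript
// Rough ISO 8601 timestamp shape: date, 'T', time, optional fraction/zone
var ISO_8601 = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:?\d{2})?$/;

// Walk a parsed JSON object and replace ISO 8601 strings with Dates,
// recursing into nested documents and arrays.
function reviveDates(obj) {
  Object.keys(obj).forEach(function (key) {
    var v = obj[key];
    if (typeof v === "string" && ISO_8601.test(v)) {
      obj[key] = new Date(v);
    } else if (v && typeof v === "object") {
      reviveDates(v);
    }
  });
  return obj;
}
```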
[16:37:30] <Nomikos> I have some fields cat1, cat2, cat3, .. and have decided they should start at cat0. Can I rename them all at the same time without them overwriting one another?
[16:38:22] <cheeser> you could do it with an aggregation pipeline.
[16:38:44] <cheeser> $project : { cat0 : "$cat1", cat1 : "$cat2" ... }
[16:38:56] <cheeser> but unfortunately, you'd need 2.6 to write back to a collection
[16:39:51] <Nomikos> I'll .. do separate updates I guess. I've only used aggregation a single time
[16:40:31] <Nodex> if you're renaming things it's probably safest to take each doc and do it client side
[16:40:43] <Nodex> findAndModify() it ;)
[16:41:05] <Nomikos> I'm running it on a copy and will rename the collections afterwards
[16:41:17] <Nomikos> it's not live/changing data
[16:42:01] <Nodex> robably safest
[16:42:03] <Nodex> +p
[16:42:12] <cheeser> safestp? :D
[16:42:17] <Nodex> :P
[16:42:28] <Nodex> that's a new trending hashtag
[16:42:35] <Nodex> you heard it here first LOL
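For reference, the overwrite worry can also be dodged server-side by renaming one field at a time in ascending order, so each target name has just been vacated (this sketch assumes the fields run cat1..cat3):

```javascript
// cat1 -> cat0, then cat2 -> cat1, then cat3 -> cat2
for (var i = 1; i <= 3; i++) {
  var rename = {};
  rename["cat" + i] = "cat" + (i - 1);
  db.coll.update({}, {$rename: rename}, {multi: true});
}
```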
[16:47:16] <jwr> I'm trying to understand how to use the aggregation pipeline when I need to count and group by string occurrence in an array field: http://pastebin.com/LTXpBCc1
[16:49:00] <KamZou> Hi, I've an issue with 3 mongodb nodes in a ReplicaSet architecture (not sharded), 3 RSM (1 master, 2 secondaries). Each member sees the others as in sync, but considers itself up to date
[16:49:01] <bobbytek> What does command denied: { getLog: "startupWarnings" } mean?
[16:49:05] <bobbytek> In the mongo log
[16:49:45] <tiller> jwr> sort by descending "num" and then group using $first?
[16:50:30] <Derick> hey Rozza
[16:50:51] <Rozza> howdy Derick
[16:51:37] <Derick> pretty good - confusing svn and git though!
[16:52:53] <jwr> tiller: I was hoping to avoid sort but will give that idea a shot, thanks
[16:53:26] <rcp> Derick :x
[16:53:53] <tiller> jwr> I'm a bit of a "newbie" with mongodb, it's just the way I would have tried it. Maybe you'll get more answers from others :)
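For jwr's problem as stated (count and group by string occurrence in an array field), the usual pipeline shape, with a hypothetical `tags` field standing in for whatever the pastebin uses:

```javascript
db.coll.aggregate([
  {$unwind: "$tags"},                         // one document per array element
  {$group: {_id: "$tags", num: {$sum: 1}}},   // count each distinct string
  {$sort: {num: -1}}                          // then, per tiller's suggestion,
  // a final {$group: {_id: null, top: {$first: "$_id"}}} keeps only the
  // most frequent value
])
```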
[16:56:12] <cheeser> http://xkcd.com/1296/
[17:40:11] <jordanappleson> In the mongodb docs when using map-reduce it specifically states that at no point should the map, reduce or finalise method access the database
[17:40:47] <jordanappleson> Is that a given?
[17:41:11] <jordanappleson> Should I run a map-reduce and process the data through separate means into different collections afterwards?
[18:22:50] <jwr> when profiling, does the system.profile.millis include the time for json printing?
[18:35:36] <bjori> jwr: what do you mean json printing?
[18:35:50] <jwr> just printing to the console
[18:35:57] <jwr> in mongo
[18:40:29] <bjori> jwr: no, the profiler collection only includes execution in the database, not the shell
[18:42:22] <jwr> bjori: that makes sense, thanks
[20:43:55] <scrdhrt> Can I create an index on a field whose name is unique (a hash), but whose "depth" (i.e. {yes: {myHash: {...}}}) I know?
[21:07:37] <angasulino> scrdhrt, I don't think that's possible (just guessing), but you can always have a named field with the hash in it... also, why do you have a hash like that anyway?
[21:11:16] <scrdhrt> angasulino: I'm hashing a file name, and storing information about the file below:{ files: { fileHash: {filePath: ..., size: ..., otherStuff: ...}}}
[21:15:20] <angasulino> ok, why are you hashing the file name?
[21:17:02] <scrdhrt> At the time it sounded like a good idea, but even if I enter myFile.txt instead of the hash, the question still stands
[21:22:49] <angasulino> scrdhrt, heh well, add a new field that is equal to the hash, index that
[21:26:22] <scrdhrt> But then I will lose out on fast queries to the properties of the file, which reside under the hash, won't I?
[21:26:32] <scrdhrt> I guess I have to rethink the structure
[21:28:37] <angasulino> scrdhrt, no, because the new field would be in the same document
[21:28:52] <angasulino> scrdhrt, you're getting the document as a result
[21:29:26] <scrdhrt> Yeah, ofc!
[21:30:18] <scrdhrt> I've been banging my head against the hash problem for too long, I forgot I can index the document outside of files structure
[21:30:22] <scrdhrt> Thanks :)
[21:30:25] <angasulino> np :)
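The restructuring angasulino suggests, sketched with illustrative values: the hash moves from key position into a fixed-name field, which can then be indexed, and the file's other properties sit alongside it in the same document.

```javascript
// Before: { files: { "<hash>": { filePath: ..., size: ... } } }
//         -- the hash-as-key cannot be indexed
// After: one document per file, hash stored as a value
db.files.insert({fileHash: "d41d8cd9...", filePath: "myFile.txt", size: 123})
db.files.ensureIndex({fileHash: 1}, {unique: true})
db.files.find({fileHash: "d41d8cd9..."})   // fast lookup; whole doc returned
```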
[22:49:07] <GeraintJones> Hi, does anyone have any docs/instructions to reset the admin user on a replica set ?
[22:50:46] <cheeser> http://terminalinflection.com/mongo-db-force-admin-removal/
[23:05:04] <gmcinnes> hi all.
[23:05:23] <gmcinnes> anyone know if there is a way to turn off TCP in mongo and *only* use unix domain sockets?
[23:49:01] <Industrial> How do i set a single field index with node-mongo-native ?
[23:49:32] <Industrial> Also, do I just re-issue this command every time my server (mongo client) runs?
[23:51:12] <Industrial> ah, way down here; http://mongodb.github.io/node-mongodb-native/api-generated/collection.html#ensureindex
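A sketch against the node-mongodb-native API of the time (connection string, collection, and field names are illustrative); ensureIndex is idempotent, so re-issuing it at every server start is harmless:

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  // Single-field ascending index; a no-op if it already exists
  db.collection('users').ensureIndex({email: 1}, function (err, indexName) {
    if (err) throw err;
    console.log('ensured index:', indexName);
    db.close();
  });
});
```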