PMXBOT Log file Viewer


#mongodb logs for Wednesday the 23rd of July, 2014

[02:31:09] <ron_frown> I am building a system from scratch and really really considering mongo... the one and only thing I dont know about is what the reporting requirements will be down the road
[02:31:23] <ron_frown> I am looking at a couple of the aggregation framework queries
[05:02:48] <ngoyal> have the issues brought up by aphyr back in 2013 been addressed or are there recommendations that don't kill performance? http://aphyr.com/posts/284-call-me-maybe-mongodb
[06:13:55] <amitprakash> Hi, how do I query for a field whose value is either false or null (including those documents where the field does not exist)?
[06:19:29] <joannac> $or with $exists
[06:20:14] <amitprakash> joannac, $or:[{key: false}, {key: null}] makes more sense, no?
[06:20:23] <amitprakash> I was wondering if there is a better way
[06:21:25] <joannac> oh yeah, i always forget about null including $ne
[06:21:54] <joannac> and no, i don't believe there's an easier way
[06:25:53] <amitprakash_> joannac, hmm, i wanted to avoid the $or, but okay
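A minimal sketch of the query discussed above, assuming a hypothetical collection coll and field key; note that matching on null also matches documents where the field is missing entirely:

    // matches documents where `key` is false, null, or absent
    // ({key: null} covers both explicit nulls and missing fields)
    db.coll.find({ $or: [ { key: false }, { key: null } ] })

    // to match only explicit nulls and skip missing fields,
    // BSON type 10 (null) can be used instead
    db.coll.find({ $or: [ { key: false }, { key: { $type: 10 } } ] })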
[06:29:44] <Faisal> hello guys....I want to use Django with mongo db
[06:30:44] <Faisal> can you guys suggest a future proof solution to do so
[06:30:50] <Faisal> I am using http://django-mongodb-engine.readthedocs.org/en/latest/index.html
[06:31:04] <Faisal> however it's dependent on django-non-rel
[06:31:27] <Faisal> so some of my colleagues suggested not to go for this as django-non-rel is deprecated
[07:16:18] <Faisal> Any suggestions
[07:18:21] <Nodex> what's the question/
[07:18:23] <Nodex> + ?
[07:18:48] <Zelest> I want to type something perverted.. but that's so off-topic :(
[07:19:05] <Nodex> ol
[07:19:06] <Nodex> lol
[07:19:14] <Faisal> I want to use Django with mongo db. can you guys suggest a future proof solution to do so ? I am using http://django-mongodb-engine.readthedocs.org/en/latest/index.html
[07:19:31] <Faisal> however it's dependent on django-non-rel. so some of my colleagues suggested not to go for this as django-non-rel is deprecated
[07:20:23] <Nodex> I don't use Django sorry can't help
[07:47:42] <FurqanZafarIQ> hello
[07:50:35] <bjergsen> hey is there a way to update one field of a document without changing other fields ?
[07:50:59] <rspijker> bjergsen: $set
[07:51:54] <bjergsen> rspijker: looking on it , thanks
[07:54:57] <bjergsen> rspijker: thanks, that's what i was looking for
[07:58:39] <rspijker> cool
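For reference, a sketch of the $set update rspijker points at, with made-up collection and field names; $set modifies only the listed fields and leaves the rest of the document untouched:

    db.coll.update(
        { _id: 1 },                      // match the document to change
        { $set: { status: "active" } }   // change only this field
    )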
[12:17:38] <Zelest> ugh, I have 4 mongodb nodes in my replicaset (was adding 2 more to my set of 3 servers) and now they're all "secondary"
[12:17:56] <Zelest> ah, nvm.. it was temporary
[12:17:58] <Zelest> *pewh*
[12:23:15] <goldstar> I am running v2.6 and keep getting "Can't canonicalize query: BadValue $in needs an array" when using a 3rd party webapp. It appears mongo doesn't like it when an array starts from 0; is there any way to disable this 'strictness' ?
[12:24:01] <Nodex> can you pastebin your query?
[12:24:32] <goldstar> i don't know which query; it's a 3rd party app
[12:24:40] <goldstar> i just need it to work
[12:27:43] <Nodex> can't really help then sorry
[12:28:00] <joannac> goldstar: maybe you should check with the devs of the app then, if they're generating bad queries?
[12:29:14] <rspijker> “when an array starts from 0”. That makes little sense in the way mongo arrays behave… You don't explicitly provide indices. An array is just a list, really…
[12:29:15] <joannac> although i also don't know what 'strictness' you're talking about...
[12:29:22] <rspijker> are you maybe passing in the wrong thing goldstar ?
[12:29:26] <joannac> ...yeah, what rspijker said
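For context, $in only accepts an array of candidate values; handing it a scalar or an object is what produces the "BadValue $in needs an array" error above. A sketch with hypothetical names:

    // valid: the value of $in is an array
    db.coll.find({ status: { $in: ["new", "open"] } })

    // invalid: a scalar here raises "Can't canonicalize query: BadValue $in needs an array"
    db.coll.find({ status: { $in: "new" } })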
[12:31:56] <rspijker> I’m seeing queries in my logs… Like: Wed Jul 23 14:28:55.168 [conn314053] getmore db.coll query: {actual query here} cursorid:id ntoreturn:0 keyUpdates:0 locks(micros) r:187481 nreturned:2919 reslen:500424 187ms
[12:32:04] <rspijker> even though I have profiling turned off
[12:32:15] <rspijker> from what I can gather from the docs, this shouldn't be happening?
[12:34:22] <Nodex> past the 100ms slow log threshold?
[12:34:38] <rspijker> looks like it, yes
[12:34:43] <Nodex> then I would say that's why
[12:35:03] <rspijker> ah
[12:35:04] <Nodex> I've never run profiling but I get log entries like that when queries are slow
[12:35:05] <rspijker> I see
[12:35:16] <Nodex> I think you can raise that slow log value
[12:35:19] <rspijker> yeah, it’s listed in the docs under the slowms setting
[12:36:14] <rspijker> that’s a bit… unclear
[12:36:17] <rspijker> thanks Nodex
[12:36:38] <rspijker> I don’t want to change it, am actually fairly happy with it. Just didn’t understand _why_ it was happening :)
[12:36:40] <Nodex> :D
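The slow-operation threshold being discussed can be inspected and raised from the shell even with profiling off; a sketch, assuming the default 100 ms value:

    // show the current profiling level and slowms threshold
    db.getProfilingStatus()        // e.g. { "was" : 0, "slowms" : 100 }

    // keep profiling off (level 0) but only log operations slower than 500 ms
    db.setProfilingLevel(0, 500)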
[13:08:10] <nicolas_FR> Hi there ! I'm new to mongodb. Using node.js + mongodb + monk, I can't do an update. It works from mongo console, but not from my node code. Could someone point me to a good monk doc for update (found some but can't understand it...)
[13:10:00] <Nodex> perhaps pastebin your query and someone might be able to help you
[13:12:12] <nicolas_FR> http://pastebin.com/VTYXfRdv :)
[13:12:31] <nicolas_FR> or here : db.get('indicateurs').save(req.body,{},function(err,results){
[13:14:17] <rspijker> what does happen nicolas_FR ?
[13:14:29] <Nodex> on a side note, you probably shouldn't blindly put data in your database from req.body
[13:16:18] <Nodex> https://github.com/LearnBoost/monk <--- I don't see a "save()" method
[13:16:23] <rspijker> Also, I can’t find ‘save’ anywhere in the monk code
[13:16:32] <rspijker> lol, what Nodex said...
[13:20:09] <nicolas_FR> rspijker: got 500 error
[13:20:27] <rspijker> probably because it doesn’t know save()...
[13:20:48] <nicolas_FR> rspijker: because (I think) my update call is wrong.. but can't find a proper way to update _id:1 with the whole JSON object passed through the ajax call
[13:20:55] <Nodex> nicolas_FR : If I were you I would learn Mongodb native first, this layer (in my opinion) will hinder your knowledge as you learn as it abstracts some important basics away from you
[13:22:08] <rspijker> nicolas_FR: monk has an update method. It takes a search document (_id:1 for you), an update document, $set:{fields and values you want to set here}, an options doc (not needed in this case) and your standard javascript callback mumbo jumbo
[13:22:35] <rspijker> https://github.com/LearnBoost/monk/blob/master/lib/collection.js
[13:22:36] <rspijker> line 141
[13:28:33] <nicolas_FR> rspijker: so I should be fine with : db.get('indicateurs').update('1',{$set:req.body},function(err,results){ no ? Fact is I still get the error. Searching in your docs
[13:34:00] <Nodex> nicolas_FR : no and it's VERY dangerous to do that
[13:34:29] <Nodex> you MUST unset the _id part of your $set statement
[13:35:53] <rspijker> nicolas_FR: yours looks more like the updateById
[13:36:03] <rspijker> that takes an id as the first argument
[13:36:16] <rspijker> you still need to remove it from the req.body though
[13:37:52] <nicolas_FR> rspijker, Nodex : so maybe do a {$set:{'field1':'value1','field2':'value2'} } ?
[13:38:10] <rspijker> that is what your set should look like, yes
[13:40:38] <nicolas_FR> damn it : db.get('indicateurs').updateById('1',{$set:{'indic1':'40'}},{},function(err,results){ still crashing :(
[13:43:44] <rspijker> this db.get …
[13:44:32] <rspijker> you say req.db, how is db in the request?
[13:45:56] <rspijker> you have no statement to connect to your db anywhere
[13:46:06] <rspijker> monk readme says something like:
[13:46:06] <rspijker> var db = require('monk')('localhost/mydb')
[13:47:16] <nicolas_FR> rspijker: I passed it from the app.js (app.use(function(req,res,next){req.db=db;next()});
[13:47:54] <rspijker> but in the context there, isn’t req something else?
[13:47:55] <nicolas_FR> rspijker: I'm discovering node.js and all, so mainly doing tutorials. My post and get on the database are actually working.
[13:48:03] <rspijker> it’s locally overwritten, isn't it? :/
[13:48:28] <rspijker> req is an argument to the put function there
[13:49:20] <rspijker> I’m no node.js guy unfortunately, so I’m just applying general programming knowledge here and hoping it makes some modicum of sense
[13:49:38] <nicolas_FR> rspijker: well, tutorial was saying to pass the db to all the routes files doing so. As get and post works, I was thinking my put would work the same
[13:50:28] <rspijker> maybe it does, like I said, I’m no node expert
[13:50:50] <rspijker> what do you get if you console.log(db) inside of the put before the update?
[13:51:23] <nicolas_FR> [Object Object]
[13:57:39] <rspijker> and if you toString it?
[14:02:41] <nicolas_FR> jeeeez it works :) finally : db.get('indicateurs').update({_id:1},{$set:{'indic1':'40'}},function(err,results){
[14:02:46] <nicolas_FR> Thanks for the help :)
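Putting the thread together, a sketch of the working monk call, assuming the update document built from req.body no longer carries an _id; the search document, the $set document and the callback are the pieces rspijker described:

    // hypothetical Express route; `db` is the monk instance passed in via req.db
    var indicateurs = db.get('indicateurs');

    indicateurs.update(
        { _id: 1 },                 // search document
        { $set: { indic1: '40' } }, // update only these fields (never include _id here)
        function (err, results) {
            if (err) { /* handle the error, e.g. respond with a 500 */ }
        }
    );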
[14:10:12] <saml> how can i find all docs whose foo field is not array?
[14:10:37] <saml> db.inventory.find( { $where : "Array.isArray(this.tags)" } )
[14:14:15] <cheeser> http://docs.mongodb.org/manual/reference/operator/query/type/
[14:15:56] <saml> docs have authoredBy and brand fields. how can I get number of docs per brand where authoredBy is not array?
[14:16:15] <saml> something happened and data is messed up. authoredBy is supposed to be array always
[14:17:45] <kerozene> hey
[14:17:50] <kerozene> join ##etc for talk about database
[14:17:50] <saml> find({authoredBy:{$type:{$ne: 4}}}).count() can't do this because $type must be number
[14:17:53] <kerozene> ##etc good
[14:17:56] <kerozene> ##etc is best
[14:17:56] <kerozene> ##etc good
[14:17:59] <kerozene> ##etc is king
[14:18:00] <saml> so must use $where
[14:18:02] <kerozene> join ##etc !!!!!!!!!!!
[14:18:04] <kerozene> ##etc is king
[14:18:06] <kerozene> ##etc good
[14:18:10] <saml> kerozene, it asks me password
[14:18:11] <Nodex> kerozene : die
[14:18:19] <saml> and my creditcard number
[14:18:31] <saml> he typed fast
[14:19:08] <Nodex> why can't you use $type?
[14:26:03] <rspijker> according to docs, you can’t use $type on arrays directly
[14:27:38] <rspijker> saml: if you are happy using $where, you can just negate that statement.. as in !Array.isArray(this.tags)
[14:27:59] <saml> i used mongodb
[14:28:04] <saml> i mean mapreduce
[14:43:44] <saml> db.articles.mapReduce(function() { if (!Array.isArray(this.authoredBy)) {emit(this.blogName,1);}}, function(key, vals) {return Array.sum(vals); }, 'tmp_authoredBy')
[14:43:47] <saml> my mapreduce is strong
[14:43:49] <saml> and webscale
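For reference, the two approaches side by side, following saml's collection and field names; the $where form is the negated Array.isArray check rspijker suggested, and the mapReduce writes its per-blogName counts to a temporary collection:

    // count documents whose authoredBy field is not an array
    db.articles.find({ $where: "!Array.isArray(this.authoredBy)" }).count()

    // group the bad documents per blogName, writing results to tmp_authoredBy
    db.articles.mapReduce(
        function () { if (!Array.isArray(this.authoredBy)) { emit(this.blogName, 1); } },
        function (key, vals) { return Array.sum(vals); },
        { out: 'tmp_authoredBy' }
    )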
[14:44:11] <saml> this doesn't tell me why data is bad though
[14:44:35] <Derick> saml: sounds like something that's easy to do with the aggregation framework
[14:44:46] <saml> Derick, could you show me?
[14:44:53] <saml> i couldn't do the $where or $match part
[14:45:10] <Derick> hmm
[14:45:20] <saml> to see if authoredBy is not array
[14:45:30] <Derick> one sec
[14:46:34] <Derick> you're right - I thought that there was an isType operator
[14:47:06] <saml> ah thanks
[14:47:10] <saml> mapreduce ftw
[14:47:13] <rspijker> there is a $type, but it can’t do negation, I think..
[14:47:17] <saml> except that it writes to collection
[14:47:20] <Derick> heh, just add it yourself
[14:47:29] <Derick> adding this would be trivial I think
[14:47:43] <saml> adding an aggregation operator?
[14:47:50] <saml> i didn't know i could just add custom operator
[14:48:21] <Derick> you need to know c++ and recompile :-)
[14:53:41] <rspijker> I like how people’s definitions of trivial can be so far apart :)
[14:54:33] <Derick> true
[14:55:07] <cheeser> trivial == my working solution. complex == your solution i have to learn.
[14:56:10] <Faisal_> Hello guys, I want to use Django with mongo db. can you guys suggest a future proof solution to do so ? I am using http://django-mongodb-engine.readthedocs.org/en/latest/index.html
[14:56:26] <Faisal_> how ever its dependent django-non-rel. so some of my collegues suggested not to go for this as django-non-rel is deprecated
[15:20:37] <B166IR> hi
[15:27:30] <Nodex> 164.3
[15:34:27] <jonyfive> hi all. i'm running mongodb 2.6.1 and i am trying to go through the "Little MongoDB Book" examples, but when i run this command: db.unicorns.find({_id: ObjectId("TheObjectId")})
[15:34:37] <jonyfive> i get: 2014-07-22T23:21:09.965-0600 Error: invalid object id: length
[15:34:49] <jonyfive> any ideas what might be wrong?
[15:38:37] <kali> jonyfive: objectids are 128 bit integers. ObjectId() constructs such a 128 bit integer from a 32 character hexadecimal string
[15:39:08] <jonyfive> ahh so i was supposed to replace "TheObjectId" with that 32 char hex str?
[15:40:15] <jonyfive> kali: ahh yes, that does work now, thanks!
[15:40:25] <kali> jonyfive: or just use string _id. you can use anything as your _id, it does not have to be an ObjectId
[15:50:47] <rspijker> kali: isn't it a 24 character hex string? :/
[15:53:43] <B166IR> jup and that makes them ah 36 bit id
[15:53:59] <B166IR> sry 96 bit
[15:54:03] <garbados> hello! i’m having issues getting changes to show up in the oplog. anyone have experience working with the oplog?
[15:54:18] <rspijker> B166IR: in binary, yeh
[15:54:29] <kali> ho, right. 96 bits, 24 positions.
[15:54:41] <kali> it did the trick, though :)
[15:55:03] <B166IR> yea i was just checking -> http://docs.mongodb.org/manual/reference/object-id/
[15:55:22] <B166IR> 12 byte = 12*8 bits = 96
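To make the numbers concrete: an ObjectId is 12 bytes (96 bits), written as 24 hexadecimal characters, so the string handed to ObjectId() must be exactly 24 hex characters or the shell raises the "invalid object id: length" error jonyfive hit. A sketch with a made-up id:

    // 24 hex characters = 12 bytes = 96 bits
    db.unicorns.find({ _id: ObjectId("53cfa1e4a7b1c2d3e4f50102") })

    // _id does not have to be an ObjectId; a plain string works too
    db.unicorns.find({ _id: "aurora" })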
[16:01:12] <rspijker> garbados: sure
[16:01:20] <garbados> rspijker: yay!
[16:01:24] <rspijker> can you be a little more specific?
[16:01:49] <garbados> yes. i’m using https://github.com/TorchlightSoftware/mongo-watch to watch the oplog for changes
[16:02:05] <garbados> it logs connection and dropCollection events but no updates, inserts, etc
[16:02:34] <garbados> rspijker: ^
[16:03:19] <rpetre> hello, i'm working on an archival script that's supposed to do a mongoexport for a date interval then remove() the same selection. anyone done this? is the return code of mongoexport a reliable indication that it got all data so i can delete it?
[16:03:54] <rspijker> garbados: and if you just check the oplog by hand? Do you see the updates?
[16:04:04] <rspijker> I am presuming you do have a replica set setup btw?
[16:04:27] <garbados> rspijker: i do, or i think i do. started mongod with `sudo mongod --replSet someArbitraryName`
[16:04:39] <garbados> rspijker: i’m not sure how to check the oplog by hand
[16:05:41] <rspijker> log in to mongo (just type mongo from command line) and do: use local <return> db.oplog.rs.find() <return>
[16:06:03] <rspijker> on the same host you’re running mongod from, obviously
[16:06:07] <rspijker> garbados: ^^
[16:06:38] <garbados> well jeez there are the changes
[16:06:57] <garbados> so why isn’t mongowatch picking them up >:(
[16:07:13] <garbados> emoji not directed at you, just tired and confused
[16:08:44] <rspijker> I don’t know about mongowatch. But apparently you have to watch a collection explicitly
[16:08:46] <rspijker> are you doing that?
[16:10:26] <garbados> oh, hmm no i’m not
[16:12:48] <s2013> how would i import a 20gig json file into mongodb? i believe it was exported out of mongodb
[16:13:40] <kali> s2013: mongoimport
[16:13:49] <s2013> i tried it but it says json too large
[16:13:52] <garbados> rspijker: yep, that makes the difference
[16:14:00] <rspijker> cool :)
[16:14:07] <garbados> rspijker: thanks so much for your help :D
[16:14:48] <rspijker> glad I could help
[16:16:37] <kali> s2013: if your file does assume the expected format (one document per file) you can use "split" to split it into smaller pieces and try one chunk at a time
[16:16:49] <kali> s2013: sorry, one document per line, not one document per file
[16:22:47] <s2013> kali, Wed Jul 23 16:16:07.796 exception:JSONArray file too large
[16:22:48] <s2013> thats what i get
[16:23:06] <s2013> i typed mongoimport -d dbname -c collectionname --file filename.json --jsonArray
[16:23:26] <s2013> this wasnt even the 20gig json. this was like 50mb or something
[16:25:17] <rspijker> well. maximum document size is 16MB, so it might just be that...
[16:26:32] <rspijker> do you have documents on a single line?
[16:26:41] <rspijker> as in, is each document on exactly 1 line?
[16:26:58] <s2013> no
[16:28:18] <s2013> this was exported from a hosted platform (parse)
[16:29:03] <rspijker> I’m not sure about the default format mongoimport accepts without jsonArray...
[16:29:31] <s2013> so how would i import it? im so confused
[16:29:38] <s2013> i did use the jsonarray flag
[16:30:10] <rspijker> is there a jsonArray in the file you are importing?
[16:30:19] <rspijker> as in, does it start with [ and end with ] ?
[16:30:54] <s2013> let me check
[16:32:26] <s2013> hmm i think its cause there is a gigantic { "results": } wrapped around the array
[16:32:32] <s2013> i wonder if i delete that if that wil help
[16:32:56] <rspijker> that’s probably worth a shot...
[16:33:08] <rspijker> it might just be parsing it as one huge document instead of an array of documents
[16:33:14] <s2013> yeah probably
[16:33:40] <s2013> thing is that all of this is on a server.. i guess i gotta use nano or something to edit it?
[16:33:49] <rspijker> nano is awesome man
[16:35:59] <s2013> how would i edit a 20 gig file tho :\
[16:36:22] <rspijker> not with nano :P
[16:42:00] <s2013> hmm still the same error
[16:42:02] <s2013> its driving me nuts
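One way through the import problem above, sketched with hypothetical file names and assuming jq is available: strip the {"results": ...} wrapper so the file becomes one document per line, which mongoimport accepts without --jsonArray (and without the jsonArray size limit), then split it into manageable chunks:

    # turn {"results": [ {...}, {...} ]} into line-delimited JSON
    jq -c '.results[]' export.json > docs.ndjson

    # line-delimited JSON needs no --jsonArray and can be split freely
    split -l 500000 docs.ndjson chunk_
    for f in chunk_*; do mongoimport -d dbname -c collectionname --file "$f"; done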
[17:17:04] <talbott> hello mongers
[17:17:18] <talbott> when creating a new mongo connection in javascript
[17:17:23] <talbott> archiveConnection = new Mongo("localhost:27017");
[17:17:31] <talbott> do you know if there's a way to pass the username/password into this
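As far as I know the Mongo() shell constructor itself does not take credentials; a sketch of a common workaround with placeholder database name and credentials, authenticating on the DB handle after connecting:

    var archiveConnection = new Mongo("localhost:27017");
    var archiveDb = archiveConnection.getDB("archive");   // hypothetical database name
    archiveDb.auth("archiveUser", "secret");              // authenticate on the DB handle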
[17:18:29] <disappearedng> Is it better to have 2 Entities: WaterPoints, SoilPoints, rather than 1 (Point, type:SOIL/WATER) if they all have different attributes?
[17:18:33] <zapparappa> Mongo n00b and want to make sure I understand querying. In a database of movies, this would return Adam Sandler films from 2001 and up, right?
[17:18:43] <zapparappa> db.movies.find( { name: "Adam Sandler", year: { $gt: 2000 } } )
[17:18:44] <zapparappa> Thanks.
[18:00:42] <shinka> Using NodeJs and Mongoose, I'm struggling to exclude fields with mongoose and an "or", since I'm not getting any error I have no idea what I'm doing wrong. My query is return this.find().or(...).exec(cb) (of course with something more meaningful in place of '...'), and it works. To exclude _id and __v from all the results in the array I tried: return this.find().or(..., { _id: 0, __v: 0 }).exec(cb);, but it continues to return an array of objects with _id and __v.
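One thing worth checking, sketched with placeholder conditions: in Mongoose, field exclusion normally goes through .select() rather than an extra argument to .or(), which only takes the array of OR conditions:

    // exclude _id and __v from every matched document
    return this.find()
        .or([ /* ...OR conditions here... */ ])
        .select('-_id -__v')
        .exec(cb);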
[18:22:20] <Aster> How would I get an array from a document and sort its results?
[18:22:46] <Aster> I basically want the array in the document to have an auto-incremented ID
[18:24:12] <cheeser> http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
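The tutorial cheeser linked uses a separate counters collection plus findAndModify to hand out sequence numbers; a condensed sketch of that pattern with hypothetical names:

    // one counter document per sequence
    db.counters.insert({ _id: "itemid", seq: 0 })

    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: 1 } },
            new: true                      // return the incremented document
        });
        return ret.seq;
    }

    db.items.insert({ _id: getNextSequence("itemid"), label: "first element" })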
[19:24:36] <Gargoyle> Can any of the PHP driver peeps elaborate on why Mongo is not good for session storage?
[19:43:07] <sflint> How do I compute the hash algorithm that is used in Mongo?
[19:43:11] <sflint> need to recreate it
[19:43:13] <sflint> anyone?
[20:13:15] <nycdjangodev> hey guys. I am very new to mongodb. How do I change my authorization level to allow me to remove entries? Current error: pymongo.errors.OperationFailure: unauthorized for db:<dbname_redacted> level: 2
[20:43:07] <stangeland> Hi, i am looking in the mongodb documentation, and i see this line: "Considerations: For production deployments, always run MongoDB on 64-bit systems. You cannot install this package concurrently with the mongodb, mongodb-server, or mongodb-clients packages provided by Ubuntu." Does that mean that mongodb does not work on ubuntu systems?
[20:43:31] <Gargoyle> stangeland: Nope.
[20:43:51] <Gargoyle> stangeland: MongoDB runs fine on Ubuntu. Where did you read that?
[20:44:01] <stangeland> Gargoyle, here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[20:45:20] <Gargoyle> stangeland: What that means is that you cannot install the mongo packages that come with the ubuntu distro and the ones direct from mongodb.org at the same time.
[20:46:06] <stangeland> Gargoyle, ahhh ok, so i should use the ones from mongodb.org
[20:46:14] <Gargoyle> stangeland: You are best off following those instructions and using the mongodb.org packages anyway. The ubuntu distro ones are always out of date.
[20:46:17] <Gargoyle> Yep! :-)
[20:46:43] <stangeland> Gargoyle, i read that mongodb stores files in the database without any overhead, is that correct?
[20:47:23] <Gargoyle> stangeland: There's always overhead! MongoDB uses memory mapped files
[20:48:13] <Gargoyle> Depending on your requirements, you can choose speed over data integrity.
[20:48:22] <Gargoyle> or vice versa
[20:48:25] <stangeland> Gargoyle, ahh ok. I have a big-data kinda system with terabyte size hdf files full of data. Do you think it would be a good idea to couple that with mongodb?
[20:49:21] <Gargoyle> MongoDB and hadoop don't cover the same use cases.
[20:49:51] <stangeland> ok, i am not using hadoop.
[20:50:04] <stangeland> im using hdf files which have nothing to do with hadoop
[20:50:25] <Gargoyle> To get the best performance from Mongo, you want your "working set" to fit in memory. So I guess that depends on what portion of the 1TB is "active".
[20:50:36] <stangeland> hadoops filesystem is called HDFS
[20:50:42] <Gargoyle> sorry, my mistake.
[20:51:16] <stangeland> right ok, so i would save paths to the hdf files with all the raw data in the mongodb ?
[20:52:16] <Gargoyle> Yeah, you could do. Just keep the meta data in Mongo for searching?
[20:52:42] <stangeland> yeah ok, nice
[20:54:39] <stangeland> does mongodb play well in multiprocess distributed environments where several processes work on the same data?
[20:55:08] <Gargoyle> Probably not.
[20:56:21] <Gargoyle> stangeland: http://docs.mongodb.org/manual/faq/concurrency/
[23:37:37] <huleo> hi
[23:38:06] <huleo> inside document, array of nested documents...how can I update only specific nested document in this array?
[23:47:44] <joannac> $elemMatch
[23:48:40] <joannac> huleo: ^^
[23:49:29] <huleo> one sec, let me look into it
[23:52:35] <huleo> yeah, it looks like it...let me try
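A sketch of what joannac is pointing at, with made-up field names: match the embedded document in the query ($elemMatch when more than one of its fields has to match) and use the positional $ operator so only that array element is updated:

    db.coll.update(
        { items: { $elemMatch: { sku: "abc", qty: { $lt: 5 } } } },  // find the matching array element
        { $set: { "items.$.qty": 10 } }                              // update only that element
    )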