
#mongodb logs for Wednesday the 16th of September, 2015

[01:49:44] <beekin> I find it odd that $pop doesn't actually return a value...I assume this is intentional?
[01:54:32] <Boomtime> beekin: yes, $pop is used in an update operation so you should know what you're changing already - you can use the match criteria to assert that pop will do what you expect
[01:54:53] <beekin> Boomtime: That's brilliant logic.
[01:55:07] <beekin> Thanks :)
[01:56:00] <beekin> Boomtime: Granted, I'm not sure what you're referring to with $match. That's aggregation, yeah?
[01:56:33] <Boomtime> no, the match criteria provided in the update - the predicates needed to match a document for $pop to apply to
[01:56:59] <beekin> Oh, gotcha. I always referred to that as the query parameter.
[01:57:03] <Boomtime> you must have already provided enough information to match one or more documents - if you need to know what is about to change, then include the criteria in the match
[01:58:07] <Boomtime> note also i think that $pop is compatible with multi so it is totally possible to 'pop' different values from different matching documents - how on earth you can utilize that i don't know though
[01:59:32] <beekin> I suppose that'd work with a queue of sorts? If you wanted to remove the last element of every document with whatever criteria.
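
A minimal sketch of what Boomtime describes, assuming a hypothetical tasks collection with a queue array field: the match criteria pin down which documents $pop applies to, so you already know what is being removed.

    // hypothetical "tasks" collection with a "queue" array field
    db.tasks.update(
        { status: "done", "queue.0": { $exists: true } },  // match criteria: queue has at least one element
        { $pop: { queue: 1 } },                            // 1 removes the last element, -1 the first
        { multi: true }                                    // pops from every matching document
    )
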
[02:06:10] <darius93> what is the size that the database can store?
[02:06:35] <cheeser> petabytes
[02:06:59] <beekin> All the novels in the world.
[02:07:28] <darius93> i only ask because i noticed there are some limits on the size of collections (any large data should use gridfs) but nothing about a limit on the database itself
[02:08:08] <cheeser> no size limits on collections
[02:08:36] <darius93> okay cool
[02:08:55] <beekin> You're probably referring to the 16mb restriction on documents.
[02:09:03] <asteele> does upgrading from 2.6 -> 3 show immediate noticeable speed improvements? And is it tough?
[02:09:06] <cheeser> a single document can only be 16MB but that's enormous in practice
[02:09:28] <cheeser> the biggest boost you'll get is moving from mmapv1 to wiredtiger
[02:09:44] <asteele> I am using node/mongoose and have certain endpoints that slowly take longer and longer to return on production. with a few thousand sessions it takes about 12 hours to go from a 100ms average to a 1000ms average, and new relic shows most of the time coming from inside one of my mongoose inserts, which is called an average of 7 times per call
[02:09:48] <darius93> i dont expect the documents to be that much
[02:10:31] <morenoh149> asteele: that may be a mongoose problem
[02:11:11] <cheeser> i expect it's a problem in the app
[02:11:23] <darius93> asteele, you should take it to #node.js channel
[02:11:33] <asteele> yeah i realize there are so many factors it's hard to tell, and i have heard lots of warnings about mongoose being slow, but since the response is really fast on server reboot and then slowly climbs, it leads me to think it's some kind of leak
[02:11:48] <asteele> i'm almost positive it's not mongo that's the problem there
[02:12:13] <asteele> ok ill ask in there
[02:12:14] <darius93> asteele, if you want, try waterline. When i used node.js + mongo, i found waterline worked very well.
[02:12:40] <darius93> i cant promise much since i dont use node anymore but it worked well for me
[02:28:01] <StephenLynx> just use the native driver asteele
[02:30:40] <asteele> StephenLynx but i already have so much work in mongoose ;p but it is not out of the question at all. If mongoose is the problem I will remove it, but there are still many other factors. Mongoose is much slower, but my problem is like a leak that keeps growing and makes things way worse i think
[02:30:56] <StephenLynx> mongoose
[02:31:05] <StephenLynx> thats mongoose
[02:50:25] <asteele> lol possibly :o working on getting some heapdumps set up now so i can have some more information. people in node are saying it may be related to socket.io, so i just have too much going on to really guess from here
[03:01:42] <StephenLynx> yeah, socket.io is crap too
[03:01:56] <StephenLynx> any kind of framework that abstracts nothing is crap, IMO
[03:04:08] <preaction> what's the point of a framework that doesn't abstract anything? can that even be called a framework?
[03:04:29] <StephenLynx> thats what mongoose, express, socket.io and others are.
[03:04:40] <StephenLynx> they abstract something that is already abstracted.
[03:04:44] <StephenLynx> its completely redundant.
[04:11:20] <acidjazz> thocket.io
[07:44:16] <rakkaus_> Hi guys! I need help with mongo 3.0.5 and REST. I'm trying to use curl -s --digest admin:pass@data.dev.com:28017/serverStatus?text=1
[07:44:38] <rakkaus_> but it tells me "not allowed"
[07:44:51] <rakkaus_> it was ok with 2.6.x mongodb
[07:45:24] <joannac> rakkaus_: http://docs.mongodb.org/ecosystem/tools/http-interfaces/#http-status-interface-security
[07:45:37] <rakkaus_> what changed? and how do I fix it? my config has rest = true and httpinterface = true
[07:46:51] <rakkaus_> and I've configured security, it requires credentials
[07:46:56] <rakkaus_> but it still doesn't work
[07:50:37] <rakkaus_> ?
[07:54:26] <rakkaus_> Simple REST API The mongod process includes a simple REST API as a convenience. With no support for insert, update, or remove operations, it is generally used for monitoring, alert scripts, and administrative tasks.
[07:54:38] <rakkaus_> how to use that API
[07:54:42] <rakkaus_> in fact I need only monitoring
[07:55:07] <rakkaus_> is that simple API still works with 3.0.x ?
[08:00:30] <rakkaus_> it also says this in the log
[08:00:31] <rakkaus_> 2015-09-16T08:53:10.307+0100 I NETWORK [websvr] admin web console waiting for connections on port 28017
[08:00:54] <rakkaus_> so the admin console is up but I still can't connect on that port
[08:07:17] <rakkaus_> ok so, is there any secure way to get info about uptime of mongodb server remotely?
[08:21:31] <sorribas> rakkaus_: would it help to use the ping command? http://docs.mongodb.org/manual/reference/command/ping/
[09:05:49] <rakkaus_> sorribas thx! ping should work, in fact I just need to check whether it's alive before launching my app
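
A minimal liveness check along those lines from the mongo shell; the host and port here are assumptions:

    // ping runs against the admin database and returns { ok: 1 }
    // when the server is up and accepting commands
    var conn = new Mongo("data.dev.com:27017");  // assumed host:port
    var res = conn.getDB("admin").runCommand({ ping: 1 });
    print(res.ok === 1 ? "alive" : "down");
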
[10:10:27] <napnap> Hi all, I'm new to MongoDB and I'm trying to reach my mongodb server with a client installed on another computer. The server runs on Debian; I set the right bind IP in the config file but apparently that's not enough.
[10:11:52] <napnap> On the server, the command "mongo" connects successfully. From the other computer, "mongo --host serverip" fails to connect. On the server side I can see the server listening on the right interface, with the default port.
[10:12:08] <napnap> No info in log...
[10:20:45] <napnap> (it seems to work with only one ip in bind_ip and not multiples ones separated by comma)...
[10:24:05] <mitereiter> hello
[10:24:11] <mitereiter> shouldn't this be a covered query?
[10:24:11] <mitereiter> db.users.find({"age" : {"$gte" : 21}},{"_id":0, "username":1, "age":1}).sort({"username":1})
[10:24:11] <mitereiter> if i got a username_1_age_1 index?
[10:50:26] <joannac> napnap: what's the output of db.serverCmdLineOpts() ?
[10:56:07] <mitereiter> "argv" : [
[10:56:07] <mitereiter> "mongod.exe",
[10:56:07] <mitereiter> "--smallfiles",
[10:56:07] <mitereiter> "--noprealloc"
[10:56:07] <mitereiter> ],
[10:56:08] <mitereiter> "parsed" : {
[10:56:08] <mitereiter> "storage" : {
[10:56:09] <mitereiter> "mmapv1" : {
[10:56:09] <mitereiter> "preallocDataFiles" : false,
[10:56:10] <mitereiter> "smallFiles" : true
[10:56:10] <mitereiter> }
[10:56:11] <mitereiter> }
[10:56:18] <Zelest> heh
[10:56:41] <mitereiter> its an answer to @joannac
[10:56:57] <joannac> mitereiter: ? I didn't ask you anything
[10:57:33] <joannac> my question was to napnap
[10:57:49] <mitereiter> sorry, i thought it was for me
[11:02:21] <mitereiter> shouldn't this be a covered query?
[11:02:21] <mitereiter> db.users.find({"age" : {"$gte" : 21}},{"_id":0, "username":1, "age":1}).sort({"username":1})
[11:02:22] <mitereiter> if i got a username_1_age_1 index?
[11:23:56] <joannac> mitereiter: can you pastebin the explain(true) output?
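
For context: a covered query needs an index containing every field the query touches, and the projection must exclude _id. A sketch of the setup and the explain check being asked for, using the names from the question:

    // the index from the question: username first, then age
    db.users.createIndex({ username: 1, age: 1 })  // ensureIndex() on older shells

    // a covered plan shows totalDocsExamined: 0 (3.0) or indexOnly: true (2.6)
    db.users.find(
        { age: { $gte: 21 } },
        { _id: 0, username: 1, age: 1 }
    ).sort({ username: 1 }).explain(true)
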
[11:27:26] <waheedi> hey there, whats the best way to update document IDs
[11:27:39] <joannac> ...what?
[11:27:47] <waheedi> documents*
[11:27:53] <waheedi> alright
[11:27:55] <joannac> with .update()?
[11:28:11] <waheedi> ok maybe the question is not well written
[11:28:20] <Zelest> dump to bson.. cat, sed, awk.. import!
[11:28:24] <Zelest> because .update() is too easy :P
[11:31:06] <waheedi> alright gents and ladies, I have 120 million docs in one collection, and each of these docs references a doc in another collection (of 50 documents)
[11:31:42] <waheedi> so these 50 documents IDs are inside each of the 120 million docs
[11:33:12] <waheedi> the 50 docs are becoming 45, and the 5 removed docs already have their IDs inside the 120M
[11:33:34] <waheedi> i'm going to replace each of these removed IDs with another ID
[11:34:18] <waheedi> I'm on 2.4 and it's a bit hard for us to upgrade to 2.6 for now, otherwise I would have used Bulk updates, gurus
[11:35:11] <waheedi> if I’m still not making sense let me know :)
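
If I follow, a 2.4-compatible sketch that needs no Bulk API, assuming the reference field is called refId; the field name and the ID mapping are hypothetical:

    // hypothetical mapping from the 5 removed IDs to their replacements
    var mapping = [
        { oldId: ObjectId("55f000000000000000000001"), newId: ObjectId("55f000000000000000000009") }
        // ... 4 more pairs
    ];
    mapping.forEach(function (m) {
        // multi: true hits every one of the 120M docs referencing the removed ID
        db.bigCollection.update(
            { refId: m.oldId },
            { $set: { refId: m.newId } },
            { multi: true }
        );
    });
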
[11:38:36] <waheedi> joannac?
[11:44:40] <mitereiter> sorry for the delay joannac, http://pastebin.com/HBvMtaN8
[11:48:05] <synthmeat> is $push guaranteed to leave the array in order? so, a literal "push" in terms of array order
[11:54:34] <napnap> joannac, the output (truncated) is: --config /etc/mongodb.conf. This is the file that I edited.
[11:56:24] <napnap> joannac, if I set bind_ip to 127.0.0.1,192.168.1.1 the server listens on 255.255.255.255 (netstat output). if I set bind_ip to 192.168.1.1 only, it works.
[11:59:03] <joannac> waheedi: I still don't understand the question
[12:00:01] <joannac> mitereiter: weird. looks like a bug.
[12:00:40] <joannac> napnap: when I ask you for the output, I want the full output. Pastebin it please
[12:06:58] <waheedi> sorry joannac, i'm not a good explainer. I will go back to my cave; there I will find my answer :)
[12:08:25] <napnap> joannac, http://pastebin.fr/41168
[12:09:41] <joannac> napnap: db.version()?
[12:09:57] <joannac> where's the actual parsed options? :(
[12:10:08] <napnap> joannac, 1.4.4
[12:10:12] <joannac> ...
[12:10:19] <joannac> are you serious?
[12:10:51] <napnap> joannac, hum :-|
[12:11:21] <napnap> joannac, ok ok. I understand :-). I'm a bit old school.
[12:12:03] <joannac> 1.4.4 is like 8 major versions behind
[12:12:17] <joannac> I'm out. I have no idea how that version behaves
[12:12:42] <napnap> joannac, no pbl. I will deal with that until upgrade to a new server.
[12:13:31] <joannac> mitereiter: I'm filing a bug report
[12:18:00] <joannac> deathanchor: you are :)
[12:20:55] <cheeser> super old, in fact. :)
[12:22:19] <deathanchor> yeah I know, last I went cutting edge I lost a limb.
[12:35:34] <gcfhvjbkn> http://pastie.org/private/sas6lshfdfeegphtt4useg
[12:35:42] <gcfhvjbkn> getting messages like this in mongos log
[12:35:57] <gcfhvjbkn> i'm not 100% sure, but it seems like the data is just dropped
[12:36:10] <gcfhvjbkn> any chance i can get mongos/mongod to fail hard in these cases?
[12:36:25] <gcfhvjbkn> like, print out an exception and just die
[12:37:12] <gcfhvjbkn> ie turn this byzantine failure into a regular failure
[12:40:53] <joannac> gcfhvjbkn: looks like a tagging problem?
[12:44:07] <mitereiter> thanks joannac
[12:46:44] <gcfhvjbkn> joannac: i don't know about that, probably; it worked until some point, then this happened, so i am a little disappointed that mongo acts like this
[12:47:00] <gcfhvjbkn> somehow 3 out of 5 of my shards disappeared from sh.status()
[12:47:57] <deathanchor> it all looks like magic or curses until you understand how it works.
[12:48:37] <joannac> gcfhvjbkn: I find this all very suspicious. What happened when the 3 shards disappeared?
[12:48:52] <joannac> maintenance? upgrade? db admin poking around?
[12:48:59] <gcfhvjbkn> nothing at all
[12:49:51] <gcfhvjbkn> http://pastie.org/private/1hmladxpczbs1sikje2pg
[12:49:56] <gcfhvjbkn> my sh.status() output
[12:50:12] <joannac> what were the other shards?
[12:51:12] <gcfhvjbkn> what do you mean?
[12:51:29] <joannac> you said 3 of them disappeared? I want more details on the 3 that disappeared
[12:51:47] <joannac> what were their names? were they tagged? are they still up and running?
[12:52:14] <gcfhvjbkn> ah, ok
[12:52:15] <gcfhvjbkn> rsTwente, rsEmden, rsOstende
[12:52:37] <gcfhvjbkn> they were tagged according to their names s/rs/tag
[12:52:41] <gcfhvjbkn> just like the other two
[12:52:46] <gcfhvjbkn> they are still running
[12:52:55] <gcfhvjbkn> i'll try to look for something suspicious in the logs
[12:54:42] <joannac> gcfhvjbkn: those shards aren't tagged...
[12:56:31] <gcfhvjbkn> let me check; how do you know that?
[12:56:41] <joannac> there's no tags in the shards section?
[13:00:22] <gcfhvjbkn> no tags, and some of the shards are missing as well, so… but yeah, that's plausible. the way i see it now, it never worked in the first place: even though the data was written to the local mongos on all 5 servers, there was no tagging info, so all the chunks went to the same arbitrarily chosen shard
[13:00:26] <gcfhvjbkn> is this correct?
[13:00:45] <gcfhvjbkn> now i wonder what happened, since it refuses to write data anymore
[13:01:26] <gcfhvjbkn> this, and why just 2 shards out of 5
[13:01:46] <joannac> gcfhvjbkn: correct, all the chunks are on a single shard
[13:03:26] <gcfhvjbkn> they were supposed to be tagged anyway, i've got no idea what happened; i really should rerun the whole thing and see if it breaks in the same fashion later
[13:03:31] <gcfhvjbkn> thanks for the guidance i guess
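
For the record, tag-aware sharding needs both shard tags and tag ranges before the balancer will move chunks; a sketch with the shard names from the conversation (the namespace, shard key, and ranges are assumptions):

    // tag the shards, following the s/rs/tag/ naming scheme mentioned above
    sh.addShardTag("rsTwente", "Twente")
    sh.addShardTag("rsEmden", "Emden")
    sh.addShardTag("rsOstende", "Ostende")

    // tags alone move nothing; each needs a matching range on the shard key
    sh.addTagRange("mydb.mycoll", { region: "Twente" }, { region: "TwenteZ" }, "Twente")
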
[14:43:59] <beekin> Just noticed that collection.aggregate will still work even if the stages aren't in an array?
[14:45:48] <cheeser> i don't think so ...
[14:45:59] <deathanchor> no way
[14:46:34] <beekin> It's quite literally working
[14:46:44] <cheeser> oh, yeah? i thought it'd complain about that.
[14:46:55] <cheeser> maybe that was eased in 3.0
[14:47:01] <beekin> Me too :D
[14:47:46] <beekin> Hmm, I'll check it out
[14:47:57] <cheeser> yeah. it pushes the var args in to an array
[14:48:07] <cheeser> in the shell do: db.collection.aggregate
[14:48:12] <cheeser> (no parens)
[14:48:23] <cheeser> you'll see the js that bundles all the params into an array and passes them along.
[14:48:23] <beekin> Nice
[14:49:48] <beekin> No changelog details :(
[14:52:12] <beekin> Ah ha! It'll break if you add any options.
[14:55:18] <saml> db.coll.aggregate(stg1, stg2, stg3) works but i don't think you can pass options that way
[14:55:29] <saml> db.coll.aggregate([stg1, stg2, stg3], options)
[14:55:51] <saml> oh you found out already
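
Summing up the two call forms in the shell (collection name and stages are placeholders):

    // varargs form: the shell helper wraps the stages in an array for you,
    // but leaves nowhere to pass an options document
    db.coll.aggregate({ $match: { x: 1 } }, { $sort: { y: -1 } })

    // array form: the second argument carries the options
    db.coll.aggregate(
        [ { $match: { x: 1 } }, { $sort: { y: -1 } } ],
        { allowDiskUse: true }
    )
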
[14:59:11] <saml> let's say i have different kinds of events, so an event is {type: EVENT_TYPE, ts: TIMESTAMP}. i need to make queries like: give me the 10 most frequent events of type1 between timestamps t1 and t2
[14:59:33] <saml> problem is there are 25mil documents between t1 and t2, and aggregate is real slow
[15:00:21] <saml> what kind of database is capable of making fast aggregations of different parameters?
[15:01:20] <cheeser> what's your pipeline look like? are you indexing properly? what does explain on your pipeline say?
[15:02:32] <saml> db.events.aggregate([{$match:{ts: {$gt: ...}}}, {$group:{_id:'$type', c:{$sum:1}}}, {$sort:{c:-1}}])
[15:02:57] <cheeser> and you ran explain on that?
[15:03:27] <saml> no i'm imagining. data is in mysql now and group by is taking long :P
[15:03:46] <cheeser> so ... you haven't run this on mongo yet?
[15:04:19] <saml> no
[15:05:19] <cheeser> oh, well, i'm not terribly concerned about mysql performance. everyone "knows" mysql sucks. ;)
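
If that pipeline does land on mongo, a sketch of a supporting index and the explain check (names from the example above; the compound order is an assumption worth testing):

    // lets the $match stage do an index range scan on ts
    db.events.createIndex({ ts: 1, type: 1 })

    db.events.aggregate(
        [
            { $match: { ts: { $gt: ISODate("2015-09-01T00:00:00Z") } } },
            { $group: { _id: "$type", c: { $sum: 1 } } },
            { $sort: { c: -1 } },
            { $limit: 10 }
        ],
        { explain: true }  // 2.6+: shows whether the $match uses the index
    )
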
[15:25:01] <saml> https://www.mongodb.com/ looks like download link is down
[15:25:17] <saml> https://www.mongodb.com/lp/download/mongodb-enterprise it goes to here without download links
[15:25:37] <cheeser> wfm
[15:25:40] <saml> oh mongodb.org
[15:25:54] <cheeser> yeah
[16:44:57] <bobbywilson0> Anyone know why I might be seeing this error on a collection that doesn't have any documents? "Failed to create index {:name=>"_id_1", :ns=>"z_development.messages", :key=>{"_id"=>1}, :unique=>false} with the following error: 67: exception: _id index cannot be non-unique (Mongo::OperationFailure)"
[16:46:30] <kali> bobbywilson0: you automatically get an index on _id, and it is a "unique" index. mongodb just prevents you from creating an index that would conflict with the automatic one
[16:48:06] <bobbywilson0> kali: ah thank you, I wonder why mongoid is trying to create a default one
[16:48:38] <kali> bobbywilson0: i don't know
[16:48:49] <bobbywilson0> kali: no worries, thanks
[16:48:53] <kali> i haven't been using mongoid for a while; i don't remember that behaviour
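
The automatic index kali mentions is visible on any collection, which also shows why the creation in the error message fails (collection name taken from the error):

    // every collection automatically gets a unique index on _id
    db.messages.getIndexes()
    // [ { v: 1, key: { _id: 1 }, name: "_id_", ns: "z_development.messages" }, ... ]

    // so asking for a non-unique _id index is rejected, as in the error above
    db.messages.createIndex({ _id: 1 }, { unique: false })
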
[18:01:39] <dddh> From Development to Production with Docker and MongoDB
[18:01:49] <dddh> ;)
[18:01:52] <dddh> *oops
[18:23:40] <StephenLynx> >docker
[18:26:13] <yopp> shmoker
[18:50:44] <deathanchor> how do I query for null value?
[18:51:04] <deathanchor> find({ this : null }) not working for me.
[18:51:43] <deathanchor> nevermind
[18:51:45] <deathanchor> i'm dumb
[18:57:53] <deathanchor> searching on null when the field name is wrong matches everything, like find({})
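
Spelling that out: { field: null } matches documents where the field holds null and documents where the field is missing entirely, so a mistyped field name matches everything. A sketch of how to split the two cases:

    // matches docs where "this" is null OR "this" is absent
    db.coll.find({ this: null })

    // only docs where the field exists and holds a null value (BSON type 10)
    db.coll.find({ this: { $type: 10 } })

    // only docs where the field is missing entirely
    db.coll.find({ this: { $exists: false } })
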
[19:16:05] <morenoh149> how does an index affect insertion speed?
[19:17:16] <cheeser> if there are /n/ indexes, each insert requires /n/ updates to the indexes
[19:19:31] <deathanchor> morenoh149: also if there are any uniques, it has to check uniqueness
[19:30:39] <morenoh149> is there anything obviously wrong (or slow) about this query
[19:30:41] <morenoh149> db.collection.find({client_id: 1234, last_access_time: {$gte: ISODate("2015-01-01T05:00:00Z")}}).sort({_id: -1})
[19:32:34] <deathanchor> morenoh149: do a .explain() on that
[19:32:40] <deathanchor> post into a gist
[19:32:59] <kali> morenoh149: it's a tricky one to perform, because even if you have the right index on client_id and last_access_time, the sort bit will have to be done by actually sorting the results
[19:34:06] <kali> so depending on the actual cardinalities, it may be more efficient with an index [client_id, last_access_time] or [client_id,_id]
[19:34:16] <kali> and you may need to hint the optimizer
[19:34:37] <kali> a getIndexes on the collection will help (along with the explain())
[19:35:49] <morenoh149> kali: where can I read about these cardinality/performance considerations?
[19:36:42] <morenoh149> these are the indexes I have so far _id_ , client_id_1_group_1_last_access_time_-1 , group_1_client_id_1 , client_id_1_last_access_time_1
[19:37:44] <kali> morenoh149: http://docs.mongodb.org/master/tutorial/create-queries-that-ensure-selectivity/
[19:38:57] <kali> you need to check with explain() which one it picks (client_id_1_last_access_time_1 would help with selectivity, but the sort on _id has to be performed for real)
[19:41:29] <morenoh149> kali: ah I see. No index would help with the sort operation.
[19:42:28] <morenoh149> reading http://docs.mongodb.org/manual/tutorial/sort-results-with-indexes/
[19:42:59] <kali> morenoh149: sure. but that is not compatible with the last_access_time range selector
[19:43:53] <kali> morenoh149: on the other hand, with such an index (client_id, _id), the optimizer (or a manual hint) may try to scan the index and skim off the results not matching the last_access_time range
[19:44:41] <kali> morenoh149: and it may even be better with (client_id, _id, last_access_time) because mongodb could select the right record just scanning the index, with no need to look at the actual documents
[19:44:59] <kali> morenoh149: you'll have to poke around and see which one of these work better with your data
[19:46:04] <morenoh149> since I have your ear. Wouldn't I just make an index client_id,last_access_time,_id_-1 ?
[19:47:46] <kali> morenoh149: you can try that one too. but the thing is, the documents you want to select will come in last_access_time order, so mongodb will still have to sort them by _id
[19:48:24] <kali> the win with this one is that mongodb will not have to physically access the various documents to read their _id, as it will be present in the index
[19:48:47] <kali> so it may be worth a try too :)
[19:51:13] <morenoh149> after reading that last link I shared, it seems to say the index can be used to sort if the sort keys are present in the query keys. so .find({client_id: 1234, last_access_time: foo, _id: blah}).sort({_id: -1}) would use the index efficiently
[19:51:37] <morenoh149> disregarding for a moment how querying for a specific _id is silly for now
[19:52:03] <kali> yes. but the range on last_access_time breaks this
[19:52:37] <kali> you have to picture the index as a sorted list of the fields you've chosen
[19:53:52] <kali> client_id, last_access_time makes client_id:1234, last_access_time: $gt:... with no sort quite easy. just go to the right place in the index and start to read
[19:55:02] <kali> client_id, last_access_time, _id will work well for client_id:1234, last_access_time: some_time sorted by _id: go to the right place in the index and start to read
[19:55:46] <kali> but when you combine the range query with the sort, it no longer works
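
Putting kali's candidates side by side (collection and values come from the example above; which index wins depends on the actual data):

    // candidate 1: selective match, but the sort on _id happens in memory
    db.collection.createIndex({ client_id: 1, last_access_time: 1 })

    // candidate 2: index order matches the sort; the range filter is applied
    // while scanning index keys, so no separate sort stage is needed
    db.collection.createIndex({ client_id: 1, _id: -1, last_access_time: 1 })

    // compare plans with explain(), forcing each candidate via hint()
    db.collection.find(
        { client_id: 1234, last_access_time: { $gte: ISODate("2015-01-01T05:00:00Z") } }
    ).sort({ _id: -1 }).hint({ client_id: 1, _id: -1, last_access_time: 1 }).explain(true)
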
[20:05:24] <Torkable> I want to do two $addToSet in one query
[20:05:29] <Torkable> impossible?
[20:05:55] <kali> Torkable: have you tried it ?
[20:06:08] <Torkable> no
[20:06:14] <kali> well, you should :)
[20:06:16] <Torkable> but it's invalid json to have duplicate keys
[20:06:20] <Torkable> k ill try lol
[20:06:45] <kali> { $addToSet : { set_a: key_a, set_b: key_b } }
[20:06:58] <Torkable> ah
[20:07:03] <Torkable> i see, thanks
[20:08:27] <kali> i'm not actually sure either json or bson explicitly forbids dup keys, but that's another story
[20:09:09] <cheeser> json definitely not. pretty sure bson doesn't care either.
[20:09:39] <kali> ... which does not necessarily make it a good idea
[22:58:27] <moqca> I'm following the tutorial at http://docs.mongodb.org/ecosystem/tutorial/write-a-tumblelog-application-with-flask-mongoengine/ but whenever I get to the part about logging in with the shell, I get a ServerSelectionTimeoutError
[22:58:31] <moqca> What am I doing wrong?
[23:00:10] <joannac> either a connection problem, or you didn't initialise your replica set
[23:01:09] <moqca> I'm leaning towards the connection problem. I'm following the steps on the site and it doesn't mention anything about initializing
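
The replica-set half of joannac's answer is quick to rule out from the shell on the server; a minimal sketch:

    // errors out (e.g. "no replset config has been received") if the set
    // was never initialised
    rs.status()

    // one-member initialisation with the default configuration
    rs.initiate()
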
[23:46:36] <harttho> Does the nodejs mongodb driver (using latest version) support nodejs 4.0?
[23:47:34] <harttho> I see this error after trying to rebuild it
[23:47:38] <harttho> https://www.irccloud.com/pastebin/fmIjGx2i/