[01:49:44] <beekin> I find it odd that $pop doesn't actually return a value...I assume this is intentional?
[01:54:32] <Boomtime> beekin: yes, $pop is used in an update operation so you should know what you're changing already - you can use the match criteria to assert that pop will do what you expect
[01:56:00] <beekin> Boomtime: Granted, I'm not sure what you're referring to with $match. That's aggregation, yeah?
[01:56:33] <Boomtime> no, the match criteria provided in the update - the predicates needed to match a document for $pop to apply to
[01:56:59] <beekin> Oh, gotcha. I always referred to that as the query parameter.
[01:57:03] <Boomtime> you must have already provided enough information to match one or more documents - if you need to know what is about to change, then include the criteria in the match
[01:58:07] <Boomtime> note also i think that $pop is compatible with multi so it is totally possible to 'pop' different values from different matching documents - how on earth you can utilize that i don't know though
[01:59:32] <beekin> I suppose that'd work with a queue of sorts? If you wanted to remove the last element of every document with whatever criteria.
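A minimal shell sketch of the pattern Boomtime describes, assuming a hypothetical "jobs" collection whose documents carry a "queue" array:

    // the query argument (the match criteria) pins down which documents $pop touches
    db.jobs.update(
      { status: "done", "queue.0": { $exists: true } },  // only docs with a non-empty queue
      { $pop: { queue: 1 } },                            // 1 removes the last element, -1 the first
      { multi: true }                                    // pop from every matching document
    )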
[02:06:10] <darius93> what is the maximum size a database can store?
[02:07:28] <darius93> i only ask because i noticed there were some limits on the size of collections (use gridfs for any large data) but nothing about a limit on the database itself
[02:08:08] <cheeser> no size limits on collections
[02:08:55] <beekin> You're probably referring to the 16MB restriction on documents.
[02:09:03] <asteele> does upgrading from 2.6 -> 3 show immediate noticeable speed improvements? And is it tough?
[02:09:06] <cheeser> a single document can only be 16MB but that's enormous in practice
[02:09:28] <cheeser> the biggest boost you'll get is moving from mmapv1 to wiredtiger
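For example, on 3.0 the engine is selected at startup (a sketch; the two engines' data files aren't interchangeable, so a fresh dbpath plus a mongodump/mongorestore cycle is the usual migration path):

    mongod --storageEngine wiredTiger --dbpath /data/db-wt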
[02:09:44] <asteele> I am using node/mongoose and have certain endpoints that slowly take longer and longer to return in production - with a few thousand sessions it takes about 12 hours to go from a 100ms average to a 1000ms average, and New Relic shows most of the time coming from inside one of my mongoose inserts, which is called an average of 7 times per call
[02:09:48] <darius93> i don't expect the documents to be that large
[02:10:31] <morenoh149> asteele: that may be a mongoose problem
[02:11:11] <cheeser> i expect it's a problem in the app
[02:11:23] <darius93> asteele, you should take it to #node.js channel
[02:11:33] <asteele> yeah i realize there are so many factors it's hard to tell, and i have heard lots of warnings about mongoose being slow - but since the response is really fast after a server reboot and then slowly climbs, it leads me to think it's some kind of leak; i'm almost positive it's not mongoose that's the problem there
[02:12:14] <darius93> asteele, if you want, try waterline. When i used node.js + mongo, i found waterline worked very well.
[02:12:40] <darius93> i cant promise much since i dont use node anymore but it worked well for me
[02:28:01] <StephenLynx> just use the native driver asteele
[02:30:40] <asteele> StephenLynx but i already have so much work in mongoose ;p it's not out of the question at all, though - if mongoose is the problem I will remove it, but there are still many other factors. Mongoose is much slower, but my problem is like a leak that continues on and makes things way worse, i think
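For comparison, a minimal insert with the native node driver (2.x API); the URL and collection name here are placeholders:

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/mydb', function(err, db) {
      if (err) throw err;
      // no schema layer or middleware in the way - just the driver
      db.collection('sessions').insertOne({ user: 'bob', ts: new Date() }, function(err, result) {
        if (err) throw err;
        db.close();
      });
    });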
[02:50:25] <asteele> lol possibly :o working on getting some heapdumps set up now so i can have some more information - people in #node.js are saying it may be related to socket.io, so i just have too much going on to really guess from here
[03:01:42] <StephenLynx> yeah, socket.io is crap too
[03:01:56] <StephenLynx> any kind of framework that abstracts nothing is crap, IMO
[03:04:08] <preaction> what's the point of a framework that doesn't abstract anything? can that even be called a framework?
[03:04:29] <StephenLynx> that's what mongoose, express, socket.io and others are.
[03:04:40] <StephenLynx> they abstract something that is already abstracted.
[03:04:44] <StephenLynx> it's completely redundant.
[07:44:16] <rakkaus_> Hi guys! I need help with mongo 3.0.5 and REST. I'm trying to use curl -s --digest admin:pass@data.dev.com:28017/serverStatus?text=1
[07:44:38] <rakkaus_> but it tells me "not allowed"
[07:44:51] <rakkaus_> it was ok with 2.6.x mongodb
[07:54:26] <rakkaus_> from the docs, "Simple REST API": "The mongod process includes a simple REST API as a convenience. With no support for insert, update, or remove operations, it is generally used for monitoring, alert scripts, and administrative tasks."
[08:00:31] <rakkaus_> 2015-09-16T08:53:10.307+0100 I NETWORK [websvr] admin web console waiting for connections on port 28017
[08:00:54] <rakkaus_> so the admin console is up, but I still can't connect on that port
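On 3.x the HTTP/REST interface is disabled by default and deprecated; if it is really needed, it has to be switched on explicitly. A sketch of the relevant options, assuming the YAML config format:

    net:
      http:
        enabled: true               # admin web console on port 28017
        RESTInterfaceEnabled: true  # the /serverStatus-style REST endpoints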
[08:07:17] <rakkaus_> ok so, is there any secure way to get info about uptime of mongodb server remotely?
[08:21:31] <sorribas> rakkaus_: would it help to use the ping command? http://docs.mongodb.org/manual/reference/command/ping/
[09:05:49] <rakkaus_> sorribas thx! ping should work - in fact I just need to check whether it's alive or not before launching my app
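A sketch of that kind of liveness check from the shell (the host is a placeholder; ping returns { ok: 1 } when the server is reachable):

    # exits non-zero if the connection fails, so it works as a pre-launch check
    mongo --host data.dev.com --eval 'printjson(db.runCommand({ ping: 1 }))'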
[10:10:27] <napnap> Hi all, I'm new to MongoDB and I'm trying to reach my mongodb server with a client installed on another computer. The server runs on debian; I just set the right bind ip in the config file, but apparently that is not enough.
[10:11:52] <napnap> On the server, the command "mongo" connects successfully. From the other computer, "mongo --host serverip" fails to connect. On the server side I can see the server listening on the right interface... with the default port.
[11:28:11] <waheedi> ok maybe the question is not well written
[11:28:20] <Zelest> dump to bson.. cat, sed, awk.. import!
[11:28:24] <Zelest> because .update() is too easy :P
[11:31:06] <waheedi> alright gents and ladies, I have 120 million docs in one collection, and each of these docs belongs to a doc in another collection (50 documents)
[11:31:42] <waheedi> so the IDs of these 50 documents are inside each of the 120 million docs
[11:33:12] <waheedi> the 50 docs are becoming 45, and the 5 removed docs already have their IDs inside the 120M
[11:33:34] <waheedi> i'm going to replace each of these removed IDs with another ID
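A sketch of that rewrite in the shell, with the collection and field names as placeholders:

    // for each of the 5 removed IDs, repoint every doc that references it
    db.bigCollection.update(
      { parentId: removedId },        // removedId: one of the 5 retired IDs
      { $set: { parentId: newId } },  // newId: its replacement
      { multi: true }                 // apply to all matching docs, not just the first
    )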
[11:44:40] <mitereiter> sorry for the delay joannac, http://pastebin.com/HBvMtaN8
[11:48:05] <synthmeat> is $push guaranteed to leave the array in order? so, a literal "push" in terms of array order
[11:54:34] <napnap> joannac, the output (truncated) is : --config /etc/mongodb.conf. This is the file that I edited.
[11:56:24] <napnap> joannac, with the bind_ip property: if I set it to 127.0.0.1,192.168.1.1, the server listens on 255.255.255.255 (netstat output). If I set bind_ip to 192.168.1.1 only, it works.
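For reference, the relevant lines in the old key=value format that /etc/mongodb.conf uses (addresses taken from napnap's description above):

    # listen on loopback plus the LAN interface
    bind_ip = 127.0.0.1,192.168.1.1
    port = 27017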
[11:59:03] <joannac> waheedi: I still don't understand the question
[12:00:01] <joannac> mitereiter: weird. looks like a bug.
[12:00:40] <joannac> napnap: when I ask you for the output, I want the full output. Pastebin it please
[12:06:58] <waheedi> sorry joannac, i'm not a good explainer - I will go back to my cave, there I will find my answer :)
[12:46:44] <gcfhvjbkn> joannac: i don't know about that, probably; it worked until some point, then this happened, so i am a little disappointed that mongo acts like this
[12:47:00] <gcfhvjbkn> somehow 3 out of 5 of my shards disappeared from sh.status()
[12:52:55] <gcfhvjbkn> i'll try to look for something suspicious in the logs
[12:54:42] <joannac> gcfhvjbkn: those shards aren't tagged...
[12:56:31] <gcfhvjbkn> let me check; how do you know that?
[12:56:41] <joannac> there's no tags in the shards section?
[13:00:22] <gcfhvjbkn> no tags, and some of the shards are missing as well, so… but yeah, that's plausible; the way i see it now, it never worked in the first place: even though the data was written to the local mongos on all 5 servers, there was no tagging info, so all the chunks went to the same arbitrarily chosen server
[13:00:45] <gcfhvjbkn> now i wonder what happened that makes it refuse to write data anymore
[13:01:26] <gcfhvjbkn> this, and why just 2 shards out of 5
[13:01:46] <joannac> gcfhvjbkn: correct, all the chunks are on a single shard
[13:03:26] <gcfhvjbkn> they were supposed to be tagged anyway, i've got no idea what happened; i really should rerun the whole thing and see if it breaks in the same fashion later
[13:03:31] <gcfhvjbkn> thanks for the guidance i guess
[14:43:59] <beekin> Just noticed that collection.aggregate will still work even if the stages aren't in an array?
[14:59:11] <saml> let's say i have different kinds of events, where an event is {type: EVENT_TYPE, ts: TIMESTAMP}. i need to make queries like: give me the 10 most frequent event types between timestamps t1 and t2
[14:59:33] <saml> problem is, between t1 and t2 there are 25mil documents and the aggregation is really slow
[15:00:21] <saml> what kind of database is capable of making fast aggregations over different parameters?
[15:01:20] <cheeser> what's your pipeline look like? are you indexing properly? what does explain on your pipeline say?
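A sketch of the pipeline saml describes, assuming a hypothetical "events" collection with an index on { ts: 1 } and placeholder t1/t2 bounds so the leading $match can use the index:

    db.events.aggregate([
      { $match: { ts: { $gte: t1, $lt: t2 } } },         // indexed stage: cut the 25M docs down first
      { $group: { _id: '$type', count: { $sum: 1 } } },  // count per event type
      { $sort: { count: -1 } },                          // most frequent first
      { $limit: 10 }
    ])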
[16:44:57] <bobbywilson0> Anyone know why I might be seeing this error on a collection that doesn't have any documents? "Failed to create index {:name=>"_id_1", :ns=>"z_development.messages", :key=>{"_id"=>1}, :unique=>false} with the following error: 67: exception: _id index cannot be non-unique (Mongo::OperationFailure)"
[16:46:30] <kali> bobbywilson0: you automatically get an index on _id, and it is a "unique" index. mongodb just prevents you from creating an index that would conflict with the automatic one
[16:48:06] <bobbywilson0> kali: ah thank you, I wonder why mongoid is trying to create a default one
[19:32:59] <kali> morenoh149: it's a tricky one to perform, because even if you have the right index on client_id and last_access_time, the sort bit will have to be done by actually sorting the results
[19:34:06] <kali> so depending on the actual cardinalities, it may be more efficient with an index [client_id, last_access_time] or [client_id,_id]
[19:34:16] <kali> and you may need to hint the optimizer
[19:34:37] <kali> a getIndexes on the collection will help (along with the explain())
[19:35:49] <morenoh149> kali: where can I read about these cardinality/performance considerations?
[19:36:42] <morenoh149> these are the indexes I have so far _id_ , client_id_1_group_1_last_access_time_-1 , group_1_client_id_1 , client_id_1_last_access_time_1
[19:38:57] <kali> you need to check with explain() which one it picks (client_id_1_last_access_time_1 would help with selectivity, but the sort on _id has to be performed for real)
[19:41:29] <morenoh149> kali: ah I see. No index would help with the sort operation.
[19:42:59] <kali> morenoh149: sure. but that is not compatible with the last_access_time range selector
[19:43:53] <kali> morenoh149: on the other hand, with such an index (client_id, _id), the optimizer (or a manual hint) may try to scan the index and skim out the results not matching the last_access_time range
[19:44:41] <kali> morenoh149: and it may even be better with (client_id, _id, last_access_time), because mongodb could select the right records just by scanning the index, with no need to look at the actual documents
[19:44:59] <kali> morenoh149: you'll have to poke around and see which one of these works better with your data
[19:46:04] <morenoh149> since I have your ear: wouldn't I just make an index (client_id, last_access_time, _id: -1)?
[19:47:46] <kali> morenoh149: you can try that one too. but the thing is, the documents you want to select will come in the order of last_access_time, so mongodb will still have to sort them by _id
[19:48:24] <kali> the win with this one is that mongodb will not have to physically access the various documents to read their _id, as it will be present in the index
[19:51:13] <morenoh149> after reading that last link I shared, it seems to say the index can be used to sort if the sort keys are present in the query keys. So .find({client_id: 1234, last_access_time: foo, _id: blah}).sort({_id: -1}) would use the index efficiently
[19:51:37] <morenoh149> disregarding for a moment how querying for a specific _id is silly
[19:52:03] <kali> yes. but the range on last_access_time breaks this
[19:52:37] <kali> you have to picture the index as a sorted list of the fields you've chosen
[19:53:52] <kali> client_id, last_access_time makes client_id: 1234, last_access_time: {$gt: ...} with no sort quite easy: just go to the right place in the index and start reading
[19:55:02] <kali> client_id, last_access_time, _id will work well for client_id: 1234, last_access_time: some_time, sorted by _id: go to the right place in the index and start reading
[19:55:46] <kali> but when you combine the range query with the sort, it no longer works
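A sketch of how to compare the candidates from this discussion in the shell (the collection name and query values are placeholders):

    // one of kali's suggestions: sort order covered by the index, range skimmed inside it
    db.sessions.createIndex({ client_id: 1, _id: -1, last_access_time: 1 })

    db.sessions.find({ client_id: 1234, last_access_time: { $gt: someTime } })
      .sort({ _id: -1 })
      .hint({ client_id: 1, _id: -1, last_access_time: 1 })  // force the candidate under test
      .explain('executionStats')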
[20:05:24] <Torkable> I want to do two $addToSet in one query
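That composes in a single update as long as the two sets are different fields (two $addToSet clauses on the same field would conflict) - a sketch with hypothetical names:

    db.things.update(
      { _id: someId },
      { $addToSet: {
          tags: 'new-tag',                       // add one value to one set
          editors: { $each: ['alice', 'bob'] }   // add several values to another set
      } }
    )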
[22:58:27] <moqca> I'm following the tutorial at http://docs.mongodb.org/ecosystem/tutorial/write-a-tumblelog-application-with-flask-mongoengine/ but whenever I get to the part about logging in with the shell, I get a ServerSelectionTimeoutError