[00:10:45] <daidoji1> hello, in pymongo is it possible to have multiple '$addToSet' operations in an update?
[00:10:49] <daidoji1> and how might I pass that in?
[00:11:06] <daidoji1> currently it's failing because those '$addToSet' strings are treated as duplicate keys in a python dict
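A sketch of what daidoji1 seems to need (field names here are made up): in PyMongo every `$addToSet` target must live under a single `"$addToSet"` key, because duplicate keys in a Python dict literal silently overwrite each other; `$each` handles multiple values per field.

```python
# All $addToSet targets share ONE "$addToSet" key in the update dict --
# writing {"$addToSet": {...}, "$addToSet": {...}} would keep only the last one.
update = {
    "$addToSet": {
        "tags": "mongodb",                        # add a single value to 'tags'
        "editors": {"$each": ["alice", "bob"]},   # add several values to 'editors'
    }
}

# With a live connection this would be passed as roughly:
# db.articles.update_one({"_id": some_id}, update)
```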
[01:42:51] <sabrehagen> i'd like to run db.currentOp() to find long running tasks. this blog post suggests i need to be an admin user: http://blog.mongolab.com/2014/02/mongodb-currentop-killop/
[01:43:38] <sabrehagen> i've authed against my admin database and my role is 'userAdminAnyDatabase' but running db.currentOp() returns { err : 'unauthorized' }
[01:43:43] <sabrehagen> where might i be going wrong?
[02:52:42] <Boomtime> sabrehagen: hi, sorry for the delay, is your question still relevant?
[02:53:28] <Boomtime> in short, userAdminAnyDatabase is a role for administering users; it can't do much in the way of data access or realtime server control
[02:53:58] <Boomtime> however, that role does permit you to either grant yourself the extra needed privileges, or to grant somebody else such privileges
[02:55:22] <Boomtime> the most restrictive built-in role that includes the ability to run currentOp is clusterMonitor
[02:56:05] <Boomtime> if you don't want all the extra stuff that comes with it, you can just grant yourself the specific privilege to issue the 'inprog' action (which is the real command behind currentOp)
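A hedged sketch of the role-management commands Boomtime describes, expressed as command documents (the role name "opsViewer" is made up; these would be run against the admin database):

```python
# Custom role granting just the 'inprog' cluster action behind db.currentOp()
create_role = {
    "createRole": "opsViewer",
    "privileges": [
        {"resource": {"cluster": True}, "actions": ["inprog"]}
    ],
    "roles": [],   # no inherited roles
}

# Attach the new role to the existing user
grant = {"grantRolesToUser": "sabrehagen", "roles": ["opsViewer"]}

# With pymongo this would be roughly:
# client.admin.command(create_role)
# client.admin.command(grant)
```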
[03:20:49] <sabrehagen> Boomtime: thanks, still relevant and helpful answer
[03:21:53] <sabrehagen> i'll try granting myself that privilege and seeing how i go
[03:23:03] <sabrehagen> also, in 2.6 .explain() returned a very different result to explain() in 3.0. i've read about the changes on mongodb.org, but can't find out how to run the same style of explain() as 2.6. i'd like to know the cursor type and nscanned* stats in 3.0
[04:23:50] <cppking> hello guys , in a 3 nodes repl-set , 1 master and 3 secondary, if one secondary is down , at the meantime, master failed, will this repl set still work normally?
[04:25:56] <Boomtime> "in a 3 nodes repl-set , 1 master and 3 secondary" <- are there 3 nodes, or 4?
[04:26:54] <Boomtime> in either case, if 2 members fail, then you don't have a majority - also, avoid the term 'master' as that was used in the old system, the correct term is primary
[04:31:25] <cppking> sorry, 1 primary and 2 secondary
[04:31:58] <cppking> one secondary is down , then primary failed , will this replset still work normally
[04:38:05] <joannac> it will work for reads (assuming you allow secondary reads) but not writes
[06:03:36] <cppking> joannac: what if 1 primary and 3 secondary? if the primary fails while one secondary is down, after the election, will this replset still work normally?
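The rule behind joannac's answer can be sketched numerically: a replica set can elect a primary only while a strict majority of its voting members is reachable.

```python
def majority(voting_members: int) -> int:
    """Smallest number of members that forms a strict majority."""
    return voting_members // 2 + 1

def can_elect_primary(voting_members: int, members_up: int) -> bool:
    return members_up >= majority(voting_members)

# 3-member set with the primary and one secondary down: 1 of 3 up, no primary
assert not can_elect_primary(3, 1)
# 4-member set (cppking's second scenario): majority of 4 is 3,
# so with 2 members down the set still cannot elect a primary
assert not can_elect_primary(4, 2)
# 4-member set with only one member down: a new primary can be elected
assert can_elect_primary(4, 3)
```

This is also why even member counts are usually avoided: a 4-member set tolerates no more failures than a 3-member set.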
[06:04:26] <sabrehagen> in 2.6 .explain() returned a very different result to explain() in 3.0. i've read about the changes on mongodb.org, but can't find out how to run the same style of explain() as 2.6. i'd like to know the cursor type and nscanned* stats in 3.0
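The 2.6-style output can't be brought back in 3.0, but the "executionStats" verbosity carries equivalents of the old fields. A rough cheat-sheet (paths as I recall them from the 3.0 explain docs, so verify against your output):

```python
# Approximate mapping of 2.6 explain() fields to 3.0 explain("executionStats") paths
FIELD_MAP = {
    "cursor":          "queryPlanner.winningPlan.stage",        # e.g. COLLSCAN / IXSCAN
    "nscanned":        "executionStats.totalKeysExamined",
    "nscannedObjects": "executionStats.totalDocsExamined",
    "n":               "executionStats.nReturned",
    "millis":          "executionStats.executionTimeMillis",
}

# In the 3.0 shell: db.coll.find(query).explain("executionStats")
```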
[06:11:15] <btcbuy314> I'm making a music playlist website and I have two tables, one which stores songs, and the other, playlists, which stores title of playlist and an array of _id of different songs
[06:11:37] <btcbuy314> I'm having a lot of trouble updating the order of the array
[06:12:04] <btcbuy314> how would I do this? Everything seems to have no errors, but then the database table is unchanged
[06:27:50] <phreelynx> @btcbuy314: please elaborate "I'm having a lot of trouble updating the order of the array"
[06:41:05] <btcbuy314> so basically, i have an array of ids saved in the playlist table; I iterated through the array, changed each one to the next one and the last to the one before
[06:41:36] <btcbuy314> and the last one to the first*
[06:44:39] <phreelynx> how about having a separate key in the playlist document, current_song, which contains the index of the current song in the playlist array? once it completes you can just change current_song to index+1
[06:46:01] <btcbuy314> no I want to change the order of the array
[06:46:27] <btcbuy314> im not actually changing the order to +1 , i just said that to make it simpler
[06:46:54] <btcbuy314> im trying to implement a reordering of the playlist
[06:47:01] <btcbuy314> so number 5 could get moved to number 2
[06:50:56] <btcbuy314> here's a pastebin, the msg.indexList is basically a list the same length as doc.videos but what's in it would be something like [0,2,1,4]
[06:51:03] <btcbuy314> to show that the new order needs to be that
[06:53:39] <btcbuy314> something that I think will help solve this is doc.videos.pop() successfully deletes the last video, but if I try to duplicate the first video with doc.videos.push(doc.videos[0])
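The in-place swap loop btcbuy314 describes is easy to get wrong; a simpler approach is to build the reordered array from the permutation in msg.indexList and then `$set` the whole `videos` field at once. A sketch in Python (the video ids are made up):

```python
def reorder(videos, index_list):
    """Return videos rearranged so position i holds videos[index_list[i]]."""
    if sorted(index_list) != list(range(len(videos))):
        raise ValueError("index_list must be a permutation of the positions")
    return [videos[i] for i in index_list]

videos = ["id_a", "id_b", "id_c", "id_d"]
assert reorder(videos, [0, 2, 1, 3]) == ["id_a", "id_c", "id_b", "id_d"]

# Then replace the array wholesale instead of mutating it element by element:
# db.playlists.update_one({"_id": playlist_id},
#                         {"$set": {"videos": reorder(videos, index_list)}})
```

Writing the full array back sidesteps the "no errors but nothing changed" symptom, since there is a single unambiguous update to apply.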
[06:54:05] <Silverfall> if I have 100000 documents and I want to find the one with the highest "size" value, does limiting search results improve performance, or is it useless since mongo has to iterate over the whole collection anyway?
[06:54:44] <preaction> but it only has to ship to you the one that matches
[06:54:46] <phreelynx> btcbuy314: have you tried $slice'ing it ?
[06:55:00] <btcbuy314> not really sure what you mean by that
[06:55:01] <preaction> Silverfall: and if you find yourself making that query often, you might want an index
[06:56:41] <btcbuy314> phreelynx: What do you mean?
[07:02:49] <Silverfall> preaction so even if I already have a running database full of "size" values, entering this single command in the shell "db.mycollection.createIndex( { size: 1 } )" would allow me to permanently keep them ordered by size without making mongo work too much?
[07:03:35] <preaction> it creates an index that's ordered by size. as for how much work happens, well, that's the trade-off. the index must be updated when a write happens
[07:04:40] <Silverfall> so unless I have some amount X of documents it isn't worth making the index to improve overall performance.. could you define the value of X in this case?
[07:05:24] <preaction> no. that's what benchmarking is for
[07:05:44] <preaction> there are too many variables for me to even fathom a guess at what one of them, X, could be
[07:06:17] <Silverfall> well, I see two variables involved: the frequency I write new documents and the frequency I check the whole collection ordered by size
[07:07:02] <preaction> you're missing: the amount of memory used to hold indexes. the size of the documents that must be scanned. the speed of the disk used for storage.
[07:07:25] <preaction> the number of shards that must keep up-to-date indexes
[07:07:32] <Silverfall> ahahah, I think I'm just gonna leave it like it is now.. if it starts to slow down I'll apply an index XD
[07:28:13] <cppking> In mongo 2.x, does every insert or update acquire an implicit database-level lock?
[07:29:08] <nishu-tryinghard> guys i made a dump of a remote db and i want to restore it locally and use it.
[07:30:59] <cppking> nishu-tryinghard: what's the problem?
[07:31:55] <nishu-tryinghard> cppking, nvm i was just new so i thought dump will directly make a db locally. I came to know that i need to use restore as well to make a copy from the dump.
[07:32:55] <cppking> yes, you need to restore from the dumped files
[07:33:06] <nishu-tryinghard> cppking, can i use mongorestore without --dbpath param so that it can use the default one?
[09:16:16] <Iskandar> how do i check if my conf is valid? i get a failure but can't find where the conf error is
[09:17:46] <csst0111> Somehow i messed around with my db and trying to re-import the json files gives me : "error inserting documents: E11000 duplicate key error index: ..."
[09:20:22] <cppking> Iskandar: check the mongod error log
[09:20:52] <Mindiell> hi there, is the difference between distro packages and .deb packages really a problem?
[09:21:05] <Mindiell> I would just like to install and test mongoDB
[09:21:16] <Mindiell> should I go directly on .deb packages ?
[09:25:49] <Iskandar> btw, on redhat im missing the mongos service in /etc/rc.d/init.d since i want to run a mongos and config server on one machine for testing purposes
[10:20:10] <Mindiell> what is the purpose of databases in mongodb ?
[10:20:40] <Mindiell> is it the same thing to have two collections in one database and two databases with one collection in each ?
[10:20:50] <Mindiell> I mean, structurally, not functionally
[10:33:38] <Iskandar> i have made this script: http://pastebin.com/THmUJj1E when i run mongos --config /etc/mongos.conf the mongos runs fine and dandy, but when running service mongos start, i get a fail. the mongos.log leaves no trace for me
[10:33:55] <Iskandar> so i think the script is not correct, but i dont see it
[10:35:56] <sabrehagen> cppking: i've tried the verbosity modes, none show the cursor type
[10:36:47] <cppking> sabrehagen: well, I don't know either, same as you
[13:59:01] <CustosLimen> deathanchor, yeah - but nvm
[13:59:26] <deathanchor> CustosLimen: depends how big the data is on your configsvr, but usually a minute
[13:59:41] <deathanchor> doesn't affect anything while it goes on, cause you should have your balancer off
[14:04:10] <CustosLimen> deathanchor, I was actually not clear on what exactly was affected - when I realized it's only config server data I stopped worrying
[14:45:39] <yopp> cheeser, to simplify: I've got a few sharded collections for pre-aggregated data (raw data, 1h, 1d, 1y, etc). Aggregation is done step by step, from the raw to the 1y collection. Are there any benefits to using the aggregation framework instead of map/reduce in this case?
[14:48:10] <cheeser> aggregation will be more efficient OOB since it can work with the bson directly.
[14:48:21] <cheeser> map/reduce will involve conversions to json
[14:48:42] <cheeser> aggregations can be done across the shards
[14:49:43] <yopp> So, if right now with map/reduce we're in fact running only one map/reduce at a time, with aggregation they can run simultaneously? With document-level locking, only affected documents will be locked, right?
[14:50:13] <cheeser> internally, aggregations are more parallelizable than map/reduce
[14:52:03] <yopp> Apart from the 100MB limit, are there any other "catches" with the aggregation framework?
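A minimal sketch of the kind of per-step roll-up yopp describes, as an aggregation pipeline. Collection and field names (raw, agg_1h, ts, sensor, value) are invented for illustration:

```python
from datetime import datetime

# Placeholder window bounds for the batch being rolled up
start = datetime(2015, 6, 1)
end = datetime(2015, 6, 2)

# Hypothetical roll-up: group raw measurements into hourly buckets
pipeline = [
    {"$match": {"ts": {"$gte": start, "$lt": end}}},
    {"$group": {
        "_id": {"sensor": "$sensor", "hour": {"$hour": "$ts"}},
        "total": {"$sum": "$value"},
        "count": {"$sum": 1},
    }},
    {"$out": "agg_1h"},   # $out (2.6+) writes the results to a collection
]

# With a live connection this would be roughly:
# db.raw.aggregate(pipeline, allowDiskUse=True)
```

`allowDiskUse` lets stages spill to disk when they would otherwise hit the 100MB per-stage memory limit mentioned above.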
[15:07:32] <repxxl> hello, i have 2 fields like "name" : "This is a headline", "tags": ["School", "5B", "2015"]. i wanna search by name and at the same time also search the array values, and return the matches with priority so the tag matches show first, then the name matches
[15:08:15] <repxxl> what do i need exactly? i was thinking something like regex on "name" with $or and $in together
[15:24:04] <deathanchor> yopp: you still have the 16mb limit on the results unless you use a cursor
[15:27:14] <repxxl> cheeser i wanna search a single field or the array values, giving priority so the array-value matches come first in the mongo cursor
[15:30:29] <cheeser> you don't necessarily need regex for that.
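One hedged way to get repxxl's "tags first, then name" ordering is two separate filters merged client-side, since a single query can't express that priority directly. The sample documents below stand in for the collection:

```python
import re

docs = [
    {"_id": 1, "name": "This is a headline", "tags": ["School", "5B", "2015"]},
    {"_id": 2, "name": "School timetable",   "tags": ["2016"]},
    {"_id": 3, "name": "Unrelated",          "tags": ["Sports"]},
]

def search(term):
    """Tag matches first, then name matches, without duplicates."""
    tag_hits = [d for d in docs if term in d["tags"]]
    pat = re.compile(re.escape(term), re.IGNORECASE)
    name_hits = [d for d in docs if pat.search(d["name"])]
    seen, ordered = set(), []
    for d in tag_hits + name_hits:
        if d["_id"] not in seen:
            seen.add(d["_id"])
            ordered.append(d)
    return ordered

# The equivalent server-side filters would be roughly:
# db.coll.find({"tags": term})                              # exact match inside the array
# db.coll.find({"name": {"$regex": term, "$options": "i"}}) # substring match on name
assert [d["_id"] for d in search("School")] == [1, 2]
```

Note that matching a scalar against an array field ({"tags": term}) already means "term is an element of tags" in MongoDB, which is why no `$in` is needed for that half.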
[15:36:27] <ruphin> Hi, I have some questions about upgrading my config servers to 3.2. I'd like to use the new config server setup with replica sets, since I am having issues with config servers running out of sync. I haven't found a manual anywhere.
[15:37:05] <cheeser> i'm not sure those have been published yet.
[15:38:59] <ruphin> Can anyone here give me a few quick pointers on how to migrate? I have some experience working with mongo, just need to know what the basic strategy would be
[15:39:55] <ruphin> I'm thinking it would probably be easiest to isolate a single config server in my current setup, restart it with 3.2 and configure it as a replica set member, then add new empty servers to that replica set
[15:40:34] <ruphin> But I'm not sure if I can simply start a 3.2 config server on a 3.0 database
[15:40:51] <ruphin> If someone can confirm that this is safe, I can go ahead with the above procedure
[15:42:38] <repxxl> cheeser i now have something like this http://pastebin.com/KEv7qK8V
[15:43:18] <repxxl> cheeser but it's not working for me, can you maybe tell me what my problem is? or how to do it right, ty
[15:54:39] <diegoaguilar> cheeser, know when it will be out as stable release?
[15:55:15] <repxxl> cheeser the appropriate documents i mean, so i get a list of all documents found by "name" and "tags", but get the tag-match documents first in the cursor
[16:01:56] <cheeser> because that's not public info yet and subject to change
[16:06:32] <x1nux> Hello! I have mongo DB 2.6.7 and my hard drive is full. I deleted some records, but the disk usage didn't go down. is there a way to reclaim that disk space?
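On 2.6 with mmapv1, deleting documents does not shrink the data files; the freed space is only reused internally. The usual options are the compact command (per collection, defragments but may not return space to the OS) or repairDatabase (rewrites the files and does free disk space, but needs free space roughly equal to the data size, which is a problem on a full disk). A sketch of the command documents (collection name is made up):

```python
# Per-collection defragmentation; on mmapv1 this may not shrink the files on disk
compact_cmd = {"compact": "mycollection"}

# Rewrites the whole database's files and releases disk space,
# but requires free space on the order of the current data size
repair_cmd = {"repairDatabase": 1}

# With pymongo, roughly:
# db.command(compact_cmd)
# db.command(repair_cmd)
```

With a full disk, a common workaround is to resync the node from a replica set member onto fresh files instead.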
[17:27:27] <Jacoby6000> If i have a collection with a.b.c.foo and d.e.f.foo, how can I set all keys "foo" to a value?
[17:27:47] <Jacoby6000> basically I want to set a key to a certain value, regardless of where it is in the tree
[17:35:10] <symbol> Jacoby6000: As far as I know, that isn't possible within Mongo. You'd need to handle that server side or structure your data in a different manner.
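As symbol says, there is no wildcard-path update in these MongoDB versions, so the "handle it server side" route means fetching the document, rewriting every matching key in application code, and writing it back. A self-contained sketch:

```python
def set_key_everywhere(node, key, value):
    """Recursively set every occurrence of `key` in nested dicts and lists."""
    if isinstance(node, dict):
        for k in node:
            if k == key:
                node[k] = value
            else:
                set_key_everywhere(node[k], key, value)
    elif isinstance(node, list):
        for item in node:
            set_key_everywhere(item, key, value)

doc = {"a": {"b": {"c": {"foo": 1}}}, "d": {"e": {"f": {"foo": 2}}}}
set_key_everywhere(doc, "foo", 42)
assert doc["a"]["b"]["c"]["foo"] == 42
assert doc["d"]["e"]["f"]["foo"] == 42

# Then write the whole document back, e.g. with pymongo:
# db.coll.replace_one({"_id": doc_id}, doc)
```

The read-modify-write is not atomic, so concurrent writers would need versioning or a query predicate guarding the replace.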