[00:49:17] <repxxl> hey, i have a compound index on 2 fields, however sometimes i will query on only one field and sometimes on both. should i add a single-field index as well, or can the compound index be used for searching on both and also on just one?
[00:54:47] <repxxl> joannac i dont understand this
[00:56:15] <repxxl> joannac in other words, will always only one index be used, and only when i have prefixes can it use 2 indexes at the same time?
[00:58:46] <repxxl> i have a user_id field and a path_id field, which can be the same, so i basically need both to find the correct document. so should i use a compound index, or single indexes on both, i.e. one on user_id and also one on path_id?
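A minimal mongo shell sketch of the prefix rule being discussed, using the field names from the question (the collection name is made up):

    db.docs.createIndex({ user_id: 1, path_id: 1 })

    // the compound index can serve both of these:
    db.docs.find({ user_id: 1, path_id: 7 })
    db.docs.find({ user_id: 1 })    // user_id alone is a prefix of the index

    // a query on path_id alone is not a prefix, so it would
    // generally need its own single-field index:
    db.docs.find({ path_id: 7 })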
[02:32:27] <symbol> I've got a client with an app on Heroku using compose (mongohq), anyone have experience with it? I'm trying to get a dump for local dev but can't find out how to.
[02:33:13] <symbol> It's a shame I can't just ssh into the heroku app and dump from mongo.
[02:33:50] <toothrot> symbol, isn't the db accessible from the internet?
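If it is, a dump over the network is the usual route. A sketch; the host, port, credentials and db name are placeholders you would take from the Compose/MongoHQ dashboard or the MONGOHQ_URL config var:

    mongodump --host example.mongolayer.com --port 10123 \
              -u dbuser -p dbpass -d appdb -o ./dump

    # restore into a local mongod for dev:
    mongorestore -d appdb ./dump/appdb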
[07:59:30] <RonXS> Good morning all! I read that mongodb 1.1 supports php7, but i can't find a php7 extension to compile. pecl install mongodb-beta also installs a version that requires 5.*
[08:17:15] <RonXS> ok for those wondering... https://github.com/mongodb/mongo-php-driver/tree/PHP7
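For anyone else hitting this, a rough build sketch from that branch (assuming the usual PECL-style build steps and that the branch vendors libbson/libmongoc as git submodules; the exact steps may differ):

    git clone -b PHP7 https://github.com/mongodb/mongo-php-driver
    cd mongo-php-driver
    git submodule sync && git submodule update --init
    phpize
    ./configure
    make all && sudo make install
    # then add extension=mongodb.so to php.ini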
[08:58:18] <pyios> How do I add a new index to a collection if it conflicts with the current set of documents?
[08:58:44] <pyios> i.e. a unique index when some docs have duplicate values
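A common approach (a sketch; "users" and "email" are hypothetical collection/field names): find and clean up the duplicates first, then build the unique index:

    // list values that occur more than once
    db.users.aggregate([
      { $group: { _id: "$email", count: { $sum: 1 }, ids: { $push: "$_id" } } },
      { $match: { count: { $gt: 1 } } }
    ])

    // after removing or merging the duplicate documents:
    db.users.createIndex({ email: 1 }, { unique: true })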
[12:00:48] <ahihi> if I run {findAndModify: coll, query: foo, remove: true} multiple times in a row, is it guaranteed that the same document won't be returned more than once?
[12:04:17] <cheeser> well, you're deleting those docs from the collection...
[12:10:54] <ahihi> yeah, I'm just wondering because I'm doing "find-and-remove document d that matches q, insert new document based on d (which doesn't match q)" repeatedly and ending up with duplicates
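A sketch of the pattern being described, just to make the question concrete (the collection, query and field names are made up):

    // atomically take one matching doc out of the collection
    var d = db.queue.findAndModify({ query: { state: "pending" }, remove: true });
    if (d !== null) {
        d.state = "done";     // the derived doc no longer matches the query
        db.queue.insert(d);   // reinsert a new document based on d
    }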
[12:35:05] <cinek> I'd like to ask for a recommendation about starting a new slave. I'm creating it from a mysqldump file (because there were some errors in innodb) and I hope this will remove them. then I should start replication based on gtid - will just setting MASTER_USE_GTID=current_pos be sufficient?
[12:35:30] <cinek> Is there a way to map gtid to master_log, master_pos ?
[14:09:53] <JoWie> Is this a known issue in the legacy driver (1.6.11) or am i doing something wrong: http://pastebin.com/UYQAjGLa (timeout() on an aggregate cursor)
[14:10:47] <Derick> does it really take 30 seconds for it to timeout though?
[14:51:23] <JoWie> i had to fiddle around a bit to get it to write the log file
[14:51:50] <JoWie> i ended up using error_log, which writes the line immediately instead of using a shutdown handler
[14:52:29] <Zelest> Derick, we have 3 nodes in a replicaset.. and we basically *must* have it up and running non-stop.. in case two of the nodes die.. and no candidate is available, how do I handle a "worst case scenario" where I need to use the remaining node as a single server? It's mongod 2.6.x
[14:52:59] <Zelest> Derick, Can I simply just restart it without the replicaset options and have it go on and act as a standalone server?
[14:53:24] <Derick> Zelest: yes, but that will definitely mess up synching to servers - you'd have to redo it all. It's not a good thing to have to do.
[14:54:09] <Zelest> so if 2 nodes are fscked.. and no candidate server is available.. what is the proper way to "move on" ?
[14:54:22] <Derick> JoWie: hmm, doesn't seem to set the timeout at all on the cursor... but that log includes *way* more than just one aggregation query
[14:54:38] <Derick> Zelest: bring one of the 2 broken nodes up
[14:54:38] <Zelest> add more servers? get more servers before that eventually happens?
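For reference, the documented escape hatch when a majority of members is gone is a forced reconfig on a surviving member rather than restarting it standalone (a sketch; which member index to keep depends on your config):

    // on the surviving mongod, in the mongo shell:
    cfg = rs.conf()
    cfg.members = [cfg.members[0]]     // keep only the member that is still up
    rs.reconfig(cfg, { force: true })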
[15:34:50] <steerio> I enter "mongo -u user -p foobar host:port/db", it asks me for a password, and then informs me that it's connecting to "foobar" (the password argument)
[15:35:09] <steerio> it doesn't matter if I reorder the arguments or use --username and --password
[15:35:55] <steerio> (and where it wants to connect is technically 127.0.0.1/foobar)
[15:45:24] <steerio> but I want to supply the password from the command line (it's invoked by a script that gets a mongo url from a heroku config and starts up a mongo shell)
[15:45:43] <steerio> and that used to be possible (and still is according to the help)
[17:18:45] <hip> i have a collection with an attr which has a number value. i can aggregate/$group them by single value (1,2,3,4,...), but i want to do it with predefined ranges (1-5,5-10,...). how can i do this?
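One way to do this, assuming fixed-width buckets and a field literally called "attr" (both assumptions): compute each bucket's lower bound in the $group _id:

    // groups values into ranges of width 5: 0-4, 5-9, 10-14, ...
    db.coll.aggregate([
      { $group: {
          _id: { $subtract: ["$attr", { $mod: ["$attr", 5] }] },
          count: { $sum: 1 }
      } }
    ])

For uneven ranges you would need nested $cond expressions to pick the bucket label instead (or $bucket on server versions that have it).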
[22:57:52] <dburles> hey might anyone know how to write an aggregation to do the following: https://dpaste.de/tKJC (hopefully the example is self explanatory)
[22:59:00] <dburles> basically i'm after distinct values across both fields
[23:31:37] <joannac> dburles: try the $unwind and then $group operator
[23:35:00] <dburles> joannac: yeah pretty much where i'm at, the piece i'm missing is how exactly i then combine the two field values
[23:43:37] <joannac> tell me what I'm meant to do with exampleId
[23:44:03] <dburles> oh yeah heh i guess i should have spent more time on the example
[23:44:39] <dburles> well all i'm wanting to do with it is include it as if it were part of the array
[23:46:14] <dburles> so if there was a third document with exampleId: 'four' (and perhaps an empty exampleIds array) we would see "_id: 'four', total: 1" in the results
[23:49:04] <joannac> yeah, but you'll have to figure out a way to get them all named the same thing
[23:50:46] <joannac> you might have to do it in your application
[23:50:56] <dburles> hmm how do you mean? the actual data structure needs to change?
[23:53:19] <dburles> or process the results in the application?
[23:56:08] <joannac> either. change the data structure, or do it app side.
[23:56:26] <joannac> to do it with aggregation you need to have all your values under the same field
[23:57:32] <joannac> doing it app side would be as easy as db.foo.find().forEach(function(doc) { count[doc.exampleId]++; doc.tags.exampleIds.forEach(function(v) { count[v]++; }); })
[23:57:52] <dburles> dang i had wondered if that were the case