PMXBOT Log file Viewer


#mongodb logs for Thursday the 3rd of December, 2015

[00:04:27] <Boomtime> bilb_ono: i wasn't here for the original issue - did you have a misbehaving $ne query?
[00:04:53] <bilb_ono> yeah. well it worked fine in the mongo shell, but in pymongo, it wasn’t working at all
[00:04:58] <bilb_ono> using $nin fixed it though
[00:49:17] <repxxl> hey, i have a compound index on 2 fields. however, sometimes i will query only one field and sometimes both. should i add a single index also, or can the compound be used for searching on both and also on one?
[00:52:01] <repxxl> ?
[00:53:18] <joannac> an index on {a:1, b:1} can be used if you only query on a
[00:53:35] <joannac> https://docs.mongodb.org/manual/core/index-compound/#prefixes
[00:54:47] <repxxl> joannac i dont understand this
[00:56:15] <repxxl> joannac in other words, will it always use only one index, and only when i have prefixes can it use 2 indexes at the same time?
[00:58:46] <repxxl> i have a user_id field and a path_id field which can be the same, so i basically need both to find the correct document. so should i use a compound index, or a single index on each, like user_id and also path_id?
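The prefix rule joannac links to can be sketched in plain JavaScript. The helper below is purely illustrative, not a MongoDB API: an index on {user_id: 1, path_id: 1} can serve queries on user_id alone or on both fields, but a query on path_id alone would need its own index.

```javascript
// Illustrative helper (not a MongoDB API): a compound index can serve a
// query when the queried fields form a prefix of the index's field list.
function coveredByPrefix(indexFields, queryFields) {
  // every queried field must appear within the first queryFields.length
  // positions of the index definition
  const prefix = indexFields.slice(0, queryFields.length);
  return queryFields.every(f => prefix.includes(f));
}

const idx = ["user_id", "path_id"]; // repxxl's compound index
console.log(coveredByPrefix(idx, ["user_id"]));            // true: prefix
console.log(coveredByPrefix(idx, ["user_id", "path_id"])); // true: whole index
console.log(coveredByPrefix(idx, ["path_id"]));            // false: not a prefix
```

So for repxxl's case the compound index covers both the two-field query and the user_id-only query; only a path_id-only query would need an additional single-field index.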
[02:32:27] <symbol> I've got a client with an app on Heroku using compose (mongohq), anyone have experience with it? I'm trying to get a dump for local dev but can't find out how to.
[02:33:13] <symbol> It's a shame I can't just ssh into the heroku app and dump from mongo.
[02:33:50] <toothrot> symbol, isn't the db accessible from the internet?
[02:34:14] <toothrot> in which case just use mongodump --host / --port
[02:34:29] <symbol> hmmm - I can give that a shot.
[02:34:41] <toothrot> https://docs.compose.io/backups/mongodump-mongorestore.html
[02:35:16] <symbol> Oh yeah, I messed around with that but couldn't get it to work. I'm not sure if the heroku layer is making it more difficult.
[02:36:58] <toothrot> https://docs.compose.io/backups/backup-to-local.html
[02:37:09] <toothrot> there's more info there about the same thing
[02:37:28] <symbol> Cool - I'll give that a shot.
[02:37:36] <toothrot> actually not much more, given what you said it may not help
[02:37:54] <toothrot> beyond that, i don't know :\
[02:38:35] <symbol> Gah - heroku is great for deploying but shit for maintenance.
[02:40:12] <symbol> Ahhh finally - http://stackoverflow.com/questions/28348212/how-to-download-production-data-from-mongohq-in-heroku-to-local-machine
[02:40:17] <symbol> That's a shitty way to get the url
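For reference, the round trip toothrot suggests looks roughly like this. The host, port, credentials, and database names are all placeholders: use the values Compose shows for the deployment.

```shell
# Dump the remote Compose/MongoHQ database (placeholder host/credentials):
mongodump --host c99.candidate.11.mongolayer.com --port 10099 \
          -u heroku_user -p 'secret' -d app_production -o ./dump
# Restore into a local mongod for development:
mongorestore --host localhost --port 27017 -d app_dev ./dump/app_production
```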
[03:12:49] <cheeser> /4/4
[07:59:30] <RonXS> Good morning all! I read that mongodb 1.1 supports php7, but i can't find a php7 extension to compile. pecl install mongodb-beta also installs a version that requires 5.*
[08:17:15] <RonXS> ok for those wondering... https://github.com/mongodb/mongo-php-driver/tree/PHP7
[08:58:18] <pyios> How do I add a new index to a collection if it conflicts with the current document set?
[08:58:44] <pyios> i.e: unique index with some docs duplicated
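On pyios's question: MongoDB won't build a unique index over conflicting documents (and the old dropDups index option was removed in 3.0), so the duplicates have to be found and resolved first. A plain-JavaScript sketch of the detection step, equivalent to an aggregation that groups on the field and keeps counts greater than 1:

```javascript
// Illustrative sketch (not a MongoDB API): count occurrences of a field's
// values across documents and return the values that appear more than once.
function findDuplicateKeys(docs, field) {
  const counts = {};
  for (const d of docs) counts[d[field]] = (counts[d[field]] || 0) + 1;
  return Object.keys(counts).filter(k => counts[k] > 1);
}

const docs = [{email: "a"}, {email: "b"}, {email: "a"}];
console.log(findDuplicateKeys(docs, "email")); // [ 'a' ]
```

Once the duplicates are deleted or merged, creating the unique index will succeed.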
[12:00:48] <ahihi> if I run {findAndModify: coll, query: foo, remove: true} multiple times in a row, is it guaranteed that the same document won't be returned more than once?
[12:04:17] <cheeser> well, you're deleting those docs from the collection...
[12:10:54] <ahihi> yeah, I'm just wondering because I'm doing "find-and-remove document d that matches q, insert new document based on d (which doesn't match q)" repeatedly and ending up with duplicates
[12:11:08] <ahihi> sometimes
[12:35:05] <cinek> I'd like to ask for a recommendation about starting a new slave. I'm creating it from a mysqldump file (because there were some errors in innodb) and I hope this will remove them. and then should I start the replication based on gtid - will just setting MASTER_USE_GTID=current_pos be sufficient?
[12:35:30] <cinek> Is there a way to map gtid to master_log, master_pos ?
[12:36:16] <cinek> guys sorry - bad window :/
[13:55:21] <JoWie> is pecl "mongodb" a drop in for "mongo" ?
[14:02:19] <Derick> JoWie: no, it's a totally different driver with a different API
[14:02:29] <JoWie> ah thanks
[14:02:40] <Derick> http://jmikola.net/blog/next-mongodb-php-driver/ has some background info
[14:02:58] <Derick> http://docs.php.net/manual/en/mongodb.overview.php has the new architecture
[14:03:09] <Derick> (we're still writing migration guides)
[14:06:58] <cheeser> i'm glad *someone* understood the question. :)
[14:07:15] <Derick> pecl is like cpan :)
[14:07:21] <JoWie> haha
[14:07:27] <Derick> (but not perl)
[14:09:53] <JoWie> Is this a known issue in the legacy driver (1.6.11) or am i doing something wrong: http://pastebin.com/UYQAjGLa (timeout() on an aggregate cursor)
[14:10:47] <Derick> does it really take 30 seconds for it to timeout though?
[14:11:02] <JoWie> yea
[14:11:07] <Derick> ok, that's good :-)
[14:11:14] <JoWie> huge aggregate
[14:11:16] <JoWie> hehe
[14:11:23] <Derick> the maxTimeMS you specified is a server thing, so not influencing the client side cursor timeout
[14:11:37] <JoWie> yup
[14:11:59] <JoWie> but the ->timeout() is not
[14:13:52] <Derick> one sec
[14:14:49] <Derick> can't remember my jira password now :-)
[14:16:06] <JoWie> jira can also be pretty slow at times :p
[14:16:21] <Derick> no, i really don't remember it
[14:16:26] <Derick> and I need it for the wiki....
[14:19:30] <Derick> how odd - just did a password reset, but no luck :S
[14:19:47] <JoWie> gotta love jira
[14:20:04] <JoWie> just upgraded to 1.6.12 with the same result
[14:20:05] <Derick> no, jira works
[14:20:15] <Derick> it's confluence that's being a pain
[14:20:52] <Derick> oh, it just took time
[14:21:08] <JoWie> oh yea i use confluence too
[14:22:01] <Derick> JoWie: can you prefix your script with http://pastebin.com/1Vb2tfBj ?
[14:34:23] <JoWie> btw $msg is unused in saveMongoLogException
[14:50:48] <JoWie> derick: http://pastebin.com/7Y4dCVX5
[14:51:23] <JoWie> i had to fiddle around a bit to get it to write the log file
[14:51:50] <JoWie> i ended up using error_log, which writes the line immediately instead of using a shutdown handler
[14:52:29] <Zelest> Derick, we have 3 nodes in a replicaset.. and we basically *must* have it up and running non-stop.. in case two of the nodes die.. and "no candidate" is available, how do I handle a "worst case scenario" where I need to use the remaining node as a single server? It's mongod 2.6.x
[14:52:59] <Zelest> Derick, can I simply just restart it without the replicaset options and have it go on and act as a standalone server?
[14:53:24] <Derick> Zelest: yes, but that will definitely mess up synching to servers - you'd have to redo it all. It's not a good thing to have to do.
[14:53:42] <Zelest> Ah
[14:54:09] <Zelest> so if 2 nodes are fscked.. and no candidate server is available.. what is the proper way to "move on" ?
[14:54:22] <Derick> JoWie: hmm, doesn't seem to set the timeout at all on the cursor... but that log includes *way* more than just one aggregation query
[14:54:38] <Derick> Zelest: bring one of the 2 broken nodes up
[14:54:38] <Zelest> add more servers? get more server before that eventually happens?
[14:54:43] <Zelest> ah
[14:55:02] <Derick> it's unlikely to get two broken though... by that time you're going to have other issues as well I'm sure
[14:55:03] <Zelest> that's all thats needed for them to work?
[14:55:13] <Zelest> true that
[14:55:17] <Derick> you need the majority up to have a writable set
[14:55:21] <Derick> one node up, you can still read
[14:55:27] <Derick> (out of 3 configured nodes)
[14:55:34] <Zelest> ah
[14:55:42] <Zelest> i require writes though, but yeah..
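Derick's majority rule can be stated as a one-liner (illustrative helper, not a MongoDB API): a replica set of n configured members needs floor(n/2) + 1 of them up to elect a primary and keep accepting writes.

```javascript
// Illustrative sketch of replica set election math: a strict majority of
// *configured* members must be reachable for the set to have a primary.
function canElectPrimary(configured, up) {
  return up >= Math.floor(configured / 2) + 1;
}

console.log(canElectPrimary(3, 3)); // true
console.log(canElectPrimary(3, 2)); // true: majority of 3 is 2
console.log(canElectPrimary(3, 1)); // false: reads may continue, writes stop
```

This is why Zelest's 3-node set survives one failure but not two, and why Derick's advice is to bring one of the broken nodes back rather than reconfigure the survivor as a standalone.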
[14:58:56] <JoWie> derick: oh yea i logged the whole thing. i can just do the aggregation if you want
[15:02:57] <Derick> yes please
[15:15:58] <JoWie> derick: http://pastebin.com/KiGyNdVc
[15:31:29] <synthmeat> i cannot build mongodb from source for the life of me... this is my 20th or so attempt in the last few months
[15:32:33] <synthmeat> fails like this https://gist.github.com/anonymous/4673f4f28ae7e46993ce
[15:33:16] <Derick> JoWie: strange - can you file a bug report? http://jira.mongodb.org/browse/PHP
[15:33:57] <steerio> hi all, are you aware of any command line argument parsing bug in the mongodb shell v3.0.7?
[15:34:26] <JoWie> alright
[15:34:50] <steerio> I enter "mongo -u user -p foobar host:port/db", it asks me for a password, and then informs me that it's connecting to "foobar" (the password argument)
[15:35:09] <steerio> it doesn't matter if I reorder the arguments or use --username and --password
[15:35:55] <steerio> (and where it wants to connect is technically 127.0.0.1/foobar)
[15:36:58] <JoWie> have you tried --host
[15:40:17] <steerio> nope
[15:40:22] <steerio> help says this: https://gist.github.com/steerio/2883595f0aa348c29737
[15:40:47] <steerio> what I enter actually runs on other boxes, used to run here as well, and is valid according to the help message
[15:41:21] <steerio> now I tried with --host, same
[15:41:47] <steerio> I tried "mongo -u user -p foobar --host host --port 12345 dbname"
[15:41:56] <StephenLynx> i can`t see the command you are trying to run there.
[15:42:10] <steerio> in fact it tries to connect to the right host and port, but fails to get the password from the command line
[15:42:28] <steerio> StephenLynx: "mongo -u foo -p bar abc.def.com:12345/dbname" originally
[15:42:29] <StephenLynx> ah
[15:42:35] <StephenLynx> didnt notice you pasted it here
[15:42:54] <StephenLynx> did you try -u user:foobar ?
[15:43:34] <steerio> that gives 'Error: 4 Missing expected field "pwd"'
[15:43:43] <steerio> and anyway, that's not what "mongo help" says I should do :)
[15:43:58] <StephenLynx> hm
[15:44:07] <StephenLynx> oh
[15:44:13] <StephenLynx> try without the password
[15:44:15] <StephenLynx> just -p
[15:45:01] <steerio> this works
[15:45:24] <steerio> but I want to supply the password from the command line (it's invoked by a script that gets a mongo url from a heroku config and starts up a mongo shell)
[15:45:43] <steerio> and that used to be possible (and still is according to the help)
[15:47:19] <StephenLynx> hm
[15:47:42] <StephenLynx> can`t this script open a regular connection?
[15:47:54] <StephenLynx> you really need to use the terminal?
[15:48:01] <steerio> what do you mean?
[15:48:07] <StephenLynx> what language is this script in?
[15:48:32] <steerio> ruby. it invokes "heroku config", parses the URL, constructs a command line, then does an exec "mongo #{cmdline}"
[15:48:42] <steerio> but that doesn't matter, because the same command line fails in the terminal too
[15:48:46] <StephenLynx> so why don`t you use the ruby driver to open a regular connection?
[15:49:19] <steerio> because I don't want to lose the fully fledged shell I have, including tab completion, etc.
[15:49:48] <StephenLynx> but isn`t this script that is interacting with the database?
[15:49:55] <steerio> no.
[15:49:59] <steerio> it opens up a shell for me to work in.
[15:50:14] <steerio> i just need the script to parse the url and construct a command line (as mongo doesn't accept mongodb:// style urls)
[15:50:31] <StephenLynx> hm
[15:50:32] <steerio> and anyway, having it get the db creds from heroku is also pretty convenient
[15:51:56] <steerio> here's the script: https://gist.github.com/steerio/40e7c0cfa5e6a90a83e3
[15:52:51] <steerio> as it's calling mongo with "exec", the mongo process will completely replace the ruby process and I will get an interactive shell
[15:53:34] <steerio> but all that doesn't matter, because this is equivalent to calling the command line manually
[15:53:47] <steerio> thing is, "-p arg" doesn't work anymore, but the help was not updated
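For reference, the documented 3.0-era invocation steerio is trying looks like the sketch below; host, port, and credentials are placeholders. One thing worth ruling out that isn't covered in the exchange: if the user is defined in a database other than the one being opened, --authenticationDatabase is also required, and its absence can produce confusing auth behavior.

```shell
# Password supplied inline, as steerio wants
# (omit the value after -p to be prompted instead):
mongo --host abc.def.com --port 12345 -u foo -p 'bar' dbname
# If the user is defined in the admin database:
mongo --host abc.def.com --port 12345 -u foo -p 'bar' \
      --authenticationDatabase admin dbname
```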
[15:56:05] <JoWie> derick: PHP-1500
[15:56:47] <JoWie> although jira messed up my line breaks
[15:56:51] <Derick> that's ok
[15:58:30] <Derick> JoWie: cheers, we'll have a look at that (in a while)
[15:58:49] <JoWie> yea i was not able to assign it to the current sprint :D
[15:59:08] <Derick> :D
[17:16:20] <hip> hi
[17:18:45] <hip> i have a collection with an attr which has a number value. i can aggregate and group them by single value (1,2,3,4,...), but i want to do it with predefined groups (1-5,5-10,...). how can i do this?
[17:18:55] <hip> mapreduce?
[17:19:48] <hip> or can i do a projection on the result of the groupings anyhow?
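One way to get hip's predefined groups without mapReduce is to compute a bucket key per document and then group on it; in the aggregation framework that's a $project computing the key followed by $group (MongoDB 3.4 later added a dedicated $bucket stage). A plain-JavaScript sketch of the bucket computation, with a hypothetical field name attr:

```javascript
// Illustrative sketch: assign each document to a fixed-width range
// (0-5, 5-10, ...) and count documents per range.
function bucketCounts(docs, width) {
  const out = {};
  for (const d of docs) {
    const lo = Math.floor(d.attr / width) * width;
    const key = `${lo}-${lo + width}`; // e.g. "0-5", "5-10", ...
    out[key] = (out[key] || 0) + 1;
  }
  return out;
}

console.log(bucketCounts([{attr: 1}, {attr: 4}, {attr: 7}], 5));
// { '0-5': 2, '5-10': 1 }
```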
[19:28:35] <jareddlc> hey, what's the correct way to use keepalive in a mongodb connection string?
[21:14:31] <bilb_ono> is there some query that I can use to keep an open connection and have a callback when a new document is added?
[21:14:38] <bilb_ono> and it returns that document
[21:14:59] <bilb_ono> rather than me manually trying to query for anything newer than the last time I queried and keep track myself
[21:16:55] <cheeser> tailable cursor on the oplog
[21:17:27] <bilb_ono> oplog?
[21:17:53] <bilb_ono> this thingy: https://docs.mongodb.org/manual/tutorial/create-tailable-cursor/ ?
[21:20:29] <cheeser> yeah
[21:53:07] <diegows> hi
[21:53:19] <diegows> is there a way to setup the sync target before the replication starts?
[21:56:10] <cheeser> come again?
[21:57:40] <diegows> cheeser, that was for me?
[21:59:20] <cheeser> oui
[22:01:46] <diegows> I need to configure a 5 member replicat set
[22:01:58] <diegows> but one of the members should sync from one of the secondaries
[22:02:35] <cheeser> oh, when you're configuring that node that's an option, i think.
[22:02:51] <diegows> no or it's not documented :)
[22:04:48] <Boomtime> you can set syncSource but there is no way to make it 'stick'
[22:05:24] <cheeser> makes sense. sync'ing from a secondary kinda defeats the purpose long term
[22:09:11] <diegows> it's temporary
[22:09:13] <diegows> not long term
[22:17:39] <diegows> another question now: how do I change a member from removed state back to a normal state?
[22:17:44] <diegows> i just removed it to try something
[22:32:14] <cheeser> readd it?
[22:57:52] <dburles> hey might anyone know how to write an aggregation to do the following: https://dpaste.de/tKJC (hopefully the example is self explanatory)
[22:59:00] <dburles> basically i'm after distinct values across both fields
[23:31:37] <joannac> dburles: try the $unwind and then $group operator
[23:35:00] <dburles> joannac: yeah pretty much where i'm at, the piece i'm missing is how exactly i then combine the two field values
[23:37:32] <joannac> the 2 field values?
[23:38:11] <joannac> you only have one field, don't you?
[23:38:28] <joannac> I thought you were just counting instances in the exampleIds array
[23:38:56] <dburles> joannac: i mean values from both exampleId and tags.exampleIds
[23:40:17] <joannac> dburles: well, then I don't understand your example. tell me in english what you want
[23:40:30] <joannac> how does exampleId fit in?
[23:40:56] <dburles> it's an odd data structure i know, it makes sense in context however
[23:41:31] <dburles> i'd like to output a result that is a distinct set of values from both fields across each document
[23:42:16] <dburles> does that make sense?
[23:42:19] <joannac> no
[23:42:39] <joannac> look at all the distinct values for just tags.exampleIds
[23:42:44] <joannac> in those 2 documents
[23:43:02] <joannac> there's 1 "one, 2 "two" 2 "three"
[23:43:14] <joannac> which matches your result
[23:43:37] <joannac> tell me what I'm meant to do with exampleId
[23:44:03] <dburles> oh yeah heh i guess i should have spent more time on the example
[23:44:39] <dburles> well all i'm wanting to do with it is include it as if it were part of the array
[23:46:14] <dburles> so if there was a third document with exampleId: 'four' (and perhaps an empty exampleIds array) we would see "_id: 'four', total: 1" in the results
[23:48:02] <dburles> joannac: hopefully that's clearer
[23:49:04] <joannac> yeah, but you'll have to figure out a way to get them all named the same thing
[23:50:46] <joannac> you might have to do it in your application
[23:50:56] <dburles> hmm how do you mean? the actual data structure needs to change?
[23:53:19] <dburles> or process the results in the application?
[23:56:08] <joannac> either. change the data structure, or do it app side.
[23:56:26] <joannac> to do it with aggregation you need to have all your values under the same field
[23:57:32] <joannac> doing it app side would be as easy as db.foo.find().forEach(function(doc) { count[doc.exampleId]++; doc.tags.exampleIds.forEach(function(v) { count[v]++ }) })
[23:57:52] <dburles> dang i had wondered if that were the case
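joannac's app-side counting, expanded into self-contained JavaScript. The document shape is inferred from the discussion (dburles's dpaste contents aren't in the log): exampleId is treated as if it were one more element of the tags.exampleIds array, and every value is counted once per occurrence.

```javascript
// Illustrative sketch of the app-side approach: tally exampleId together
// with each element of tags.exampleIds across all documents.
function countValues(docs) {
  const count = {};
  const bump = v => { count[v] = (count[v] || 0) + 1; };
  for (const d of docs) {
    bump(d.exampleId);
    for (const v of d.tags.exampleIds) bump(v);
  }
  return count;
}

const docs = [
  { exampleId: "one", tags: { exampleIds: ["two", "three"] } },
  { exampleId: "four", tags: { exampleIds: [] } },
];
console.log(countValues(docs)); // { one: 1, two: 1, three: 1, four: 1 }
```

This matches the result dburles describes: a third document with exampleId: 'four' and an empty array contributes "four: 1" to the totals.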