PMXBOT Log file Viewer


#mongodb logs for Wednesday the 3rd of September, 2014

[01:44:41] <slaning> If I'm adding .populate to a findOne query, what's the cleanest way to then populate something inside of the property that was initially populated?
[02:06:03] <MyS> how do I configure phpmyadmin to work with MongoDB?
[06:52:25] <k_sze[work]> I have trouble reasoning about how to properly take advantage of a replica set.
[06:53:54] <k_sze[work]> For example, if there's a record that I *know* is in the primary, but when I try to find it in a replica set using NEAREST read preference, the secondary that I hit may not have it synchronized over yet.
[06:53:57] <k_sze[work]> So what do I do?
[06:57:32] <joannac> read from primary
[06:57:55] <joannac> or accept that replication is asynchronous and you might be missing data when you read from a secondary
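k_sze[work]'s dilemma, and joannac's two answers, can be pictured with a toy model (plain Python, no MongoDB involved; all names here are illustrative): a write acknowledged by the primary is not instantly visible on a lagging secondary, so a secondary read may miss it, while a primary read always sees it.

```python
import copy

class ToyReplicaSet:
    """Toy model of asynchronous replication: the secondary only
    sees writes that have been explicitly replicated over."""

    def __init__(self):
        self.primary = {}    # _id -> document
        self.secondary = {}  # lagging copy

    def insert(self, doc):
        # Writes go to the primary; replication happens later.
        self.primary[doc["_id"]] = doc

    def replicate(self):
        # Simulate the oplog catching up.
        self.secondary = copy.deepcopy(self.primary)

    def find_one(self, _id, read_preference="primary"):
        node = self.primary if read_preference == "primary" else self.secondary
        return node.get(_id)

rs = ToyReplicaSet()
rs.insert({"_id": 1, "msg": "hello"})

# The primary sees the write immediately...
assert rs.find_one(1, "primary") is not None
# ...but a secondary read can miss it until replication catches up.
assert rs.find_one(1, "secondary") is None
rs.replicate()
assert rs.find_one(1, "secondary") is not None
```

The point is joannac's: either read from the primary, or accept that a read routed to a secondary may lag behind.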
[07:08:04] <donCams> can anyone suggest a database model for a bus reservation system?
[07:08:37] <Logicgate> that's a vague question donCams
[07:09:42] <donCams> figured T_T
[07:16:52] <Kiliko> Would it be possible to downgrade from version 2.6 to 2.4, or export all my data and then import it again?
[07:52:47] <krion> hello
[07:53:29] <joannac> hi?
[07:53:35] <krion> i'm having trouble with a removeShard ongoing forever
[07:53:54] <krion> 1331 chunks remaining and the number doesn't decrease
[07:55:29] <krion> joannac: any idea ?
[07:55:35] <krion> i'm so new to mongo...
[07:56:20] <krion> the thing is we have deleted the db on both shards manually
[07:56:39] <joannac> krion: balancer on? did chunks move but then stall, or have chunk moves started but now stopped for some reason? what do the logs say?
[07:56:53] <krion> the chunks never moved
[07:57:04] <krion> my goal is to have a clean mongodb without shard and data
[07:57:15] <kali> krion: there is a recurrent issue showing as "jumbo chunks" in the logs
[07:57:34] <krion> moveChunk result: { note: "from execCommand", ok: 0.0, errmsg: "not master" }
[07:58:26] <krion> this is what i got on the mongo log of the shard
[07:58:43] <joannac> look in the primary of that shard?
[07:58:50] <kali> krion: no trace of jumbo chunks ? :)
[07:59:26] <krion> jumbo chunk...
[08:00:47] <krion> [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
[08:00:57] <krion> kali: let me have a look
[08:01:37] <krion> joannac: this is a very complex setup for me, i only have two vms acting as shards, but i've also two "entity" servers and three config servers
[08:02:04] <joannac> I don't know what you mean by "entity server"
[08:02:05] <kali> three config servers is the standard for production
[08:02:14] <krion> ok
[08:02:24] <krion> joannac: looks like the "entity" server are running mongos only
[08:03:29] <kali> it makes sense. they are the actual clients of your cluster data
[08:04:01] <kali> i assume they either run some kind of application, or are targeted by other app servers
[08:04:36] <krion> for sure yes
[08:04:55] <krion> and i rm -rf all the db in /var/lib/mongo on both shard
[08:05:03] <krion> i assume this is bad, i will never do it again ;)
[08:05:06] <kali> ho
[08:05:11] <joannac> what
[08:05:15] <kali> you're not in production, right ?
[08:05:20] <krion> i see nothing like a "chunkTooBig"
[08:05:22] <krion> for sure not :)
[08:05:33] <joannac> krion: okay, so what's the goal here?
[08:05:49] <krion> my goal is to have a clean install without any data and shard
[08:05:54] <joannac> okay
[08:06:02] <joannac> go to each server
[08:06:03] <kali> that is relatively easy :)
[08:06:13] <krion> so i dropped every single db i could find,
[08:06:39] <kali> krion: you need to stop everything and remove everything from the disk at the same time, then restart
[08:06:39] <krion> and now i'm trying to removeShard one of the two shards, but it's ongoing forever since chunks are remaining
[08:06:56] <krion> oh.
[08:07:00] <kali> including config servers
[08:07:26] <kali> and then, you need to recreate the replica set configuration, then the sharding configuration
[08:10:14] <krion> i know you will tell me to rtfm, but is any configuration stored in the database ?
[08:10:19] <krion> like it is for mysql
[08:10:46] <krion> or is it more like elasticsearch where you can remove every files except the /etc/ config file
[08:11:04] <RaffoPazzo> so there's something I dont' understand
[08:11:06] <krion> because on my "entity" server running mongos only there is no "db"
[08:11:22] <RaffoPazzo> mongo can run on 32 bits, but it needs 64-bit arithmetic spread everywhere
[08:11:23] <RaffoPazzo> :(
[08:11:32] <RaffoPazzo> especially atomic operations
[08:18:39] <krion> kali: joannac thanks :)
[08:21:23] <kali> krion: the replica sets and sharding config are in-db
[08:24:53] <krion> ok
[08:25:42] <krion> now i'll have to understand how to configure all of it :)
[08:25:58] <krion> since sh.status() is telling me all servers are down
[08:27:18] <krion> all three config servers i mean
[08:27:33] <krion> but it's much better with the two mongos started... on the vm shards
[08:29:44] <krion> kali: thanks again also joannac
[08:54:09] <sascha> hey there
[08:55:20] <sascha> i'd like to find newly added entries in a collection. there's no date or incremental id field which i can use. what would you guys recommend?
[08:55:35] <jordz> The ObjectID field contains a timestamp
[08:55:49] <jordz> so _id
[08:55:56] <jordz> if you're not creating them yourself
[08:56:02] <sascha> i do
[08:56:20] <sascha> or at least the program i'm using does
[08:56:45] <sascha> current idea: copy the collection before the tool updates it and compare it afterwards
[08:57:05] <jordz> Hmm, what do you get if you do a getTimestamp() on the objects you create these custom IDs for?
[08:57:14] <sascha> sec
[08:57:15] <jordz> if you check, you might actually get the timestamp it was created at
[08:57:24] <jordz> in which case you can just compare times
[08:57:57] <sascha> well, the thing is, objects might be updated
[08:58:00] <sascha> does the timestamp change then?
[08:58:51] <jordz> I think it has two, an updated and a created
[08:58:55] <jordz> but I'm not sure
[08:59:00] <jordz> can anyone confirm that?
[08:59:58] <jordz> I think it stays the same sascha
[09:00:14] <jordz> Because otherwise, if I'm not mistaken, the object ID would change
[09:00:26] <jordz> but then again, I'm not 100% sure.
[09:00:47] <jordz> test it :)
[09:01:13] <sascha> 2014-09-03T10:56:39.634+0200 Error: invalid object id: length
[09:01:36] <jordz> what are you attempting to do?
[09:02:40] <jordz> sascha, it stays the same.
[09:23:34] <sascha> jordz: ObjectId("1f2ff3087d78402b9f2226fd6e82f32f.16").getTimestamp()
[09:23:47] <sascha> that id is in the "_id" field in my object
[09:27:10] <jordz> .16?
[09:28:57] <jordz> http://docs.mongodb.org/manual/reference/object-id/
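The getTimestamp() trick works because the first 4 of an ObjectId's 12 bytes are a big-endian Unix timestamp (per the object-id reference linked above). A minimal sketch in Python, no driver required; the helper name and sample hex are made up:

```python
from datetime import datetime, timezone

def object_id_timestamp(oid_hex):
    """Decode the creation time embedded in an ObjectId.

    A valid ObjectId is 12 bytes (24 hex characters); the first
    4 bytes are a big-endian Unix timestamp, which is all that
    the shell's getTimestamp() reads.
    """
    if len(oid_hex) != 24:
        raise ValueError("invalid object id: length")
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# 0x5406ca80 == 1409731200 == 2014-09-03 08:00:00 UTC
print(object_id_timestamp("5406ca80" + "aa" * 8))  # 2014-09-03 08:00:00+00:00
```

It also shows why sascha's 35-character string _id fails with "invalid object id: length": it simply isn't 24 hex characters.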
[09:41:26] <sascha> i know how they look, but that's the script which creates those IDs
[09:49:44] <jordz> is it not throwing an exception I see?
[09:57:46] <krion> what's exactly a replica set ?
[09:58:24] <krion> i could found an explanation on the documentation, only how to create etc..
[10:01:45] <jordz> krion: A replica set is just multiple mongod's holding the same data. Say you had 3 machines: they all contain the same data and automatically replicate and fail over
[10:02:13] <jordz> http://docs.mongodb.org/manual/replication/
[10:02:19] <jordz> the top paragraph
[10:03:33] <sascha> jordz: nope, the only error i had, was the one i posted
[10:03:35] <sascha> > ObjectId("1f2ff3087d78402b9f2226fd6e82f32f.16").getTimestamp()
[10:03:37] <sascha> 2014-09-03T10:56:39.634+0200 Error: invalid object id: length
[10:03:50] <sascha> the id is simply a string which has been saved to "_id"
[10:03:53] <jordz> sascha, that error is because the ObjectId is incorrect
[10:04:04] <jordz> no, it doesn't work like that
[10:04:08] <sascha> yeah, it's not a mongodb generated id
[10:04:39] <jordz> sascha, but that ID is incorrect, it needs to contain 12 bytes only
[10:04:55] <sascha> i know
[10:05:15] <sascha> as i said, the id is nothing else but a string being written into the _id field
[10:05:24] <Derick> if it's not mongodb generated, why use ObjectId around it?
[10:05:46] <sascha> Derick: because i didn't know it better, before i figured out how objectid works
[10:06:14] <Derick> but how could it have been created like that then? :-)
[10:07:47] <sascha> the script pulls data from a website which also contains this weird looking id
[10:08:02] <sascha> and instead of saving this id into something like "id" or "uuid" it saves it to "_id"
[10:09:42] <krion> jordz: that's what i thought, thanks
[10:10:27] <krion> hum...
[10:10:40] <krion> config servers aren't holding the databases ?
[10:12:10] <Derick> sascha: but you shouldn't be able to even create an ObjectId like that... ObjectId is a special type. You can just put a string (or float) in _id, but that is *not* an ObjectId.
[10:12:32] <sascha> Derick: i know ...
[10:12:47] <Derick> ok - then still, how did you manage to create a "wrong" ObjectId? :-)
[10:14:12] <sascha> Derick: simply by doing db.testcol.insert({_id: '1234567890'}) i guess
[10:14:48] <sascha> that's what the script does
[10:15:05] <sascha> it ignores the fact, that _id could be used as objectid with a generated id
[10:16:25] <krion> can i have shard without replica set ?
[10:16:34] <Derick> krion: yes, but you shouldn't
[10:17:06] <krion> what's the point of having two vms, each being a single replica set and each having a single shard
[10:17:22] <krion> Derick: it's a preproduction env
[10:18:15] <Derick> krion: replication is there for automatic failover mostly; which you will need when deployed on AWS f.e.
[10:21:38] <krion> http://pastebin.com/kjj875C2
[10:21:56] <krion> so they didn't have a replica set at all, only 2 shards, if i interpret it correctly
[10:22:19] <krion> Derick: ok
[10:22:24] <Derick> krion: yes, it's certainly possible for dev envs. I wouldn't do it for staging or production though
[10:22:45] <krion> it's a staging env... but hey, i do what they said
[10:23:25] <krion> Derick: can you just confirm that the empty _id in my pastebin confirms a no-replicaset situation ?
[10:23:54] <Derick> krion: there is no empty _id in there
[10:24:42] <krion> Derick: oh yes... you're right...
[10:25:42] <krion> sh.status() isn't good for that :)
[10:25:53] <krion> i remember rs.conf() returned null
[10:26:12] <krion> anyway, thanks Derick :)
[10:28:35] <kees_> Derick, i had this problem with php/mongo and ints converting to floats, it was caused by storing the values with a 1.5 driver and retrieving them with a 1.4 driver.. the 1.4 driver saw them as floats
[10:29:05] <Derick> kees_: yeah, we made this better in 1.5 by having native_long on
[10:29:12] <Derick> meaning that you'll get proper 64bit ints and no floats
[10:30:08] <kees_> aye, but 1.4 thinks it is a double/float and stores them as a double/float
[10:30:36] <Derick> you can just set mongo.native_long=1 in the 1.4 driver
[10:31:06] <kees_> i upgraded all the drivers to 1.5 ;)
[10:31:21] <Derick> that fixes it too as it's =1 by default there
[10:48:23] <krion> hum, now there is something i don't get
[10:48:43] <krion> where should i connect to have a global view of all the replica sets in my cluster ?
[10:59:55] <krion> Derick: now that i have a running cluster, i can confirm that my client doesn't have rs initiated before
[11:00:17] <krion> since a shard looks like "replica set name" "replset/hostname:port"
[11:00:36] <krion> and my pastebin doesn't have the "replset/hostname:port" form but just "hostname:port"
[11:07:31] <remonvv> \o
[11:07:38] <Zelest> o/
[11:51:07] <talbott> hello mongoers
[11:51:11] <talbott> quick q
[11:52:08] <talbott> is it possible to run an update to push a new subdocument onto a doc array
[11:52:24] <talbott> only if the subdocument does not already exist?
[11:52:35] <talbott> trying to do something with elemmatch, but cant figure it out
[11:55:44] <talbott> i know there's addToSet
[11:55:55] <talbott> but not sure that works for whole subdocs
[12:11:55] <evildead> hello all
[12:12:08] <evildead> i failed to log into mongo with the localhost exception
[12:12:27] <evildead> i just installed mongodb-server on my debian wheezy
[12:12:39] <evildead> the process is running with the default config file
[12:13:05] <evildead> but the mongo --verbose give me this:
[12:13:06] <evildead> Wed Sep 3 14:09:53 uncaught exception: login failed
[12:13:07] <evildead> Wed Sep 3 14:09:53 User Assertion: 12514:login failed
[12:13:10] <remonvv> talbott: $addToSet is okay if your entire subdocument will match exactly, if not you can put the appropriate condition in the update query.
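remonvv's caveat is the important one: $addToSet deduplicates on exact equality of the whole subdocument. A toy model of that semantics in Python (dict equality here ignores field order, whereas real BSON comparison is order-sensitive, so this is only an approximation):

```python
def add_to_set(array, subdoc):
    """Toy model of MongoDB's $addToSet: append the subdocument
    only if an *exactly equal* element is not already present."""
    if subdoc not in array:
        array.append(subdoc)
    return array

tags = [{"name": "a", "score": 1}]
add_to_set(tags, {"name": "a", "score": 1})  # exact duplicate: not added
add_to_set(tags, {"name": "a", "score": 2})  # differs in one field: added
print(len(tags))  # 2
```

So for talbott's case, $addToSet is enough only when the whole subdocument will match exactly; otherwise the dedup condition has to go into the update's query part.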
[12:39:44] <elfuego> is there a technique that can be used to detect when a mongodb server has started?
[12:45:40] <braz> there are a number, if it is on a Linux box and installed as a service you could use "sudo service mongodb status"
[12:46:00] <braz> how are you trying to check it - is it via the command line or via a query to the server direct ?
[12:52:35] <elfuego> braz: via commandline
[12:53:37] <elfuego> its in a docker container, which is running mongod, so I don’t have access to “service mongod status”
[13:02:02] <jordz> not used docker but can you use ps?
[13:02:15] <jordz> ps aux | grep mongod will tell you if the process is running
[13:02:49] <elfuego> jordz: yes I can
[13:03:24] <elfuego> jordz: but will that tell me whether or not mongod is ready and waiting for a connection?
[13:04:30] <jordz> I believe if mongod is running it usually means it's accepting connections BUT if you want to be sure you could try and connect locally and if it fails then assume it doesn't
[13:05:14] <jordz> so "ps aux | grep mongod".. "mongod is there".."mongo <somecommand>".."connects locally and runs"
[13:05:21] <jordz> doesn't return an error code
[13:05:23] <jordz> or whatever
[13:05:28] <jordz> would something like that work?
[13:07:53] <elfuego> I think that will work, but it's a bit hacky. I did something similar with a node server, using a curl script to try connecting until it gets a 200
[13:08:13] <elfuego> I was hoping for a cleaner way of doing it
[13:08:23] <elfuego> on mongo
[13:09:33] <jordz> You could actually attempt some form of TCP connection, I'm sure there'll be some form of handshake the shell does with the server and you could just use that. Like I said, I'm just mitigating any kind of linux-ish stuff that docker might not allow
[13:16:12] <elfuego> True
[13:16:59] <elfuego> jordz Just to give you a little context, i’m trying to run mongoimport to initialize the database once it has started.
[13:17:21] <elfuego> I was wondering if there exists any other method of doing so
[13:19:38] <jordz> Well, even if you were to use service mongod status, there are cases where it will not connect straight away and there's no way to tell other than looking at the logs
[13:20:08] <jordz> for instance, when starting a brand new instance on a fresh machine, the initial boot creates the memory mapped files which can take some time
[13:20:28] <jordz> mongod's status will be running but connections will not be available until it's finished
[13:20:52] <jordz> which can take between seconds and minutes (in some of my cases)
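elfuego's "wait until mongod accepts connections, then run mongoimport" step can be done with a plain TCP poll instead of parsing ps output. A sketch (host, port and timeouts are placeholders):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=0.5):
    """Poll until something accepts TCP connections on host:port,
    or until the timeout elapses. A successful connect is a decent
    proxy for 'mongod is up', though on a fresh instance the
    listener may not appear until initial preallocation finishes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

For example, `wait_for_port("localhost", 27017)` before kicking off mongoimport; the generous timeout is there precisely because of jordz's point that a first boot can spend minutes creating the memory-mapped files.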
[13:21:17] <weisjohn> is there a way to use the timestamp in the object id in either a group or an aggregation?
[13:21:26] <weisjohn> i’m having a lot of trouble finding examples
[13:27:23] <lha13> Hello I was wondering if someone would be able to assist me in trying to perform a group query upon a nested object in mongo?
[13:28:50] <elfuego> jordz: is there a way to speed up the memory mapping of files?
[13:29:34] <elfuego> jordz: are there additional steps I can do in the mongo installation to speed up start up time?
[13:38:17] <jordz> elfuego: potentially yes but have you checked if you get that delay?
[13:38:35] <orenDB> Question about aggregation frame work + EXPLAIN
[13:38:55] <orenDB> db.device_raw_events.aggregate( [ { $group : { _id : { executed: "$executed" }, count: { $sum: 1 } } } ], { explain:true } )
[13:39:04] <orenDB> why it doesn't return explain plan?
[13:40:47] <elfuego> yes jordz: about a 2 minute delay
[13:40:52] <cheeser> you're on 2.6?
[13:41:10] <orenDB> yes
[13:41:23] <orenDB> db.serverStatus()
[13:41:37] <orenDB> no~
[13:41:39] <orenDB> sorry!!
[13:41:45] <orenDB> "version" : "2.4.6",
[13:41:51] <cheeser> there you go :)
[13:41:55] <orenDB> thanks!
[13:42:01] <orenDB> :(
[13:42:32] <jordz> elfuego, I *think* it might be possible by having them already created. i.e: copy the /data/db dir over
[13:43:19] <jordz> but not sure.
[13:50:42] <elfuego> jordz: sounds like a good idea I will try it, thanks
[14:08:35] <Derick> krion: sounds good then
[14:34:30] <BaNzounet> Hey, I want to clone a collection. I have to use db.cloneCollection, right? But I'm not sure how to use it. I want to clone a collection named foo to bar, how do I do that?
[14:36:12] <BaNzounet> I'm looking for something like db.source.copyTo("target");
[14:36:14] <cheeser> http://docs.mongodb.org/manual/reference/method/db.cloneCollection/
[14:37:41] <BaNzounet> cheeser: thanks, I already read that, I still don't know how I should write it to clone source to target :s
[14:39:19] <cheeser> well, this command clones between hosts...
[14:39:26] <cheeser> so not quite what you want it sounds
[14:39:55] <cheeser> what doesn't work about copyTo()?
[14:41:03] <BaNzounet> http://docs.mongodb.org/manual/reference/method/db.collection.copyTo/#behavior
[14:41:10] <BaNzounet> the warning scared me :D
[14:42:32] <BaNzounet> and it also says "Consider using cloneCollection() to maintain type fidelity."
[14:50:12] <BaNzounet> I exported the collection and imported it under another name, It did the job
[14:51:54] <cheeser> good idea
[15:10:23] <ejb> I have a structure like this { shows: [ { foo: bar }, { baz: bar } ] }. How can I $unset foo from any shows that contain it?
[15:11:41] <ejb> Not working: db.col.update({ 'shows.$.foo': { $exists: 1 } }, { $unset: { 'shows.$.foo': 1 } })
[15:14:28] <remonvv> ejb : update({'shows':{$elemMatch:{foo:{$exists:true}}}}, {$unset:{'shows.$.foo':1}})
[15:15:22] <andrewhathaway> This may help? http://stackoverflow.com/questions/4987289/how-to-remove-column-from-child-collection
[15:15:33] <remonvv> ejb : or update({'shows.foo':{$exists:true}}, {$unset:{'shows.$.foo':1}}) if you need exact matching
[15:16:25] <remonvv> ejb : Actually, just use the latter. It's partial match already. Brainfart
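A caveat worth knowing about remonvv's answer: the positional $ operator only targets the first array element matched per document, so a document whose shows array contains several foo entries needs the update repeated until nothing matches. A toy model of that single-element behaviour (plain Python, illustrative names):

```python
def unset_first_match(doc, array_field, key):
    """Toy model of matching {'<array>.<key>': {'$exists': True}}
    and updating with {'$unset': {'<array>.$.<key>': 1}}: the field
    is removed from the *first* matching array element only."""
    for elem in doc[array_field]:
        if key in elem:
            del elem[key]
            break  # positional $ targets only the first match
    return doc

doc = {"shows": [{"foo": "bar"}, {"baz": "bar"}, {"foo": "x"}]}
unset_first_match(doc, "shows", "foo")
print(doc["shows"])  # [{}, {'baz': 'bar'}, {'foo': 'x'}]
```

Note the second foo survives: in the real update you would rerun it (checking the modified count) until no documents match.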
[17:36:18] <pasichnyk> Has anyone noticed things in robomongo that BREAK after upgrading to Mongo 2.6, or is it simply that some new 2.6 stuff isn't able to be done via the GUI yet?
[17:38:14] <JohnnyDBA> Can someone help me with the two-factor authentication in MMS? It worked when I first logged on, now that I'm logged on it's asking me for another verification code for any changes I'm making and the text messages are not coming through to my phone anymore. Is there a way to just shut it off? We have ops issues that we need to address.
[17:39:50] <pasichnyk> JohnnyDBA, i haven't seen that happen before. Will it accept our old code?
[17:40:02] <JohnnyDBA> It is not accepting the code I used to log in
[17:40:23] <pasichnyk> is it possible it's going to someone else's phone?
[17:40:26] <JohnnyDBA> I have opened a support case for this in your Jira system
[17:40:41] <JohnnyDBA> It went to my phone when I logged in, after that I haven't received one
[17:41:15] <JohnnyDBA> the phone # is correct under settings
[17:41:28] <pasichnyk> weird, no clue. I haven't seen any issues with it before.
[17:41:48] <pasichnyk> I would email their support...
[17:42:03] <JohnnyDBA> do this instead of opening the jira case?
[17:42:13] <pasichnyk> or that :)
[17:42:18] <JohnnyDBA> ok
[17:42:22] <JohnnyDBA> thank you
[17:42:23] <pasichnyk> if its pressing, i'd do both. :)
[19:14:19] <saml> hello
[19:14:59] <saml> db.articles.find({brand:'entertainment'}).sort({publishDate:1}).limit(1) oldest article
[19:15:09] <saml> it says published in 2010
[19:15:16] <saml> but i do see articles published in 2007
[19:15:20] <saml> i think index is messed up
[19:15:22] <saml> am i right
[19:15:25] <saml> how can i reindex?
[19:17:03] <daidoji> saml: http://docs.mongodb.org/manual/tutorial/remove-indexes/
[19:17:22] <daidoji> saml: http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/
[19:17:51] <saml> hrm maybe it's timezone
[19:20:03] <saml> no.. some publishDate are strings, some are ISODate lol
[19:21:51] <rkgarcia> saml, welcome to schemaless :P
[19:22:04] <saml> hehehe
[19:22:14] <saml> thankfully only 100k documents out of 600k
[19:22:26] <saml> mongodb webscale ftw
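saml's mixed-type publishDate field is exactly why the sort looked wrong: BSON orders values by type before value, so string dates and ISODates never interleave correctly. The fix is to normalize everything to one type before comparing or sorting. A sketch of the normalization in Python (the string format here is an assumption; adjust it to whatever the collection really holds):

```python
from datetime import datetime

def normalize_publish_date(value):
    """Coerce a publishDate stored either as an ISODate (datetime)
    or as a string into a real datetime, so comparisons are by
    date rather than by BSON type."""
    if isinstance(value, datetime):
        return value
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")

docs = [
    {"publishDate": "2007-05-01T12:00:00"},   # stored as a string
    {"publishDate": datetime(2010, 1, 1)},    # stored as an ISODate
]
oldest = min(docs, key=lambda d: normalize_publish_date(d["publishDate"]))
print(oldest["publishDate"])  # 2007-05-01T12:00:00
```

With the values normalized, the 2007 article correctly beats the 2010 one; reindexing was never the problem.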
[20:34:34] <geoffeg> If I run a remove() on a non-indexed field on a very large collection, will I incur a write lock that will prevent all writes from occurring while the remove() is running?
[20:35:49] <cheeser> all writes take a write lock, indexed or not.
[20:38:53] <geoffeg> understood, but will that write lock block all other writes or will it pause at times to allow other ops to occur?
[20:41:35] <cheeser> it'll periodically yield
[20:42:49] <geoffeg> ok
[20:43:00] <geoffeg> thanks
[20:43:03] <cheeser> np
[20:48:35] <cheeser> 1
[21:12:03] <huleo> hi
[21:14:08] <huleo> http://pastebin.com/rFfsnAdu
[21:14:23] <huleo> simple point, geoJSON
[21:15:33] <huleo> "2dsphere" index created on proper field
[21:15:35] <huleo> hmm
[21:15:36] <huleo> yeah
[21:15:40] <huleo> that's as much as I see
[22:23:51] <Ryan_Lane> hi there. is there a better way for this init script to actually see if mongo is running or not? https://github.com/mongodb/mongo/blob/master/debian/init.d#L192
[22:24:04] <Ryan_Lane> because mongo very often takes longer than 10 seconds to start
[22:24:40] <Ryan_Lane> and it's seriously screwing with our config management runs (which then breaks our testing infrastructure)
[22:25:06] <Ryan_Lane> there must be some actual way to know if mongo has started and is ready for service, right?
[23:39:31] <pasichnyk> does robomongo work ok with 2.6, or has anyone found any blocking compatibility issues (since its still on 2.4 shell)?