PMXBOT Log file Viewer


#mongodb logs for Wednesday the 2nd of October, 2013

[00:09:56] <fg3> trying to efficiently find docs with array of ids using mongojs driver - any ideas?
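A minimal sketch of the usual answer to fg3's question (a single $in query on _id via mongojs); the collection name and ids are placeholders:
    // hypothetical "posts" collection; one $in query instead of one find() per id
    var mongojs = require('mongojs');
    var db = mongojs('mydb', ['posts']);
    var ids = ['524b488d9543e3cb3200001d', '524b488d9543e3cb3200001e']
        .map(function (id) { return mongojs.ObjectId(id); });
    db.posts.find({_id: {$in: ids}}, function (err, docs) {
        // docs holds every matching document from a single round trip
    });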
[00:10:54] <paulkon> is there a point in which $slice's performance gains level off vs. range-based $skip/$limit dbref population?
[06:14:14] <spicewiesel> Good morning everybody... o/
[08:52:28] <remonvv> \o
[08:53:21] <remonvv> Anyone have any theories on why a group of mongod processes in a sharding cluster indefinitely causes permanent write locks on a database that is heavily queried while chunks are being rebalanced?
[10:40:53] <bertodsera> how do I convert back BinData(3,"XXitNepfEeGhg3BWgbKcRw==") to a "usual" uuid?
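No answer appears in the log; a rough sketch of one approach in the mongo shell, assuming BinData exposes a .hex() helper in this shell version, and noting that subtype-3 "legacy" UUIDs may be stored with driver-specific byte ordering (Java/C# drivers swap byte groups), so the output may need reordering:
    var b = BinData(3, "XXitNepfEeGhg3BWgbKcRw==");
    var h = b.hex();   // raw bytes as a hex string
    // naive 8-4-4-4-12 formatting of those bytes
    print(h.substr(0, 8) + "-" + h.substr(8, 4) + "-" + h.substr(12, 4) + "-" +
          h.substr(16, 4) + "-" + h.substr(20, 12));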
[11:17:39] <ATX_123> anyone in here now?
[11:19:15] <Derick> ATX_123: it's always better to just ask your question. People will answer when/if they have time
[11:20:24] <ATX_123> I have a cluster that I fear might be stuck on a balancing round. what should i look for in the various places to determine if the lock is vestigial?
[11:21:08] <ATX_123> and if the lock turns out to be vestigial, can I just remove it from the config server?
[11:25:17] <ATX_123> as a followup, the reason i think it might be hung is that the mongos that kicked it off (and holds the lock) was restarted prior to completion. I am not seeing any "Balancer" logs there for quite some time
[11:33:26] <clarkk> can someone tell me whether I should get tab completion when using the shell, to complete the collection name when I type the command "use my-collection-name...." ?
[11:34:01] <ATX_123> clarkk: may be version specific, but I don't
[11:34:02] <clarkk> ie if I type... use my-coll<TAB> to complete the word my-collection-name
[11:34:35] <clarkk> ATX_123: I am using version 2.4.6. I just upgraded from the version shipped with ubuntu, 2.0.1
[11:34:46] <clarkk> ATX_123: neither seem to have this.
[11:35:26] <clarkk> ATX_123: it would be such a useful feature
[11:38:38] <clarkk> ATX_123: apparently it should support it http://docs.mongodb.org/manual/faq/mongo/#does-the-mongo-shell-support-tab-completion-and-other-keyboard-shortcuts
[11:42:19] <clarkk> what does "Type "it" for more" mean?
[11:42:23] <clarkk> what is "it"?
[11:42:32] <Derick> stands for "iterator"
[11:42:48] <clarkk> Derick: how can I find that out without asking?
[11:42:54] <Derick> did you type it?
[11:43:05] <clarkk> "help it" does not give any results
[11:43:18] <clarkk> Derick: no, that just runs the command
[11:43:25] <Derick> yes, that's all it does
[11:43:31] <Derick> it shows you the next set of results
[11:43:51] <Derick> "help" has it though
[11:43:56] <clarkk> Derick: but my terminal lines have not been consumed
[11:44:04] <Derick> huh?
[11:44:25] <clarkk> I have about 3 times the space in my terminal. Why is it paging the results?
[11:44:46] <Derick> ah, it only checks the window size when you *start* "mongo"
[11:44:53] <Derick> it doesn't adjust afterwards
[11:44:58] <clarkk> ah
[11:45:03] <clarkk> ok, that makes sense
[11:45:04] <Derick> I think I have already sent in a bug report for that
[11:45:52] <clarkk> Derick: ah, good work!
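For reference, what "it" does and how the page size Derick describes can be changed by hand (collection name is illustrative):
    db.mycollection.find()        // prints the first batch of results
    it                            // re-uses the same cursor and prints the next batch
    DBQuery.shellBatchSize = 50   // raise the number of documents printed per batch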
[11:46:50] <ATX_123> can anyone help shed insight as to what logs, locks or other artifacts I should look for to determine if a balance round is hung up?
[11:47:03] <clarkk> Derick: hmm, I quit the mongo shell and then went back into it while my term was full screen, and it still pages the results
[11:48:31] <Derick> clarkk: hmm, file a bug report for that then please
[11:49:33] <clarkk> Derick: I will have to come back to that, I'm afraid. I have only just started with mongo, and so I don't know what is expected and what isn't. I need to get up to speed first
[11:50:26] <Derick> clarkk: if you think something doesn't work like you expect, Stack Overflow is a good choice to ask questions on (As well)
[11:50:41] <clarkk> ok thank you Derick
[11:51:17] <stefancrs> morning
[11:52:42] <clarkk> I am trying to remove documents where the name field is empty. I've tried db.collection.remove({name:null}) and db.collection.remove({name:""}) but neither work. Could someone tell me what query I need to use, please?
[11:53:43] <remonvv> @Derick: Do you, or anyone near you, know what the best way forward is to report a rather major issue where a mongod is spinning with 150-700% writelock continuously? What logs are relevant? We have one in that state now but it's hard to reproduce.
[11:54:11] <Derick> remonvv: the support team might know better. here in the LDN office it's all sales
[11:54:44] <remonvv> Derick: Okay, that's unfortunate ;)
[11:54:59] <Derick> Number6 might know
[11:55:17] <ATX_123> Derick, when does the support team start signing in? are they all Pacific time?
[11:55:38] <Derick> no, they're east coast, west coast, australia and dublin - but that's commercial support
[11:56:09] <Derick> here is just "best efforts when we have time"
[11:59:22] <remonvv> Derick, I understand.
[12:03:50] <Derick> remonvv: I know you do. I can't help you with this though :-/
[12:04:32] <clarkk> ugh, I realise that this is very basic, but please would someone help me? I'm pretty sure it worked before I upgraded. This is the doc: { "name" : "", "email" : "", "age" : null, "_id" : ObjectId("524b488d9543e3cb3200001d") }
[12:04:46] <clarkk> I am trying to find it based on the name field being empty
[12:04:54] <remonvv> Derick: It's cool ;) We'll investigate further ourselves.
[12:05:07] <clarkk> neither {name:""} nor {name:null} works
[12:05:16] <Derick> name:"" should work
[12:05:32] <clarkk> Derick: yes, that's what worked last time
[12:05:41] <Number6> Support is 24x7 - we never sleep
[12:06:06] <Derick> Number6: do you have an idea about remonvv's issues?
[12:06:07] <joannac> What Number6 said.
[12:06:11] <clarkk> oops
[12:06:13] <clarkk> one moment
[12:07:35] <clarkk> ugh, ok, I was using db.collection.find({name:""}) rather than the actual name of the collection :/
[12:07:45] <clarkk> thanks for your response Derick
[12:07:51] <Derick> np :)
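For reference, the queries discussed above, assuming the collection is really called users:
    db.users.remove({name: ""})                 // name is the empty string
    db.users.remove({name: null})               // matches name: null AND documents with no name field
    db.users.remove({name: {$exists: false}})   // only documents where the field is missing entirely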
[13:01:35] <lessless> which driver do I want to use with ruby - Mongoid, Moped, or something else?
[13:01:37] <maasan> I read that a MongoDB document is moved if its size increases. When does that happen? Every time we write to the document?
[13:01:55] <Derick> maasan: no, only if you add data to it
[13:02:11] <Derick> maasan: and mongodb does reserve some extra space too (the padding factor)
[13:03:16] <maasan> Derick: I have a field called description. My first write sets it to null, my second write sets a long string message. Will that cause the document to move? If there's padding, how much?
[13:03:50] <Derick> padding starts off with 10% of the document size, but mongodb automatically adjusts this if it sees 10% is not enough
[13:04:18] <Derick> if you run "db.collection.stats()" in the shell, there is a paddingFactor field that tells you
[13:05:57] <maasan> Derick: how is document size calculated? I ran stats() on the collection. It says 1.0220000000000078. What is that?
[13:06:42] <Derick> which field is that exactly?
[13:06:52] <Derick> it has avgObjSize too, which is the average size
[13:06:59] <maasan> "paddingFactor": 1.0220000000000078,
[13:07:05] <Derick> so it's 2% right now
[13:07:33] <Derick> it's also adjusted when you insert docs... so it's difficult to predict. I wouldn't worry too much about it if you don't have a lot of documents (ie, 100,000s)
[13:08:48] <maasan> Derick: The document is moved based on its size. How is the document size calculated?
[13:09:34] <Derick> according to the length in bytes on disk of each document stored as BSON (and BSON spec is at http://bsonspec.org/#/specification)
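If it helps to see the number directly, the shell can report a single document's BSON size (collection name is illustrative):
    var doc = db.mycollection.findOne()
    Object.bsonsize(doc)    // size of that document in bytes, as BSON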
[13:10:40] <maasan> Okay, then every time I write extra bytes the document size varies, and that will cause the document to move. I'm a bit worried that document moves will cause performance issues if I have a large number of documents
[13:10:59] <Derick> it's only a problem if those documents aren't in memory at that moment
[13:12:07] <maasan> Derick: In real life, not all documents can be in memory, am I correct? And writing extra bytes causes a document move?
[13:12:23] <Derick> yes, but you make it sound like premature optimisation
[13:12:43] <maasan> Derick: what is that?
[13:13:12] <Derick> what is what?
[13:13:38] <maasan> what do you mean by premature optimization?
[13:15:26] <Derick> by trying to optimise things before you have established they are actually a real problem
[13:15:34] <Derick> (and not just a theoretical one)
[13:16:55] <maasan> Okay understood. I am just curious to know how to reduce document moves
[13:18:45] <Derick> you can pre-reserve space of course
[13:18:47] <Derick> but that's ugly
[13:18:53] <Derick> and *definitely* a premature optimisation
[13:20:23] <maasan> Okay, understood. Thanks
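A sketch of the "pre-reserve space" idea Derick mentions (and warns against); names and sizes are illustrative:
    // insert with a throwaway filler field roughly the size of the data that will
    // arrive later, then unset it; the record keeps its allocated space on disk
    db.mycollection.insert({_id: 1, description: null, pad: new Array(1024).join("x")})
    db.mycollection.update({_id: 1}, {$unset: {pad: 1}})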
[14:32:47] <tjsousa> Hi guys!
[14:33:03] <tjsousa> Any luck installing mongodb on OSX Mavericks DP8?
[14:34:30] <cheeser> still on 10.8 here
[14:35:58] <tjsousa> already tried to install through gcc (as suggested here: https://github.com/mxcl/homebrew/issues/22771)
[14:36:28] <tjsousa> but the apple version of it (brew install apple-gcc42) also fails
[14:38:42] <cheeser> i know there's been some traffic on mongodb-users lately about mavericks
[14:40:06] <tjsousa> thx for pointing that out, i'll try there
[14:46:21] <Rhaven> Hello guys, i'm a bit confused by this error in the mongos log file about moving chunks between shards. http://pastebin.com/dL64GVMx
[14:46:21] <Rhaven> Can mongo handle this error by itself or not?
[14:47:53] <quattr8> I'm sharding a collection on the document _id field, should i create a hashed index on the _id field or not?
[14:52:03] <Scud> hey guys im trying to set up a solution stack with mongodb and nginx as DB and webserver respectively. unfortunately the nginx module which allows for direct communication between nginx and mongodb is quite buggy (the code is very old and requests are corrupted for responses bigger than a certain size). Could someone be so kind and give me a hint which would be the best setup between the 2 components performance-wise. Thanks in advance
[14:55:29] <Rhaven> @quattr8 the _id field is indexed by default, so when you shard a collection on the _id field you don't need to specify that the field _id is an index. But if the sharding key isn't _id, you should create an index on it with the command .ensureIndex() beforehand.
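A sketch of the two alternatives being discussed (run one or the other, not both; database, collection and field names are illustrative, and sh.enableSharding("mydb") is assumed to have been run already):
    // hashed shard key on _id: the hashed index is separate from the default _id index
    db.mycoll.ensureIndex({_id: "hashed"})
    sh.shardCollection("mydb.mycoll", {_id: "hashed"})
    // non-_id shard key: index it first, as Rhaven says
    db.mycoll.ensureIndex({userId: 1})
    sh.shardCollection("mydb.mycoll", {userId: 1})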
[15:00:51] <saeedkm> hi, I am trying to use ssl for mongod, it is giving an error
[15:01:54] <saeedkm> Starting mongod: error command line: unknown option sslOnNormalPorts
[15:01:56] <saeedkm> use --help for help
[15:02:04] <quattr8> Rhaven: I know, I changed from having the _id field as a normal index to having the _id field as a hashed index tho so now I have a normal index and a hashed index
[15:04:33] <rspijker> saeedkm: are you using enterprise mongo or have you built it yourself with ssl enabled? By default mongo does not have SSL support
[15:06:17] <saeedkm> ok , am not using enterprise one
[15:06:28] <saeedkm> how to build it?
[15:08:10] <rspijker> saeedkm: build, as in, build from source
[15:08:26] <saeedkm> ok
[15:09:33] <Rhaven> quattr8: http://docs.mongodb.org/manual/core/sharding-shard-key/#sharding-hashed-sharding. That depends on the value of the shard key
[15:10:04] <rspijker> saeedkm: http://www.dagolden.com/index.php/1711/how-to-build-mongodb-with-ssl-for-linux/
[15:10:13] <Rhaven> quattr8: "Hashed keys work well with fields that increase monotonically like http://docs.mongodb.org/manual/reference/glossary/#term-objectid values or timestamps."
[15:10:27] <rspijker> never used that guide, but it shows up first on google, seems to give some extra info
[15:10:46] <saeedkm> Thank you rspijker
[15:10:49] <rspijker> np
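Roughly what the linked guide covers, for a 2.4-era build from source (paths are illustrative):
    scons --ssl all                          # build the binaries with SSL support
    scons --ssl --prefix=/opt/mongo install
    # afterwards the option that was previously rejected is accepted:
    mongod --sslOnNormalPorts --sslPEMKeyFile /etc/ssl/mongodb.pem --dbpath /var/lib/mongodb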
[15:10:55] <ATX_123> can anyone help shed insight as to what logs, locks or other artifacts I should look for to determine if a balance round is hung up?
[15:13:34] <quattr8> Rhaven: Yes i'm using a hashed key since my shard key is an objectid
[15:15:21] <quattr8> since using the hashed key i've noticed my performance has gone down a lot (both insert/update and findOne)
[15:16:55] <akesterson> Hey guys. I am dealing with the "Config Database String Error", and I'm wondering - we use some DNS magic, so that some parts of our shard talk to each other by different DNS names, but they're in the same order. And yet we started getting this error. Does it matter what the DNS name used is, as long as the actual resolution point for each host in the configdb string is the same?
[15:17:14] <akesterson> e.g., if "A" and "B" both point to the same system, will "--configdb A" and "--configdb B" effectively be an exact match?
[15:17:29] <Derick> akesterson: i would advise against doing that
[15:18:00] <akesterson> are there any supporting docs as to why?
[15:18:28] <Derick> maybe... but i wouldn't know where to find them
[15:18:38] <akesterson> k, thanks
[15:21:05] <Rhaven> I'm a bit confused by this error in the mongos log file about moving chunks between shards. http://pastebin.com/dL64GVMx. Can mongo handle this error by itself or not?
[15:21:58] <Derick> akesterson: all I know is that it creates havoc with clients (e.g. PHP), and "mongos" is just another special client
[15:23:32] <rspijker> akesterson: I remember there being a lot of issues with mixing of localhost and 127.0.0.1… So I would guess your suggestion would also cause issues...
[15:24:09] <rspijker> Rhaven: sounds like something that should resolve itself...
[15:25:06] <rspijker> unless it's been happening over a long period of time (with the same chunk) I would give it a while and see if it sorts itself out
[15:25:15] <Rhaven> rspijker: i hope so :/
[15:35:27] <kurtis> Hey guys -- I have a development sharded+replicated cluster I am working on. I know it's not best-practice but is it possible to run my Config servers (3) on the same hardware as my Mongod nodes?
[15:36:05] <Derick> it's definitely possible
[15:37:47] <kurtis> Derick, sweet. Thanks!
[15:38:27] <Derick> kurtis: I wouldn't run that in production though
[15:39:51] <kurtis> Derick, absolutely. This is just for local development purposes. We have 4 machines total but I want to take full advantage of the sharding+replication for performance and redundancy. Uptime is less important at this point
[15:42:54] <kurtis> If I ran the Config server(s) on a local, private cloud which would have significantly more latency than these 4 machines to each other -- would that degrade performance?
[15:43:19] <kurtis> (If you can't tell, I'm *really* shooting for performance, haha)
[15:44:08] <Derick> "yes"
[15:44:29] <kurtis> okay cool. I'll try to get the config servers running locally then
[15:45:41] <kurtis> One more question that hopefully *someone* can answer (no response in #hadoop). I want to use the MongoDB Hadoop Adapter. I'm a bit confused on its installation though. It looks like it needs to be installed on the Hadoop servers directly. That, unfortunately, isn't always an option. Also, it says it's compatible with elastic map-reduce and I have 0 experience there. Is it possible to just submit the "Adapter" with the Map-Reduce Job to the Hadoop Cluster?
[16:00:58] <eucalyptus> I'm using tag aware sharding to emulate data centers in west/east zones. works well. I have additional secondaries that exist cross zones (e.g., a priority 0 eastern secondary in the western zone). I'm simulating a network partition with iptables. when i do this, eastern writes from the western zone get queued up someplace. when the partition is resolved they end up where they should. where are these writes cached? mongos? driver? also, when the partition occurs ALL of the reads fail against either zone (connecting via mongos). we're using 2.4.6, and 2.11.2 of the java driver. fresh ideas appreciated :)
[16:08:00] <platzhirsch> Can I display activity on my MongoDB server somehow?
[16:11:18] <eucalyptus> platzhirsch: MMS?
[16:11:30] <platzhirsch> eucalyptus: good idea, yeah
[16:12:13] <platzhirsch> I just want to see if my process is currently writing to the db :P
[16:12:27] <eucalyptus> db.coll.count()
[16:12:41] <platzhirsch> mongostat maybe
[16:12:49] <platzhirsch> just updated
[16:12:50] <eucalyptus> ya
[16:16:48] <platzhirsch> hm, seems to be the case
[16:18:39] <platzhirsch> yeah, seems to be pretty active, thanks
[16:18:49] <platzhirsch> I am debugging my application and wondered what's going on
[16:19:01] <platzhirsch> apparently it takes a bit of time to update 14,000 documents :/
[16:21:19] <platzhirsch> MongoDB why you so slow!
[16:24:21] <eucalyptus> indexes
[16:24:30] <platzhirsch> mhhh
[16:24:50] <platzhirsch> true
[16:24:56] <platzhirsch> I haven't configured any special ones
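A sketch of what "indexes" means in practice here: if the update's query part isn't indexed, each of the 14,000 updates scans the collection. Field and collection names are hypothetical:
    db.mycollection.ensureIndex({externalId: 1})
    // 2.4-era explain(): "BtreeCursor externalId_1" means the index is used,
    // "BasicCursor" means each update does a full collection scan
    db.mycollection.find({externalId: "abc"}).explain()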
[16:30:13] <cwf> I upgraded mongovue from 1.5.3 to 1.6.1 and am now getting "Invalid credential for database 'admin'" when trying to connect to a mongo db I was able to connect to from 1.5.3
[16:30:42] <cwf> is this a known issue? I didn't find anything in my google search.
[16:33:22] <kaen> alright guys, I think I'm starting to understand when/why I should use mongodb over mysql, but is there a situation where mysql is the better choice?
[16:34:24] <platzhirsch> kaen: fixed schema?
[16:34:39] <kaen> well, everything I work with is fixed schema
[16:35:07] <kaen> but it's mostly large hierarchies of normalized tables, which afaik is a good candidate for mongodb
[16:35:31] <cwf> kaen, though I'm a mongo newbie, we just switched from mongo to mysql for a store for an app that was doing 100s of writes per second. Mongo was churning cpu under that load. Mysql is handling it better.
[16:36:16] <kaen> hmm interesting
[16:38:00] <cwf> the master had two secondaries so it not only had to update itself, it had to update the oplog to pass along to the secondaries. I'm not sure what in there did it but we were pegging cpu on an xlarge aws instance.
[16:38:17] <kaen> wow
[16:44:10] <eucalyptus> indexes
[17:22:01] <arussel> I've got this error:
[17:22:01] <arussel> Wed Oct 2 17:20:26.017 [FileAllocator] allocating new datafile /var/lib/mongodb/distscraper.5, filling with zeroes...
[17:22:02] <arussel> Wed Oct 2 17:20:26.018 [FileAllocator] FileAllocator: posix_fallocate failed: errno:28 No space left on device falling back
[17:22:24] <arussel> but df -Th still shows plenty of space on the device, is this bad config on my side?
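No answer appears in the log; a few things commonly worth checking when errno:28 shows up even though df looks fine (the path matches the log, the rest is generic):
    df -h /var/lib/mongodb    # free space on the mount that actually holds the datafiles
    df -i /var/lib/mongodb    # inode exhaustion also reports "No space left on device"
    ulimit -a                 # a per-process file size limit can trigger the same error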
[18:08:45] <saml> mongoimport -h mongodb01 -d contents -c articles --upsert --upsertFields url --file data.json
[18:08:49] <saml> this gets slower over time
[18:08:59] <saml> why?
[18:09:03] <saml> what am i doing wrong?
[18:09:25] <saml> data.json is 25MB 70k lines (entries)
[18:09:33] <saml> i'd expect it to finish in a few seconds
[18:09:49] <saml> but it's really really slow. takes 30 minutes
[18:20:34] <arussel> here; https://github.com/edelight/chef-mongodb
[18:21:05] <arussel> would the mongodb_instance "my_instance" do … configuration, come before or after the include_recipe ?
[18:28:26] <jyee> saml: do you have a lot of indexes on your collection?
[18:28:36] <saml> no idea
[18:28:43] <saml> what are indexes?
[18:29:02] <saml> db.articles.getIndexes()
[18:29:08] <saml> there's one index, on _id
[18:29:34] <jyee> http://docs.mongodb.org/manual/core/indexes-introduction/
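jyee's hint in concrete form: each --upsertFields url upsert looks the document up by url, and with only the _id index that is a collection scan per line, which gets slower as the collection grows. A minimal sketch:
    use contents
    db.articles.ensureIndex({url: 1})   // index the upsert key, then re-run mongoimport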
[18:29:57] <sreddy> hello
[18:30:24] <sreddy> we are using the mongodb cookbook to build our infrastructure....
[18:30:38] <sreddy> we have hit a problem and need help...
[18:32:06] <sreddy> need help...
[18:55:09] <eucalyptus> more info?
[18:58:02] <cheeser> and an actual question
[18:59:50] <sreddy> so until about 4 days ago all the roles we had were working fine without any issue, however we get the following error when trying to create an instance with mongos... http://pastebin.pw/h8hybw
[19:47:07] <eucalyptus> i found out the answer to my earlier question. it appears you still need a primary to read when there is a network partition. you'd have to mess with votes and priorities to have an orphaned secondary elect itself if it knows about a majority in a disconnected datacenter
[19:52:13] <sreddy> by network partition i presume you mean different IP subnets...
[19:52:44] <eucalyptus> like an east and west datacenter that lose connectivity to each other
[19:52:58] <sreddy> well, all of our instances are in one location...
[19:53:09] <eucalyptus> ya
[19:53:16] <sreddy> except that they may be in 2 different subnets
[19:56:29] <ATX_123> can anyone help shed insight as to what logs, locks or other artifacts I should look for to determine if a balance round is hung up?
[19:58:33] <eucalyptus> use config; db.locks.find()
[19:59:35] <ATX_123> eucalyptus, i found the lock, but how do i know if it is still truly needed?
[19:59:55] <eucalyptus> is it super old?
[20:00:16] <ATX_123> 12+ hours
[20:00:23] <eucalyptus> yeah, i'd remove it
[20:00:29] <eucalyptus> IMO
[20:00:34] <ATX_123> and I haven't seen logs in the mongos about moving things in 10 hours
[20:00:42] <eucalyptus> remove!
[20:00:50] <ATX_123> what is the worst thing that can happen if it was still needed?
[20:01:02] <eucalyptus> your machines will melt
[20:01:05] <ATX_123> ha
[20:01:34] <eucalyptus> I'm not 100%. if chunks are in flight you may lose that data, but it seems unlikely after 12hrs
[20:01:49] <ATX_123> k. thanks
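A sketch of the lock inspection and one commonly suggested cleanup for a stale balancer lock; treat it as a last resort and verify no migration is actually running first:
    use config
    db.locks.find({_id: "balancer"}).pretty()   // state: 2 means held; check "who" and "when"
    sh.setBalancerState(false)                  // stop the balancer
    db.locks.update({_id: "balancer"}, {$set: {state: 0}})   // release the stale lock
    sh.setBalancerState(true)                   // re-enable balancing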
[20:15:18] <ddasilva> is it possible to do a query like Doc.filter(x==A or x==B)?
[20:16:04] <eucalyptus> yes
[20:16:41] <ddasilva> eucalyptus: would i do this with Doc.filter(x__in=[A, B])?
[20:17:57] <eucalyptus> probably not
[20:18:13] <ddasilva> eucalyptus: how would I do it?
[20:18:43] <eucalyptus> i'd read the mongo query reference
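ddasilva's syntax looks like an ODM; in raw mongo shell terms the query is either $or or $in (names and values are placeholders):
    db.docs.find({$or: [{x: "A"}, {x: "B"}]})
    db.docs.find({x: {$in: ["A", "B"]}})    // equivalent, and usually tidier for a single field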
[20:27:35] <jiffe99> don't suppose there's an easy way to get a mongod instance to promote to primary if quorum is lost and there's a single node up
[20:27:59] <jiffe99> this is only temporary, I have 3 nodes and two are offline so I can rsync one to the other
[20:35:54] <eucalyptus> jiffe99: you can modify the replset config and force a reconfigure?
[20:36:17] <cheeser> sounds like a terrible idea :)
[20:36:30] <eucalyptus> talk to 10gen ;)
[20:36:40] <cheeser> 10gen is no more
[20:36:49] <eucalyptus> talk to mongodb.org
[20:36:51] <eucalyptus> http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
[20:36:53] <cheeser> :)
[20:36:56] <eucalyptus> ;)
[20:37:11] <LouisT> 10gen died?
[20:37:26] <cheeser> no we renamed to MongoDB
[20:37:31] <LouisT> oh
[20:37:33] <cheeser> 10gen.com
[20:38:00] <cheeser> yeah, you could just remove those other two RS members.
[20:38:11] <cheeser> that's not as bad as I was envisioning
[20:38:45] <cheeser> i was seeing people trying to manually rsync between servers, etc.
[20:38:52] <eucalyptus> oh
[20:38:56] <eucalyptus> ya, no bueno
[20:39:06] <cheeser> rs.reconfig() is pretty simple, though.
[20:39:25] <joshua> If you learn the javascript to pop out members it's easy to reconfigure
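The gist of the tutorial linked above, run on the surviving member (the member index kept is illustrative):
    cfg = rs.conf()
    cfg.members = [cfg.members[0]]      // keep only the node that is still up
    rs.reconfig(cfg, {force: true})     // force is required when no majority is reachable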
[21:04:02] <saml> https://gist.github.com/saml/6800496 what is this? how do I fix?
[21:18:29] <VooDooNOFX> saml, your daemon is getting "signal 6", which is SIGABRT
[22:02:46] <crankharder> can I tell mongod to bind to a particular interface? I don't see anything in the default config
[22:08:14] <redsand_> crankharder: you can use the ip associated with that interface
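For reference, the relevant option (the address is illustrative):
    mongod --bind_ip 192.168.1.20 --dbpath /var/lib/mongodb
    # or in the 2.4-era /etc/mongodb.conf
    bind_ip = 192.168.1.20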
[23:17:00] <Sjors> Hi all
[23:17:18] <Sjors> I have a collection with an index on [callerId,type]
[23:17:44] <Sjors> with a lot of documents with both callerId and type set
[23:17:58] <Sjors> I just noticed a bug where "type" would be unset
[23:18:08] <Sjors> in my application
[23:18:28] <Sjors> I've fixed the bug in the application, but now I'd like to get my mongo database back in pristine state --- I know how to revive the correct "type" value from the document, but when I do, I get a duplicate key error
[23:18:50] <Sjors> is there a way to update() the table, setting the type to the correct value, possibly ignoring any indices?
[23:31:41] <crudson> Sjors: drop index, correct documents, recreate index. Back up collection first in case you mess up.
[23:32:10] <Sjors> crudson: thanks a lot :)
[23:32:19] <Sjors> crudson: I'm working on a test collection
[23:32:24] <Sjors> a non-live copy
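A sketch of crudson's steps, assuming the index on [callerId, type] is unique (the duplicate key error suggests it is); the collection name is illustrative:
    db.calls.dropIndex({callerId: 1, type: 1})
    // ... update() the documents so "type" holds its correct value again ...
    db.calls.ensureIndex({callerId: 1, type: 1}, {unique: true})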