[08:53:21] <remonvv> Anyone have any theories on why a group of mongod processes in a sharded cluster indefinitely holds write locks on a heavily queried database while chunks are being rebalanced?
[10:40:53] <bertodsera> how do I convert BinData(3,"XXitNepfEeGhg3BWgbKcRw==") back to a "usual" UUID?
[11:19:15] <Derick> ATX_123: it's always better to just ask your question. People will answer when/if they have time
[11:20:24] <ATX_123> I have a cluster that I fear might be stuck on a balancing round. what should i look for, and where, to determine if the lock is vestigial?
[11:21:08] <ATX_123> and if the lock turns out to be vestigial, can I just remove it from the config server?
[11:25:17] <ATX_123> as a followup: the reason i think it might be hung is that the mongos that kicked it off (and holds the lock) was restarted prior to completion. I haven't seen any "Balancer" logs there for quite some time
[11:33:26] <clarkk> can someone tell me whether I should get tab completion when using the shell, to complete the collection name when I type the command "use my-collection-name...." ?
[11:34:01] <ATX_123> clarkk: may be version specific, but I don't
[11:34:02] <clarkk> ie if I type... use my-coll<TAB> to complete the word my-collection-name
[11:34:35] <clarkk> ATX_123: I am using version 2.4.6. I just upgraded from the version shipped with ubuntu, 2.0.1
[11:34:46] <clarkk> ATX_123: neither seem to have this.
[11:35:26] <clarkk> ATX_123: it would be such a useful feature
[11:38:38] <clarkk> ATX_123: apparently it should support it http://docs.mongodb.org/manual/faq/mongo/#does-the-mongo-shell-support-tab-completion-and-other-keyboard-shortcuts
[11:42:19] <clarkk> what does "Type "it" for more" mean?
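For reference, the mongo shell prints the first 20 results of a query and then offers "it" to keep iterating the same cursor; a quick sketch, with a hypothetical collection name:

    > db.things.find()   // prints the first 20 documents, then: Type "it" for more
    > it                 // prints the next batch from the same cursor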
[11:46:50] <ATX_123> can anyone help shed insight as to what logs, locks or other artifacts I should look for to determine if a balance round is hung up?
[11:47:03] <clarkk> Derick: hmm, I quit the mongo shell and then went back into it while my term was full screen, and it still pages the results
[11:48:31] <Derick> clarkk: hmm, file a bug report for that then please
[11:49:33] <clarkk> Derick: I will have to come back to that, I'm afraid. I have only just started with mongo, and so I don't know what is expected and what isn't. I need to get up to speed first
[11:50:26] <Derick> clarkk: if you think something doesn't work like you expect, Stack Overflow is a good choice to ask questions on (As well)
[11:52:42] <clarkk> I am trying to remove documents where the name field is empty. I've tried db.collection.remove({name:null}) and db.collection.remove({name:""}) but neither works. Could someone tell me what query I need to use, please?
[11:53:43] <remonvv> @Derick: Do you, or anyone near you, know what the best way forward is to report a rather major issue where a mongod is spinning at 150-700% write lock continuously? What logs are relevant? We have one in that state now but it's hard to reproduce.
[11:54:11] <Derick> remonvv: the support team might know better. here in the LDN office it's all sales
[12:03:50] <Derick> remonvv: I know you do. I can't help you with this though :-/
[12:04:32] <clarkk> ugh, I realise that this is very basic, but please would someone help me? I'm pretty sure it worked before I upgraded. This is the doc: { "name" : "", "email" : "", "age" : null, "_id" : ObjectId("524b488d9543e3cb3200001d") }
[12:04:46] <clarkk> I am trying to find it based on the name field being empty
[12:04:54] <remonvv> Derick: It's cool ;) We'll investigate further ourselves.
[12:05:07] <clarkk> neither {name:""} nor {name:null} works
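For reference, a minimal mongo shell sketch of the usual queries for empty vs. null vs. missing fields (the collection name here is hypothetical). For the document pasted above ("name" : ""), {name: ""} should match, so if it doesn't it's worth double-checking which database and collection the shell is pointed at:

    db.people.remove({name: ""})                   // matches documents where name is the empty string
    db.people.remove({name: null})                 // matches documents where name is null OR the field is absent
    db.people.remove({name: {$in: [null, ""]}})    // matches either case
    db.people.remove({name: {$exists: false}})     // matches only documents that have no name field at all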
[13:01:35] <lessless> which driver should I use with ruby - Mongoid, Moped, or something else?
[13:01:37] <maasan> I read that a MongoDB document is moved if its size increases. When does that happen? Every time we write to the document?
[13:01:55] <Derick> maasan: no, only if you add data to it
[13:02:11] <Derick> maasan: and mongodb does reserve some extra space too (the padding factor)
[13:03:16] <maasan> Derick: I have a field called description. My first write sets it to null, my second write sets it to a long string. Will that cause the document to move? If there's padding, how much?
[13:03:50] <Derick> padding starts off at 10% of the document size, but mongodb automatically adjusts this if it sees 10% is not enough
[13:04:18] <Derick> if you run "db.collection.stats()" in the shell, there is a paddingFactor field that tells you
[13:05:57] <maasan> Derick: how is the document size calculated? I ran stats() on the collection and it says 1.0220000000000078. What is that?
[13:07:33] <Derick> it's also adjusted when you insert docs... so it's difficult to predict. I wouldn't worry too much about it if you don't have a lot of documents (i.e., 100,000s)
[13:08:48] <maasan> Derick: The document is moved based on its size. How is the document size calculated?
[13:09:34] <Derick> according to the length in bytes on disk of each document stored as BSON (the BSON spec is at http://bsonspec.org/#/specification)
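A quick mongo shell sketch for checking both numbers directly (collection name hypothetical):

    > var doc = db.people.findOne()
    > Object.bsonsize(doc)              // the document's size in bytes as stored BSON
    > db.people.stats().paddingFactor   // e.g. 1.022 — new allocations get roughly 2.2% extra space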
[13:10:40] <maasan> Okay, so every time I write extra bytes the document size varies, and that can cause the document to move. I'm a bit worried that document moves will cause performance issues if I have a large number of documents
[13:10:59] <Derick> it's only a problem if those documents aren't in memory at that moment
[13:12:07] <maasan> Derick: In practice, not all documents can be in memory, am I correct? And writing extra bytes can cause a document to move?
[13:12:23] <Derick> yes, but you make it sound like premature optimisation
[14:35:58] <tjsousa> already tried to install through gcc (as suggested here: https://github.com/mxcl/homebrew/issues/22771)
[14:36:28] <tjsousa> but the apple version of it (brew install apple-gcc42) also fails
[14:38:42] <cheeser> i know there's been some traffic on mongodb-users lately about mavericks
[14:40:06] <tjsousa> thanks for pointing that out, i'll try there
[14:46:21] <Rhaven> Hello guys, i'm a bit confused by this error in the mongos log file about moving chunks between shards. http://pastebin.com/dL64GVMx
[14:46:21] <Rhaven> Can mongo handle this error by itself or not?
[14:47:53] <quattr8> I'm sharding a collection on the document _id field, should i create a hashed index on the _id field or not?
[14:52:03] <Scud> hey guys im trying to set up a solution stack with mongodb and nginx as DB and webserver respectively. unfortunately the nginx module which allows direct communication between nginx and mongodb is quite buggy (the code is very old and requests are corrupted for responses bigger than a certain size). Could someone be so kind and give me a hint as to which would be the best setup between the 2 components performance-wise? Thanks in advance
[14:55:29] <Rhaven> @quattr8 the _id field is indexed by default, so when you shard a collection on the _id field you don't need to specify that _id is an index. But if the shard key isn't _id, you should first create an index on it with .ensureIndex()
[15:00:51] <saeedkm> hi, I'm trying to use ssl with mongod and it is giving an error
[15:02:04] <quattr8> Rhaven: I know, I changed the _id field from a normal index to a hashed index though, so now I have a normal index and a hashed index
[15:04:33] <rspijker> saeedkm: are you using enterprise mongo or have you built it yourself with ssl enabled? By default mongo does not have SSL support
[15:06:17] <saeedkm> ok, I'm not using the enterprise one
[15:09:33] <Rhaven> quattr8: http://docs.mongodb.org/manual/core/sharding-shard-key/#sharding-hashed-sharding. That depends on the values of the shard key
[15:10:13] <Rhaven> quattr8: "Hashed keys work well with fields that increase monotonically like http://docs.mongodb.org/manual/reference/glossary/#term-objectid values or timestamps."
[15:10:27] <rspijker> never used that guide, but it shows up first on google, seems to give some extra info
[15:10:55] <ATX_123> can anyone help shed insight as to what logs, locks or other artifacts I should look for to determine if a balance round is hung up?
[15:13:34] <quattr8> Rhaven: Yes i'm using a hashed key since my shard key is an objectid
[15:15:21] <quattr8> since using the hashed key i've noticed my performance has gone down a lot (both insert/update and findOne)
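For reference, a hashed shard key on _id is typically set up like this in the 2.4 shell (database and collection names hypothetical). Equality lookups on _id stay targeted to a single shard, but range queries on _id become scatter-gather, which may explain a slowdown in some workloads:

    db.events.ensureIndex({_id: "hashed"})               // hashed index, alongside the default _id index
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.events", {_id: "hashed"})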
[15:16:55] <akesterson> Hey guys. I am dealing with the "Config Database String Error", and I'm wondering - we use some DNS magic, so that some parts of our shard talk to each other by different DNS names, but they're in the same order. And yet we started getting this error. Does it matter what the DNS name used is, as long as the actual resolution point for each host in the configdb string is the same?
[15:17:14] <akesterson> e.g., if "A" and "B" both point to the same system, will "--configdb A" and "--configdb B" effectively be an exact match?
[15:17:29] <Derick> akesterson: I would advise against doing that
[15:18:00] <akesterson> are there any supporting docs as to why?
[15:18:28] <Derick> maybe... but i wouldn't know where to find them
[15:21:05] <Rhaven> I'm a bit confused by this error in the mongos log file about moving chunks between shards. http://pastebin.com/dL64GVMx. Can mongo handle this error by itself or not?
[15:21:58] <Derick> akesterson: all I know is that it creates havoc with clients (f.e. PHP), and "mongos" is just another special client
[15:23:32] <rspijker> akesterson: I remember there being a lot of issues with mixing of localhost and 127.0.0.1… So I would guess your suggestion would also cause issues...
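A sketch of the point being made (hostnames hypothetical): the usual guidance is that every mongos is started with an identical --configdb string, listing the same names in the same order, rather than aliases that merely resolve to the same machines:

    # the same string on every mongos
    mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019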
[15:24:09] <rspijker> Rhaven: sounds like something that should resolve itself...
[15:25:06] <rspijker> unless it's been happening over a long period of time (with the same chunk) I would give it a while and see if it sorts itself out
[15:35:27] <kurtis> Hey guys -- I have a development sharded+replicated cluster I am working on. I know it's not best-practice but is it possible to run my Config servers (3) on the same hardware as my Mongod nodes?
[15:38:27] <Derick> kurtis: I wouldn't run that in production though
[15:39:51] <kurtis> Derick, absolutely. This is just for local development purposes. We have 4 machines total but I want to take full advantage of the sharding+replication for performance and redundancy. Uptime is less important at this point
[15:42:54] <kurtis> If I ran the Config server(s) on a local, private cloud which would have significantly more latency than these 4 machines to each other -- would that degrade performance?
[15:43:19] <kurtis> (If you can't tell, I'm *really* shooting for performance, haha)
[15:44:29] <kurtis> okay cool. I'll try to get the config servers running locally then
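A sketch of what that can look like on a shared box (ports and dbpaths hypothetical); a config server is just a mongod started with --configsvr, so it can run next to a shard member on its own port and data directory:

    # shard member
    mongod --port 27017 --dbpath /data/db --replSet rs0

    # config server on the same host
    mongod --configsvr --port 27019 --dbpath /data/configdb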
[15:45:41] <kurtis> One more question that hopefully *someone* can answer (no response in #hadoop). I want to use the MongoDB Hadoop Adapter. I'm a bit confused on its installation though. It looks like it needs to be installed to the Hadoop servers directly. That, unfortunately, isn't always an option. Also, it says its compatible with elastic map-reduce and I have 0 experience there. Is it possible to just submit the "Adapter" with the Map-Reduce Job to the Hadoop
[16:00:58] <eucalyptus> I'm using tag aware sharding to emulate data centers in west/east zones. works well. I have additional secondaries that exist across zones (e.g., a priority 0 eastern secondary in the western zone). I'm simulating a network partition with iptables. when i do this, eastern writes from the western zone get queued up someplace. when the partition is resolved they end up where they should. where are these writes cached? mongos?
[16:00:58] <eucalyptus> the driver? also, when the partition occurs ALL of the reads fail against either zone (connecting via mongos). we're using 2.4.6, and 2.11.2 of the java driver. fresh ideas appreciated :)
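For reference, a tag-aware layout along those lines is usually wired up in the mongo shell roughly like this (shard names, database/collection, shard key and tag ranges are all hypothetical):

    sh.addShardTag("shard-east", "EAST")
    sh.addShardTag("shard-west", "WEST")
    sh.addTagRange("mydb.events", {region: "east", _id: MinKey}, {region: "east", _id: MaxKey}, "EAST")
    sh.addTagRange("mydb.events", {region: "west", _id: MinKey}, {region: "west", _id: MaxKey}, "WEST")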
[16:08:00] <platzhirsch> Can I display activity on my MongoDB server somehow?
[16:24:56] <platzhirsch> I haven't configured any special ones
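For reference, the usual starting points for watching activity, none of which need special configuration (the profiler line is optional and the 100 ms threshold is just an example):

    mongostat                         # command line: per-second counters for inserts/queries/updates/locks
    mongotop                          # command line: time spent reading/writing per collection

    > db.currentOp()                  // mongo shell: operations in progress right now
    > db.serverStatus().opcounters    // cumulative operation counts since startup
    > db.setProfilingLevel(1, 100)    // optionally log operations slower than 100 ms to system.profile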
[16:30:13] <cwf> I upgraded mongovue from 1.5.3 to 1.6.1 and am now getting "Invalid credential for database 'admin'" when trying to connect to a mongo db I was able to connect to from 1.5.3
[16:30:42] <cwf> is this a known issue? I didn't find anything in my google search.
[16:33:22] <kaen> alright guys, I think I'm starting to understand when/why I should use mongodb over mysql, but is there a situation where mysql is the better choice?
[16:34:39] <kaen> well, everything I work with is fixed schema
[16:35:07] <kaen> but it's mostly large hierarchies of normalized tables, which afaik is a good candidate for mongodb
[16:35:31] <cwf> kaen, though I'm a mongo noob, we just switched from mongo to mysql as the store for an app that was doing 100s of writes per second. Mongo was churning cpu under that load. Mysql is handling it better.
[16:38:00] <cwf> the master had two secondaries, so it not only had to update itself, it also had to write to the oplog to pass changes along to the secondaries. I'm not sure what in there did it but we were pegging cpu on an xlarge aws instance.
[18:59:50] <sreddy> so until about 4 days ago all the roles we had were working fine without any issue, however we get the following error when we try to create an instance with mongos... http://pastebin.pw/h8hybw
[19:47:07] <eucalyptus> i found out the answer to my earlier question. it appears you still need a master to read when there is a network partition. you'd have to mess with votes and priorities to have an orphaned secondary elect itself if it knows about a majority in a disconnected datacenter
[19:53:16] <sreddy> except that they may be in 2 different subnets
[19:56:29] <ATX_123> can anyone help shed insight as to what logs, locks or other artifacts I should look for to determine if a balance round is hung up?
[19:58:33] <eucalyptus> use config; db.locks.find()
[19:59:35] <ATX_123> eucalyptus, i found the lock, but how do i know if it is still truly needed?
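A sketch of what to check (run via mongos against the config database); the balancer lock document records which process holds it and in what state, and config.mongos shows when each mongos last pinged:

    use config
    db.locks.find({_id: "balancer"})            // state: 0 = unlocked, 2 = held; "who"/"process" name the holder
    db.mongos.find({}, {ping: 1, waiting: 1})   // last ping from each mongos
    sh.isBalancerRunning()                      // is a balancing round in progress right now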
[23:18:28] <Sjors> I've fixed the bug in the application, but now I'd like to get my mongo database back into a pristine state --- I know how to restore the correct "type" value in the document, but when I do, I get a duplicate key error
[23:18:50] <Sjors> is there a way to update() the table, setting the type to the correct value, possibly ignoring any indices?
[23:31:41] <crudson> Sjors: drop index, correct documents, recreate index. Back up collection first in case you mess up.
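A minimal sketch of that sequence in the mongo shell (collection, field values and index spec are hypothetical — adjust to the actual unique index and documents involved, and mongodump the collection first):

    db.widgets.dropIndex({type: 1})                                                // drop the unique index rejecting the fix
    db.widgets.update({type: "wrong"}, {$set: {type: "correct"}}, {multi: true})   // repair the affected documents
    db.widgets.ensureIndex({type: 1}, {unique: true})                              // recreate the index once the data is clean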