#mongodb logs for Friday the 5th of September, 2014

[00:00:09] <joannac> you have no shards
[00:00:15] <joannac> where is the database supposed to live?
[00:00:37] <freeone3000> joannac: On the sharding server.
[00:02:51] <joannac> freeone3000: what sharding server?
[00:03:00] <freeone3000> joannac: The one that I'm running commands on.
[00:03:03] <joannac> the mongos server is like a router. it doesn't keep its own data
[00:03:26] <freeone3000> joannac: Okay, so I need to shard *every* database that I intend to route through a mongos?
[00:03:37] <joannac> no
[00:04:01] <joannac> look at this diagram
[00:04:01] <freeone3000> joannac: Then.. I'm confused. If it doesn't store its data, and I don't shard every database, how does it know where to stick the data?
[00:04:05] <joannac> http://docs.mongodb.org/manual/core/sharding-introduction/#sharding-in-mongodb
[00:04:14] <joannac> your data lives on the green machines
[00:04:36] <joannac> if you want to shard a *collection* (not a database), the collection lives across 2 or more green machines
[00:04:44] <freeone3000> joannac: Yeah, did that. But I want a certain collection to always live on a particular machine, in its entirety, and for a given mongos, all requests need to go to that machine.
[00:04:51] <joannac> if you don't want to shard a collection, it fully lives on a single green machine
[00:05:19] <freeone3000> joannac: Okay. Sounds awesome. How do I do that?
[00:05:34] <joannac> you add a shard, and then create your collection
[00:07:23] <freeone3000> joannac: Okay. And it'll just... go to the first shard that I add? Even if I shard other stuff to multiple shards based on a key?
[00:08:24] <joannac> ugh okay, let me go into more detail
[00:08:27] <joannac> i was being lazy
[00:08:46] <joannac> when you first create a database, mongos decides which shard it should "live on"
[00:09:04] <joannac> what that means is that collections in that database always live in that shard
[00:09:31] <joannac> so it means any collections you don't want to shard in the database, all live on that shard
[00:09:58] <freeone3000> Can we tell mongos which shard we want this to be?
[00:10:29] <joannac> yup, http://docs.mongodb.org/manual/reference/command/movePrimary/
[00:10:46] <joannac> do that before you create data
[00:11:08] <freeone3000> Awesome, thanks. Will this run into any issues when having multiple mongos servers that share shards?
[00:11:29] <joannac> "If you use the movePrimary command to move un-sharded collections, you must either restart all mongos instances, or use the flushRouterConfig command on all mongos instances before writing any data to the cluster. This action notifies the mongos of the new shard for the database."
[00:12:08] <freeone3000> Well, yes, but that seems to imply there's only one collection for the sharded cluster, no matter how many mongoses there are.
[00:12:25] <joannac> I'm not sure what you're asking, sorry
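A minimal shell sketch of the sequence joannac describes, run against a mongos from the admin database; the shard, database, and host names here are illustrative, not from the log:

    use admin
    // register the shard that should hold the unsharded collections
    db.runCommand({ addShard: "rs0/node1.example.com:27017", name: "shard0001" })
    // move the database's primary shard *before* creating any data
    db.runCommand({ movePrimary: "routing", to: "shard0001" })
    // then, on every other mongos, before writing anything:
    db.runCommand({ flushRouterConfig: 1 })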
[00:13:48] <freeone3000> My desired setup: We want to run a mongod instance in each region to hold user data. We need to have a different collection in each region for region-based routing info. We want to share user information between regions semi-transparently. So our idea was to shard based on user timezone, and group each into the appropriate zone.
[00:14:18] <joannac> okay
[00:14:27] <freeone3000> So that gives us a shard in each region, and a mongos that happens to know about all of the shards.
[00:14:32] <freeone3000> What do we do for the per-region data?
[00:14:44] <joannac> why do you need to shard for that?
[00:14:49] <joannac> wait
[00:15:03] <joannac> does region 1's data need to be accessed in region 2?
[00:15:06] <freeone3000> joannac: So that we can run every query through mongos instead of maintaining a mongos and a separate address for per-region data.
[00:15:15] <freeone3000> joannac: User data yes, routing data no.
[00:15:19] <freeone3000> joannac: User data is sharded, routing data isn't.
[00:15:27] <joannac> oh
[00:15:29] <joannac> okay
[00:16:17] <joannac> user db should be a replica set
[00:16:33] <freeone3000> Like, just a replica set, across the entire world?
[00:16:34] <joannac> routing data should be a sharded cluster, with tags
[00:17:00] <joannac> freeone3000: well, is there like "per region userdata"
[00:17:16] <joannac> user data that's most often accessed in region 1 and maybe sometimes in region 2?
[00:17:29] <freeone3000> joannac: That'd be all of it, yes. (People don't actually move that much.)
[00:17:36] <joannac> or
[00:17:41] <joannac> then shard both
[00:17:44] <joannac> and tag both
[00:18:05] <joannac> http://docs.mongodb.org/manual/core/tag-aware-sharding/
[00:18:17] <freeone3000> Okay. So there's no way to just go "I'm this mongos, I'm asked for this collection, it lives on this shard"?
[00:18:19] <joannac> btw you shouldn't just be sharding on timezone
[00:18:27] <joannac> shard by timezone and user_id or something
[00:18:34] <freeone3000> joannac: Yeah, timezone and user_id.
[00:18:43] <freeone3000> joannac: But user_id is a uuid so it's essentially meaningless.
[00:18:54] <joannac> doesn't matter. it's to get granularity
[00:19:11] <joannac> or shard on something you are likely to query on that also has high granularity
[00:19:36] <joannac> freeone3000: you could make 3 different dbs
[00:19:46] <freeone3000> joannac: dbs in the mongo sense, or servers?
[00:19:53] <joannac> region1_data, region2_data, region3_data
[00:20:03] <joannac> and put each of them in a different shard
[00:20:09] <freeone3000> That moves the detecting logic to the client-side. I don't like that.
[00:21:11] <freeone3000> Okay. Using tagged sharding for lookup of the proper records sounds good, thanks.
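A sketch of the tag-aware setup agreed on above, following the linked docs; the shard names, tag names, and range boundaries are all illustrative assumptions:

    // tag each regional shard
    sh.addShardTag("shard0000", "EU")
    sh.addShardTag("shard0001", "US")
    // shard on timezone plus user_id for granularity, as joannac suggests
    sh.enableSharding("userdb")
    sh.shardCollection("userdb.users", { timezone: 1, user_id: 1 })
    // pin a timezone range to a region (bounds must cover the full shard key)
    sh.addTagRange("userdb.users",
                   { timezone: "Europe/", user_id: MinKey },
                   { timezone: "Europe/zzz", user_id: MinKey },
                   "EU")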
[00:45:36] <chrismclaughlin> is it possible to query mongo and return documents that meet a subset of the queried attributes?
[00:46:31] <joannac> chrismclaughlin: $or ?
[00:46:57] <joannac> or like : here are 5 predicates, return me a document as long as at least 3 are satisfied?
[03:15:18] <shoerain> joannac: actually, that second one sounds sweet. Should be doable since you can just do a javascript function?
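For the "at least k of n predicates" variant, plain $or is not enough; one option, as shoerain suggests, is a JavaScript $where clause (it cannot use indexes, so it scans the collection). The five predicates below are made up for illustration:

    db.coll.find({ $where: function () {
        var hits = 0;
        if (this.a > 10) hits++;       // predicate 1 (hypothetical)
        if (this.b === "x") hits++;    // predicate 2
        if (this.c < 5) hits++;        // predicate 3
        if (this.d != null) hits++;    // predicate 4: field d exists
        if (this.e === true) hits++;   // predicate 5
        return hits >= 3;              // keep the document if 3+ hold
    } })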
[05:37:05] <dvorakvie> In Mongoose, what's the best way to exclude fields in a returned object from a .save? (ie. excluding the password field in the returned object from newUser.save)
[06:17:24] <zamabe> Hello. Using the nodejs mongodb module. Whenever I call db.users.count(), it returns undefined, but the docs say it should return null. The callback provided to count() is also never actually called.
[06:22:13] <zamabe> The object I'm calling .count() on appears to be a perfectly fine db.collection('users') object
[06:24:35] <zamabe> console.log(db.collection('users').count(function(err, count){console.log('hi');})); -- This gets me 'undefined' but never 'hi'
[06:25:20] <sp1t> maybe you should remove console.log from beginning: db.collection('users').count(function(err, count){console.log('hi');});
[06:25:39] <sp1t> :)
[06:26:22] <zamabe> does nothing whatsoever. The console.log is there to show that it does not perform according to documentation, which says that .count() should return null, which it does not.
[06:27:27] <sp1t> db.collection('users').count(function(err, count){console.log(count);}); and it will print the count to console
[06:27:46] <zamabe> you don't seem to understand what I've already said
[06:27:59] <zamabe> The callback provided to count() is never actually called.
[06:28:05] <zamabe> oh
[06:28:11] <zamabe> you've just joined, my bad
[06:28:19] <zamabe> Hello. Using the nodejs mongodb module. Whenever I call db.users.count(), it returns undefined, but the docs say it should return null. The callback provided to count() is also never actually called.
[06:29:02] <sp1t> yea because you have the db.collection...... inside a console.log(), that will always be undefined
[06:29:14] <zamabe> nope.
[06:29:43] <zamabe> if I don't include the .count() it prints a nice load of stuffs, because that seems to be a valid collection object
[06:30:11] <zamabe> console.log prints whatever is returned from a function, if that's what you put inside of it.
[06:30:28] <zamabe> Which means that it should return null, because that is what the docs say .count() does. This is not the case.
[06:32:35] <sp1t> you want a callback from the count function with the result? am i right?
[06:33:27] <zamabe> I do, that would be nice, that is what I have been attempting to use, I have tried it many times, the thing does. not. work. properly.
[06:34:08] <zamabe> The callback is never called, besides that, the count function does not seem to be operating properly.
[06:34:10] <sp1t> are you using mongoose?
[06:34:15] <zamabe> wat.
[06:34:27] <zamabe> Hello. Using the nodejs mongodb module. Whenever I call db.users.count(), it returns undefined, but the docs say it should return null. The callback provided to count() is also never actually called.
[06:35:31] <zamabe> Am I being unreasonably unclear?
[06:35:52] <sp1t> no, i'm just trying to figure out whats wrong with your code
[06:36:03] <zamabe> the part where mongodb isn't working.
[06:40:16] <sp1t> maybe it's never called because you placed the callback function inside count() try this: db.collection('users').count().exec(function(err, count){ console.log(count); });
[06:43:45] <sp1t> did it work?
[06:44:36] <zamabe> TypeError: Cannot call method 'exec' of undefined
[06:44:58] <sp1t> maybe you should use mongoose, it's the official driver: http://docs.mongodb.org/ecosystem/drivers/node-js/
[06:45:02] <zamabe> db.collection('users') is, however, a valid collection object
[06:45:04] <zamabe> O.o
[06:45:45] <sp1t> when i do this in mongoshell: db.users.count(function(err, count) { return count; }) i get 0 as result
[06:46:04] <sp1t> when i do this db.users.count() i get 2 as result because i have 2 documents inside this collection
[06:46:21] <zamabe> yes, in mongoshell these work fine
[06:46:34] <zamabe> also, mongodb is the official as well
[06:46:50] <zamabe> mongoose is the official ODM
[06:46:56] <sp1t> yea
[06:47:05] <zamabe> yeah, so I'm using the official driver, mongodb.
[06:47:39] <sp1t> thats working for me: db.users.count(function(err, count) {console.log("count = %s", count);});
[06:48:03] <zamabe> wish I were you
[06:48:16] <zamabe> mongodb@1.4.10 here
[06:50:37] <sp1t> i'm using now the native driver: var collection = db.collection('users'); collection.count(function(err, count) {console.log("There are " + count + " records.");});
[07:00:54] <zamabe> sp1t: sigh, can only reproduce in one case, when I'm relying on it ;p
[07:04:15] <sp1t> hm
[07:07:59] <sp1t> weird
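For reference, the usual causes of a count() callback that never fires with the 1.4.x node driver are issuing the command before the connection has finished opening, or reading the return value instead of the callback argument. A minimal working sketch (the connection string is a placeholder):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
        if (err) throw err;
        // count() is asynchronous: the result arrives in the callback;
        // the call itself returns undefined, hence zamabe's console output
        db.collection('users').count(function (err, count) {
            if (err) throw err;
            console.log('users:', count);
            db.close();
        });
    });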
[08:28:26] <jordana> Does anyone know if you can specify a primary for a single unsharded collection on creation in Mongo?
[08:28:34] <jordana> mongos*
[08:59:43] <lha13> Hello, I've asked the following question on stack overflow (http://stackoverflow.com/questions/25666187/mongodb-nested-group/) and I've received a partial answer. Would someone be able to show me how to limit the results of the array please?
[10:31:50] <Jake232> If I want to check whether field X1 contains Y OR field X2 contains Y OR field X3 contains Y
[10:31:54] <Jake232> How do I go about that?
[10:32:01] <Jake232> (Sorry, first time using mongo)
[10:32:54] <Nodex> y want to know if any are true?
[10:32:57] <Nodex> you*
[10:33:47] <Jake232> I want to return all records where column X1 contains Y OR field X2 contains Y OR field X3 contains Y
[10:33:55] <Jake232> Just found $or
[10:33:58] <Jake232> could be what I need
[10:34:20] <Nodex> define "contains" ?
[10:34:46] <Nodex> a string in a piece of text?
[10:35:04] <Jake232> sorry, should have used equals
[10:35:09] <Jake232> not contains
[10:35:58] <Nodex> { $or: [ { 'x1':'Y' }, { 'x2':'Y' }, { 'x3':'Y' } ] }
[10:36:20] <Jake232> Perfect, thankyou
[10:36:32] <rspijker> Jake232: $or ?
[10:36:33] <rspijker> $or: [ {"X1":"Y"},{"X2":"Y"},{"X3":"Y"} ]
[10:36:34] <rspijker> assuming you mean contains in the array sense
[10:36:34] <rspijker> if the Xi are strings and you want to see if they contain a Y you need to use a regex
[10:36:35] <rspijker> in that case "Y" would become /Y/ or /y/i for case-insensitive
[10:40:22] <rspijker> how is this not exactly what I said 2 minutes ago :/
[10:41:43] <Jake232> rspijker: What do you mean what you said two minutes ago?
[10:41:55] <Jake232> That was the first time you spoke since I joined the room
[10:46:01] <Nodex> lmao
[11:04:17] <rspijker> it appears like I might be having some connectivity issues then :)
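Putting the two answers together, a sketch of both readings of "contains" (field and value names follow the question; the regex form cannot use an index efficiently unless anchored):

    // equality: any of the three fields equals "Y"
    db.coll.find({ $or: [ { x1: "Y" }, { x2: "Y" }, { x3: "Y" } ] })
    // substring match instead, case-insensitive, per rspijker
    db.coll.find({ $or: [ { x1: /y/i }, { x2: /y/i }, { x3: /y/i } ] })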
[12:01:05] <Viesti> agh
[12:03:01] <Viesti> I was switching to directory per db option in an attempt to isolate reads from writes (we'll write to one db and read from another, and after write finishes, switch reading)
[12:03:35] <Viesti> so I restarted a secondary after moving db files to a subdirectory
[12:03:56] <Viesti> but seems that mongo deleted all the files that I moved and started replicating from master
[12:04:15] <Viesti> this was done on a test cluster so I decided to just drop the data
[12:04:54] <Viesti> but then the secondary which had replicated some of the data, decided to proceed with building an index
[12:05:06] <Viesti> for the collection that was deleted...
[12:05:59] <Viesti> so, I tried to stop the background index build with db.killOp()
[12:06:07] <Viesti> but it doesn't work :(
[12:06:17] <Viesti> I'm on 2.6.1
[12:12:08] <Viesti> what would happen if I just kill mongod process...
[12:20:19] <Viesti> is it possible to even stop a background index operation at all?
[12:22:20] <Derick> i don't think so
[12:22:29] <Viesti> :(
[12:22:40] <Nodex> kill current op?
[12:22:50] <Viesti> didn't seem to work
[12:23:30] <Derick> hey kchodorow - that's a while ago!
[12:24:00] <kchodorow> hey Derick, how's it going?
[12:24:06] <Viesti> it just says "attempting to kill op" but doesn't do anything
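The usual approach is to find the build's opid in db.currentOp() and pass it to db.killOp(); a sketch below (the opid is illustrative). On 2.6, though, a background index build that was replicated to a secondary generally cannot be killed this way, which matches what Viesti is seeing.

    // locate the running index build among in-progress operations
    db.currentOp().inprog.forEach(function (op) {
        if (op.msg && /Index Build/.test(op.msg)) {
            printjson({ opid: op.opid, msg: op.msg, progress: op.progress });
        }
    });
    // attempt to kill it by opid (may be refused for replicated builds)
    db.killOp(12345)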
[12:24:15] <Derick> pretty well kchodorow - still doing the mongo thing :-)
[12:24:22] <Derick> kchodorow: how's the goog?
[12:24:47] <kchodorow> pretty great, lots of free food :)
[12:25:00] <Derick> haha
[12:25:17] <kchodorow> i heard you got married, congratulations!
[12:25:34] <Derick> oh yes, about 2½ months ago - news travels fast :)
[12:28:11] <Nodex> http://docs.mongodb.org/manual/reference/method/db.currentOp/ <--- major javascript error on your website.
[12:28:26] <Nodex> Error: Permission denied to access property 'document' ... stuck in a serious loop
[12:28:36] <Nodex> 3000+ errors and counting LOL
[12:36:50] <jordana> would anyone know why the mongod control scripts are hanging on centos 6.4?
[12:37:17] <jordana> mongo's running fine but the control script hasn't registered it started
[12:47:50] <jordana> Ahh figured it out, the mongod.conf's encoding was skewiff
[13:17:12] <shadfc_> hi, i'm having a problem saving a file to gridfs (via mongoengine+pymongo). I get this error: OperationFailure: command SON([('filemd5', ObjectId('5409b2fc454800472ab3df9f')), ('root', u'fs')]) failed: exception: Can't get runner for query { files_id: ObjectId('5409b2fc454800472ab3df9f'), n: { $gte: 0 } }
[14:06:17] <paresh> hey guys, is using the "readPreference: secondary" option in mongodb beneficial?
[14:06:34] <shadfc_> if you have a replica set
[14:08:12] <cheeser> though you should use secondaryPreferred if anything.
[14:08:34] <cheeser> secondary reads are subject to staleness, though, so you have to be OK with potentially out of date data.
[14:10:04] <tscanausa1> everything is out of date as soon as you write it
[14:24:38] <geoffb> .
[14:24:40] <geoffb> .
[14:24:40] <geoffb> .
[14:24:41] <geoffb> .
[14:25:00] <zamabe> O.o
[14:29:50] <skot> cheeser, the best option is probably primaryPreferred. Secondaries should never be used without serious thought, and planning, and only on a failover could it be considered reasonable as a default.
[14:29:50] <Nodex> what is time, in a generalistic type of thing
[14:30:59] <cheeser> skot: yes, i know. paresh is the one asking.
[14:32:19] <skot> cheeser, I read your response as use secondaryPref as a general suggestion, which I guess you didn't mean that way.
[14:32:56] <skot> shadfc_, paresh: it is not suggested to read from secondaries generally.
[14:32:56] <cheeser> i probably should've expounded. i started with the "if anything" and forgot to follow up.
[14:34:03] <skot> The few times it might make sense really fall into the category of distributed data centers, failures, and overloaded replica sets, as well as the obvious "I don't care what quality of data I get (or how stale it is)".
[14:36:27] <skot> In general people who read from secondaries get very bad results, for various reasons: overloaded replica set + failure, unexpected data (think of reading forward then back in time), and others I can't seem to recall :)
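For reference, a read preference is set per connection or per operation; a shell sketch of the primaryPreferred suggestion, assuming a replica-set connection:

    // per connection
    db.getMongo().setReadPref("primaryPreferred")
    db.users.find({ _id: 1 })
    // or per query
    db.users.find({ _id: 1 }).readPref("secondaryPreferred")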
[14:48:55] <paresh> when i do a mongostat on my server i see too many "qr" happening on my primary "qr|qw - queue lengths for clients waiting (read|write)"
[14:49:58] <d0x> Hi, i have a script that uses db.xxx.find(...).forEach(...). It gets executed on only one core. Is there an easy way to "multithread" it? (single instance)
[14:50:03] <paresh> skot: cheeser
[14:50:13] <d0x> I use this script to update some old data
[14:51:10] <skot> d0x: use a programming language with threads :)
[14:51:35] <d0x> skot: I'm executing it inside the shell
[14:51:41] <skot> paresh: best to ask a question, and post data along with it.
[14:51:57] <skot> d0x: yep, don't use the shell it doesn't have threads
[14:52:05] <d0x> okay :(
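One common workaround, since the shell is single-threaded: partition the collection by range and run one shell process per slice in parallel, passing the bounds to each via --eval. A sketch of the per-slice script (the _id bounds lo/hi and the $set update are hypothetical):

    // update_slice.js -- each shell process handles one disjoint _id range
    db.xxx.find({ _id: { $gte: lo, $lt: hi } }).forEach(function (doc) {
        // the actual migration logic goes here; this $set is illustrative
        db.xxx.update({ _id: doc._id }, { $set: { migrated: true } });
    });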
[15:24:48] <paresh> skot: insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn set repl time 1443 *0 2 *0 1653 1466|0 0 12g 25.9g 5.51g 0 rawcms_production:85.0% 0 198|0 0|1 2m 7m 252 vm-builtio PRI 15:21:05
[15:25:12] <paresh> skot: I'm getting the following output from the mongostat command
[15:26:40] <skot> Please use pastebin/etc in the future, and get db.currentOp() from the shell; there is an active write which is causing the reads to queue (note: the aw field) and causing the high write lock %
[15:28:30] <paresh> skot: i have used secondary as the read preference and this is the output from a primary server
[15:32:03] <paresh> skot: "$msg" : "query not recording (too large)"
[15:32:18] <paresh> getting many such messages
[15:32:41] <skot> please just post the output so we can see the blocking write.
[15:33:16] <paresh> skot: the output is too huge
[15:33:40] <skot> It is too big for pastebin/gist/etc?
[15:34:04] <paresh> 1 min
[15:34:10] <skot> *DO NOT* paste into this channel
[15:34:33] <paresh> yes
[15:36:23] <paresh> https://gist.github.com/eca98b161e0e76223b14.git
[15:36:56] <paresh> https://gist.github.com/anonymous/eca98b161e0e76223b14
[15:37:16] <paresh> skot: the second one
[15:41:19] <skot> can you post a few seconds of mongostat output and new currentOp() during the time of the capture?
[15:41:29] <skot> (20-30 seconds should be good)
[15:41:56] <paresh> ok
[15:43:23] <skot> do you use findAndModify by any chance?
[15:43:49] <paresh> yes
[15:45:31] <paresh> https://gist.github.com/anonymous/e0bdd5f6e03176ce4acf
[15:50:39] <leenug> Hi all, I'm trying to get my head around how I would achieve the following in Mongo: retrieve the latest document in a collection for a set of keys. With a bit of context - devices report their temperature to mongo every X seconds, I'd like to retrieve the latest temperature of a given set of devices
[15:50:45] <leenug> any pointers would be appreciated!
[15:53:23] <jordana> leenug. Are you just querying for the latest temperature for device ID's effectively?
[15:53:53] <leenug> yes exactly, but only a subset of devices
[15:54:16] <jordana> How many at a time? Can they be grouped?
[15:54:25] <leenug> my document looks like: http://pastie.org/9529714 if that helps at all
[15:54:38] <leenug> its variable, could be 2 could be 50, and it would be by serial number
[15:55:05] <jordana> You could use $in
[15:55:46] <leenug> I see, but how would I ensure that the latest document per serial is returned?
[15:55:47] <paresh> skot: Any help
[15:55:48] <jordana> find { serial: { $in: ['XX530C', 'SOMETHING'] }}
[15:56:04] <jordana> Ahh
[15:56:04] <jordana> well
[15:56:39] <jordana> Without detracting from Mongo: you could store them in mongo and keep the latest always in redis
[15:57:00] <jordana> your update could update a serial number key in redis and you query that for the latest
[15:57:05] <leenug> yeah, I did think about something similar
[15:57:47] <jordana> alternatively, if you can change your document structure you might be able to achieve something
[15:57:53] <feathersanddown> Hi, I have a unique document that has an array of subdocuments, I need to return the same document but with the array sorted by a field (date field), is this possible with mongodb?
[15:58:53] <jordana> leenug, whatever you're working on looks fun though!
[15:58:56] <leenug> The mysql version of the system achieves this using: http://pastie.org/9529723
[15:59:24] <leenug> Yeah its a lot of fun, we're just getting way too much data for our current implementation so trying to migrate to mongo
[15:59:28] <feathersanddown> I mean... Document{ array: [ {date:"01/01/2000"}, {date:"01/01/2001"}, {date:"01/01/2002"}, {date:"01/06/2002"} .... ] } ....
[16:00:12] <leenug> the more I look into it, the more I think I need to read up on mapreduce
[16:00:25] <jordana> leenug: but mapreduce scans the entire collection?
[16:00:32] <feathersanddown> to this: Document{ array: [ {date:"01/06/2002"}, {date:"01/01/2002"}, {date:"01/01/2001"}, {date:"01/01/2000"} .... ] } ....
[16:00:32] <jordana> leenug: It's not what you need for realtime stuff
[16:00:40] <feathersanddown> is possible ?
[16:00:47] <leenug> I thought you could then filter a mapreduce query
[16:00:56] <leenug> (I'm very new to this btw)
[16:02:23] <jordana> leenug: yes, but even then it'll be slow, especially since you say you're taking in too much data right now. M/R is for batch processing large amounts of data
[16:02:54] <leenug> Ah, ok that makes sense
[16:05:06] <leenug> May just go with the redis approach, or something similar for now. Thanks for your help!
[16:08:00] <jordana> leenug: no worries. yeah I would go with redis. it's super.
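For the record, the aggregation framework (2.6+) can answer "latest document per serial" directly, without a second store: filter to the wanted serials, sort newest first, then take the first document per serial. A sketch; serial comes from the discussion, while ts is an assumed name for the timestamp field:

    db.readings.aggregate([
        { $match: { serial: { $in: ["XX530C", "SOMETHING"] } } },
        { $sort: { serial: 1, ts: -1 } },    // newest reading first per serial
        { $group: { _id: "$serial", latest: { $first: "$$ROOT" } } }
    ])

A compound index on { serial: 1, ts: -1 } would keep the $match and $sort stages from scanning the whole collection.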
[16:08:50] <chrismclaughlin> @joannac yes, like: here are 5 predicates, return a document that meets at least 3
[16:51:42] <feathersanddown> Hi, can someone help me please.....
[16:51:47] <feathersanddown> I have a unique document that has an array of subdocuments, I need to return the same document but with the array sorted by a field (date field), is this possible with mongodb?
[16:51:51] <feathersanddown> I mean... Document{ array: [ {date:"01/01/2000"}, {date:"01/01/2001"}, {date:"01/01/2002"}, {date:"01/06/2002"} .... ] } ....
[16:51:53] <feathersanddown> to this: Document{ array: [ {date:"01/06/2002"}, {date:"01/01/2002"}, {date:"01/01/2001"}, {date:"01/01/2000"} .... ] } ....
[16:51:55] <feathersanddown> is possible ?
[16:52:13] <feathersanddown> sort an array of subdocuments
[16:58:47] <jordana> hey feathersanddown, not with a plain query; perhaps with the aggregation framework, but if not you'd have to do it at the application level
[16:58:55] <jordana> you're best off ordering them as they go into the array
[17:14:38] <feathersanddown> f444444444444444444444444ck
[17:14:41] <feathersanddown> thanks :)
[17:18:08] <skot> You can keep the array sorted on update so it is pre-sorted.
[17:18:28] <skot> See the docs on $push + $sort for an update
[17:20:16] <skot> ^ feathersanddown
[17:20:50] <feathersanddown> uhm....
[17:31:28] <feathersanddown> something like this? http://docs.mongodb.org/manual/reference/operator/query-modifier/
[17:41:21] <skot> No, like this: http://docs.mongodb.org/manual/reference/operator/update/sort/#up._S_sort
[17:44:50] <feathersanddown> but can the java driver ensure that ordered data from the database is read back in the same order?
[17:47:56] <skot> yes, arrays are ordered.
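A sketch of the $push + $sort approach skot points to (MongoDB 2.6 syntax; the collection name and docId are illustrative, and the field names follow feathersanddown's example). Note the dates should be real Date values, not "dd/mm/yyyy" strings, for the sort to be chronological:

    db.docs.update(
        { _id: docId },
        { $push: { array: {
            $each: [ { date: ISODate("2002-06-01") } ],  // element(s) to append
            $sort: { date: -1 }                          // keep the array newest-first
        } } }
    )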
[18:47:50] <arussel> is there a way to slice an array given a condition? (ie, remove all elements where ele > 0)?
[18:48:35] <arussel> hmm, {$pull: {ele: {$gt: 0}}} ?
[18:51:09] <arussel> but what if my array is: [{a: 1, ele: -1}, {a:2, ele: 1}] and i want to remove all obj where ele > 0 so to have [{a: 1, ele: -1}] ?
[18:52:53] <arussel> $elemMatch :-)
[19:10:40] <skot> no, just $pull + query, no need for $elemMatch.
[19:10:52] <skot> The query is based on the element in the array.
[19:11:51] <skot> If you post your doc, and update statement in pastie/etc it would help to verify you have the correct syntax.
[19:23:59] <arussel> skot: I'm doing something very similar to the last example of http://docs.mongodb.org/manual/reference/operator/update/pull/
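To make skot's point concrete: $pull's condition is evaluated against each array element, so the subdocument case needs no $elemMatch. A sketch matching arussel's example array (docId is illustrative):

    // before: arr == [ { a: 1, ele: -1 }, { a: 2, ele: 1 } ]
    db.coll.update(
        { _id: docId },
        { $pull: { arr: { ele: { $gt: 0 } } } }  // drop elements with ele > 0
    )
    // after:  arr == [ { a: 1, ele: -1 } ]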
[20:30:34] <feathersanddown> someone knows a gui client for mongo ?
[20:55:34] <blizzow> feathersanddown: robomongo
[20:56:23] <blizzow> Not to say that it's a good client, but I haven't found anything better.
[20:57:15] <feathersanddown> i've downloaded toad cloud
[20:58:56] <blizzow> feathersanddown: http://bit.ly/1quh6Ev
[21:06:32] <freeone3000> What do I do if I want every collection for a record available across every shard, in a sharded cluster? Similar to how a replset would work?
[21:08:19] <magglass2> freeone3000: if you access the cluster through the mongos, then all data is made available; there wouldn't be a reason to have the same data on multiple shards
[21:08:35] <freeone3000> magglass2: The shards aren't all on the same continent.
[21:11:56] <magglass2> freeone3000: you can use shard tags to segment data so that certain data lives on certain shards in different physical locations, based on where it will be most frequently accessed, but if you need the same data in multiple places, then you'd want to split a replset between regions
[21:26:37] <blizzow> I have a member of a replica set that fell about 35000 seconds behind this morning. I rebooted it, and it's been stuck in the STARTUP2 phase for the last few hours. NTP shows its time is accurate. Now the lag has crept up to about 55000 seconds. I see connections happening in the log. Can I get any more info about what this member is doing while it's in this state? I feel like it should have started catching up as a secondary by now. The replica set
[21:29:16] <blizzow> Also in my log I do see about every 3-5 minutes a message with " serverStatus was very slow" Is there a way to find out which of the servers it's talking to is slow?
[21:37:36] <freeone3000> magglass2: Okay, thanks.
[23:50:20] <zamabe> Can I get a bit of help with this issue? https://gist.github.com/zamabe/ed8ed875daec05f3a26a It's part of a bigger problem I'm having with the nodejs mongodb module, in a specific situation I'm having trouble replicating
[23:52:07] <zamabe> If I leave out the options (note that I'm pulling these from the call I'm having trouble with), the command works.