PMXBOT Log file Viewer


#mongodb logs for Friday the 26th of October, 2012

[01:36:29] <Guest76149> Hi, I'm having trouble linking documents in the mongo shell and was wondering if I could get some help? I have an owner document and an asset document. When I insert an asset I would like to add an attribute OwnerId set to an id from an owner doc I query. Any ideas?
[02:07:47] <IAD1> A bunch of RDBMS guys walk into a NoSQL bar. But they don't stay long. They could not find a table.
[02:12:35] <rossdm> <slow clap>
[02:38:04] <Guest92855> Hi, I'm having trouble linking documents in the mongo shell and was wondering if I could get some help? I have an owner document and an asset document. When I insert an asset I would like to add an attribute OwnerId set to an id from an owner doc I query. Any ideas?
[02:39:50] <crudson> Guest92855: I would suggest what you describe :) Set an attribute to owner._id
[02:44:30] <Guest92855> How would I go about that though? What I have tried so far is in my insert, $set that new field to the result of a query. However the query returns {_id : ObjectId('somestring')}. How can I have it return only the ObjectId without the field name?
[02:48:26] <Guest92855> This is my insert: db.description.update({Brand : "Lenovo"}, {$set : {OwnerId : db.owner.find({Name : "CPU Guyz"}, {_id : 1})}})
[02:48:50] <crudson> find -> findOne()._id
[02:49:14] <crudson> note that this is not an atomic operation but two queries
[02:53:35] <Guest92855> I see, looks like that worked. Thanks! Can you explain to me what the two queries are though?
[03:07:01] <crudson> 1) update 2) find - you just happen to be putting them in one line
[03:07:20] <crudson> or rather 1) find and 2) update in the order that they are being executed
[03:09:05] <Guest92855> Alright, I see. Thank you!
[03:10:56] <crudson> cool beans
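A minimal sketch of the two operations crudson describes, using the collection and field names from the pasted update; note again that the lookup and the update are two separate queries, not one atomic step:

    // 1) find the owner and take its _id
    var owner = db.owner.findOne({Name: "CPU Guyz"});
    // 2) store that ObjectId on the asset document
    db.description.update({Brand: "Lenovo"}, {$set: {OwnerId: owner._id}});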
[04:08:30] <SirFunk> this may be a silly question. is there any way to extract the timestamp from an oid in json?
[04:46:47] <IAD1> SirFunk: if you need a timestamp - just create a new field for it
[04:47:40] <SirFunk> yeah... the problem I am running into is that I am using mongolab and creating records with REST and it doesn't have a way to submit Timestamp()
[04:47:55] <SirFunk> I could create one on the client side and store it as text... but that's kinda icky.
[04:48:06] <SirFunk> i figured if it was already part of the id.. then... yay
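For what it's worth, the creation time is recoverable from the ObjectId itself: its first four bytes are a unix timestamp in seconds, and the shell exposes it directly (hypothetical id shown):

    // getTimestamp() decodes the leading 4 bytes of the id
    ObjectId("5089d5a1e4b0d2a7c3f1b2c4").getTimestamp()
    // => an ISODate around the moment the id was generated

Over REST the same value can be derived client-side by parsing the first 8 hex characters of the id as a unix timestamp.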
[05:52:18] <MongoDBIdiot> and it was a great day
[07:31:58] <[AD]Turbo> hola
[07:33:04] <Gargoyle> Mornin'
[08:17:05] <esc> hi, i have a question about compression
[08:17:12] <esc> is the BSON automatically compressed?
[08:17:32] <esc> or should i gzip my documents before putting them into my collection
[08:17:50] <MongoDBIdiot> no, it is not compressed
[08:18:32] <esc> okay, so if i have large documents, i need to gzip them myself, right?
[08:18:39] <NodeX> yes
[08:24:43] <fredix> hi
[08:24:55] <NodeX> hi
[08:26:07] <MongoDBIdiot> hi
[08:28:47] <IAD1> hi
[08:34:47] <fredix> I have a problem when I try to transform a bsonelement to bsonobj : BSONObj toto = t_payload.Obj();
[08:35:05] <fredix> I get this error : terminate called after throwing an instance of 'mongo::UserException'
[08:35:05] <fredix> what(): invalid parameter: expected an object (data)
[08:35:30] <fredix> t_payload contains: data: "{ contentid: 110471, value: 0, usageid: "ksoJ1xCbgdS4YiimYNVOSnOotBXS0q2S", event: "CONN", sessiontime: 140212801204128 }"
[08:35:53] <fredix> a data field with a string
[08:41:26] <fredix> maybe data is a reserved word
[08:46:07] <MongoDBIdiot> are you talking to yourself?
[08:47:11] <ppetermann> i don't think data is a reserved word
[08:50:22] <ndee> hi there, I have a collection of different search queries and I would like to create reports. http://pastebin.com/5PfTHkTP <-- that's a default document. My query for getting the data at the moment is as follows: http://pastebin.com/UbDehWWc <-- That query takes a couple of seconds since there are around 800'000 documents. What would be the best way to speed that up?
[08:52:03] <ppetermann> make?
[08:53:18] <ppetermann> ndee: why using a unix timestamp, and not a date field? also, have you set your indexes right?
[08:54:05] <ndee> ppetermann, I'm used to unix timestamps but yes, I could change that to a date field. I have an index like this: db.queries.ensureIndex({timestamp:1, searchMake:1});
[08:54:19] <fredix> ppetermann: any idea ?
[08:58:04] <ndee> ppetermann, would it be faster to change that to a find({searchMake: 222, timestamp: 292349234}).count() and iterate over the timestamps and searchMakes ?
[09:02:41] <NodeX> fredix: is that your exact string?
[09:02:48] <NodeX> data: "{ contentid: 110471, value: 0, usageid: "ksoJ1xCbgdS4YiimYNVOSnOotBXS0q2S", event: "CONN", sessiontime: 140212801204128 }"
[09:03:02] <NodeX> because that's invalid json
[09:03:19] <NodeX> unless you're actually storing that object
[09:04:04] <ppetermann> fredix: no, im sorry
[09:04:29] <ppetermann> ndee: personally i'd build a map/reduce there
[09:04:47] <ppetermann> but that isn't necessarily faster, though it can scale over increments
[09:05:22] <ndee> ppetermann, I added a separate index for timestamp and searchMake, that brought the query down to 3 seconds
[09:05:26] <ndee> from 7.5 seconds
[09:07:34] <NodeX> how was the index before ndee?
[09:07:51] <NodeX> because a single compound index correctly added would suffice
[09:07:55] <ndee> NodeX, no index, only one on {searchMake : 1, timestamp: 1}
[09:08:58] <NodeX> you should've swapped them around in your aggregation and used that compound ^
[09:11:13] <yuriy> hello everyone
[09:14:02] <ndee> NodeX, ok, to be clear, I dropped all indexes and added the following one: db.queries.ensureIndex({searchMake:1, timestamp:1}). But that didn't speed up the following query: db.queries.aggregate({$group:{ _id : { searchMake: '$searchMake', timestamp: '$timestamp'}, queriesPerMake : { $sum : 1}}});
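One way to let that compound index do some work, assuming reports are usually bounded per make and time window (a pure $group over the whole collection scans every document regardless of indexes), is to put a $match in front of the $group; the values below are hypothetical:

    db.queries.ensureIndex({searchMake: 1, timestamp: 1});
    db.queries.aggregate(
        {$match: {searchMake: 222, timestamp: {$gte: 1350000000, $lt: 1351000000}}},
        {$group: {_id: {searchMake: "$searchMake", timestamp: "$timestamp"},
                  queriesPerMake: {$sum: 1}}}
    );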
[09:15:21] <yuriy> after mongo 2.2 server dirty shutdown I'm getting ""expected to be write locked for" errors when doing MapReduce. fallback to mongo 2.0 doesn't produce such errors. can you point out where I should look at to fix this issue with 2.2?
[09:17:21] <ndee> NodeX, I added an index on searchMake and a db.queries.distinct("searchMake") and it takes 1018ms. I know that there are 800'000 records but that is pretty long IMO.
[09:18:16] <IAD1> ndee: you can create an index on "searchMake"
[09:18:36] <ndee> IAD1, that's what I did. I do have an index on "searchMake"
[09:20:14] <IAD1> ndee: how much RAM does your server have? It should be enough for the indexes, at least
[09:21:28] <ppetermann> ndee: what does explain say?
[09:21:36] <ndee> IAD1, the server has 16GB of RAM, although I have to say it's a default mongo installation without any "tweaks" on ubuntu 10.04
[09:24:19] <ndee> ppetermann, can't use .explain on .distinct
[09:26:51] <NodeX> distinct on large datasets is always slow
[09:27:02] <NodeX> it's the building of the array that takes the time, not the query
[09:28:28] <ndee> NodeX, of the result array?
[09:28:37] <NodeX> yep
[09:28:50] <NodeX> tail your log and see for yourself
[09:29:18] <ndee> NodeX, command: { distinct: "queries", key: "searchMake", query: {} } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) r:2014767 reslen:569 1015ms
[09:29:30] <ndee> NodeX, that's from the mongodb.log, the result array is quite small
[09:30:46] <NodeX> :)
[09:31:51] <ndee> NodeX, hm
[09:34:09] <NodeX> ?
[09:35:38] <ndee> NodeX, so I would have to spread it up over multiple machines to optimize the queries?
[09:36:08] <ndee> NodeX, so basically, for aggregate functions, etc., it's better to still use a sql-db?
[09:36:14] <NodeX> which one are we talking about now?
[09:36:21] <ndee> NodeX, sorry, for the distinct query.
[09:36:23] <NodeX> aggregation or distinct
[09:36:52] <NodeX> see how fast aggregation does the distinct
[09:36:56] <MongoDBIdiot> .
[09:37:58] <ndee> NodeX, you mean with map/reduce?
[09:38:37] <NodeX> no wiht the aggregation framework
[09:38:44] <NodeX> with*
[09:42:00] <ndee> NodeX, db.queries.aggregate({$group: {_id: "$searchMake"}}); takes 7 seconds whereas db.queries.distinct('searchMake'); takes 1 second.
[09:44:43] <NodeX> then 1 second is as fast as it will get
[09:47:44] <ndee> NodeX, wow, that is kinda disappointing :(
[09:48:28] <ndee> I guess the right tool for the right job is still true :D
[09:48:44] <ron> umm, what NodeX may have meant to say was "then 1 second is as fast as it will get with your current setup".
[09:49:18] <ndee> ron, yeh, but still. I'm gonna test it with a mysql database
[09:49:43] <NodeX> ndee : right tool for the right job is true :)
[09:49:59] <NodeX> if sql fits your aggregation needs better then use it :D
[09:50:26] <ndee> that's what I'm gonna do :D I was just so happy with the schemaless setup :D
[09:51:38] <MongoDBIdiot> yes, using tools without knowing what you are doing
[09:52:00] <NodeX> it doesn't mean that you have to use MongoDB for everything or *SQL for everything, plenty of people use sql / mongo for parts of their app
[09:52:28] <NodeX> and dont listen to MongoDBIdiot - he is retarded and doesn't have a clue what he is doing :)
[09:52:40] <NodeX> and is a general menace to this channel
[09:52:43] <ron> I think he's an idiot.
[09:52:47] <NodeX> true story
[09:52:49] <MongoDBIdiot> don't listen to NodeX, he's the most incompetent MongoDB jerk on earth
[09:52:53] <NodeX> MongoDBIdiot == MacYET
[09:53:02] <ron> haha
[09:53:04] <NodeX> ron he is
[09:53:10] <NodeX> we already ascertained this
[09:53:17] <ron> seriously? damn that's a bored guy.
[09:53:22] <NodeX> I know right
[09:53:43] <ron> ah well, whatcha gonna do.
[09:53:45] <NodeX> such a child and must have time to waste to come in here and troll
[09:53:58] <NodeX> he'll get banned again pretty soon I imagine
[09:53:59] <ron> have you seen his picture? :p
[09:54:05] <NodeX> yer, he is one Ugly dude
[09:54:09] <ron> haha
[09:54:20] <ron> :(
[09:54:20] <NodeX> LOL
[09:54:23] <ron> ;)
[09:54:25] <ndee> I guess a mix of SQL and NoSQL is the way to go.
[09:54:45] <NodeX> ndee : 10gen are always improving performance
[09:54:57] <NodeX> so in the future you might get better results
[09:55:10] <ndee> I think I found the answer to everything right now. In the end, a mix is nearly always the best solution.
[09:55:15] <ron> ndee: the solution can be as simple as adjusting your data model, but unfortunately I didn't follow the whole chat, so I can't advice any further.
[09:55:21] <ndee> You cannot always be nice but also not always be bad.
[09:55:42] <NodeX> personally, for me I adjust my data model so I have less in my stack
[09:55:56] <ndee> ron, yes, I'm thinking about that too now, I come from the sql-world so the data model might not be the best :D
[09:55:57] <NodeX> but that's how my apps are suited, it's probably not the same for you
[09:56:39] <ron> ndee: most people come from the SQL world, as that's what's been there. The concept of properly modeling data is a known issue during the transition phase.
[09:57:28] <NodeX> best advice is to forget everything you know about SQL if you want to adopt a schemaless way of attacking a problem
[09:57:31] <ndee> actually, what I wanna do is the following: analyze an Apache log file that contains the search queries. So I extract the data and put it into a mongodb to analyze the data.
[09:57:51] <NodeX> hadoop is your best way ndee
[09:58:02] <ron> BRING OUT THE BIG GUNS!
[09:58:04] <NodeX> there are already scripts for this
[09:58:05] <ron> ;)
[09:58:09] <ndee> :D
[09:58:18] <NodeX> loggly use hadoop then mongo to store the result
[09:58:20] <NodeX> +s
[09:58:27] <ndee> I mean, the dataset is pretty small
[09:58:43] <ron> okay, I'm hungry.
[09:58:55] <NodeX> go eat then :P
[09:59:06] <ndee> there are around 30'000 searches a day.
[09:59:19] <ndee> I'm always a little bit afraid of hadoop, I mean... that's something that CERN uses :D
[09:59:32] <ron> haha, just noticed the title on twitter for MacYet ;) - https://twitter.com/MacYET
[10:00:23] <NodeX> ndee : in my app I have a "history" collection which is basically the same and I aggregate it
[10:00:27] <NodeX> but I do it in sections
[10:00:57] <ndee> NodeX, I mean, I could also compute the data daily and then just store the results, which is what I might do.
[10:01:54] <NodeX> I do it hourly and upsert a collection with daily data
[10:01:59] <NodeX> if that makes sense
[10:02:05] <ndee> it does
[10:02:21] <NodeX> 6 aggregations take less than 5 seconds
[10:02:35] <ndee> NodeX, is that something you built yourself or are using an existing piece of software?
[10:02:37] <NodeX> does about 150k docs at a time
[10:02:41] <NodeX> built it myself
[10:02:53] <NodeX> quick dirty script that runs in a queue
[10:03:21] <ndee> hm, gonna do it also quick and dirty then :D
[10:03:26] <NodeX> that way I can aggregate over time by aggregating the reports collection
[10:03:52] <NodeX> giving it near realtime plus timeline options
[10:03:56] <ndee> reports collection, smart :D
[10:04:55] <ndee> thanks for leading me into the right direction
[10:05:00] <NodeX> I also export my "history" collection - 3 days every day (I only keep 3 days of data in it) which keeps the aggregations fast
[10:05:15] <NodeX> I store the exported data as json in amazon arctic
[10:05:27] <NodeX> incase I ever need to aggregate on everything again
[10:05:33] <NodeX> no probs :)
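A rough sketch of the rollup pattern NodeX describes, in the shell: aggregate the raw queries/history data on a schedule and upsert the totals into a reports collection (names and day boundaries here are hypothetical):

    // unix-second bounds for one day (2012-10-26 UTC) and a bucket key
    var startOfDay = 1351209600, endOfDay = 1351296000, day = "2012-10-26";
    db.queries.aggregate(
        {$match: {timestamp: {$gte: startOfDay, $lt: endOfDay}}},
        {$group: {_id: "$searchMake", count: {$sum: 1}}}
    ).result.forEach(function (r) {
        // $set (not $inc) so re-running the job for the same day stays idempotent
        db.reports.update({day: day, searchMake: r._id},
                          {$set: {count: r.count}},
                          true);   // upsert
    });

Timeline reports can then be built by aggregating the much smaller reports collection instead of the raw data.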
[12:19:49] <esc> is there any way to convert zlibs output into unicode?
[12:22:15] <esc> stackoverflow helped
[12:23:30] <esc> http://stackoverflow.com/questions/11813677/how-to-insert-zlib-data-to-mongo-unicode-issue
[12:25:16] <the-gibson> s
[12:27:35] <esc> though my pymongo doesn't seem to have the binary module
[12:28:53] <esc> oh, it's in bson
[12:58:07] <sulaiman> how many minutes do I have to wait after I click on "Access My Course" to access my course? (M101: MongoDB for Developers).
[14:40:07] <scoates> Whoopsie: http://seancoates.com/blogs/mongodb-elections
[14:43:56] <vhpoet> Hi ppl I'm new to mongodb, and I have a very simple question. http://pastie.org/5119146 Can you please help me?
[14:44:56] <modcure> vhpoet, i bet its in the single quotes '2147483647'
[14:45:57] <NodeX> ^^ string vs int
[14:47:02] <vhpoet> ah ok, now I see. Thank you :)
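The underlying gotcha, for reference: MongoDB comparisons are type-sensitive, so a value stored as the string '2147483647' never matches a query for the number 2147483647. A tiny illustration with a hypothetical things collection:

    db.things.insert({n: "2147483647"});          // stored as a string
    db.things.find({n: 2147483647}).count();      // 0 - number does not match string
    db.things.find({n: "2147483647"}).count();    // 1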
[14:49:51] <BramN> Is it possible to use find() to find regexes, stored in MongoDB, that match a value passed in the query? So I store a number of regexes in Mongo and want to find the documents whose regex matches with a value i send to find()
[14:50:19] <NodeX> that's a percolator and no
[14:51:13] <BramN> NodeX, was afraid it wouldn't be possible...thanks anyway! Btw, what's a percolator? (feeling stupid now...)
[14:52:24] <NodeX> exactly what you described
[14:52:35] <NodeX> (is a percolator)
[14:53:27] <BramN> Okay, thanks...never heard of it before...guess I'l just have to loop through them...
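A sketch of that client-side loop in the shell, assuming each stored document keeps its regex as a string in a hypothetical pattern field:

    var value = "text to test against the stored patterns";   // hypothetical input
    var matches = [];
    db.patterns.find().forEach(function (doc) {
        // rebuild each stored pattern and test the incoming value against it
        if (new RegExp(doc.pattern).test(value)) {
            matches.push(doc);
        }
    });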
[15:03:20] <bhosie> i understand that the oplog file will split multi updates into individual updates but what about batch inserts? are those also split into individual inserts? if i have a write intensive collection with batches of 20K docs being inserted roughly every 30 seconds, do i / should i consider increasing my oplog size? I have a 3 member replica set
[15:05:28] <vhpoet> Hey, one another very simple question. http://pastie.org/5119235
[15:07:34] <Gargoyle> vhpoet: Not entirely sure what dbref() is doing, but I'll have a guess that your user data is not being stored under the field names you think it is!
[15:08:10] <vhpoet> it's a reference to another collection's document
[15:08:41] <Gargoyle> IIRC, any such references are just driver sugar. It's not actually part of mongo.
[15:09:19] <Gargoyle> However, I suppose the answer might be to change your find to match the data. ie find({'user': DBRef(…)})
[15:09:44] <vhpoet> oh, ok, thank you. Maybe I should read php mongo doc for this.
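For reference, since a DBRef is stored as an ordinary subdocument, the find has to be given an equal DBRef (same collection name and _id); a sketch with a hypothetical posts collection referencing users:

    var u = db.users.findOne({name: "someone"});         // hypothetical lookup
    db.posts.find({user: new DBRef("users", u._id)});    // match the whole reference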
[15:24:10] <nopz> Which is faster: updating the whole document when I only have to change one key:value of it, or using $set: {key: value}?
[15:24:51] <nopz> db.docs.update({'unique': 1234}, doc) or db.docs.update({'unique': 1234}, {$set: {key: value}})
[15:25:40] <skot> I would suggest $set
[15:27:27] <unpaidbill> Has anyone here used the perl driver and the aggregation framework together? I'm having some trouble figuring out how to structure my request
[15:27:29] <skot> vhpoet: there were two problems. One, field names are case sensitive, two, you need to use the correct type as has already been suggested: DBRef(…)
[15:28:10] <nopz> skot, Ok, because I'm not sure about performance for a single document.
[15:29:02] <skot> nopz: in general $set is best. It reduces network data, oplog storage, and is more readable in many cases.
[15:29:24] <nopz> ok, i'll stick to that so
[15:29:42] <nopz> thank you for your opinion on that point.
[15:51:52] <fatninja> if I would want to upload an image
[15:52:01] <fatninja> should I store it directly in a document
[15:52:07] <fatninja> or should I store the path to it in the doc ?
[15:52:51] <fatninja> I'm thinking the path, because if I don't need the image, it isn't necessary to retrieve it
[15:53:44] <jY> you can use gridfs to store the image
[15:54:13] <jY> if you don't want to use that.. i'd store it on the filesystem and the path in the doc
[15:54:21] <fatninja> an image is considered a "large file" ?
[15:54:32] <fatninja> Related driver docs: PHP, Java, Python, Ruby, Perl
[15:54:38] <fatninja> plus I'm using server-side javascript
[15:54:43] <fatninja> so it looks like I don't have a driver for it
[15:57:19] <NodeX> there is a node.js driver
[15:57:52] <NodeX> also look at gridfs if you want to store it in the database
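A minimal sketch of the store-the-path option in the shell (field names hypothetical); GridFS, via the driver or the mongofiles tool, is the alternative when the image bytes themselves need to live in the database:

    // keep the binary on disk or a CDN and reference it from the document
    db.assets.insert({
        title: "avatar",
        imagePath: "/uploads/2012/10/avatar.png",   // hypothetical path
        uploadedAt: new Date()
    });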
[15:59:19] <esc> is mongodb developed by a company?
[16:03:16] <skot> The main contributor is 10gen, yes.
[16:03:22] <finalspy> Hi everybody, new to mongo and sharding, I'd like to know how to shard a collection on a key that may be null, is there a way to do that ?
[16:03:48] <skot> yes, null is a valid value; the shard key must be immutable for the docs so you cannot change it.
[16:04:12] <skot> (a delete + insert works if you need to change the value)
[16:04:49] <finalspy> in fact i get this : "errmsg" : "found null value in key { area: null } for doc: { ...
[16:05:25] <finalspy> when running db.runCommand("shardcollection" ...
[16:06:05] <finalspy> i'm using mongo 2.2.0 on linux mint (debian) installed from 10gen repos
[16:07:52] <esc> is there a fast way to check if an _id is contained in a collection?
[16:08:42] <kali> find ? :)
[16:09:33] <esc> perhaps w/o returning the document?
[16:10:17] <esc> i'm using find_one at the moment
[16:10:58] <skot> you can do this: db.coll.count({_id:value}) and see if it 0/1
[16:11:39] <esc> skot: thanks, i'll try that
[16:12:58] <skot> you can also use a covered query to only use the index to return only the _id value, but count is effectively the same.
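The two shapes skot mentions, spelled out for a hypothetical id value:

    var someId = ObjectId("5089d5a1e4b0d2a7c3f1b2c4");
    db.coll.count({_id: someId});                      // 0 or 1, no document returned
    db.coll.find({_id: someId}, {_id: 1}).limit(1);    // should be satisfied from the _id index alone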
[16:32:10] <finalspy> so I want to shard collection, my index is on area field but sh.shardCollection(... gives me "errmsg" : "found null value in key { area: null } for doc: { ...
[16:32:51] <finalspy> In fact I noticed that some docs didn't even contain the area attribute, which mongo complains is null ...
[16:33:23] <finalspy> Does that mean I can only use shard keys from fields present in all documents of a collection ?
[16:37:28] <finalspy> So the question is: Is it possible to use a field not present on all the documents of a collection as a shard key ? ... seems not but I can't find anything on that.
[16:46:10] <kevin_> Hi Guys, quick question, is there a special option in mongo to get the distance from the queried point to the records received, ex, place 1 is 0.1m away, place 2 is 0.11m away
[16:46:58] <kevin_> (and also sort the results by distance)
[16:47:31] <kevin_> ( nevermind, found geoNear )
[16:58:58] <skiz> finalspy: no it's not possible afaik.
[16:59:13] <skiz> the config server wouldn't know where to find it if the shard key is missing
[17:06:27] <finalspy> skiz: so what I have to do is add the attribute in every document with null value or do you think of a better way to do that ?
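Since the shardCollection error is complaining about the null/missing key values themselves, the backfill probably needs a real placeholder rather than null; a sketch, assuming "unknown" is an acceptable sentinel for area:

    // give every document lacking the field a value the shard key check will accept
    db.mycoll.update(
        {area: {$exists: false}},
        {$set: {area: "unknown"}},   // hypothetical sentinel value
        false,   // upsert
        true     // multi
    );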
[17:09:26] <esc> skot: my collection's count takes only a single arg
[17:42:20] <kevin_> what is the measurement of maxDistance in the geospatial call? meters, miles, kilometers?
[17:42:25] <kevin_> (for instance, if I put $maxDistance: 5 is that 5 miles or 5 meters)
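For reference on the units question (left unanswered above): with a plain 2d index, $maxDistance is in the same units as the stored coordinates (degrees for longitude/latitude), while spherical queries such as the geoNear command with spherical: true work in radians, so a real-world distance has to be divided by the Earth's radius. A sketch with a hypothetical places collection:

    db.places.ensureIndex({loc: "2d"});
    db.runCommand({
        geoNear: "places",
        near: [-73.98, 40.75],           // hypothetical [longitude, latitude]
        spherical: true,
        maxDistance: 5 / 6378.1          // 5 km expressed in radians (Earth radius in km)
    });
    // each result carries a dis field, and results come back sorted by distance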
[17:45:42] <wtpayne> Hi!
[17:46:44] <wtpayne> Any 10gen people here I could ask a question to?
[17:47:10] <ruffyen> the 10gen blog is down
[17:47:58] <leandroa> hi, is there a way to disable notablescan option for shell queries?
[17:49:18] <wtpayne> @ruffyen - as is the mongodb blog.
[17:49:45] <wtpayne> Anybody here got experience recovering when you run out of disk space?
[17:49:56] <ruffyen> it was down yesterday as well
[17:51:40] <wtpayne> Not a good sign, eh?
[17:51:42] <wtpayne> :-)
[17:52:11] <ruffyen> heh
[17:52:23] <ruffyen> i figured maybe it had something to do with the AWS EC2 crash
[17:52:44] <wtpayne> Could be .. but EC2 has been back up (for us at least) for a while now...
[17:52:49] <ruffyen> yeah
[17:53:00] <ruffyen> figured i would come here and make sure someone knew about it :D
[17:53:14] <wtpayne> Dutiful citizen that you are...
[17:53:15] <wtpayne> :-)
[17:53:28] <wtpayne> (polite applause)
[17:53:43] <ruffyen> more to the point there are some articles i wanted to read during my lunch break :D
[17:53:49] <wtpayne> heh.
[17:54:28] <wtpayne> Good thing mongodb is nice & simple. - not all that much reading required.
[17:54:32] <ruffyen> as for your space recovery, no i never have but you should be able to just stop mongod, add space to drive, delete mongo.lock and then start mongod again
[17:54:47] <wtpayne> Heh. I cannot add space to the drive, unfortunately.
[17:55:03] <ruffyen> can you get rid of data?
[17:55:22] <wtpayne> I am considering just stepping in and deleting files from the data directory directly.
[17:55:29] <ruffyen> or, add another mongo instance and shard?
[17:55:43] <wtpayne> I tried dropping the DBs from within mongo, but I get an exception.
[17:55:44] <ruffyen> you can just drop collections
[17:56:13] <wtpayne> (trying that right now)
[17:56:23] <ruffyen> well im pretty sure if you just willy nilly start killing .0 .1 you will have bigger issues
[17:56:57] <wtpayne> Yeah.. dropping collections does not work either.
[17:57:04] <ruffyen> what is the exception?
[17:57:10] <wtpayne> I keep getting SyncClusterConnection::update prepare failed exceptions.
[17:57:29] <wtpayne> ... Actually, come to think about it, it might be the config servers that have gone down.
[17:57:58] <ruffyen> yeah googling that error points at config servers
[18:02:49] <wtpayne> Thanks... trying to find out which machines the config servers are on and bounce them.
[18:14:44] <wtpayne> Drat. Bouncing the config servers did not work.
[18:24:37] <wtpayne> Ok. So I have a bunch of DBs, some of which are old and not needed anymore.
[18:25:33] <wtpayne> I am happy to totally lose those DBs, so deleting the files: <DBNAME>.* from the mongodb data directory (on all shards) does not seem (to me) to be that unreasonable.
[18:25:50] <wtpayne> Particularly since the dropDatabase() command does not work.
[18:39:50] <wtpayne> Nobody going to try and stop me?
[18:39:53] <wtpayne> :-)
[18:49:08] <ruffyen> have fun :D
[18:53:26] <wtpayne> Well ... I tried. Didn't work.
[18:53:28] <wtpayne> :-(
[18:53:38] <wtpayne> But nothing exploded, either, so....
[19:07:37] <ruffyen> +1
[19:21:19] <WoodsDog> we just upgraded from 2.0 to 2.2. we have a read-only user that was able to do a backup with mongodump
[19:21:31] <WoodsDog> he is no longer allowed to do the backup, mongodump fails
[19:21:56] <WoodsDog> is there a way to do a back up from a read-only user?
[19:25:27] <sinisa> what are the cons of using the aggregation framework too much :)
[19:31:03] <crudson> didn't give us much chance to answer that one...
[19:40:12] <wtpayne> ... man in a hurry.
[19:40:14] <wtpayne> :-)
[20:57:14] <jergason> hello friends
[21:02:08] <meghan> hello jergason
[21:20:35] <Dr{Wh0}> trying to test sharding and see how it scales but I am not getting expected results. I have 4 shards setup and if I run my test insert app to add 5m rows as fast as possible I get 120k/s inserts if I direct each app to a specific shard. If I run just 2 apps connected to 2 separate routers connected to a sharded collection where I see "ok" distribution I end up with about 30k/s so it seems as if it does not scale correctly. Where could the bottleneck be?
[21:20:37] <Dr{Wh0}> I tried with 1 router or 2 routers I have 3 config servers.
[22:07:25] <TecnoBrat> one of the advantages is family, heh
[22:07:30] <TecnoBrat> whoops MT
[23:32:17] <jmpf> I must be doing something wrong -- http://pastie.org/private/xgs1ulpuzy2ljbe46ciq0q - why is it doing a full collection scan even w/hint?