[00:28:48] <diegok> I'm testing my shard setup using some collections and I see this on the logs: https://raw.github.com/gist/3520917/62f1185fbbd477e848546c6db05b13a6f75db2a6/gistfile1.txt
[00:29:25] <diegok> Don't find anything on google and I don't know how to look further :-/
[02:27:10] <dirn> @Init--WithStyle- fileSize is the total size of all files allocated for the database. It comes from db.stats(). You can get more details at http://www.mongodb.org/display/DOCS/Monitoring+and+Diagnostics
[02:28:02] <dirn> actually, http://docs.mongodb.org/manual/reference/database-statistics/ might be a better reference
[02:33:27] <crudson> db.serverStatus() will give you a load of stuff
[02:49:18] <retran> waaa i wanna know if i should use mongo or couch
[03:53:06] <hdm> hello, running into a fun issue with 2.2; sent 11 million upserts, but the record count isn't changing, however the journal files continue to grow.
[09:41:03] <DinMamma> Hello y'all. I have a 3-member replica set, all the members run 2.0.7, and I'm in the process of upgrading to 2.2. I have stopped all write activity to the databases and upgraded one of the slaves. When the upgrade was complete it had the state "syncing to: PRIMARY:PORT", which I think is a bit odd since the optime was the same on all machines before the upgrade of the first node.
[09:41:10] <DinMamma> Is this something to be alarmed about?
[09:41:25] <DinMamma> The message is gone now though.
[09:50:00] <DinMamma> Same thing happening on the second node I've upgraded. Syncing to master..
[11:14:42] <PDani> in case of autosharding, suppose I have some "hot" chunks, which are accessed more frequently than others. Is there any strategy in mongos which moves these chunks to spread the load across instances?
[11:15:13] <Derick> PDani: balancing is per data size - on shard key. mongos doesn't keep stats on what's used AFAIK
[11:15:42] <Derick> PDani: sounds like it might be a feature request, though...
[12:16:27] <OmidRaha> When I run dex (https://github.com/mongolab/dex), it always returns zero for all results, but I have many records in the system.profile collection.
[13:06:05] <gigo1980> yes, I mean the mongo shell, it works fine
[13:09:14] <diegok> Has anyone any idea where to look at to understand this problem: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/KZ2dCOrpPmE ?
[13:47:16] <Gargoyle> Anyone tried using the latest PHP driver from github?
[13:48:02] <Gargoyle> I'm getting undefined symbol _mongo_say errors.
[13:49:04] <Derick> Gargoyle: master has that problem, yes
[13:50:00] <Derick> hoping to get 1.3.0beta2 out tomorrow
[14:11:17] <snizzo> I have a large amount of data. I need to find exactly the documents that have, let's say, tags x and y but not j and z. What is the best performance-oriented approach to this kind of problem?
[14:21:12] <wereHamster> find exists:x and y, not exists j and z.
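wereHamster's answer treats each tag as a field to test with $exists; if the tags instead live in a single array field (a hypothetical field name `tags` is assumed here), the same "x and y but not j and z" filter can be built in one query document. A sketch in pymongo terms:

```python
# Filter for documents whose "tags" array (hypothetical field name)
# contains both x and y but neither j nor z.
query = {
    "tags": {
        "$all": ["x", "y"],   # must contain both of these
        "$nin": ["j", "z"],   # must contain neither of these
    }
}
# Used as: coll.find(query)
# A multikey index helps the $all part: coll.create_index("tags")
```

With the alternative flat-field layout wereHamster describes, the equivalent filter would use `{"x": {"$exists": True}, ..., "j": {"$exists": False}, ...}`, but note that $exists:false clauses cannot use an index.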
[14:25:48] <Gargoyle> Is there any reason why calling collection->save($doc) on the same document in quick succession might result in documents being duplicated in the db… well, actually, the dups seem to be empty.
[14:27:19] <Aram> hi, is it possible to do tail -f -n0 instead of tail -f with capped collections and tailable cursors?
[14:27:35] <Aram> basically I care for new stuff, but not what was already in the collection.
[14:30:20] <Gargoyle> Or perhaps modifying a collection while you are looping over the cursor?
[14:31:06] <Aram> I see something about an OPLOGREPLAY option but that requires a timestamp field...
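One common workaround for Aram's "tail -f -n0" question is to record the newest existing _id before opening the tailable cursor, then filter on it. This is a sketch (pymongo-style, helper name hypothetical) of just the filter construction; it assumes _id values increase over time, which holds for ObjectIds:

```python
# Build a "new documents only" filter for a tailable cursor on a capped
# collection, given the _id of the newest document already present.
def new_only_filter(last_id):
    # Empty collection: match everything. Otherwise match only documents
    # inserted after the one we last saw.
    return {"_id": {"$gt": last_id}} if last_id is not None else {}

# Typical use (hypothetical collection "coll"):
#   last = coll.find_one(sort=[("$natural", -1)])   # newest existing doc
#   cur = coll.find(new_only_filter(last and last["_id"]), tailable=True)
f = new_only_filter(42)
```

The exact tailable-cursor flag name varies by driver version, so check your driver's docs for that part.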
[14:32:37] <tncardoso> Gargoyle: save usually (most clients) overwrites the whole object. If you want to update an object you should use update with $set
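tncardoso's distinction in concrete terms: save() sends the whole object as a replacement, while an update with $set only touches the named fields. A small sketch (ids and field names hypothetical):

```python
# What save() effectively writes: the entire document, replacing
# whatever was stored under that _id.
full_replace = {"_id": 123, "name": "a", "count": 7}

# What an update with $set writes: only the listed fields change,
# everything else in the stored document is left intact.
partial_update = {"$set": {"name": "a"}}
# Used as: coll.update({"_id": 123}, partial_update)
```

This is why interleaved save() calls from stale in-memory copies can clobber each other, whereas $set updates compose safely.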
[14:49:06] <Gargoyle> Derick: I have reverted to branch "1.2.0" and still getting the error.
[14:51:57] <hdm> will create the main doc and the sub doc if it doesn't exist, and update it otherwise
[14:58:38] <Gargoyle> I think I am still getting the problem after switching to branch v1.2 and doing make clean,phpize,./configure,make,make install,apache restart.
[14:58:49] <Gargoyle> Just going to reboot the servers
[15:12:34] <Gargoyle> I updated to 2.0.7 yesterday, and started seeing apache complaining about a buffer overflow.
[15:13:19] <Gargoyle> So I updated the PHP driver, and then got undefined symbol errors. I rolled back to v1.2, but I must be typing too quickly (client is looking at the test site)
[15:13:54] <Gargoyle> Anyway, I have 1 server seemingly back to normal, so I have knocked the second off the LB, and will redo things at a slower pace.
[15:16:24] <NodeX> you would think that using apache would give you a slower speed!
[15:18:59] <Gargoyle> Yup. An ID and one of the settings from my model, as if mongo had returned an empty doc during iteration and I had created a new empty model.
[15:19:44] <NodeX> so you had one fully saved doc with one _id and one empty doc with just an _id
[15:20:37] <Gargoyle> Any thoughts on why one box is making a mongo.la and a mongo.so and the other isn't? (Both servers should be identical)
[15:21:50] <NodeX> are you building the driver from source ?
[15:37:20] <jmar777> leandroa: i'm not sure if there's a way to do it implicitly, but explicitly, you can do `var db = connect("<secondary-connection-string>");`
[15:59:09] <felix___> Quick question - when mongodb gets a query for a partial record, does it read the whole record off disk and then return the requested fields? Or does it only read the records off disk that are requested?
[15:59:46] <felix___> Which is to say, I have data which consists of a bunch of objects that have a small amount of top line data and then large quantities of additional details.
[16:00:39] <felix___> mostly I'll be requesting the top line data and sometimes wanting to pull the details, debating whether to have a single collection with everything or a small top line data collection and a second collection 1:1 with top line that has the details.
[16:06:36] <wereHamster> felix___: my guess is that it has to read everything, so it can parse the object
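wereHamster's guess matches how field projections behave: they trim what is sent over the wire, but the server still loads the full document to apply them. So for felix___'s layout question, a projection helps bandwidth, while the two-collection split is what keeps the heavy details off the hot path entirely. A sketch with hypothetical field names:

```python
# Projection: exclude the heavy sub-document from query results.
# The server still reads the whole document; only the transfer shrinks.
top_line_only = {"details": 0}
# Used as: coll.find({}, top_line_only)

# Alternative schema sketch: two collections sharing the same _id, so
# listing queries never touch the large detail documents at all.
summary_doc = {"_id": 1, "title": "widget", "price": 10}
detail_doc = {"_id": 1, "blob": "...large additional details..."}
```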
[16:12:08] <ggoodman> when sorting on multiple fields how does mongodb determine the relative priority of each field (ie: sort by last name THEN first name)
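To ggoodman's question: sort priority follows the order of the keys in the sort specification, first key first, later keys as tie-breakers. Drivers therefore take an ordered sequence of (field, direction) pairs rather than a plain dict (which, in 2012-era Python, did not preserve order). A pymongo-style sketch:

```python
# Sort by last_name first, then first_name to break ties.
# 1 = ascending, -1 = descending.
sort_spec = [("last_name", 1), ("first_name", 1)]
# Used as: coll.find().sort(sort_spec)
```

A compound index in the same key order, e.g. on (last_name, first_name), lets this sort run without an in-memory sort stage.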
[16:16:38] <thomasthomas> I am trying to query dates embedded inside an array; the date I want is defined by other values in the array, mainly (kind=sale | kind=capture & status=success). Can I pick which array created_at I want using mongo / javascript, or do I have to include the field on insert? https://gist.github.com/3532063
[16:23:06] <remonvv> Hi guys. A question with some urgency. We added two shards to a cluster but it refuses to move chunks there. Nothing in the logs.
[16:36:38] <remonvv> "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress"
[16:41:44] <ajsharp> Is it possible to atomically update embedded arrays? Basically, I'm trying to model an Inbox, and I want the inbox all kept in a single document
[16:42:25] <ajsharp> {user_id: 1, conversations: [ {id: 2, messages: [{id: 3, body: 'this is the message body'}] } ]}
[16:42:45] <ajsharp> I want to perform atomic push operations on a particular conversation's messages array
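For ajsharp's case, a single atomic $push into one conversation's messages array is possible with the positional operator "$", which targets the array element matched by the filter. A sketch using the field names from his sample document (the new message content is made up):

```python
# Match the user's inbox document AND the target conversation inside
# its conversations array.
filter_ = {"user_id": 1, "conversations.id": 2}

# "$" refers back to the conversations element matched above, so the
# push lands in that conversation's messages array, atomically.
update = {"$push": {"conversations.$.messages":
                    {"id": 4, "body": "new message"}}}
# Used as: coll.update(filter_, update)
```

One caveat worth knowing: the positional operator only addresses the first matching array element, and (in this era of MongoDB) it cannot reach into doubly-nested arrays, so updates deeper than one array level need a different layout.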
[16:55:13] <kzoo> Which lucene query method should I be using to search for a particular ObjectId? IdsQuery() seems to use the _uid attribute and not the document _id
[17:09:18] <Gargoyle> Dunno what I did on box number 2 to make it work, but I have had to copy the mongo.so file over.
[17:09:52] <Gargoyle> deleting the driver dir and recloning branch v1.2 still didn't work. Both servers are running the same software. #mystery!
[17:16:47] <Dr{Who}> Q. is it possible to dump the mongo journal, similar to the mysql replication log? I need to find when a query was run and hopefully more info on the query.
[17:18:17] <Gargoyle> I've just done a make clean and ./configure on both and am getting a diff.
[17:18:19] <Gargoyle> < checking whether to include code coverage symbols... no
[17:24:39] <remonvv> WARNING: 2.2 shards cannot be added to a 2.0 cluster as opposed to what is advertised in changelog.
[17:51:48] <estebistec> I don't see the new expireAfterSeconds feature of ensure_index documented for pymongo. Safe to assume the kwarg is expire_after_seconds?
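On estebistec's question: pymongo passes unrecognized ensure_index/create_index keyword arguments through into the index document it sends to the server, so the server-side camelCase name expireAfterSeconds (not a snake_case variant) should be the one to use; worth confirming against your pymongo version's docs. A sketch with a hypothetical field name:

```python
# TTL index sketch: documents are eligible for deletion once their
# "created_at" value (which must be a BSON date) is older than 3600s.
# The camelCase option name is passed through to the server as-is.
ttl_options = {"expireAfterSeconds": 3600}
# Used as: coll.ensure_index("created_at", **ttl_options)
```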
[18:12:03] <Jester01> hey folks, I have a collection where Id is a guid and it has its index. For paging, I am using skip and sort on that id, but it's slow. am I doing something wrong?
[18:13:00] <Jester01> and log says "nscanned" includes all the skipped records too
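That nscanned number is the expected cost of skip(): the server still walks every skipped index entry, so deep pages get linearly slower. The usual fix is range ("keyset") paging, resuming strictly after the last key of the previous page instead of skipping. A sketch (helper and names hypothetical):

```python
# Keyset pagination over an indexed, unique id field: O(page size)
# per page instead of O(offset + page size) with skip().
def page_filter(after_id=None):
    # First page: no filter. Later pages: resume after the last id
    # the client saw on the previous page.
    return {} if after_id is None else {"_id": {"$gt": after_id}}

# Used as: coll.find(page_filter(prev_last_id)).sort("_id", 1).limit(100)
```

The trade-off is that this only supports next/previous style paging, not jumping straight to page N.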
[18:50:24] <kgraham> i recently had to replace a shard (one shard in a three-shard cluster)
[18:50:41] <kgraham> I copied over the data to the new server, repointed DNS at the new server, and now I get this error
[18:50:43] <kgraham> printShardingStatus: not a shard db!
[18:50:51] <kgraham> shardsvr is true, not sure how to fix this
[18:50:59] <kgraham> any help would be greatly appreciated
[18:51:52] <kgraham> the other two 'db.printShardingStatus()' show correct information
[19:01:47] <dstorrs> kgraham: does the new shard box know where the config servers are?
[19:04:05] <dstorrs> question for the room: I've got ~1T of data currently split between two shard boxes. I'm about to add a third. Is there any way that I can get an educated guess on how long the re-sharding process will take?
[19:04:29] <kgraham> dstorrs: thank you, how do i check this?
[19:11:00] <kgraham> printShardingStatus: not a shard db!
[19:11:14] <kgraham> on the other two, it shows expected info
[19:11:28] <dstorrs> it's *possible* that the new shard is still in the process of getting caught up, and it will start reporting as a shard when it's ready.
[19:11:31] <kgraham> the mongos processes are routing traffic to all of them
[19:24:02] <dstorrs> anyone know how to estimate how long it will take to re-shard 1T of data when a new box is added?
[19:25:56] <dstorrs> kgraham: I'm fairly new to Mongo myself, and have only set up sharding once. We did hit more or less the same situation you're in, and it resolved itself after a while.
[19:26:16] <dstorrs> Aside from that and the config server thing, I'm not sure what to recommend. Sorry. :<
[19:41:53] <hdm> greeeeps: also check your tps with iostat, concurrency hurts performance once you max out iops
[19:47:15] <greeeeps> i will make it one Mongo instance
[19:47:32] <greeeeps> iostats for a possibility of IO bottleneck ?
[19:52:14] <eka> hi all... did someone tried TTL with pymongo? can't make it work though
[20:11:56] <dabar> Hello. I would like to know whether MongoDB would help me with producing excerpts around matched content in searches like LIKE '%%' AND LIKE '%%'
[20:12:37] <dabar> I am using preg_match with PHP at the moment, and it is too slow when using more than 4 keywords because I have to account for permutations, unlike with LIKE ... AND LIKE ...
[20:43:33] <hydester> any recommended Linux query tools for working with complex documents? i use the console, which is sufficient for some things, but a richer gui/console could have its advantages. i know about http://www.mongodb.org/display/DOCS/Admin+UIs, but i was hoping for some personal recommendations.
[20:53:40] <mog_> Hi there, I'm using mongodb with the native node.js driver and I'm witnessing something that surprises me: during a find() query, after the cursor is exhausted thanks to the each function, a last call is made with the value null, and it seems to idle afterwards. Any idea if it's normal or not?
[21:53:56] <Almindor> if you have documents with an array of ints which can be empty with ~1/3 probability, is it better to use null/undefined for the field or [] if you search in the array (and the collection is quite large)?
[23:13:38] <dstorrs> I have just now added a new shard box to my cluster. db.printShardingStatus() lists it under the 'shards' section at top, but not under the 'databases' section below. I think this is simply because data has not moved over yet. How would I confirm this?
[23:30:19] <thomasthomas> I need to batch insert an array of objects as documents via node & mongolian any ideas?
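For thomasthomas's batch insert: most MongoDB drivers of this era accept an array of documents in a single insert call rather than requiring one call per document; I can't vouch for mongolian specifically, but its insert is reported to accept an array like the underlying node driver does. The concept, sketched in pymongo terms (collection name hypothetical):

```python
# Build a batch of documents and insert them in one round trip.
docs = [{"n": i, "label": "item-%d" % i} for i in range(3)]
# 2012-era API: coll.insert(docs)
# Modern pymongo equivalent: coll.insert_many(docs)
```

Batching like this matters mostly for insert throughput: one network round trip and one lock acquisition pattern instead of N.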
[23:36:10] <dstorrs> if anyone can answer this sharding question, it would be much appreciated: http://stackoverflow.com/questions/12207059/mongodb-new-shard-appears-but-is-not-showing-content-is-this-expected
[23:43:38] <thedahv> Hello everybody. Does the ruby driver support populating linked fields?
[23:44:16] <thedahv> For example, I can do "Model.findOne({id: something}).populate('linked_field')" to get the full linked document in the returned object
[23:44:59] <thedahv> Maybe it's called something else in the ruby driver