PMXBOT Log file Viewer


#mongodb logs for Thursday the 30th of August, 2012

[00:28:48] <diegok> I'm testing my shard setup using some collections and I see this on the logs: https://raw.github.com/gist/3520917/62f1185fbbd477e848546c6db05b13a6f75db2a6/gistfile1.txt
[00:29:25] <diegok> Don't find anything on google and I don't know how to look further :-/
[00:29:52] <diegok> using 2.2
[00:40:11] <daniel____> Is anyone here?
[00:42:08] <daniel____> Does sharding in mongodb only support range partitioning? Can't you use hash?
[00:45:37] <diegok> Not only ranges :)
[00:46:42] <daniel____> What do you mean?
[00:54:44] <diegok> I only speak English or Spanish; I'm using Google Translate. Sorry :-/
[00:54:54] <daniel____> OK
[00:59:16] <daniel____> I mean, can I only use ranges on a field for my partition sharding?
[01:15:11] <snizzo> hey
[01:15:43] <snizzo> I'm looking for some docs or places to learn high-performance mongodb. I mean databases with 50+ million documents
[01:15:48] <snizzo> any hint? :)
[01:30:33] <jarrod> probably sharding and indexing
[02:00:18] <Init--WithStyle-> under mongoHub , any idea what "fileSize" refers to when looking at database stats?
[02:00:54] <Init--WithStyle-> I'm trying to figure out what the total footprint of my database currently is
[02:01:17] <Init--WithStyle-> memory wise
[02:27:10] <dirn> @Init--WithStyle- fileSize is the total size of all files allocated for the database. it comes from db.stats(). you can get more details at http://www.mongodb.org/display/DOCS/Monitoring+and+Diagnostics
[02:28:02] <dirn> actually, http://docs.mongodb.org/manual/reference/database-statistics/ might be a better reference
[02:33:27] <crudson> db.serverStatus() will give you a load of stuff
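dirn and crudson's pointers, as a shell session sketch (`mydb` is a placeholder database name; this needs a live mongod connection):

```javascript
// mongo shell sketch, not runnable standalone.
var d = db.getSiblingDB("mydb");
d.stats()                 // fileSize: bytes allocated on disk for this database's files
db.serverStatus().mem     // resident / virtual / mapped memory for the whole server
```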
[02:49:18] <retran> waaa i wanna know if i should use mongo or couch
[03:53:06] <hdm> hello, running into a fun issue with 2.2; sent 11 million upserts, but the record count isn't changing, however the journal files continue to grow.
[07:37:30] <[AD]Turbo> hi there
[07:49:33] <brianpWins> is there a verified ppa for mongodb that I can use in ubuntu ?
[08:12:11] <IAD> brianpWins: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian-or-ubuntu-linux/
[08:19:35] <brianpWins> IAD thanks! I just came across that a few minutes ago. install went well. Thanks for the link!
[08:53:15] <eliOcs> hello
[08:53:33] <eliOcs> how can I query the last N modified documents?
[08:53:52] <eliOcs> Stack Overflow states I should sort by the data_ins property
[08:54:07] <eliOcs> can I index that property?
[09:03:07] <TrahDivad> Can someone tell me if _id can be geospatial?
[09:05:28] <NodeX> no
[09:05:49] <NodeX> eliOcs : save the timestamp
[09:06:26] <NodeX> TrahDivad : geospatial needs to be in a certain format and _id is needed as a string or ObjectId
[09:06:38] <algernon> erm.
[09:06:41] <algernon> _id can be anything.
[09:06:43] <TrahDivad> NodeX: Thanks for the answer!
[09:06:50] <NodeX> not a geo spatial
[09:06:58] <NodeX> at least not last time I checked
[09:07:33] <algernon> haven't tried that, but it can definitely be other than string or oid
[09:08:16] <NodeX> well yes, obviously an int etc, I was pointing out that geo is more specific than _id
[09:09:13] <algernon> righty-o. _id can't be an array.
[09:09:25] <algernon> interestingly, it can be another document
[09:25:53] <eliOcs> NodeX, thanks I agree thats the best option
[09:26:52] <NodeX> if you need to query the latest added you can use _id and $natural: -1
[09:27:00] <NodeX> (for the sort)
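For eliOcs' last-N-modified question and NodeX's hint, a shell sketch (collection and field names are illustrative; `data_ins` is the timestamp field eliOcs mentioned):

```javascript
// Index the timestamp you save on each write, then sort on it.
db.docs.ensureIndex({ data_ins: -1 })
db.docs.find().sort({ data_ins: -1 }).limit(10)   // last 10 modified

// Latest *inserted* documents, leaning on ObjectId _ids or on-disk order:
db.docs.find().sort({ _id: -1 }).limit(10)
db.docs.find().sort({ $natural: -1 }).limit(10)
```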
[09:41:03] <DinMamma> Hello y'all. I have a 3-member replica set, all the members run 2.0.7, and I'm in the process of upgrading to 2.2. I have stopped all write activity to the databases and I upgraded one of the slaves. When the upgrade was complete it had the state of "syncing to: PRIMARY:PORT", which I think is a bit odd since the optime was the same on all machines before the upgrade of the first node.
[09:41:10] <DinMamma> Is this something to be alarmed about?
[09:41:25] <DinMamma> The message is gone now though.
[09:50:00] <DinMamma> Same thing happening to the second node I've upgraded. Syncing to master..
[10:21:48] <Mmike> hi, lads
[10:22:05] <Mmike> I just upgraded mongo to latest one (from mongodb debian repos), and when I want to start mongodb, I get this:
[10:22:07] <Mmike> Starting database: mongodbstart-stop-daemon: unrecognized option '--interleave=all'
[10:23:26] <Derick> yeah, I had reported that too
[10:23:33] <Derick> they're looking at it I think
[10:24:38] <Mmike> gnj, should have checked before the upgrade :)
[10:25:06] <NodeX> comment out the switch ;)
[10:25:38] <Derick> as a quickfix, in /etc/init.d/mongodb, add after the "fi" in line 74:
[10:25:43] <Derick> NUMACTL=""
[10:25:48] <Derick> NUMACTL_ARGS=""
[10:31:28] <Mmike> yeps
[10:31:35] <Mmike> thnx Derander
[10:31:38] <Mmike> erm, Derick
[10:34:48] <Derick> we'll get new packages out soon™
[10:41:49] <RoastedBeef> Compiling 2.2.0 on FreeBSD 9 with GCC 4.7 failed
[10:41:59] <RoastedBeef> Same compile procedure worked for 2.0.7
[10:43:17] <RoastedBeef> The culprit seems to be /third_party/boost/libs/program_options/src/utf8_codecvt_facet.o
[10:43:25] <RoastedBeef> Which generated about 2 pages of errors
[11:02:08] <ezakimak> does a unique index allow duplicates for nulls and/or undefineds?
[11:02:30] <PlasmaBunny> hi
[11:02:38] <PlasmaBunny> what is the right syntax to remove an item?
[11:02:45] <PlasmaBunny> for some weird reason mongodb won't remove anything
[11:02:52] <PlasmaBunny> bunny.synapses.remove()
[11:02:54] <PlasmaBunny> is it correct?
[11:03:15] <ezakimak> bunny.synapses.remove({_id: someid })
[11:03:57] <PlasmaBunny> shouldn't remove() remove anything?
[11:04:37] <ezakimak> what is bunny.synapses.count()?
[11:06:42] <PlasmaBunny> 41
[11:06:45] <PlasmaBunny> ezakimak:
[11:06:58] <PlasmaBunny> and if i do db.synapses.find() i get results
[11:07:07] <PlasmaBunny> but if i do db.synapses.remove() - nothing happens
[11:07:10] <PlasmaBunny> and i still get results later
[11:07:13] <PlasmaBunny> WTF
[11:07:27] <PlasmaBunny> surprisingly, phpMoAdmin removes everything just fine
[11:07:31] <PlasmaBunny> what am i doing wrong?
[11:08:06] <ezakimak> is it replicated?
[11:08:06] <PlasmaBunny> remove not working
[11:08:08] <PlasmaBunny> no.
[11:08:23] <Derick> PlasmaBunny: do you really type "bunny.synapses.remove({_id: someid })" or "db.synapses.remove({_id: someid })" ?
[11:08:23] <PlasmaBunny> what is the right way to remove anything?
[11:08:32] <PlasmaBunny> db.synapses.remove({neuroblock1:'503f6f6d85bbf2084c000004'})
[11:08:37] <PlasmaBunny> nothing is removed
[11:08:41] <PlasmaBunny> db.synapses.find({neuroblock1:'503f6f6d85bbf2084c000004'})
[11:08:45] <ezakimak> is there an error returned?
[11:08:46] <PlasmaBunny> all results are still there
[11:08:48] <PlasmaBunny> no.
[11:08:58] <ezakimak> get last error is empty?
[11:09:06] <Derick> is 503f6f6d85bbf2084c000004 not an ID?
[11:09:10] <Derick> in that case:
[11:09:14] <PlasmaBunny> ezakimak: not empty, but not related to this query
[11:09:17] <ppetermann> hey Derick
[11:09:20] <PlasmaBunny> Derick: it's neuroblock1 field
[11:09:23] <Derick> db.synapses.find({neuroblock1: ObjectId( '503f6f6d85bbf2084c000004') })
[11:09:26] <PlasmaBunny> find works by it, remove does not
[11:09:40] <Derick> PlasmaBunny: show a session in a pastebin that demonstrates that please
[11:09:40] <PlasmaBunny> well, find works
[11:09:47] <Derick> ppetermann: hi :-)
[11:09:58] <ppetermann> havent read you in a while =)
[11:10:22] <PlasmaBunny> Derick: http://dpaste.org/a5dIt/raw/
[11:10:24] <PDani> hi
[11:10:31] <PlasmaBunny> is it possible to be a bug in mondogb?
[11:10:33] <PlasmaBunny> mongodb *
[11:10:55] <ezakimak> i would suspect that last
[11:11:04] <Derick> PlasmaBunny: remove() only removes one document by default
[11:11:13] <ezakimak> not according to the docs
[11:11:15] <PlasmaBunny> well, it removes nothing.
[11:11:20] <ezakimak> perhaps a documentation bug then :)
[11:11:43] <Derick> PlasmaBunny: can you show that? Do a count(), remove() and count().
[11:12:17] <PlasmaBunny> http://dpaste.org/sq4md/raw/
[11:12:28] <PlasmaBunny> buuuuuuuuuut... phpMoAdmin removes everything fine....
[11:12:37] <PlasmaBunny> is there a way to see mongo queries log?
[11:13:56] <PlasmaBunny> aaah
[11:13:58] <PlasmaBunny> hmmm
[11:14:03] <Derick> PlasmaBunny: can you do a find() before the remove() first?
[11:14:07] <PlasmaBunny> there was some updates running in loop
[11:14:15] <PlasmaBunny> i stopped php process and it removed nice
[11:14:18] <PlasmaBunny> O_O
[11:14:25] <PlasmaBunny> O_O
[11:14:26] <Derick> so, user error :-)
[11:14:42] <PDani> in case of autosharding, suppose I have some "hot" chunks, which are accessed more frequently than others. Is there any strategy in mongos which moves these chunks to spread the load across instances?
[11:15:13] <Derick> PDani: balancing is per data size - on shard key. mongos doesn't keep stats on what's used AFAIK
[11:15:32] <PDani> thanks
[11:15:42] <Derick> PDani: sounds that it might be a feature though...
[12:16:27] <OmidRaha> When i run dex (https://github.com/mongolab/dex), it always returns zero for all results, but i have many records in the system.profile collection.
[12:16:35] <OmidRaha> {
[12:16:35] <OmidRaha> "linesPassed": 0,
[12:16:35] <OmidRaha> "linesRecommended": 0,
[12:16:35] <OmidRaha> "results": [],
[12:16:35] <OmidRaha> "uniqueRecommendations": 0,
[12:16:35] <OmidRaha> "linesProcessed": 0
[12:16:35] <OmidRaha> }
[12:16:56] <OmidRaha> > db.getProfilingLevel()
[12:16:56] <OmidRaha> 1
[12:17:04] <OmidRaha> > db.system.profile.count()
[12:17:04] <OmidRaha> 67
[12:17:49] <NodeX> use a pastebin
[12:19:21] <ron> heh, kinda amusing mongo produced an application called dex.
[12:21:11] <OmidRaha> http://pastie.org/4616060
[12:23:16] <NodeX> perhaps ask the creator of "dex"
[12:27:40] <qpok> unless using authentication, I can safely use drivers from 2.0.7 era against 2.2.0?
[12:28:12] <qpok> c# drivers to be more precise, 1.4.2.
[12:36:00] <OmidRaha> fixed: $dex -p -n "sampledb.*" mongodb://dex:dexpass@localhost/sampledb
[12:38:04] <NodeX> OmidRaha : add the issue to github or talk to the author
[12:51:31] <gigo1980> hi, i get an error "each is not a function" if i make a find() on a collection
[12:51:42] <gigo1980> any idea why i can not call this ?
[12:52:15] <NodeX> what's the query?
[12:53:06] <NodeX> http://docs.mongodb.org/manual/reference/operator/each/ <---- I imagine you're using it for a query and not an update
[13:02:20] <jmar777> gigo1980: are you talking about the mongo shell? e.g., `db.someCollection.find().each(...)`?
[13:02:28] <jmar777> gigo1980: if so, try `forEach()` instead
[13:03:01] <jmar777> gigo1980: e.g., `db.someCollection.find().forEach(function(doc) { printjson(doc); })`
[13:05:30] <gigo1980> thx
[13:06:05] <gigo1980> yes i mean the mongo shell, it works fine
[13:09:14] <diegok> Has anyone any idea where to look at to understand this problem: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/KZ2dCOrpPmE ?
[13:47:16] <Gargoyle> Anyone tried using the latest PHP driver from github?
[13:48:02] <Gargoyle> I'm getting undefined symbol _mongo_say errors.
[13:49:04] <Derick> Gargoyle: master has that problem, yes
[13:49:22] <Derick> and 1.3.0beta1 too I suppose
[13:49:33] <Derick> we're working on it :-)
[13:49:34] <Gargoyle> Ah. Should I revert to the 1.2 branch, or a specific commit?
[13:49:41] <Derick> 1.2 for now
[13:49:48] <Derick> or rather, v1.2
[13:50:00] <Derick> hoping to get 1.3.0beta2 out tomorrow
[14:11:17] <snizzo> I have a large amount of data. I have to find exactly the documents that have, let's say, tags x and y and not j and z. What is the best performance-oriented approach to this kind of problem?
[14:21:12] <wereHamster> find exists:x and y, not exists j and z.
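wereHamster's answer as a concrete query — a sketch assuming documents shaped like `{ tags: ["x", "y", ...] }` in a collection `items` (both names are illustrative); an index on `tags` helps the inclusion side:

```javascript
// Must contain both x and y, and neither j nor z.
db.items.ensureIndex({ tags: 1 })
db.items.find({ tags: { $all: ["x", "y"], $nin: ["j", "z"] } })
```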
[14:25:48] <Gargoyle> Is there any reason why calling collection->save($doc) on the same document in quick succession might see documents being duplicated in the db… well, actually, the dups seem to be empty.
[14:27:19] <Aram> hi, is it possible to do tail -f -n0 instead of tail -f with capped collections and tailable cursors?
[14:27:35] <Aram> basically I care for new stuff, but not what was already in the collection.
[14:30:20] <Gargoyle> Or perhaps modifying a collection while you are looping over the cursor?
[14:31:06] <Aram> I see something about an OPLOGREPLAY option but that requires a timestamp field...
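One way to approximate `tail -f -n0` with a tailable cursor in the legacy shell — a sketch assuming `log` is a capped collection and that _ids are increasing (roughly true for ObjectIds from a single writer); the `DBQuery.Option` flags are the shell's cursor options:

```javascript
// Note the newest existing document, then tail only what arrives after it.
var last = db.log.find().sort({ $natural: -1 }).limit(1).next();
var cur = db.log.find({ _id: { $gt: last._id } })
                .addOption(DBQuery.Option.tailable)
                .addOption(DBQuery.Option.awaitData);
while (cur.hasNext()) printjson(cur.next());
```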
[14:32:37] <tncardoso> Gargoyle: save usually (most clients) overwrites the whole object. If you want to update an object you should use update with $set
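tncardoso's point in shell terms (PHP's `save()` behaves the same way); the collection and fields here are illustrative:

```javascript
// save() with an _id replaces the entire stored document:
db.people.save({ _id: 1, name: "ann" })   // any other fields previously set are gone

// update() with $set touches only the named field:
db.people.update({ _id: 1 }, { $set: { name: "ann" } })
```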
[14:49:06] <Gargoyle> Derick: I have reverted to branch "1.2.0" and still getting the error.
[14:49:27] <Derick> v1.2
[14:49:30] <Derick> not 1.2.0
[14:49:36] <hdm> Gargoyle: you might want to rework this as an upsert instead
[14:49:48] <hdm> atomic and prevents dupes
[14:51:25] <hdm> ruby syntax -> collection.update({ 'primary_key' => 'abc' }, { '$set' => { 'subdocument.value' => 1 } }, { :upsert => true })
[14:51:57] <hdm> will create the main doc and the sub doc if it doesn't exist, and update it otherwise
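The same upsert in mongo shell syntax (field names as in hdm's ruby example; the third and fourth arguments are the 2.2-era positional upsert/multi flags):

```javascript
// Creates the document if no match exists, otherwise updates it in place.
db.collection.update(
  { primary_key: "abc" },
  { $set: { "subdocument.value": 1 } },
  true,   // upsert
  false   // multi
)
```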
[14:58:38] <Gargoyle> I think I am still getting the problem after switching to branch v1.2 and doing make clean,phpize,./configure,make,make install,apache restart.
[14:58:49] <Gargoyle> Just going to reboot the servers
[15:11:25] <NodeX> what's the problem Gargoyle
[15:12:34] <Gargoyle> I updated to 2.0.7 yesterday, and started seeing apache complaining about a buffer overflow.
[15:13:19] <Gargoyle> So I updated the PHP driver, and then got undefined symbol errors. I rolled back to v1.2, but I must be typing too quick (client is looking at the test site)
[15:13:54] <Gargoyle> Anyway, I have 1 server seemingly back to normal, so I have knocked the second off the LB, and will redo things at a slower pace.
[15:16:24] <NodeX> you would think that using apache would give you a slower speed!
[15:16:32] <NodeX> slow enough speed*
[15:17:00] <Gargoyle> I meant I was probably making mistakes in my typing.
[15:17:32] <NodeX> ah
[15:17:42] <NodeX> with your save() error were they safe writes?
[15:17:53] <Gargoyle> ahh. yeah.
[15:17:55] <NodeX> and when you say blank dupes, did they have an _id ?
[15:17:59] <Gargoyle> That's a console script.
[15:18:59] <Gargoyle> Yup. an ID and one of the settings from my model - as if mongo had returned an empty doc during iteration and I had created a new empty model.
[15:19:44] <NodeX> so you had one fully saved doc with one _id and one empty doc with just an _id
[15:20:37] <Gargoyle> Any thoughts on why one box is making a mongo.la and a mongo.so and the other isn't (Both servers should be identical)
[15:21:50] <NodeX> are you building the driver from source ?
[15:21:56] <Gargoyle> yup
[15:22:28] <Gargoyle> back in a bit.
[15:22:32] <NodeX> same PHP versions / php-dev versions?
[15:22:43] <Derick> odd
[15:22:51] <Derick> it should always make an .la
[15:22:55] <Derick> (too)
[15:23:14] <Gargoyle> going to double check everything later. Gotta duck out for a few hours.
[15:24:00] <Gargoyle> Derick: Perhaps even more oddly then, the one that is working is the one that hasn't built an .la
[15:24:04] <Gargoyle> laters
[15:25:03] <kali> is that too much already ?
[15:28:06] <leandroa> Is there a way to say a .js file executed by shell to read from secondaries?
[15:34:47] <jmar777> leandroa: like an import?
[15:34:55] <jmar777> leandroa: you can do `load('./other-script.js');`
[15:35:12] <jarrod> does mongo support batch jobs yet
[15:35:49] <hdm> you can run batch jobs on your own, why would mongo need it builtin?
[15:35:53] <leandroa> jmar777: no, I mean when I execute an script: `mongo database script.js`
[15:36:03] <jarrod> so i only need to send one 'request' over a socket
[15:36:18] <leandroa> jmar777: in python driver I can say: read_preference = SECONDARY
[15:36:25] <hdm> jarrod: you could send a server-side eval request
[15:36:30] <jmar777> leandroa: oh... gotcha, obviously i interpreted that wrong lol
[15:36:30] <jarrod> oh
[15:36:33] <hdm> do multiple things that way
[15:36:36] <jarrod> i think i read about that
[15:36:40] <jarrod> let me look into that, thanks
[15:36:55] <leandroa> brb
[15:37:20] <jmar777> leandroa: i'm not sure if there's a way to do it implicitly, but explicitly, you can do `var db = connect("<secondary-connection-string>");`
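jmar777's explicit approach spelled out — the host name is hypothetical, and `setSlaveOk()` is what lets the shell read from a non-primary:

```javascript
// In a script run via `mongo script.js`:
var sec = connect("secondary.example.com:27017/mydb"); // connect straight to the secondary
sec.getMongo().setSlaveOk();                           // allow reads on a non-primary
printjson(sec.mycoll.findOne());
```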
[15:38:31] <jarrod> yay 2.2
[15:59:09] <felix___> Quick question - when mongodb gets a query for a partial record, does it read the whole record off disk and then return the requested fields? Or does it only read the records off disk that are requested?
[15:59:46] <felix___> Which is to say, I have data which consists of a bunch of objects that have a small amount of top line data and then large quantities of additional details.
[16:00:39] <felix___> mostly I'll be requesting the top line data and sometimes wanting to pull the details, debating whether to have a single collection with everything or a small top line data collection and a second collection 1:1 with top line that has the details.
[16:06:36] <wereHamster> felix___: my guess is that it has to read everything, so it can parse the object
[16:12:08] <ggoodman> when sorting on multiple fields how does mongodb determine the relative priority of each field (ie: sort by last name THEN first name)
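(This one went unanswered in the channel.) `sort({last_name: 1, first_name: 1})` orders by the leftmost key and uses each later key only to break ties. The same rule as a plain-JavaScript comparator, with illustrative field names:

```javascript
// Tie-breaking comparator equivalent to sort({ last_name: 1, first_name: 1 }).
function byLastThenFirst(a, b) {
  if (a.last_name !== b.last_name) {
    return a.last_name < b.last_name ? -1 : 1; // leftmost sort key decides first
  }
  if (a.first_name !== b.first_name) {
    return a.first_name < b.first_name ? -1 : 1; // later keys only break ties
  }
  return 0;
}

var people = [
  { last_name: "Smith", first_name: "Zoe" },
  { last_name: "Jones", first_name: "Amy" },
  { last_name: "Smith", first_name: "Al" },
];
people.sort(byLastThenFirst);
// order is now Jones/Amy, Smith/Al, Smith/Zoe
```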
[16:16:38] <thomasthomas> I am trying to query dates embedded inside of an array; the date I want is defined by other values in the array, mainly (kind=sale | kind=capture & status=success). Can I pick which array created_at I want using mongo / javascript, or do I have to include the field on insert? https://gist.github.com/3532063
[16:18:48] <thomasthomas> Anyone?
[16:23:06] <remonvv> Hi guys. A question with some urgency. We added two shards to a cluster but it refuses to move chunks there. Nothing in the logs.
[16:36:38] <remonvv> "moveChunk failed to engage TO-shard in the data transfer: migrate already in progress"
[16:41:44] <ajsharp> Is it possible to atomically update embedded arrays? Basically, I'm trying to model an Inbox, and I want the inbox all kept in a single document
[16:42:25] <ajsharp> {user_id: 1, conversations: [ {id: 2, messages: [{id: 3, body: 'this is the message body'}] } ]}
[16:42:45] <ajsharp> I want to perform atomic push operations on a particular conversation's messages array
[16:43:07] <ajsharp> I'm trying this, but getting unexpected results: db.inbox.update({user_id: 1, 'conversations.id': 2}, {$addToSet: {conversations: {messages: {id: 4, body: 'hey'}}}})
[16:55:13] <kzoo> Which lucene query method should I be using to search for a particular ObjectId? IdsQuery() seems to use the _uid attribute and not the document _id
[16:55:21] <xico> Hello Mongos :)
[16:55:24] <xico> Does anyone use Mongoskin and DBRefs with MongoDB?
[16:55:28] <xico> and Node.ks
[16:55:33] <xico> *js of course
[16:56:01] <xico> I can't find a better place to ask this :\
[17:03:55] <addisonj> xico, nope, but I do lots of stuff with node and mongo
[17:04:02] <addisonj> but mostly use mongoose
[17:09:18] <Gargoyle> Dunno what I did on box number 2 to make it work, but I have had to copy the mongo.so file over.
[17:09:52] <Gargoyle> deleting the driver dir and re-cloning branch v1.2 still didn't work. Both servers are running the same software. #mystery!
[17:16:47] <Dr{Who}> Q. is it possible to dump the mongo journal, similar to the mysql replication log? I need to find when a query was run and hopefully more info on the query.
[17:18:17] <Gargoyle> I've just done a make clean and ./configure on both and am getting a diff.
[17:18:19] <Gargoyle> < checking whether to include code coverage symbols... no
[17:24:39] <remonvv> WARNING: 2.2 shards cannot be added to a 2.0 cluster, contrary to what is advertised in the changelog.
[17:51:48] <estebistec> I don't see the new expireAfterSeconds feature of ensure_index documented for pymongo. Safe to assume the kwarg is expire_after_seconds?
[18:12:03] <Jester01> hey folks, I have a collection where Id is a guid and it has its index. For paging, I am using skip and sort on that id, but it's slow. am I doing something wrong?
[18:13:00] <Jester01> and log says "nscanned" includes all the skipped records too
[18:13:05] <kali> skip() is O(n)
[18:13:13] <Jester01> even with an index?
[18:13:16] <kali> yes
[18:13:34] <kali> mmmm maybe not with 2.2
[18:13:44] <kali> are you usng 2.0 or 2.2 ?
[18:13:58] <Jester01> 2.0
[18:13:59] <kali> i don't know if the counted btree made the cut for 2.2
[18:14:24] <Jester01> I will test 2.2 on my dev machine then, thanks for the info
[18:14:43] <Jester01> any suggested "best practice" instead of skip?
[18:15:00] <Jester01> how about id > last_in_prev_branch ?
[18:15:01] <kali> use the actual values
[18:15:03] <kali> yes
[18:15:50] <Jester01> unfortunately that involves some "client side" changes, as I only get a page number now
[18:16:06] <Jester01> I will see what I can do
[18:16:08] <Jester01> thanks again
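kali's "use the actual values" in shell form — a sketch assuming pagination on the indexed `_id` (any indexed, effectively-unique sort key works the same way; `items` is a placeholder):

```javascript
// Page 1:
var page = db.items.find().sort({ _id: 1 }).limit(20).toArray();
var last = page[page.length - 1]._id;

// Next page: seek past the last value seen instead of skip()ping,
// so the index walk starts where the previous page ended.
db.items.find({ _id: { $gt: last } }).sort({ _id: 1 }).limit(20)
```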
[18:45:49] <jarrod> i can't figure out how to impose per-db quotas
[18:49:52] <kgraham> hello
[18:50:24] <kgraham> i recently had to replace a shard (one shard in a three shard clusteR)
[18:50:41] <kgraham> I copied over the data to the new server, repointed dns at the new server, and now i get this error
[18:50:43] <kgraham> printShardingStatus: not a shard db!
[18:50:51] <kgraham> shardsvr is true, not sure how to fix this
[18:50:59] <kgraham> any help would be greatly appreciated
[18:51:52] <kgraham> the other two 'db.printShardingStatus()' show correct information
[19:01:47] <dstorrs> kgraham: does the new shard box know where the config servers are?
[19:04:05] <dstorrs> question for the room: I've got ~1T of data currently split between two shard boxes. I'm about to add a third. Is there any way that I can get an educated guess on how long the re-sharding process will take?
[19:04:29] <kgraham> dstorrs: thank you, how do i check this?
[19:04:36] <kgraham> i am new to mongodb
[19:04:47] <dstorrs> one sec, let me look it up.
[19:04:50] <kgraham> thank you
[19:08:59] <dstorrs> in your mongos.conf file, look for a line that says configdb=dbconf1:27019,dbconf2:27019,dbconf3:27019 (or whatever)
[19:09:03] <kgraham> ec2-75-101-248-186.compute-1.amazonaws.com
[19:09:07] <kgraham> oops
[19:09:21] <kgraham> yeah all that is correct
[19:09:22] <dstorrs> dbconf1-3 should be DNS-addressable names
[19:09:23] <kgraham> here's what happened
[19:09:37] <kgraham> we have three mongo'd' servers
[19:09:43] <kgraham> each a shard of a larger collection
[19:09:52] <kgraham> we have three config servers, and many mongos running on app servers
[19:10:03] <kgraham> one of the three mongod servers started getting weird
[19:10:14] <kgraham> so we booted up a new server, copied the data over, and changed the dns to the new server
[19:10:28] <kgraham> we restarted all the config servers, mongos , and mongod processes
[19:10:45] <kgraham> two of the three show correct data for printShardingStatus; the new one does not
[19:10:46] <dstorrs> hm.
[19:11:00] <kgraham> printShardingStatus: not a shard db!
[19:11:14] <kgraham> on the other two, it shows expexted info
[19:11:28] <dstorrs> it's *possible* that the new shard is still in the process of getting caught up, and it will start reporting as a shard when it's ready.
[19:11:31] <kgraham> the mongos processes are routing traffic to all of them
[19:11:38] <kgraham> how do i test this
[19:11:46] <kgraham> it was over two hours ago, we weren't down that long
[19:12:06] <dstorrs> also, I don't believe the order in which you bring up the mongod / mongos / config servers matters, but it might.
[19:12:26] <dstorrs> scuse me, irl issue. back shortly.
[19:12:30] <kgraham> ok
[19:23:32] <dstorrs> back
[19:24:02] <dstorrs> anyone know how to estimate how long it will take to re-shard 1T of data when a new box is added?
[19:25:56] <dstorrs> kgraham: I'm fairly new to Mongo myself, and have only set up sharding once. We did get more or less the same situation you're in, and it resolved itself after a while.
[19:26:16] <dstorrs> Aside from that and the config server thing, I'm not sure what to recommend. Sorry. :<
[19:26:36] <kgraham> heh no problem
[19:26:42] <kgraham> 'resolved itself' i hope
[19:26:43] <kgraham> :)
[19:29:05] <greeeeps> I am inserting with the Java driver (5 million rows). 1 thread is faster than 2 threads. This is not what we'd expect, right?
[19:29:22] <greeeeps> btw : one Mongo() object per Thread
[19:39:15] <kali> greeeeps: try with only one object, this is the way it is intended to be used
[19:39:40] <kali> greeeeps: http://www.mongodb.org/display/DOCS/Java+Driver+Concurrency
[19:41:53] <hdm> greeeeps: also check your tps with iostat, concurrency hurts performance once you max out iops
[19:47:15] <greeeeps> i will make it one Mongo instance
[19:47:32] <greeeeps> iostat to check for a possible IO bottleneck?
[19:52:14] <eka> hi all... did someone tried TTL with pymongo? can't make it work though
[20:11:56] <dabar> Hello. I would like to know whether MongoDB would help me with producing excerpts around matched content in searches with LIKE '%%' and LIKE '%%'
[20:12:37] <dabar> I am using preg_match with PHP at the moment, and it is too slow when using more than 4 keywords because I have to account for permutations, unlike with LIKE AND LIKE...
[20:43:33] <hydester> any recommended Linux query tools for working with complex documents? i use the console, which is sufficient for some things, but a richer gui/console could have its advantages. i know about http://www.mongodb.org/display/DOCS/Admin+UIs, but i was hoping for some personal recommendations.
[20:53:40] <mog_> Hi there, I'm using mongodb with the native node.js driver and I'm witnessing something that surprises me: during a find() query, after the cursor is exhausted thanks to the each function, a last call is made with the value null, and it seems to idle afterwards. Any idea if it's normal or not?
[21:53:56] <Almindor> if you have documents with an array of ints which can be empty with ~1/3 probability, is it better to use null/undefined for the field or [] if you search in the array (and the collection is quite large)?
[23:13:38] <dstorrs> I have just now added a new shard box to my cluster. db.printShardingStatus() lists it under the 'shards' section at top, but not under the 'databases' section below. I think this is simply because data hasn't moved over yet. How would I confirm this?
[23:30:19] <thomasthomas> I need to batch insert an array of objects as documents via node & mongolian any ideas?
[23:36:10] <dstorrs> if anyone can answer this sharding question, it would be much appreciated: http://stackoverflow.com/questions/12207059/mongodb-new-shard-appears-but-is-not-showing-content-is-this-expected
[23:43:38] <thedahv> Hello everybody. Does the ruby driver support populating linked fields?
[23:44:16] <thedahv> For example, I can do "Model.findOne({id: something}).populate('linked_field')" to get the full linked document in the returned object
[23:44:59] <thedahv> Maybe it's called something else in the ruby driver