PMXBOT Log file Viewer


#mongodb logs for Thursday the 14th of June, 2012

[00:00:33] <heoa> db.test.findOne() -- works but over just one db.
[01:14:08] <heoa> Is there some command to find db url?
[01:15:56] <joshfinnie> quick question, can you mongodump to CSV file?
[01:17:33] <joshfinnie> nvm, found it. `--csv` and not `--type csv`
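A note on the flag joshfinnie found: CSV output actually lives in mongoexport rather than mongodump, and the `--csv` mode requires an explicit field list. A hedged sketch (db, collection, and field names are placeholders):

```shell
# CSV export sketch -- db, collection, and field names are placeholders.
# mongoexport (not mongodump) takes --csv, and --csv requires --fields.
mongoexport --db mydb --collection users --csv --fields name,email --out users.csv
```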
[01:34:34] <heoa> Which book would you suggest about MongoDB? Is the O'Reilly book good for beginner?
[01:37:26] <joshfinnie> heoa, what language are you interacting with MongoDB in?
[01:38:12] <heoa> joshfinnie: javascript (or python) -- haven't yet got anything really working so thought to read some book...
[01:39:21] <joshfinnie> heoa, I have read through this one: http://shop.oreilly.com/product/0636920021513.do (Python) with some success
[01:39:28] <joshfinnie> what are you having issues with?
[01:41:01] <heoa> too much material, trying to find some simple demo with: give stuff to some HTML field, store it to mongoDB, alert( get the data from mongoDB), show stuff in HTML markup, etc -- now puzzled by mongoDB, mongoJS, expect, node, nodeJS etc
[01:43:45] <heoa> For markup, I can use jQote.
[01:58:33] <heoa> http://shop.oreilly.com/product/0636920021513.do <--- found some book about M and P.
[02:00:23] <joshfinnie> heoa, that's the one I recommended above. it's short and sweet
[02:00:50] <joshfinnie> Node.js and Mongodb work well together, and I have just been starting to toy with it...
[02:02:47] <heoa> joshfinnie: have you read other books?
[02:03:13] <heoa> because oreilly has a lot of mongodb books easily found via google, but this one is not.
[02:03:40] <joshfinnie> no, I found that mongodb.org is very helpful. the python api is very close to the examples online and it's all I needed
[02:03:52] <joshfinnie> i.e. http://www.mongodb.org/display/DOCS/Advanced+Queries
[02:04:26] <multi_HYP> night all
[02:12:39] <heoa> joshfinnie: suppose I open console in Google Chrome, how can I execute "db.things.insert({colors: ["yellow"]})
[02:12:46] <heoa> ?
[02:12:58] <heoa> (db object not defined)
[02:13:54] <heoa> How can I initialize the db object in a browser?
[02:20:51] <joshfinnie> heoa, not sure. I am really green when it comes to mongodb and js. I am sure someone here can help you better, unfortunately.
[02:37:32] <dstorrs> holy crap findAndModify is slow. 30_000 entries in collection, indexed. I did 30k iterations of the same FaM in a loop and I had time to go get water, visit the head, and it's not done yet.
[02:38:09] <dstorrs> actually, no, that was only 10k iterations it was trying for.
[04:45:00] <zackattack> So, while typing this command, I screwed up and somehow lost my database. What should I do? db.copyDatabase('log-p', 'logp', 'localhost')
[04:46:12] <zackattack> http://pastie.org/4084287
[04:47:49] <webjoe> stupid question, did you run regular backups?
[04:47:56] <webjoe> http://www.mongodb.org/display/DOCS/Backups
[04:50:08] <zackattack> Um....like a moron, no.
[04:50:17] <zackattack> I'm kicking myself.
[04:53:50] <zackattack> So...am i screwed?
[04:56:00] <zackattack> webjoe: i went into my server and found this...
[04:56:02] <zackattack> http://pastie.org/4084329
[04:58:39] <webjoe> uh, i don't think your db is gone
[04:58:42] <webjoe> are you just not able to connect?
[05:01:50] <zackattack> i'm not able to connect
[05:02:22] <webjoe> did you check if mongod is running in your process?
[05:02:59] <zackattack> it's not, and when i try to start it up, it looks for data in /data/db
[05:03:05] <zackattack> which does not exist..
[05:03:32] <tjmehta> just create the dir
[05:04:11] <tjmehta> or use a config option to specify a dir of your choice
[05:04:17] <webjoe> --dbpath
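The two suggestions above, as commands (the custom path is an example):

```shell
# Option 1: create the default data directory mongod expects
mkdir -p /data/db

# Option 2: point mongod at a directory of your choice instead
mongod --dbpath /srv/mongo-data
```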
[05:04:22] <zackattack> how do i run it in the background?
[05:04:34] <zackattack> will mongod & work?
[05:04:46] <webjoe> --fork
[05:04:48] <webjoe> yea
[05:05:07] <webjoe> Running as a Daemon
[05:05:07] <webjoe> Note: these options are only available in MongoDB version 1.1 and later.
[05:05:08] <webjoe> This will fork the Mongo server and redirect its output to a logfile. As with --dbpath, you must create the log path yourself, Mongo will not create parent directories for you.
[05:05:09] <webjoe> $ ./mongod --fork --logpath /var/log/mongodb.log --logappend
[05:05:11] <webjoe> oops.
[05:05:12] <zackattack> awesome.
[05:05:13] <webjoe> http://www.mongodb.org/display/DOCS/Starting+and+Stopping+Mongo#StartingandStoppingMongo-RunningasaDaemon
[05:05:14] <zackattack> thanks!
[05:05:34] <webjoe> No problem.
[05:05:55] <zackattack> Any recommendations on a webgui for paging through collections?
[05:06:39] <webjoe> there's a few options...
[05:06:41] <webjoe> let me dig.
[05:07:07] <webjoe> http://www.mongodb.org/display/DOCS/Admin+UIs
[05:07:25] <webjoe> http://www.quora.com/MongoDB/What-is-the-most-popular-MongoDB-admin-GUI
[05:07:39] <zackattack> thank you so much
[05:07:40] <zackattack> whew
[05:07:49] <webjoe> ;)
[05:08:17] <zackattack> $4.25/month...jesus
[05:08:44] <webjoe> what's $4.25
[05:09:16] <zackattack> linode monthly backup cost
[05:09:47] <wereHamster> that's less than a starbucks coffee. Or a cigarette pack. Or two chewing gum packs.
[05:10:47] <zackattack> yeah, christ almighty.
[05:22:42] <Kane`> in the mongodb shell, if i do: `result = db.collection.find()` how can i then do something like: `db.foobar.find({field: {'$in': result}})` ?
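One way to get what Kane` is after: `$in` expects a plain array of values, not a cursor, so the `find()` result has to be materialized and the relevant field extracted first. A minimal node sketch of the shape (collection and field names are stand-ins; the literal array stands in for `db.collection.find().toArray()`):

```javascript
// Stand-in for db.collection.find().toArray() -- in the shell you would call
// toArray() on the cursor before building the second query.
var resultDocs = [{ _id: 1 }, { _id: 2 }];

// Pull out just the values $in should match against.
var ids = resultDocs.map(function (doc) { return doc._id; });

// This object is what you would then pass to db.foobar.find(...)
var query = { field: { $in: ids } };
```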
[05:40:06] <dstorrs> just confirming -- 'update' is atomic between its "match" and "do the update" steps, right?
[05:52:29] <kuzushi> im sorry to be a bit of a bum-- but does mongodb have some default logging so I can see queries that are being executed?
[05:54:32] <kuzushi> nm
[05:56:38] <carsten> this stinking privacy violating bot is still there? kick this crap out
[06:21:43] <heoa> Which driver would you use to create only-javascript-based application?
[06:21:46] <heoa> http://www.mongodb.org/display/DOCS/Javascript+Language+Center
[06:23:03] <heoa> I am not sure whether I should go with Node.js or Narwhal, ideas?
[06:23:24] <boll> Node.js is incredibly not ready for prime time
[06:23:36] <boll> in my opinion of course
[06:24:40] <heoa> Yes, but Narwhal has far fewer followers than the node.js-based driver?!
[06:26:39] <heoa> ...Node.js driver is supported by the 10gen while Narwhal not.
[06:32:42] <heoa> boll: I could not fully understand your "incredibly not ready", is there some better driver?
[06:32:57] <heoa> (I personally like Python so should I go with it?)
[06:33:07] <heoa> ...never done anything with node.js...
[06:36:31] <carsten> stupid discussions...
[06:43:23] <boll> heoa: I was actually referring to Node.js (the server/framework) rather than the driver
[06:43:39] <boll> we've been experimenting with it, and wow, does it crash a lot
[06:45:49] <heoa> boll: I see, I misread things -- I am going with Narwhal; it is based on the Java driver, which has more support. anyway, probably good enough, testing.
[06:48:20] <heoa> Cannot yet understand how to execute the code there https://github.com/sergi/narwhal-mongodb/blob/master/tests/DBTest.js <--- is node/mongo/mongod/xyz or something else meant to execute it?
[06:50:56] <heoa> I need some interpreter to do it, something on console?
[06:54:43] <ro_st> any special considerations when storing ObjectIds from a different database?
[06:59:32] <carsten> ro_st: please?
[07:00:16] <ro_st> um, please :)
[07:00:33] <carsten> http://www.catb.org/~esr/faqs/smart-questions.html first
[07:03:31] <ro_st> i have a single database with customers, products, posts, comments etc. then i have an inordinately large collection, analytics. i want to put that into a separate database so that i can make working with the rest of the database easy (in terms of fetching dumps to my development box, stuff like that). however, the analytics collection stores object id's of both customers and products (as it records actions taken by customers on products)
[07:05:52] <ro_st> it's just a sanity-check with more experienced folks before i pull the trigger :-)
[07:36:45] <[AD]Turbo> hola
[07:42:08] <heoa> Why does "var MongoDB = require("mongodb"); var db = new MongoDB.Mongo().getDB("mydb");" fire http://pastie.org/4084839 ?
[07:42:33] <heoa> Some problem to access the DB?
[07:43:13] <heoa> Just created the db with "$ mongo; > db.mydb.insert({name: 'Hola'})"
[07:43:57] <heoa> I installed the node.js driver with "$ npm install mongodb".
[07:44:25] <heoa> The error report is not very informative, any suggestions?
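For what it's worth, the error heoa hits is likely because `new MongoDB.Mongo().getDB(...)` is the Java driver's API; the node driver of that era connects asynchronously through callbacks. A hedged sketch (host, port, and names are examples; heoa's shell insert created collection `mydb` inside the default `test` database; untested without a running mongod):

```javascript
// 2012-era node driver sketch -- assumes mongod is running on localhost:27017.
var mongodb = require("mongodb");
var db = new mongodb.Db("test", new mongodb.Server("localhost", 27017), { safe: false });

db.open(function (err, db) {
  if (err) throw err;
  // "mydb" here is the collection created by db.mydb.insert(...) in the shell.
  db.collection("mydb").find().toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs);
    db.close();
  });
});
```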
[08:59:49] <horseT> Hi
[09:00:42] <horseT> Is there a way to force reads on a secondary using the php driver?
[09:00:59] <carsten> connect directly to the secondary
[09:01:08] <Derick> you can connect to it directly without specifying a replicaset connection
[09:01:12] <Derick> and, use slaveOkay()
[09:02:11] <sylvinus> with the node driver, how to convert a json string (with {$oid}s) to a query object with ObjectId()s ?
[09:09:43] <horseT> Derick: using slaveOkay, the php driver determines the node using ping, so it's not a solution. If I connect directly to the secondary, I lose the failover.
[09:10:25] <Derick> no, slaveOkay on a non-replicaset activated connection will just work
[09:10:31] <Derick> you need to call it on the cursor object
[09:44:29] <horseT> Derick: What do you mean by "non-replicaset activated connection" ?
[09:47:09] <horseT> slaveOkay makes no sense without a replicaSet
[09:52:53] <Derick> horseT: if you do: new Mongo("secondary"); it will not use a replicaset (as you don't specify array('replicaSet' => 'name'))
[09:53:03] <Derick> you will still have to use slaveOkay on the find:
[09:53:17] <Derick> $c->find()->slaveOkay(); in order to do queries on it
[09:57:22] <millun> hi
[09:59:00] <millun> just wanna make sure: @Entity("animals") Cat extends Animal { ... } - if i wanted to have ds.find(Animal.class) in one DAO class, and have List<? extends Animal> filled with Cat.class, or Dog.class I wouldn't be able to mix it into 1 DAO file, right? i have to create 3 DAO's
[10:00:24] <millun> am I correct?
[10:00:44] <NodeX> consult your wrapper docs for that one
[10:01:20] <millun> no mention of it for morphia, but i see your point
[10:01:57] <carsten> who needs such DAO overhead?
[10:02:30] <spillere> does anyone have a example on running a FOR to populate an subitem on a db in python? like {'name':'daniel', 'sub': [{'n': 'lo'},{'n': 'la'}]}
[10:02:32] <millun> carsten: how would you do it?
[10:02:55] <millun> i'd like to have 1 dao, of course
[10:04:53] <carsten> native query syntax
[10:06:32] <millun> carsten: pardon my noobness, but i still would have to use "ds.createQuery(Animal.class)" which would result in getting only Animals, not extended Cats or Dogs?
[10:07:49] <carsten> go away with your ODM or DAO mappers or whatever... try it with the native query api and understand what you are doing first, instead of building on high-level patterns without knowing mongodb
[10:09:06] <millun> doh, right. i'll rtfm. cheers
[10:09:42] <millun> didn't realize.. sorry
[10:10:37] <horseT> Derick: Thanks, I understand now :). But that solution removes the failover.
[10:43:02] <spillere> I have a dict like this: dic = {'name': 'daniel', 'checkins':[{'ckName': 'Mc', 'ckId': 1}, {'ckName': 'Bk', 'ckId': 2}]}, how do I add a new item to checkins?
[10:43:36] <NodeX> $push / $addToSet
[10:44:20] <NodeX> $addToSet if you dont want dupes , $push if you don't care about dupes
[10:46:21] <spillere> ty
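In shell syntax, the two modifiers NodeX mentions look like this (collection and values are examples; from pymongo, the same update document goes to `update`'s second argument):

```javascript
// $push always appends; $addToSet appends only if an equal element is absent.
db.users.update(
  { name: "daniel" },
  { $addToSet: { checkins: { ckName: "Wd", ckId: 3 } } }
);
```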
[11:06:23] <sepehr> Hi guys, I imported a mysql table into mongodb as a collection which already has ObjectIds (varchar). How can I alter the type to ObjectId? (varchar => ObjectId)
[11:07:25] <carsten> object ids are immutable
[11:10:04] <sepehr> That's not an ObjectId yet, it's a varchar referencing documents in a separate collection
[11:10:30] <carsten> how to change? write a script doing the migration...
[11:10:38] <sepehr> carsten: you mean that I cannot change varchar type to ObjectIDs?
[11:11:04] <NodeX> not en masse
[11:11:11] <carsten> write a script as said
[11:11:18] <NodeX> you'll have to take care of it in your app
[11:11:21] <sepehr> carsten: ya right, thank you very much ;)
[11:11:24] <carsten> as you would with every other database
[11:11:26] <NodeX> s/app/importer
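The migration script carsten keeps pointing at can be quite short in the shell. A sketch, assuming a collection `items` whose `ref` field holds 24-hex-character strings (names hypothetical; strings that are not valid ObjectId hex will throw):

```javascript
// Rewrite each string reference as a real ObjectId, one document at a time.
db.items.find({ ref: { $type: 2 } }).forEach(function (doc) {   // BSON type 2 = string
  db.items.update({ _id: doc._id }, { $set: { ref: new ObjectId(doc.ref) } });
});
```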
[11:32:08] <icedstitch> o/
[12:30:29] <Ilja> hello
[12:30:54] <Ilja> i have a question about functionality of mongodb
[12:31:14] <trypsin> hello
[12:31:38] <trypsin> i want to turn off the bind_ip option on my database
[12:32:51] <trypsin> anybody here?
[12:33:08] <trypsin> hello?
[12:33:13] <yakov> hey!
[12:33:54] <yakov> i'm reading the FAQ. what does it mean that "applications requiring multi-object commit with rollback aren't feasible"?
[12:33:55] <trypsin> how can i turn off the bind_ip option on my database/
[12:34:05] <trypsin> ?
[12:34:38] <trypsin> I don't understand
[12:35:31] <carsten> trysion: turn what off?
[12:35:49] <trypsin> the option: bind_ip
[12:36:03] <carsten> what do you want to turn here off?
[12:36:10] <trypsin> it is now bound to 127.0.0.1
[12:36:14] <carsten> and?
[12:36:41] <trypsin> if that way I can't access my database remotely
[12:37:02] <carsten> then specify the related public ip or 0.0.0.0
[12:37:18] <trypsin> just 0.0.0.0 is ok?
[12:37:44] <carsten> do you have a serious idea about networking?
[12:37:45] <trypsin> I'll try it
[12:38:50] <trypsin> nope, I only need my software to access the database from another computer
[12:40:00] <trypsin> I'm sorry, but which command should I use to set the bind_ip 0.0.0.0?
[12:40:41] <carsten> mongod --bind_ip <ip>
[12:41:21] <carsten> mongod --help
[12:44:02] <NodeX> the default is to listen on all I think
[12:44:33] <NodeX> netstat -pan | grep mongo
[12:44:43] <NodeX> tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 17497/mongod
[12:48:56] <trypsin> thx
[12:55:37] <icedstitch> Hey did you know..... Reduce can be re-reduced?
[12:55:50] <icedstitch> I just rtfm'd :)
[12:56:55] <icedstitch> I just have to figure out how to do complex M/Rs
[13:21:50] <philnate> hi
[13:22:29] <philnate> can I somehow do a listDatabase without authentication, while auth is turned on?
[13:23:50] <philnate> actually my java app throws me an error that I need to login, but I may not know to authenticate against which db in the first place
[13:33:13] <philnate> am I able to use listDatabase at all as soon as auth is activated? Or can I use it only if I'm authenticated against the admin database?
[14:07:57] <multi_io> what's the conceptual difference between the update command and the findAndModify command?
[14:08:18] <multi_io> looks like the latter is just a more sophisticated version of the former?
[14:08:50] <multi_io> i.e. update also finds a modifies documents.
[14:08:52] <rick446> findAndModify returns the modified document
[14:09:01] <rick446> (or the document before modification)
[14:09:03] <rick446> update doesn't
[14:10:23] <multi_io> hm, ok
[14:18:34] <zdunn> Anyone have experience with the 10gen Chef Cookbook? In particular, sharding and replicasets ?
[14:18:52] <zdunn> I am having a good bit of trouble getting things to act as I would expect
[14:19:04] <zdunn> I have 9 Mongods that I am trying to run in three shards
[14:19:26] <zdunn> I have three roles (shard(1-3)) which have the shard and replicatset recipes in them
[14:19:54] <zdunn> I set the default_attributes in each role to be the shard_name and replicaset_name
[14:20:13] <zdunn> but only shard one starts a "shard" service
[14:20:42] <zdunn> the other two start the standard mongod service but with the correct --replSet defined
[14:20:53] <zdunn> the shard1 just gets the default --replSet rs_shard1
[14:26:30] <philnate> multi_io: findAndModify is synchronous, while update by default isn't. Further, findAndModify can modify at most one document, while update can multi-update
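A side-by-side sketch of the difference rick446 and philnate describe (collection and fields are examples):

```javascript
// update: fire-and-forget by default, can touch many docs, returns no document.
db.counters.update({ _id: "pageviews" }, { $inc: { n: 1 } });

// findAndModify: atomic read-and-write of a single document; with new: true it
// hands back the post-update version.
var doc = db.counters.findAndModify({
  query: { _id: "pageviews" },
  update: { $inc: { n: 1 } },
  new: true
});
```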
[14:28:00] <philnate> zdunn: not sure if I understand you correctly but each node has to be told to what replSet it belongs
[14:28:14] <zdunn> sure,
[14:29:54] <icedstitch> anybody have knowlege on awesome links mongodb capable MR methods beginner and advanced?
[14:30:16] <icedstitch> or patterns, that is. I noticed this article: http://highlyscalable.wordpress.com/2012/02/01/mapreduce-patterns/
[14:30:39] <zdunn> and I am defining that in the override_attributes
[14:31:04] <kali> icedstitch: this is for hadoop. mongodb implementation is quite different
[14:31:13] <zdunn> phira: the issue seems to be that on shard1 the shared recipe is run, but the override_attribute is not honored
[14:31:27] <zdunn> while on shards 2 and 3 the shard recipe is never run
[14:31:32] <zdunn> so the mongod default stays in place
[14:31:43] <zdunn> BUT that recipe DOES have the correct override values
[14:32:37] <icedstitch> kali: I figure in "theory" mongo'd be able to handle it.
[14:33:09] <icedstitch> that's also why i'm pinging your minds for advanced mongo mr stuffs
[14:33:52] <icedstitch> found one of my implementations isn't working like i wanted it to, all based on reduce being iterative.
[14:33:58] <kali> icedstitch: more or less... the basic idea is the same, but map/reduce is definitely not mongodb's central feature
[14:34:10] <icedstitch> so it got me to thinking on simplifying my work
[14:34:21] <icedstitch> oh
[14:34:35] <icedstitch> eeeenteresting. Ok
[14:35:58] <kali> icedstitch: mongodb is optimized for small latency access, hadoop map/reduce for efficient batch processing
[14:36:25] <kali> icedstitch: mongodb is a useful tool, but not the reason why somebody should use mongodb
[14:36:35] <kali> i meant mongodb map/reduce
[14:38:04] <icedstitch> i was landed with mongodb in my lap, taking over for someone else who left the project in my hands. It's being used as a huge storage mechanism, I've considered using the map/reduce in some aspects to speed up the calculations.
[14:38:58] <skeeved> doing an upsert with modifiers and non-modifiers is not allowed?
[14:38:59] <kali> icedstitch: be aware you can only have one m/r job running at a given time
[14:39:09] <kali> skeeved: nope. but you can use the $set modifier
[14:39:45] <skeeved> kali: that makes sense, thanks
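Concretely, what kali suggests (field names are examples): an upsert may not mix modifier and plain-assignment forms in one update document, but folding the plain fields into `$set` next to the other modifiers is fine.

```javascript
// Rejected: mixes a modifier ($inc) with a plain field in the same document.
// db.stats.update({ day: "2012-06-14" }, { $inc: { hits: 1 }, source: "web" }, true);

// Accepted: everything expressed through modifiers.
db.stats.update(
  { day: "2012-06-14" },
  { $inc: { hits: 1 }, $set: { source: "web" } },
  true   // upsert
);
```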
[14:42:00] <icedstitch> right, meaning that the 2nd m/r job has to wait for the lock to be cleared
[14:42:16] <icedstitch> as how i've read the docs, is that right?
[14:42:50] <kali> icedstitch: i'm not sure if the second waits or if they are interleaved, but yeah, this is the basic idea
[14:55:10] <icedstitch> kali: much thanks. Are you aware of what the typical stable release cycle is with 10gen and the mongo server core?
[14:58:49] <Killerguy> is it possible to force removing shard?
[14:58:58] <Killerguy> because mine is stuck in draining status
[15:34:16] <Killerguy> <Killerguy> is it possible to force removing shard?
[15:34:16] <Killerguy> <Killerguy> because mine is stuck in draining status
[15:34:18] <Killerguy> :)
[17:33:52] <BobFunk> hmm, is there really no way to rename a database?
[17:34:01] <BobFunk> is the only way still to do a copy database?
[17:34:31] <rick446> you could stop mongod and rename the files, probably
[17:34:42] <BobFunk> ahh - if that works then perfect
[17:35:05] <rick446> (untested of course — that's why I said 'probably' ;-) )
[17:35:26] <BobFunk> hehe, well, just playing around with restoring a production backup on a dev machine
[17:35:56] <BobFunk> and using rails - so have the normal convention of <appname>-production / <appname>-development db naming
[17:36:13] <BobFunk> so being able to rename the db after restoring it would be useful
[17:38:37] <BobFunk> worth a try
[17:39:09] <BobFunk> hmm, nopes
[17:40:11] <BobFunk> that doesn't work :/
[18:33:19] <zirpu> anyone know of a good way to determine optimal batch size for inserts? i'm parsing and importing logs, trying to make it go faster.
[18:33:35] <zirpu> faster w/o overrunning the oplog size though.
[19:04:02] <Ilja> hi, does MongoDB support archiving data online?
[19:48:42] <gustav_> Question: We have a sharded setup and running a sum to find the number of embedded documents returns the wrong result. It appears to be returning the sum based off of only a single shard (we have 3). Is there a way to get around this?
[20:03:43] <multiHYP> hi
[20:07:42] <skot> gustav_: can you post your commands/sum and the results to gist/pastie/etc?
[20:07:53] <skot> zirpu: do you have mongostat and iostat numbers?
[20:08:19] <skot> Ilja: you mean like attaching database files to a running server?
[20:17:49] <gustav_> skot: https://gist.github.com/2932640
[20:40:30] <zirpu> skot: some mongostat numbers. http://pastie.org/4088373
[20:41:02] <zirpu> i've reduced the insert batches to 100 from 10k and 1k. seems to not have made much of a difference.
[20:46:10] <zirpu> http://pastie.org/4088407 some iostat numbers from the primary.
[21:08:18] <skot> zirpu: can you install munin-node and enable MMS to use it to collect hardware stats on your servers?
[21:08:31] <dstorrs> hey all. I'm trying to wrap my head around 'update' + '$elemMatch' and having little success. I tried this: > db.jobs_harvest_Video.update({pages: {"$elemMatch": {lock_until: 0}}}, { $set : { lock_until:10} })
[21:08:39] <dstorrs> The documents are here: http://pastie.org/4088513
[21:08:53] <skot> http://mms.10gen.com/help/install.html#hardware-monitoring-with-munin-node
[21:09:02] <dstorrs> is my syntax wrong, or does update simply not accept $elemMatch ?
[21:09:23] <skot> dstorrs: yes, the syntax is incorrect.
[21:09:35] <dstorrs> oh thank FSM.
[21:09:39] <skot> Can you post a sample doc to gist/pastie?
[21:09:48] <dstorrs> http://pastie.org/4088513
[21:10:28] <dstorrs> I'm trying to end up with the entry for page 1 of 'bob's array set to 'lock_until:10' and everything else the same
[21:10:34] <skot> $elemMatch is only needed when you have more than one field in the array item you want to match
[21:11:49] <skot> you want to use the positional operator ($) to do the update
[21:12:41] <skot> like this: db.coll.update({"pages.locks_until":0}, {$set:{"pages.$.locks_until":10}})
[21:12:47] <dstorrs> I was trying to use elemMatch because there may end up being additional criteria
[21:13:03] <dstorrs> d'oh.
[21:13:11] <dstorrs> of course. That's so simple. Thanks, skot
[21:13:12] <skot> then change the query to use $elemMatch with multiple criteria in my example
[21:13:33] <skot> np
[21:13:47] <skot> here are the docs with more examples : http://www.mongodb.org/display/DOCS/Updating#Updating-The%24positionaloperator
[21:14:25] <skot> zirpu: it looks like from your iostat numbers you are disk bound with 100% utilization
[21:14:25] <dstorrs> so, like this? db.coll.update({"pages" : { $elemMatch : {locks_until:0, action : 'process'}}}, {$set:{"pages.$.locks_until":10}})
[21:15:00] <skot> yep
[21:15:06] <dstorrs> sweet. Thanks.
[21:15:59] <dstorrs> is there any reason /not/ to use $elemMatch if there's only one criteria?
[21:16:04] <dstorrs> is it slower, or etc?
[22:16:39] <tystr> I have db.coll.ensureIndex({ "attributes.k" : 1, "attributes.v" : 1 });
[22:16:59] <tystr> with ~10000 key-value pairs inside attributes,
[22:17:40] <tystr> db.coll.update({ "_id": ObjectId("4fda5ebcab3c45324a000005") }, { "$pushAll": { "attributes": [ { "k": "some_key", "v": "some_value" } ] } });
[22:17:47] <tystr> takes over 10 seconds
[22:17:56] <tystr> is this normal??
[22:31:33] <tystr> without the index, the update is practically instant
[22:54:23] <themoebius> I have a database with a lot of deleted data and I would like to reclaim the disk space that was taken up. I understand the only way to do this is with a repairDatabase() but it requires enough free space to hold the entire NEW database as well as the old. I don't have enough. What are my options?
[22:55:57] <mediocretes> if you can attach more disk, you can repair into a separate directory
[22:56:56] <mediocretes> if you have a replica set, you can resync
[23:01:30] <Kage`> Stupid question... Anyone know of any PHP+MongoDB forums systems?
[23:24:20] <dstorrs> I have a document that looks like this: { _id : 'foo', pages : [ {n : 1, owner : 'me'}, {n:2, owner:'you'} ] }. I would like to retrieve just the {n:2, owner:'you'} embedded doc. Is there a way to do that?
[23:24:40] <dstorrs> I feel like I should be able to make this work, but I don't quite grok embedded docs yet.
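One hedged answer to dstorrs' question: match on the embedded field and project just the matching element. Positional projection (`pages.$`) needs a reasonably recent server; on older ones, fetch the document and filter client-side.

```javascript
// Returns the doc with only the matching {n: 2, owner: 'you'} element of pages.
db.docs.find({ _id: "foo", "pages.n": 2 }, { "pages.$": 1 });
```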
[23:27:55] <themoebius> it seems like when mongodb is estimating the space needed for a repair it uses the current size on disk, not the space actually needed if I have deleted most of the data. This is a problem if I can't add a drive bigger than the one I already have.