#mongodb logs for Tuesday the 3rd of July, 2012

[00:36:26] <tanner> is there a way to do a two node replica set without an arbiter?
[00:52:30] <cyberfart> I was doing some testing with 2.0.0 32Bit and went over the storage limit. The problem is, now I can't do much with it except find queries. Is that normal? I can't even drop a database
[01:03:30] <tystr> when using the query option with map reduce, does it just issue that query? i.e. will indexes be used for that query?
[01:10:06] <WormDrink> anybody using mongodb with data=journal on ext{3,4} ?
[02:08:13] <linsys> WormDrink: Yes, ext4
[02:08:27] <linsys> I've also run it on ext3
[02:08:49] <WormDrink> and, how is it ? nice and slow ?
[02:09:09] <linsys> No its fine... why would you think its slow?
[02:09:30] <linsys> ext4 has an advantage over 3 because of how ext4 deals with the preallocation of files
[02:09:47] <WormDrink> do you run with mongodb journaling also ?
[02:09:48] <linsys> with 3 you experience a small delay/lag when mongodb has to grow a new 2g data file
[02:09:52] <linsys> yes
[02:10:03] <linsys> I generally run each on their own FS
[02:10:13] <linsys> well own "LUN"
[02:11:30] <linsys> I just ran a test in EC2 there is about a 25-35% performance degradation with journaling on vs off
[02:11:34] <WormDrink> wont data=journal double the writes ?
[02:13:44] <linsys> maybe I am confused by your question, do you mean with nojournal = false in /etc/mongodb.conf?
[02:15:09] <WormDrink> no
[02:15:29] <WormDrink> data=journal on filesystem
[02:15:41] <linsys> oh
[02:15:49] <linsys> thought you were talking about a mongodb-specific item, sorry
[02:15:59] <WormDrink> http://kernel.org/doc/Documentation/filesystems/ext4.txt
[02:16:35] <linsys> yea I get it now
[02:16:43] <WormDrink> data=journal All data are committed into the journal prior to being written into the main file system.
[02:17:19] <WormDrink> i figure with that on + mongo journaling = sux performance
[02:18:15] <linsys> probably
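For reference, the mongodb-side journaling in this thread (separate from the ext4 data=journal mount option) is a couple of config keys; a minimal sketch of the old INI-style /etc/mongodb.conf, with the dbpath as a placeholder:

    # journal = true is the mongodb write-ahead journal discussed above; nojournal = true disables it
    journal = true
    dbpath = /var/lib/mongodb    # ideally its own filesystem/LUN, as linsys describes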
[02:44:19] <JoeyJoeJo> How can I return just one field in a find?
[03:40:55] <JoeyJoeJo> Is it possible to have a collection of collections?
[07:00:13] <ndee> hi there, is the following possible: I have a collection and would like to search for a document. The more criteria match, the higher the document's "rank" should be. Is that directly possible or would I need a programmatic way to do that?
[07:01:34] <NodeX> that's an application side problem
[07:01:36] <wereHamster> ndee: you can map/reduce and then search in the resulting collection
[07:04:42] <ndee> ok, thanks for the input :)
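A rough sketch of the map/reduce approach wereHamster mentions, scoring each document by how many criteria match (the collection and field names are made up):

    var criteria = { colour: "red", size: "L" };
    db.items.mapReduce(
        function () {
            var score = 0;
            for (var k in criteria) { if (this[k] === criteria[k]) score++; }
            if (score > 0) emit(this._id, { doc: this, score: score });
        },
        function (key, values) { return values[0]; },   // _id keys are unique, so reduce just passes values through
        { out: { inline: 1 }, scope: { criteria: criteria } }
    );

Sorting the inline results by score then happens application-side, which is NodeX's point.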
[07:22:12] <[AD]Turbo> hola
[09:12:34] <[AD]Turbo> is it possible to use .sort(...) on a findOne query? I tried db.ExchangesSell.findOne({ 'price': { $gte: 2 } }).sort({ 'price': 1 }) in the mongo shell, but received an error (Tue Jul 3 11:09:21 TypeError: db.ExchangesSell.findOne({price:{$gte:2}}).sort is not a function (shell):1)
[09:15:59] <neil__g> probably have to do a find().sort().limit(1)
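In other words, a sketch using the collection from the question:

    // findOne() returns a document, not a cursor, so .sort() isn't available on it;
    // an ordinary find() with limit(1) gives the same effect
    db.ExchangesSell.find({ price: { $gte: 2 } }).sort({ price: 1 }).limit(1)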
[09:18:40] <[AD]Turbo> thx neil__g
[10:32:28] <imran> is http://www.mongodb.org/ down for everyone ?
[10:34:24] <Derick> it's being looked at
[10:38:45] <imran> is there a mirror we can grab the mongodb downloads from in the meanwhile ?
[10:40:27] <adamcom> I've woken up one of the admins, should be back up shortly
[10:41:01] <Derick> imran: that page works for me: http://www.mongodb.org/downloads
[10:41:04] <adamcom> actually, working for me now
[10:41:17] <Derick> it seems to be all working again now
[10:43:08] <imran> ta
[10:46:26] <rs_> Hi all, does anyone translate strings / database content using mongodb? ...and if so, can they share the process they used for storing the translations and calling those translations?
[10:49:19] <neil__g> i'd have a collection per language
[10:49:34] <neil__g> and the string is the key
[10:49:57] <neil__g> then you can point your app at a particular collection and query it in the same way
[10:52:46] <NodeX> I would do it with a document per word/phrase and an array of versions of it
[10:53:21] <deoxxa> i would tell my users to learn english or get off my website this is america gosh darn it
[10:53:32] <neil__g> no it ain't :)
[10:53:50] <deoxxa> YEE-HAW
[10:54:52] <rs_> Based on creating a new collection per language, how would I fallback to 'en' if a certain translation didn't exist?
[10:55:00] <neil__g> i suppose @NodeX's way would make it easier to add new strings
[10:55:09] <neil__g> with my route you'd have to add an entry to multiple collections
[10:55:16] <wereHamster> rs_: not a mongodb question
[10:56:27] <NodeX> rs_ : despite not being a mongo question you can do it like this... (on a per word basis like I suggested) ... in your app use !empty(lang) else en
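A sketch of NodeX's per-word/phrase layout with the 'en' fallback (the collection and field names are invented):

    // one document per phrase, with translations keyed by language
    db.i18n.insert({ key: "checkout.total", en: "Total", de: "Summe", fr: "Total" });
    // application side: fetch the language you want plus 'en', and fall back if it's missing
    var doc = db.i18n.findOne({ key: "checkout.total" }, { de: 1, en: 1 });
    var text = (doc && doc.de) ? doc.de : doc.en;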
[10:56:36] <neil__g> Can anyone give me a practical reason why not to store prices as floats in mongo? My boss person wants me to store them as integers (cents)
[10:56:54] <NodeX> neil__g : there is no practical reason!
[10:56:55] <Derick> rounding issues with floats
[10:57:13] <neil__g> @Derick do you have an example for me?
[10:57:14] <NodeX> (assuming you round to 2dp)
[10:57:56] <rs_> Thanks NodeX
[10:58:08] <Derick> neil__g: there are huge amounts written about it, I think http://www.floating-point-gui.de/ is an excellent read
[10:58:13] <neil__g> thanks
[10:58:41] <NodeX> that's OTT for storing simple prices
[10:59:06] <NodeX> unless you're planning to map/reduce on said prices at some point I suppose
[11:00:58] <neil__g> I understand in theory, but we have a very (small) finite number of sales, and prices only range between 0 and maybe 100000
[11:00:59] <deoxxa> as a rule, never store money as floats
[11:01:08] <deoxxa> ever.
[11:02:13] <rs_> how would you store an exchange rate to 3 decimal places? 1.234
[11:03:35] <rs_> nm.
[11:07:07] <NodeX> I would store it as a float .. I use them all the time and never have any problems
[11:07:18] <Derick> neil__g: interestingly, that website recommends using floats: http://www.floating-point-gui.de/formats/integer/
[11:08:23] <neil__g> @Derick my confusion is this: what good does it do to store a price in cents, and then have to do map/reduce which does division
[11:08:42] <neil__g> so the price is 2500 - great - but then I need a map reduce to get excl. tax, for instance
[11:09:42] <Derick> yeah...
[11:09:56] <Derick> just make sure that if you store calculated floats, you round them
[11:10:06] <neil__g> understood
[11:10:28] <kali> +1 for integer. at least it is predictable and testable
[11:10:33] <NodeX> [11:53:57] <NodeX> (assuming you round to 2dp)
[11:10:41] <NodeX> ^^
[11:11:08] <Derick> yup
[11:12:26] <NodeX> I wonder if the UK government are going to use MongoDB to store all this new "snooping" data they are about to amass from its citizens
[11:13:01] <NodeX> every phone call, email, website, search all stored for 1 year .. that's a lot of data
[11:15:13] <Derick> not their content though
[11:15:22] <Derick> (it's still ridiculous)
[11:15:40] <NodeX> terrible
[11:15:54] <NodeX> they're saying that the police wont even need a warrant to search it
[11:16:22] <NodeX> it's akin to the police just walking in your house day or night whenever they like
[11:19:14] <Derick> NodeX: yes :-/ The not requiring a warrant is what worries me most
[11:19:29] <NodeX> UK first then the rest of Europe :/
[11:19:29] <Derick> and it has 0 effect on what they're doing it for...
[11:19:39] <Derick> actually, lots of this data is part of an EU directive
[11:19:53] <NodeX> we already have the most CCTV per person in the world (certainly in europe if not)
[11:20:20] <NodeX> it's all part of the New World Order (if you believe in that sort of stuff)
[11:20:29] <NodeX> scary that we can't do anything about it
[11:20:34] <deoxxa> that's why i moved to the moon
[11:20:39] <ryan__> is there anything wrong with storing money values as integers? e.g. dollars become cents.
[11:21:00] <NodeX> ryan__ : store it how you like
[11:21:10] <Derick> ryan__: it's just processing that will be a pain
[11:21:34] <NodeX> your value will always need extra processing if you store it as ints
[11:21:57] <ryan__> I get that I may need to store the position of the decimal place and format the number when it's read out, but is there anything else I need to consider?
[11:22:13] <NodeX> 64bit numbers is all
[11:22:35] <ryan__> sorry, what does that mean?
[11:22:36] <NodeX> but if you're storing something that expensive in your database then I want to be your new friend!!
[11:22:49] <ryan__> ah.. really long numbers
[11:22:58] <NodeX> correct
[11:23:17] <ryan__> yea, don't think I'll be doing anything too expensive
[11:23:20] <deoxxa> but it's ok, you can store numbers up to nearly 1,000,000 in a 64 bit number
[11:23:52] <Derick> uh, to nearly a lot more
[11:24:07] <Derick> up to 9223372036854775807 in a signed 64-bit int
[11:24:18] <ryan__> was going to say... 1m doesn't seem like a lot
[11:24:27] <deoxxa> lol
[11:24:35] <wereHamster> if you don't want a million, I'll take it!
[11:24:39] <Derick> if you do multiplication, you will need to round down
[11:24:44] <Derick> and/or divide
[11:25:38] <Derick> 2495 * 1450 = 3617750, but if you want 24.95 * 14.50, then it's 361.77 only...
[11:26:08] <Derick> or 361.78... so you need to divide by 100 yourself again
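Derick's point, spelled out as a small shell sketch using the numbers from the example above:

    var a = 2495, b = 1450;                 // 24.95 and 14.50 stored as integer cents
    var product = a * b;                    // 3617750 -- "square cents", two scale factors applied
    var total = Math.round(product / 100);  // 36178 cents, i.e. 361.78, after dividing one factor back out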
[11:26:46] <W0rmDrink> Hi
[11:27:01] <W0rmDrink> If getShardVersion on each shard of a cluster shows different value is this problematic ?
[11:28:21] <ryan__> cheers @Derick, good point
[11:36:17] <ryan__> Why are the drawbacks on this page seen as 'severe' ? http://www.floating-point-gui.de/formats/integer/
[11:38:39] <adamcom> @W0rmDrink - what's the value? (minor doesn't really matter, since it just indicates splits)
[11:39:23] <W0rmDrink> adamcom, when queried via mongos it is: { "version" : { "t" : 1105000, "i" : 5 }, "ok" : 1 }
[11:39:57] <adamcom> right - so if "i" differs, that's fine
[11:40:20] <W0rmDrink> but when queried via shard members it is only that for like 2 shards - for other shard for example - its - { "configServer" : "hxcsvc-a01:27019,hxcsvc-a02:27019,hxcsvc-b01:27019", "global" : { "t" : 1097000, "i" : 0 }, "mine" : { "t" : 0, "i" : 0 }, "ok" : 1 }
[11:41:06] <adamcom> ah, OK, so it probably just hasn't done a migrate/split that caused it to update the local version
[11:41:14] <adamcom> once it does, it should come up to speed
[11:41:29] <adamcom> the shards themselves don't really need to be in sync (the mongos does)
[11:41:36] <W0rmDrink> no - but look at global version - global version does not match
[11:41:54] <adamcom> global version is the last version it saw when it pinged the config server
[11:42:13] <adamcom> trying to think of an easy way to refresh that…….hmmm
[11:43:09] <W0rmDrink> see - my concern is - I keep getting errors like this on and off: Assertion: 13388:[asyncad.pending] shard version not ok in Client::Context: client in sharded mode, but doesn't have version set for this collection: asyncad.pending myVersion: 1|0
[11:43:59] <W0rmDrink> and when that msg comes up client gets error: { ok: 0.0, errmsg: "Tried 5 times without success to get count for asyncad.pending from all shards" }
[11:44:25] <adamcom> have you tried flushing the config on the mongos?
[11:44:29] <W0rmDrink> using mongodb 1.8.5
[11:44:49] <adamcom> http://docs.mongodb.org/manual/reference/commands/#flushRouterConfig
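For reference, a sketch of the commands involved, run against the mongos (the namespace is the one from W0rmDrink's error):

    db.adminCommand({ flushRouterConfig: 1 })                // drop the mongos's cached routing table
    db.adminCommand({ getShardVersion: "asyncad.pending" })  // then compare versions again afterwards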
[11:44:52] <W0rmDrink> well I did - it seems to have gotten rid of the error msg - but ehrm - the version mismatch is still there
[11:45:39] <W0rmDrink> or the version mismatch in the output of getShardVersion
[11:45:41] <adamcom> right, but the only place the version matters is on the mongos - it only matters for the shard members when they are doing migrations and splits
[11:45:52] <W0rmDrink> ok
[11:46:06] <W0rmDrink> will monitor and see if it happens again
[11:46:26] <adamcom> however, there is a bug with count() for sharded DBs
[11:46:33] <adamcom> let me dig that up
[11:48:46] <adamcom> here's the relevant server issue: https://jira.mongodb.org/browse/SERVER-3645
[11:49:32] <W0rmDrink> hmm, well thats ok - dont really have that problem
[11:51:53] <adamcom> lot of sharding fixes in 2.0.x - might be worth having a look there too
[11:56:29] <W0rmDrink> or I mean if that is problematic, it has not bothered me
[12:06:09] <W0rmDrink> ;(
[12:07:29] <W0rmDrink> hmm
[12:07:39] <W0rmDrink> how safe is it to always run with balancer disabled ?
[12:07:48] <W0rmDrink> cos it seems my balancer has been disabled since 05-07
[12:07:57] <W0rmDrink> and I read that then the client wont ping the config server
[12:08:11] <W0rmDrink> and will this result in errors like this : shard version not ok in Client::Context: client in sharded mode, but doesn't have version set for this collection: ...
[12:08:12] <W0rmDrink> ?
[12:17:13] <adamcom> that can be caused by various things, are you running a map/reduce when you get it? - the only workaround, besides an upgrade, would be to run flushRouterConfig before each M/R kicks off
[12:18:43] <adamcom> some of the relevant pieces: https://jira.mongodb.org/browse/SERVER-4387 https://jira.mongodb.org/browse/SERVER-4262 https://jira.mongodb.org/browse/SERVER-4185
[12:19:19] <W0rmDrink> M/R ?
[12:19:28] <adamcom> map reduce
[12:19:38] <W0rmDrink> we dont use map reduce
[12:19:49] <adamcom> so when are you seeing the errors?
[12:19:56] <adamcom> some other long running query?
[12:20:04] <adamcom> a findAndModify perhaps
[12:20:11] <W0rmDrink> well - this time it was after an upgrade from 1.8.3 to 1.8.5
[12:20:30] <W0rmDrink> we dont use findAndModify via the sharded setup either - we do some finds which take long
[12:22:57] <W0rmDrink> once I got it when adding a new member to a replica set (the set was a shard in the cluster)
[12:23:27] <W0rmDrink> and once I got it seemingly randomly (i.e. nothing out of the ordinary happened on the system)
[12:40:38] <augustl> is a mongodb _id/ObjectID always a hex string by default?
[12:40:58] <Derick> yes
[12:40:59] <augustl> gonna index it in Lucene, seems like a binary field (after un-hexing) would be a sensible choice for the _id
[12:41:03] <augustl> Derick: cool, thanks
[12:41:39] <Derick> augustl: http://www.mongodb.org/display/DOCS/Object+IDs#ObjectIDs-TheBSONObjectIdDatatype
[12:42:18] <augustl> Derick: thanks!
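A quick shell sketch of what Derick is confirming (the id value shown is just an example):

    var id = ObjectId();    // e.g. ObjectId("4ff2d5240000000000000001")
    id.str                  // the 24-character hex string: 4-byte timestamp, 3-byte machine,
                            // 2-byte pid, 3-byte counter -- 12 raw bytes once un-hexed, as augustl plans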
[12:45:11] <devastor> Hi all, I got some "DR102 too much data written uncommitted" warnings and backtraces after initial sync in 2.0.4 when it was doing the replSet initialSyncOplogApplication stuff. It continued and completed ok after those, though. Is there a risk that some data didn't get written properly or anything like that?
[13:17:08] <JoeyJoeJo> I have an array of arrays that I want to store in mongo using pymongo. How can I do that?
[13:17:34] <Derick> just store it like a normal document... mongodb supports nested arrays
[13:18:05] <ron> it supports storing nested arrays. querying and modifying nested arrays is a bit limited though. unfortunately.
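A minimal shell sketch of what Derick and ron are describing; the collection and field names are placeholders, and the same document shape works from pymongo:

    db.grid.insert({ name: "matrix", rows: [[1, 2, 3], [4, 5, 6]] });
    db.grid.find({ "rows.0.1": 2 });   // dotted numeric paths can reach into nested arrays,
                                       // but operators like $elemMatch get awkward at this depth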
[13:18:53] <JoeyJoeJo> ok, thanks
[13:24:42] <NodeX> use php
[13:24:47] <NodeX> :P lololol
[14:29:37] <kristuttle> I am stumped by "call to member function insert() on a non-object" from my PHP code when I try and operate on my mongoDB. I check PHP info and the driver is there, the connection to mongolab seems to work. I am copying the code right from the tutorial as in $obj = array ("name" => "foo"); and then $collection->insert($obj);
[14:30:17] <NodeX> can you pastebin your code?
[14:30:27] <NodeX> you probably have not initialised the $collection
[14:32:42] <kristuttle> Actually I think I may have found the error.
[14:34:29] <kristuttle> Whew. Took me 30 minutes of looking everywhere and it turned out that I had a "dp" in one place instead of "db". Sorry to have taken up IRC oxygen on that one. Back to work...
[14:34:38] <NodeX> lolol
[15:19:36] <FerchoDB> can it be possible that map reduce is not using indexes on its "query" field?
[16:01:47] <Vile> guys, is it possible to run 2.0.x/1.8 in replica set?
[16:02:13] <Vile> having 1.8 as master, obviously
[16:08:02] <kali> Vile: iirc, it works in that direction and that direction only
[16:08:28] <kali> Vile: look for the 2.0 release and production notes
[16:28:24] <adamcom> Vile: also, don't run it that way for a long time - as part of an upgrade process, sure, but not for extended use
[16:48:23] <JoeyJoeJo> I have a field that contains floats. Can I easily find all documents where this field has less than 2 decimal places? ie return if field = 1.1 but not if field = 1.11
[16:48:46] <NodeX> no
[16:48:58] <NodeX> well you can regex it I suppose
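If it really has to be done server-side, a $where expression is about the only option, and it is approximate since floats don't store decimal places exactly (the collection and field names are placeholders):

    // matches values that survive rounding to 1 decimal place, e.g. 1.1 but not 1.11;
    // $where runs JavaScript per document and can't use an index
    db.prices.find({ $where: "this.field === Math.round(this.field * 10) / 10" })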
[17:10:54] <UnSleep> mmm is it necessary to use a reduce function when i only want to select a number of rows without grouping them?
[17:12:28] <UnSleep> i only want to select rows where this.a > this.b + this.c
[17:20:29] <UnSleep> if i "emit" only the rows i want, will they be shown in the result? is map run on each row in the collection?
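For a plain filter like that, map/reduce isn't needed at all; a $where query returns just the matching documents (a sketch, with the collection name as a placeholder):

    db.coll.find({ $where: "this.a > this.b + this.c" })
    // in map/reduce terms: map runs once per document, and only what you emit() shows up
    // in the output -- but a find() like the above is simpler for pure filtering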
[17:32:23] <mediocretes> I'm a little concerned about setting op_timeout in the mongo ruby driver. Is there a chance that I could get results from a timed out query? Is that something I need to protect against?
[17:32:25] <mediocretes> https://github.com/mongodb/mongo-ruby-driver/#socket-timeouts
[17:49:38] <dstorrs> the backups docs say that you can back up "a small cluster" via mongos. Can someone suggest good limits for "small" ?
[18:24:38] <niemeyer> dstorrs: I don't think MongoDB itself has a lot to do with those limits in that scenario
[18:25:01] <niemeyer> dstorrs: The limits will be imposed by your time/disk/network/memory constraints
[18:27:23] <dstorrs> niemeyer: fair enough. I'm just trying to get a sense of what to expect.
[18:28:24] <niemeyer> dstorrs: Yeah, sorry, I understand you're trying to get more details for your use case, and I'm not helping much, but the real answer is really "it depends", which sucks a bit as an answer.
[18:28:35] <dstorrs> heh
[18:28:51] <dstorrs> so...while it's in this mode, nothing can get written to the cluster?
[18:28:57] <niemeyer> dstorrs: The scenario is not so special
[18:29:02] <multiHYP> hi
[18:29:11] <dstorrs> so it's really (total DB size) / (throughput from mongos to storage device) ?
[18:29:17] <multiHYP> is it possible to change the name of a field in array elements?
[18:29:22] <multiHYP> elements are anonymous.
[18:30:09] <multiHYP> {things: [{a:"", b:1}, {a:"", b:"2"}, {a:"", b:"3"}]}
[18:30:16] <multiHYP> I want to change b's to c's
[18:30:45] <multiHYP> is that possible?
[18:30:54] <niemeyer> dstorrs: That will always be the case no matter what, right?
[18:31:24] <niemeyer> dstorrs: Except DB size is not *entirely* related to the data size, but.. yeah, it's what you think that means :)
[18:32:43] <nemosupremo> @multiHYP: why not just load the array, do the change in your client code, and push an update?
[18:32:49] <niemeyer> dstorrs: Are you doing that to move the data around or something similar?
[18:35:05] <mongouser> so I have some sharding questions
[18:37:57] <multiHYP> nemosupremo: these are already in the array.
[18:37:58] <dstorrs> niemeyer: No, just trying to get it from the cluster into the backup
[18:38:17] <niemeyer> dstorrs: Okay, any reason not to go the usual way with snapshots?
[18:38:26] <multiHYP> oh nemosupremo: not possible
[18:38:33] <niemeyer> dstorrs: It's a lot faster and lighter for the database
[18:38:43] <multiHYP> i cannot push an extra deployment just for renaming my db.
[18:39:08] <niemeyer> multiHYP: I don't recall any way to do that myself, but you can always:
[18:39:15] <niemeyer> 1) Iterate over documents, inserting new field name
[18:39:21] <niemeyer> 2) Re-deploy using new field name
[18:39:27] <niemeyer> 1) Iterate over documents, removing old field name
[18:39:38] <niemeyer> Erm, the last one is 3
[18:39:40] <niemeyer> :)
[18:39:43] <multiHYP> yes, was hoping it was possible via shell.
[18:40:42] <niemeyer> multiHYP: Even if there is magic within the shell, the weight of the operation is pretty much the same
[18:41:32] <dstorrs> multiHYP: db.coll.find().forEach(function(d){ d.things.forEach(function(t){ ... }); })
[18:41:37] <multiHYP> so could i load a js file with instructions to do this via mongo shell?
[18:41:46] <multiHYP> exactly
[18:41:51] <multiHYP> cheers dstorrs
[18:41:52] <multiHYP> :)
[18:41:56] <dstorrs> np.
[18:42:33] <niemeyer> multiHYP: The advantage would be in using something native to the database that changed documents atomically, but I don't think such an operator exists
[18:44:13] <dstorrs> well, there's update(), with $push and $pull to add / remove keys
[18:44:23] <dstorrs> but I don't think it will quite fit here
[18:45:11] <dstorrs> I'm not sure it would be valid to update the item you are currently iterating over.
[18:47:45] <multiHYP> no it'd be a 2 pass operation
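A sketch putting dstorrs' forEach idea together for the b-to-c rename, using snapshot() so rewritten documents aren't revisited mid-iteration (the collection name is a placeholder; the "things" field is from multiHYP's example):

    db.coll.find({ "things.b": { $exists: true } }).snapshot().forEach(function (doc) {
        doc.things.forEach(function (t) {
            if (t.b !== undefined) { t.c = t.b; delete t.b; }
        });
        db.coll.save(doc);   // rewrites the whole document; not atomic across documents
    });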
[18:59:32] <multiHYP> dstorrs: very powerful. and i thought the only automation possible was mapreduce :)
[19:00:00] <multiHYP> why can't i jump by words in mongo shell?
[19:00:11] <dstorrs> ?
[19:00:41] <dstorrs> "Pat, I'd like to buy a verb..."
[19:01:41] <multiHYP> like ctrl + arrow keys in terminal that allows you to jump by words instead of by characters to the left or right.
[19:05:46] <dstorrs> ah.
[19:05:49] <multiHYP> oh i need to $pull and $push at least once for every element of the array.
[19:06:06] <multiHYP> cannot just fiddle with fields without pulling them.
[19:07:26] <blazento> Hi, I'm trying to query for items with a key existing, the only problem is some of these keys appear to have spaces and / or dots... {'named_entities.flat.Charlie Sheen' : {$exists:true}}... is there any way to work with these spaces?
[19:08:02] <multiHYP> i wish we could do regex.
[19:08:17] <multiHYP> blazento: maybe that helps?
[19:08:44] <blazento> what helps multiHYP ?
[19:08:46] <multiHYP> regex in javascript
[19:08:51] <blazento> ah
[19:08:52] <multiHYP> never done it though
[19:09:19] <dstorrs> multiHYP: db.coll.find({ foo : { $regex : /bar/ } }) *cough*, *cough*
[19:09:32] <multiHYP> brilliant
[19:09:34] <multiHYP> :D
[19:09:45] <multiHYP> yes i've never done it in js, not to mention mongodb
[19:09:51] <multiHYP> regex is hard
[19:10:00] <dstorrs> would you take it horribly remiss if I pasted a link to the docs followed by those four little upper-case letters we all know and love so well?
[19:10:07] <dstorrs> let's go shopping!
[19:10:18] <multiHYP> no
[19:10:22] <multiHYP> i know i saw them too
[19:10:37] <multiHYP> but you always go after things you need most.
[19:10:45] <dstorrs> http://docs.mongodb.org/manual/ RTFM
[19:10:52] <multiHYP> regex hasn't been one of them.
[19:11:24] <multiHYP> it was not my question, remember?
[19:11:45] <dmansen> hello all
[19:12:39] <dmansen> i have a question about geo queries - i've noticed that my 2d indexes will only be used if my geo query isn't part of a compound expression (like inside an $and)
[19:12:55] <dmansen> is this a known issue?
[20:12:39] <freezey> hey what disk scheduler for mongo is recommended on ssd?
[20:12:44] <freezey> noop or deadline?
[20:20:47] <W0rmDrink> yeah this is just insane - I keep getting Wed Jul 4 01:04:08 [conn2450] Assertion: 13388:[asyncad.pending] shard version not ok in Client::Context: client in sharded mode, but doesn't have version set for this collection: asyncad.pending myVersion: 1|0
[20:21:02] <W0rmDrink> if I flush router config its ok for a short while - then it starts again
[20:27:05] <nemosupremo> Are there any steps I should take before I create a replica set out of an existing server?
[20:29:11] <freezey> make sure mongo is running?
[20:29:24] <freezey> might as well flush the data also if it has stale data
[20:29:33] <freezey> imo
[20:47:22] <tanner> can mongodb be reconfigured live, by that I mean, can I add a server to a replica set without bringing it down and or restarting the service?
[20:48:11] <freezey> yes
[20:48:18] <freezey> you can reconfig replica sets on the fly
[20:48:25] <freezey> however they will lose connectivity for a moment
[20:50:24] <tanner> is there a way I can do that programmatically (without having to use the mongo shell)?
[20:50:41] <freezey> not sure i always just use mongo shell
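For what it's worth, the shell helpers are just wrappers around the replSetReconfig admin command, which any driver can send to the primary; a sketch, with the host name as a placeholder:

    rs.add("newhost.example.com:27017")   // shell helper: fetches the config, appends the member, reconfigures
    // roughly the equivalent raw steps, usable from a driver:
    var cfg = rs.conf();
    cfg.version++;
    cfg.members.push({ _id: cfg.members.length, host: "newhost.example.com:27017" });  // assumes existing _ids are 0..n-1
    db.adminCommand({ replSetReconfig: cfg });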
[20:54:08] <W0rmDrink> ugh I will just make a cronjob to flush router config every minute
[21:13:18] <lsm-lpt> can i run a mongo statement on the command line without entering 'mongo shell'?
[21:13:32] <lsm-lpt> (on a linux box)
[21:14:20] <kali> mongo server/db --eval "blah blah"
[21:15:14] <lsm-lpt> thanks!
[21:20:07] <tanner> kali: can you perform a reconfigure that way as well?
[21:34:58] <kali> tanner: you can do just about everything this way, but some shortcuts are not there... http://www.mongodb.org/display/DOCS/Scripting+the+shell
[22:04:32] <lsm-lpt> can i pipe a query to the mongo command line client on stdin and get query results on standard out?
[22:05:52] <lsm-lpt> mongo server/db --eval '[query]' doesn't seem to support stdin
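Piping a script on stdin does work with the shell binary, as an alternative to --eval; a sketch, with the host, db, and collection names as placeholders:

    echo 'db.things.find({ status: "open" }).forEach(printjson)' | mongo localhost/mydb --quiet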