PMXBOT Log file Viewer


#mongodb logs for Thursday the 7th of June, 2012

[00:05:34] <henrykim> is there any way to recognize whether loading a document causes a page fault or not? I mean, whether the loading operation is served from memory or not.
[01:42:12] <multi_io> how does mongodb convert a shard key value to a shard number?
[02:06:09] <freezey> any way to have a replica set member ignore a node it can't talk to?
[02:07:58] <freezey> nvmd got it
[03:03:57] <freezey> anybody have a good mongodb benchmark tester?
[03:43:28] <deoxxa> freezey: benchmarks are stupid by design
[03:43:38] <freezey> ok fair enough
[03:43:41] <deoxxa> freezey: a benchmark will tell you about the performance of one very small part of a system
[03:43:55] <freezey> fair enough
[03:43:56] <deoxxa> you really need to test out your own data with it
[03:43:59] <freezey> might as well test from application level
[03:44:02] <deoxxa> yeah
[03:44:14] <deoxxa> there's a lot of things separate from performance that are important as well
[03:44:24] <deoxxa> things like ease of use, stability, etc
[07:24:10] <carsten> good morning swamp
[07:24:38] <dstorrs> morning
[07:24:54] <carsten> who brought this stupid bot in again?
[07:32:58] <dstorrs> which stupid bot is that?
[07:33:09] <dstorrs> oh, Bilge?
[07:33:17] <carsten> pmxbot
[07:33:22] <dstorrs> ah.
[07:33:26] <carsten> -pmxbot- PRIVACY INFORMATION: LOGGING IS ENABLED!!
[07:33:26] <carsten> -pmxbot- The following channels are being logged to provide a
[07:33:26] <carsten> -pmxbot- convenient, searchable archive of conversation histories:
[07:33:26] <carsten> -pmxbot- #dcpython, #mongodb
[07:33:53] <dstorrs> *shrug* no idea
[07:35:15] <[AD]Turbo> hola
[07:35:21] <dstorrs> hi
[07:40:05] <wereHamster> why do we need yet another logger? This channel is already being logged, see the topic
[07:43:18] <carsten> throw this bot out of the door
[07:43:37] <carsten> bot logging is rude without permission to opt out
[08:04:00] <kali> Derick: tychoish : any chance you can kick this bot out ?
[08:04:54] <NodeX> bot ?
[08:05:56] <kali> NodeX: pmxbot
[08:06:26] <NodeX> I didn't notice one!
[09:04:23] <negaduck> hi! how to delete an object attribute? In ruby i've got pr = $db['accounts'].find_one({:uid => uid}). I'd like to delete, for example, pr['email'].
[09:04:30] <wereHamster> $unset
[09:04:35] <wereHamster> google it
[09:04:59] <carsten> read your favorite tutorial
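The `$unset` operator wereHamster points to is what removes a field from matched documents. A minimal pure-Python sketch of the semantics (the helper name and sample values are made up for illustration; the shell equivalent is shown in a comment and would need a running mongod):

```python
# Shell equivalent for negaduck's case (collection/field names from his question):
#   db.accounts.update({uid: uid}, {$unset: {email: 1}})

def apply_unset(doc, unset_spec):
    """Mimic $unset: drop each named field, ignoring fields that are absent."""
    for field in unset_spec:
        doc.pop(field, None)
    return doc

pr = {"uid": 42, "email": "duck@example.com"}
apply_unset(pr, {"email": 1})
print(pr)  # {'uid': 42}
```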
[09:15:39] <SLNP> Are there any outstanding bugs to do with the PHP driver and connecting to replica sets?
[09:15:53] <SLNP> We have been seeing some strange behavior
[09:16:30] <carsten> i assume you don't want to share your errors?
[09:16:35] <ranman> Is it passing all tests if you build it from source?
[09:16:41] <SLNP> I will, one second
[09:17:07] <ranman> I think there are a couple replset tests in there
[09:18:00] <bjori> ranman: there shouldn't be any problem - there are a couple of tickets about reading from secondaries.. but we haven't been able to reproduce it
[09:18:07] <bjori> arg
[09:18:10] <bjori> SLNP: ^^ :)
[09:18:33] <SLNP> basically we have a test replica set running on 2 machines
[09:19:03] <SLNP> we are unable to connect to the set we get: PHP Fatal error: Uncaught exception 'MongoCursorException' with message 'couldn't determine master' in /root/mongotest.php:7
[09:19:30] <SLNP> we have tried 2 different php versions and 3 driver versions
[09:19:44] <bjori> SLNP: that happens when the driver cannot connect to your primary
[09:20:01] <SLNP> we can connect if we don't connect as a replica set
[09:20:08] <SLNP> if we connect to just a single host it's fine
[09:20:21] <SLNP> also when I use the python driver to connect to the rs it works fine
[09:21:47] <SLNP> test results: =====================================================================
[09:21:47] <SLNP> TEST RESULT SUMMARY
[09:21:48] <SLNP> ---------------------------------------------------------------------
[09:21:48] <SLNP> Exts skipped : 0
[09:21:48] <SLNP> Exts tested : 62
[09:21:48] <SLNP> ---------------------------------------------------------------------
[09:21:48] <SLNP> Number of tests : 56 56
[09:21:51] <ranman> SLNP: I have that same test failing, let me see if I can figure out what is going on
[09:21:54] <bjori> SLNP: can you paste your connection string?
[09:21:58] <SLNP> Tests skipped : 0 ( 0.0%) --------
[09:21:59] <SLNP> Tests warned : 0 ( 0.0%) ( 0.0%)
[09:21:59] <SLNP> Tests failed : 10 ( 17.9%) ( 17.9%)
[09:22:00] <SLNP> Expected fail : 0 ( 0.0%) ( 0.0%)
[09:22:01] <SLNP> Tests passed : 46 ( 82.1%) ( 82.1%)
[09:22:02] <SLNP> ---------------------------------------------------------------------
[09:22:03] <SLNP> Time taken : 176 seconds
[09:22:13] <SLNP> =====================================================================
[09:22:14] <SLNP> =====================================================================
[09:22:15] <SLNP> FAILED TEST SUMMARY
[09:22:15] <SLNP> ---------------------------------------------------------------------
[09:22:16] <SLNP> Test for bug PHP-307: getHosts() turns wrong results. [tests/bug00307.phpt]
[09:22:17] <SLNP> Test for bug PHP-320: GridFS transaction issues with storeBytes(). [tests/bug00320-2.phpt]
[09:22:19] <abrkn> slow clap
[09:22:19] <bjori> yes yes. we get it!
[09:22:27] <SLNP> Test for bug PHP-320: GridFS transaction issues with storeFile(). [tests/bug00320.phpt]
[09:22:28] <SLNP> Test for bug PHP-343: Segfault when adding a file to GridFS (storeFile) [tests/bug00343-2.phpt]
[09:22:29] <SLNP> Test for PHP-384: Segfaults with GridFS and long_as_object. [tests/bug00384.phpt]
[09:22:30] <SLNP> Connection strings: toString. [tests/connection-tostring.phpt]
[09:22:31] <bjori> SLNP: please use a pastebin
[09:22:40] <SLNP> Connection strings: unsuccesfull authentication [tests/connection-with-auth-failure.phpt]
[09:22:41] <SLNP> Connection strings: succesfull authentication [tests/connection-with-auth-successfull.phpt]
[09:22:42] <SLNP> Test for database timeout option. [tests/mongo-timeout.phpt]
[09:22:43] <SLNP> Server: list databases [tests/server-list-databases.phpt]
[09:22:43] <SLNP> =====================================================================
[09:22:55] <SLNP> $d = new Mongo("mongodb://10.1.1.1:27017,10.132.111.43:27017", array('replicaset' => 'test1'));
[09:22:56] <SLNP> apologies
[09:23:15] <SLNP> there is a spelling error there
[09:23:24] <SLNP> it's meant to be replicaSet
[09:23:57] <carsten> could you please stop flooding?
[09:24:04] <carsten> *ignore*
[09:26:42] <bjori> I assume one of those is your primary?
[09:27:14] <SLNP> Yes
[09:27:24] <SLNP> I changed the ips around
[09:30:08] <SLNP> We tested on centos6 and os x as well
[09:31:28] <bjori> SLNP: which pecl/mongo version are you using exactly?
[09:32:28] <SLNP> Built from source versions 1.2.10, 1.2.7, 1.3 dev
[09:35:26] <bjori> SLNP: I can't say I have any idea why that is happening for you
[09:35:57] <bjori> SLNP: I'm wondering though, you said you can connect if you don't connect as a replicaset? what do you mean by that?
[09:36:20] <SLNP> if I connect to a single host
[09:36:27] <SLNP> the driver is fine
[09:36:33] <bjori> the same host?
[09:36:37] <SLNP> yes
[09:37:00] <bjori> are you sure the servers are actually running as a replicaset? o.O
[09:37:04] <SLNP> but adding the replicaset option causes the problem
[09:37:07] <SLNP> yes they are
[09:37:14] <SLNP> rs.status() reports fine
[09:39:01] <SLNP> We ran mongosniff, and tested. When the replicaSet option is defined on the client we cannot see anything in the sniff output coming from the client
[09:40:48] <bjori> SLNP: please file a bug report, and include all the data you can (your script, mongod configs, rs.status()...)
[09:41:05] <SLNP> sure, thank you
[09:42:18] <SLNP> again apologies for the spam, I didn't realise my client was going to create a new message for every line. :(
[09:43:35] <ranman> well at least now you know :p
[09:43:46] <SLNP> I do now yes :)
[09:49:35] <akuwano> I feel really late to the party, but I also read tagomori-san's blog post, and I totally agree!!
[09:53:06] <kali> this is spam, right ?
[09:54:14] <carsten> or a numpty fallen with the head on the keyboard
[09:56:02] <ranman> it's about a blog
[09:56:12] <ranman> I can read japanese because I learned ruby
[10:00:31] <SLNP> Ok we are a little further with the issue
[10:07:56] <SLNP> The replica set is set up using host names, not IP addresses, so when we use the hostnames in the connection string it works. This is not the case with the Python driver, though: with the Python driver you can pass an IP address and it's fine.
[10:43:25] <bjori> SLNP: ahh! that makes sense. the php driver uses the information retrieved from the replicaset, and if you added the servers with hostnames your machine needs to be able to resolve them
[10:43:42] <SLNP> Yep :)
[10:44:04] <SLNP> Thank you for your time :)
[11:59:46] <abrkn> can't find the mongodb ami in the marketplace... is there a new one?
[12:00:44] <NodeX> ami ?
[12:00:58] <kali> amazon image, i guess
[12:00:59] <abrkn> ec2 image http://www.mongodb.org/display/DOCS/AWS+Marketplace#AWSMarketplace-MongoDBAMI
[12:01:19] <abrkn> mongolab crashed for the 745389748592349th time today so i'll just have to set something up myself
[12:48:32] <kanzie> how can I add a new field to each row in my collection
[12:48:58] <kanzie> like collection users, I want to add field active = 1 to all existing entries in the collection
[12:50:07] <kali> kanzie: db.users.update({}, { $set: { active:1 }}, false, true)
[12:50:10] <kanzie> thanks
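In kali's one-liner the third positional argument is upsert and the fourth is multi, so the command `$set`s `active` on every document matching the empty query. A pure-Python sketch of those semantics (function and sample data are invented for illustration; no server required):

```python
def update(collection, match, set_spec, upsert=False, multi=False):
    """Mimic db.users.update({}, {$set: {...}}, false, true):
    $set creates the field if missing, overwrites it otherwise."""
    matched = False
    for doc in collection:
        if all(doc.get(k) == v for k, v in match.items()):
            doc.update(set_spec)
            matched = True
            if not multi:            # default: only the first match is updated
                break
    if not matched and upsert:       # upsert inserts a new doc when nothing matched
        collection.append(dict(match, **set_spec))
    return collection

users = [{"name": "a"}, {"name": "b", "active": 0}]
update(users, {}, {"active": 1}, upsert=False, multi=True)
print(users)  # both documents now carry active == 1
```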
[12:53:37] <kanzie> hmm, if I have a collection in a collection then… didn't quite work. I have users and in users I have a collection called privs that has entries. I want to do something like db.users.privs.update({}, {$set:{'active':1}}, false, true)
[12:54:41] <algernon> kanzie: db.users.update({}, {$set: { "privs.active": 1}}, false, true)
[12:55:57] <kanzie> can't append to array using string field name [active]
[12:56:11] <kanzie> is that because the field doesn't exist? Shouldn't $set create it if it doesn't exist
[12:58:15] <algernon> are you sure it does not exist?
[12:59:26] <kanzie> yep
[13:01:11] <algernon> https://www.refheap.com/paste/3044 - works for me.
[13:02:16] <kanzie> it seems the problem is that I'm trying to create an element in a nested collection
[13:02:40] <kanzie> I'm not doing foo.active but foo.bar.active
[13:04:34] <algernon> that should work similarly.
[13:04:45] <kanzie> it doesn't
[13:04:47] <kanzie> :-(
[13:04:51] <kanzie> don't get it to work
[13:05:23] <algernon> https://www.refheap.com/paste/3045 - works for me.
[13:05:38] <algernon> there's probably something in your collection that borks it.
[13:06:23] <kanzie> yeah, I can't do db.users.pickups.find() either
[13:06:32] <kanzie> gives me just a new prompt
[13:06:52] <kanzie> but I can retrieve a user object and from there work with privs
[13:06:58] <kanzie> s/pickups/privs
[13:08:46] <algernon> do you really have a collection named "users.pickups" ?
[13:09:09] <algernon> or rather a users collection, where each document also has a pickups document?
[13:09:54] <kanzie> I have users collection
[13:10:00] <kanzie> where privs is embedded
[13:10:14] <algernon> and what's the command you tried, exactly?
[13:10:34] <algernon> and what does a single document in users look like?
[13:10:38] <kanzie> db.users.privs.update({},{"active":1},false, true)
[13:11:51] <kanzie> so, db.users.find(pickups: 'foo') works
[13:12:03] <kanzie> so how do I update an embedded collection?
[13:12:07] <NodeX> privs is an embedded array?
[13:12:14] <kanzie> yes
[13:12:23] <NodeX> can you pastebin a typical document
[13:12:25] <algernon> then, like I said above
[13:12:28] <algernon> 14:52 <algernon> kanzie: db.users.update({}, {$set: { "privs.active": 1}}, false, true)
[13:12:30] <NodeX> db.users.findOne();
[13:12:37] <NodeX> ^^ should work no problems
[13:13:17] <algernon> db.users.privs does not exist, privs is an embedded property of documents inside users. so you need to db.users.update().
[13:13:17] <kanzie> NodeX algernon: that gives me can't append to array using string field name [active]
[13:14:08] <algernon> aha!
[13:14:19] <algernon> ok.
[13:14:39] <algernon> so your documents look like {"privs": ["blah", "blah", "blah"]}
[13:14:43] <kanzie> db.users.privs exists and have entries
[13:15:05] <algernon> back to work I guess.
[13:15:26] <NodeX> if it's an array then you need to do other things
[13:15:44] <kanzie> it is, embedded array
[13:15:45] <NodeX> in your case you would have to "privs.0" : 1
[13:15:52] <NodeX> for the first blah and so on
[13:16:10] <NodeX> that is if your document looks like algernon suggests
[13:16:24] <NodeX> else pastebin a document (minus any sensitive information)
[13:19:06] <kanzie> NodeX: http://pastebin.com/S4Wiu9Vi
[13:21:20] <NodeX> and which part do you want to update
[13:21:25] <NodeX> active correct?
[13:22:02] <kanzie> yep
[13:22:24] <kanzie> just want to not have to delete and build the collection from scratch or manually have to update each document
[13:22:28] <NodeX> is privs likely to have more than 1 element
[13:22:42] <NodeX> otherwise it looks like you have made a slight mistake in schema design
[13:23:25] <kanzie> NodeX: yes, privs can have several items, each has a unique date and active-flag
[13:23:31] <NodeX> right ok
[13:23:37] <NodeX> and it's just the first one or all of them ?
[13:23:40] <kanzie> I just stripped them for sake of clarity
[13:23:47] <kanzie> I want to update all of them
[13:23:58] <NodeX> is there a set number or does it vary
[13:24:03] <kanzie> vary
[13:24:18] <NodeX> do they have anything common ?
[13:24:35] <kanzie> they have the same fields.
[13:24:37] <NodeX> that can be $exists
[13:24:47] <NodeX> I think you will have to do X updates tbh
[13:24:58] <NodeX> X being the largest number of members
[13:25:03] <kanzie> sure
[13:25:18] <NodeX> it will look like this .... "privs.0.active":1
[13:25:29] <kanzie> I mean if it gets too tricky I'd rather just write a command to clear privs on each item in users and I can populate it again
[13:25:36] <NodeX> 0 being the first member ... then "privs.1.active":1
[13:26:07] <NodeX> you can probably hack something using $ and dot notation but it's prolly quicker to do it this way
[13:26:26] <kanzie> so db.users.update({}, {privs.0.active: 1}, true, false)
[13:26:58] <NodeX> yes but quote the dots
[13:27:02] <kanzie> yeah
[13:27:05] <kanzie> Ill test it
[13:27:09] <NodeX> back up your collection first
[13:27:12] <NodeX> fgs !
[13:27:51] <NodeX> in fact wait 1 sec...
[13:27:55] <kanzie> Thu Jun 7 15:24:26 uncaught exception: can't have . in field names [privs.0.active]
[13:28:09] <NodeX> db.users.update({}, {$set : { "privs.0.active": 1}}, true, false)
[13:28:18] <NodeX> that's why I said quote the dots!!
[13:28:33] <kanzie> I thought I did
[13:28:37] <kanzie> this is what I ran
[13:28:38] <kanzie> db.users.update({}, {"pickups.0.active": 1},true,false)
[13:28:43] <kanzie> forgot the set
[13:29:08] <kanzie> and pickups == privs
[13:29:50] <kanzie> that worked… it turned 1 into 1.0000, should I use true/false instead
[13:31:08] <NodeX> I think it stores true as true but I can't remember
[13:31:11] <NodeX> try it and see
[13:31:23] <kanzie> saves it as YES
[13:31:31] <NodeX> are you using the shell ?
[13:31:38] <NodeX> or some driver / admin package
[13:31:50] <kanzie> shell for updating and checking in mongohub
[13:32:02] <NodeX> redo it with a 1
[13:32:11] <NodeX> it's fine ... mongohub will be reporting it wrong
[13:32:25] <kanzie> but boolean is fine so Im good with true/false or YES/NO
[13:32:36] <NodeX> it's w/e you want
[13:32:45] <NodeX> true/false is larger than 1/0
[13:33:07] <NodeX> I am not sure if they are indexed that way or not
[13:35:11] <kanzie> how do I best clear privs completely then?
[13:35:20] <kanzie> db.users.privs.removeAll()
[13:35:29] <kanzie> is not 100% correct
[13:38:32] <NodeX> just the privs array?
[13:38:38] <NodeX> you can set it to []
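NodeX's `"privs.0.active"` form works because the shell resolves dotted paths in `$set`, treating numeric segments as array indexes, and setting the whole field to `[]` clears the array. A pure-Python sketch of that path resolution (helper name and sample document are made up for illustration):

```python
def set_dotted(doc, path, value):
    """Mimic {$set: {"a.0.b": v}}: walk a dotted path, treating numeric
    segments as array indexes, and assign the final field."""
    *parents, last = path.split(".")
    target = doc
    for seg in parents:
        target = target[int(seg)] if seg.isdigit() else target[seg]
    key = int(last) if last.isdigit() else last
    target[key] = value
    return doc

user = {"name": "kanzie", "privs": [{"date": "2012-06-07", "active": 0}]}
set_dotted(user, "privs.0.active", 1)   # like {$set: {"privs.0.active": 1}}
set_dotted(user, "privs", [])           # like {$set: {"privs": []}} -- clears the array
print(user)  # {'name': 'kanzie', 'privs': []}
```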
[13:40:02] <NodeX> anyone know if 2 $inc's can be called on the same upsert ... (different fields of course)
[13:42:19] <NodeX> it's ok... it was the PHP driver spazzing out
[13:42:49] <NodeX> apparently calling $inc : { foo : 1}, $inc : {bar:1} is not allowed!
[13:46:03] <abrkn> duplicate key
[13:46:11] <abrkn> $inc: { foo: 1, bar: 1 }
[13:47:13] <NodeX> I know ;)
[13:47:30] <NodeX> hence the "it's ok..." part
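abrkn's "duplicate key" remark is the whole story: an update document is a JSON-like object, so it can carry only one `$inc` key, and writing two of them silently collapses to the last one in most host languages (Python shown below; a PHP array literal behaves the same way). A sketch of the broken and correct forms, with an illustrative pure-Python `apply_inc` standing in for the server:

```python
# A duplicate "$inc" key in a dict literal silently keeps only the last one,
# so the increment of "foo" is lost before the driver ever sees it:
broken = {"$inc": {"foo": 1}, "$inc": {"bar": 1}}
# broken == {"$inc": {"bar": 1}}

# The correct form puts both fields under a single $inc, as abrkn showed:
correct = {"$inc": {"foo": 1, "bar": 1}}

def apply_inc(doc, update):
    """Apply the $inc portion of an update document, treating a missing
    field as 0 first (matching upsert-style $inc behaviour)."""
    for field, amount in update.get("$inc", {}).items():
        doc[field] = doc.get(field, 0) + amount
    return doc

counters = apply_inc({"foo": 10}, correct)
# counters == {"foo": 11, "bar": 1}
```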
[13:49:23] <edvorkin> Hi. I am new to MongoDB, just installed one on EC2 from the Amazon Marketplace. what is the correct way to secure it? I see only port 22 open
[13:51:06] <ifesdjeen> edvorkin: limit interfaces you listen to, add authentication, disallow foreign connections
[14:38:02] <edvorkin> security question - in sharded configuration, should I add authentication only to mongos or to all members of the cluster?
[14:44:35] <NodeX> looks like last.fm has been hacked alongside linked in
[15:02:53] <SLNP> is there a news article about it?
[15:42:51] <railsraider> i see a lot of "killcursors: found 1 of 1" and mongo losing connections, any idea why?
[15:43:50] <NodeX> SLNP : no, I just got an email
[15:44:04] <SLNP> ah ok
[15:44:08] <SLNP> no good
[15:44:18] <NodeX> the email was cc'd to linked in
[15:44:52] <SLNP> Was the email from last.fm?
[15:45:06] <NodeX> nope
[15:46:45] <SLNP> ok
[15:52:53] <m4nti5> Good morning
[15:55:07] <m4nti5> I've been playing around with mongodb for a few weeks now using the mongo-C driver, BUT I was using an old version (v0.4). I downloaded the 0.6 version from git and saw that there is a new parameter in several functions. I've been trying to find out how it works, but I don't think I get it yet; can you recommend some documentation that describes the behavior of that param?
[15:58:52] <SLNP> What is the param called?
[15:59:09] <GRiD> m4nti5, possibly http://api.mongodb.org/c/current/write_concern.html
[16:22:19] <m4nti5> SLNP: write_concern
[16:22:39] <m4nti5> GRiD: thanks, that's exactly what I was looking for :)
[16:22:58] <GRiD> m4nti5, np
[17:55:23] <carsten> fuck - this smelly pmxbot is again here
[17:55:46] <carsten> -pmxbot- PRIVACY INFORMATION: LOGGING IS ENABLED!!
[17:55:46] <carsten> -pmxbot- The following channels are being logged to provide a
[17:55:46] <carsten> -pmxbot- convenient, searchable archive of conversation histories:
[17:55:46] <carsten> -pmxbot- #dcpython, #mongodb
[17:55:55] <carsten> where is the option to opt out logging?
[17:57:02] <crudson> It's more annoying to repost that channel's contents here; I already have all that in another tab.
[17:57:40] <carsten> this is a privacy issue
[17:57:51] <carsten> logging all traffic without explicit permission or an option to opt out is rude
[18:25:50] <mbarr> Hello all.. trying to determine the impact of a large /dev/shm on Mongo's use of memory - RHEL 6.2 x86_64 linux...
[18:26:50] <mbarr> I don't believe it's going to use the memory, but i'm just checking to make sure it's not able to shrink it automatically, or use it in another fashion.
[18:29:03] <tomlikestorock> how do I query on a key's value that is in a list of embedded documents?
[18:50:54] <fkefer> Hi all; I tried installing mongod-gen10 server in fedora17, but selinux insists on not allowing it
[19:02:33] <m4nti5> tomlikestorock: please be more specific about what you want; perhaps use pastebin to paste an (indented) example of the document
[19:14:46] <m4nti5> is it possible to set a mongo_write_concern in a mongo_run_command function in the mongo C driver?
[19:20:50] <ranman> m4nti5: http://api.mongodb.org/c/current/write_concern.html?highlight=write%20concern
[19:24:53] <m4nti5> ranman: thanks for the link, I've read it already, and it didn't say anything about a way to use it with the mongo_run_command function. I also read http://www.mongodb.org/display/DOCS/getLastError+Command; my question is actually: can I do a getLastError along with the mongo_run_command
[19:25:25] <ranman> m4nti5: on that one I'm not sure sorry :(
[19:25:27] <m4nti5> I can do 2 run_command, but I would like to save the second call
[19:26:14] <m4nti5> the thing is, I'm using run_command to do an autoincrement with findAndModify
[19:26:35] <m4nti5> and it's really important that I find out if the operation succeeded
[19:28:29] <pneff> Hey I need some help with a query. I'm running an $or query that matches two different fields and is sorted by a date field. When I run each part of the $or individually, or without the sort, everything runs fine, but when I try to run the entire thing as one query it stalls out. I'm assuming this has something to do with the indexing. Each field that is searched or sorted has an index, but not a composite index. Here's my query: db.cdr_records.find({$or:[{called_num:"7033483543"}, {trans1_num:"7033483543"}]}).sort({start_time:-1}) Any reason why this would just stall out?
[19:29:16] <carsten> m4: why don't you just call getLastError() after each command?
[19:38:30] <carsten> then not
[19:53:50] <harrymoore> pneff: not sure this helps but mongo will use at most 1 index for a query. so you may need a composite
[19:54:49] <mattbillenstein> hi all
[19:55:01] <mattbillenstein> anyone know of a good way to dump/restore a database over a pipe — like over ssh?
[19:55:13] <mattbillenstein> you can do this with a collection using mongodump/mongorestore
[19:55:27] <harrymoore> pneff: use .explain() on your query to see what index it is using
[19:55:41] <pneff> but I'm only selecting on one since it's an $or query. http://www.mongodb.org/display/DOCS/Indexing+Advice+and+FAQ
[19:56:01] <pneff> harrymoore: tried that but it just sits there and never outputs anything.
[19:59:14] <rudolfrck> anyone here using mongo_mapper with rails 3.2? I'm looking for some help on a small issue...
[20:04:35] <harrymoore> pneff: did you try without the .sort()
[20:05:23] <pneff> harrymoore: yeah works with out the sort
[20:06:32] <pneff> harrymoore: but the query works just fine if its only a single field with the sort which is kind of annoying
[20:09:16] <harrymoore> pneff: try a composite index which also includes the field you are sorting on. try .hint() to force the query to use that index
[20:11:24] <pneff> harrymoore: k ill see what that does.
[20:25:38] <mateodelnorte> Anyone have pointers for debugging why a node app would "crash" sans error message or stack trace while interacting w/ mongod? The only clue I have is seeing "SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:51434]" in my mongod logs. Using node-mongodb-native.
[20:44:35] <m4nti5> carsten: I didn't want to use it, but that's what I did in the end, thanks
[20:45:10] <m4nti5> is there a way to change a bson_oid_t after it's finished?
[20:45:24] <m4nti5> I'm handling a case of duplicate entries
[20:49:12] <dstorrs> m4nti5: what?
[20:49:20] <dstorrs> didn't follow that.
[20:49:35] <m4nti5> dstorrs: sorry, I'll try to explain myself better
[20:49:37] <dstorrs> you have two records with the same ObjectID value and you want to change one of them?
[20:50:56] <dstorrs> does anyone have experience using Mongo as a backing store / intermediation point for a distributed messaging service (a la RabbitMQ, Celery, etc) ?
[20:51:36] <m4nti5> no, I'm using the mongo C driver, and I create a bson on the client side with an ID (a regular bson_oid_t). the thing is, there is a chance that the ID I'm generating is already saved on the server (because all IDs I'm generating are client-side IDs)
[20:52:54] <dstorrs> I haven't dealt with the C driver at all, sorry
[20:53:12] <m4nti5> so that will generate a duplicate entry error. the thing is, the bson might be too big, so copying it or regenerating it can be very heavy. so, I was wondering if somebody knows a way to re-generate the bson_oid without creating the bson again
[20:53:20] <m4nti5> dstorrs:np
[20:54:09] <jmorris> would anyone know why the output of Model.find(...) in mongoose would not agree with what i see from the command line mongo : db.model.find() ?
[21:09:16] <wb_> when storing images on s3 or gs and managing meta-data in mongo, do people typically create a separate collection of image meta-data? or embed the meta-data inside the documents they are associated with
[21:10:14] <doug> is there a synchronous find() call for node.js?
[21:10:15] <wb_> it seems like it would be easier to just have a separate collection for image meta data, rather than constantly reaching inside documents.
[21:11:20] <dstorrs> wb_: if the metadata is finite and not constantly growing, I would probably embed it. but either way works
[21:12:07] <dstorrs> in general NoRel DBs (a term I prefer over 'NoSQL' -- more accurate) reward you for having a rich data model and putting it all in one document
[21:13:49] <wb_> thanks dstorrs. I think I will try just sticking it in one document, and hope for the best
[21:14:31] <doug> hm, looks like "no"
[21:34:41] <themoebius> I'm getting these messages on my mongos server. ChunkManager: time to load chunks for pb3.hourly_stats: 2990ms sequenceNumber: 1425 version: 38915|169
[21:34:44] <themoebius> what does it mean?
[21:35:16] <themoebius> I've manually moved a few chunks under high load i wonder if it has anything to do with it
[22:55:35] <richardraseley> Question: Can anyone point me towards resources that talk about the multi-datacenter capabilities of MongoDB?
[22:56:40] <richardraseley> I am primarily interested in finding out about how we can read from and write to the same data from multiple locations - depending on optimistic replication to reconcile the differences over time.
[22:58:36] <richardraseley> Also, does MongoDB provide the capability to scale the active capacity of a particular shard? It seems (based on my limited understanding) that only one server in a replica set will ever be able to handle both reads and writes, and therefore you are limited in shard performance for writes to that of a single server. Is the solution to this simply creating more shards across which you will spread the entire data set?
[23:04:19] <m4nti5> It looks like there is a leak in the mongo C driver
[23:05:06] <m4nti5> when you're using the write_concern attr, it internally calls to mongo_check_last_error, and there the response bson is not destroyed
[23:05:49] <dstorrs> richardraseley: http://docs.mongodb.org/manual/administration/replication-architectures/ Look for "Geographically Distributed Sets"
[23:05:55] <m4nti5> where can I report that?
[23:06:07] <m4nti5> I have a valgrind trace too
[23:06:13] <dstorrs> m4nti5: use the Jira link in the topic
[23:10:07] <richardraseley> dstorrs: thank you
[23:10:14] <dstorrs> np
[23:11:17] <m4nti5> dstorrs: hummm I'm kind of lost here, it looks like that link is for server reports
[23:11:19] <richardraseley> dstorrs: In that scenario, a second site can always only provide reads / DR.
[23:11:35] <m4nti5> dstorrs: what I found was a C client driver issue
[23:11:44] <richardraseley> dstorrs: There can never be two active (providing writes and reads) sites such as you could have with Cassandra, correct?
[23:12:30] <m4nti5> dstorrs: oh never mind, I found it
[23:13:06] <dstorrs> richardraseley: I don't see why not
[23:13:15] <dstorrs> network latency would suck, but you could do it.
[23:25:12] <richardraseley> dstorrs: How so? The documentation seems clear that you can only have one master in a replica set... or am I not understanding that?
[23:26:04] <richardraseley> So, even if I have 3 shards, two of which have masters in site A and one in site B, I can still only fully access data (on whatever shard it lived) from one site at a time.
[23:27:44] <dstorrs> you said "there can never be two active" -- I understood that to mean "servers". which is perfectly fine if they are masters of separate shards.
[23:28:49] <dstorrs> as to doing multi-master replication throughout a replica set -- I believe Mongo may have supported something like that at one point, but they migrated away.
[23:28:59] <richardraseley> Sorry, that was not my intent. To put it differently, one can never have read / write access to the same chunk from two different physical sites and expect those operations to be eventually consistent.
[23:29:07] <dstorrs> maybe it was just regular master / slave I didn't care enough to research it since it wasn't relevant.
[23:29:18] <richardraseley> I see
[23:29:48] <dstorrs> you should talk to someone with more experience than me.
[23:30:16] <richardraseley> Heh, that is why I am in this channel. =] I appreciate your help though.
[23:30:23] <dstorrs> I'm able to answer most of the questions in here because I've actually RTFM'd a lot, but I haven't had the chance to do replication yet, so it's just book learning
[23:36:45] <multi_io> how does mongodb convert a shard key value to a shard number?