#mongodb logs for Tuesday the 19th of June, 2012

[02:34:45] <personal> Hey guys, I was wondering if anyone here could potentially help me find an alternative for a SQL query -- I have no clue how to do this on Mongo.
[03:04:31] <mattbillenstein1> personal: I'll bite, what is the query?
[04:01:09] <richardraseley> Just to clarify my understanding on Mongo - a collection contains one or more documents and those documents contain key, value pairs called "field" and "value" respectively?
[04:01:34] <richardraseley> There isn't a key value pair within a field - that wouldn't make sense.
[04:04:08] <mrpro> ?
[04:04:12] <mrpro> element has key/value
[04:04:22] <mrpro> value can be another document
[04:06:45] <richardraseley> mrpro: I am trying to visualize what you are saying. If I were storing a list of employees - name : john smith, address : 123 any street, etc.
[04:07:14] <richardraseley> If the key is "name" and the value is "john smith" - where does the "element" come into play?
[04:07:46] <richardraseley> mrpro: Or are you just talking more generally about an element of an array?
[04:07:50] <mrpro> element is key : value
[04:08:00] <mrpro> document has elements
[04:08:36] <richardraseley> OK, so do the elements themselves have some sort of identifier?
[04:08:40] <richardraseley> A UUID or something?
[04:08:51] <mrpro> element name is "name"
[04:08:53] <mrpro> heh
[04:09:11] <richardraseley> mrpro: Hmph
[04:09:24] <richardraseley> So, the element name would contain name: john smith
[04:09:37] <mrpro> element is a key/value pair
[04:09:46] <mrpro> key is name
[04:09:48] <mrpro> the name of the field
[04:10:17] <richardraseley> So, the element is just the name for the key : value group - the element isn't something that has a structure of its own.
[04:10:35] <mrpro> not sure what you mean
[04:10:44] <mrpro> look at some json/bson samples
[04:10:46] <mrpro> you'll understand
[04:11:05] <richardraseley> I mean - a document contains key1:value1, key2:value2, etc.
[04:11:14] <mrpro> yea
[04:11:16] <richardraseley> And those are referred to as elements.
[04:11:19] <mrpro> yea
[04:11:57] <richardraseley> vs. an "element" being an actual identifier distinct from the key:value pair it contains.
[04:12:00] <richardraseley> OK
[04:12:14] <mrpro> just
[04:12:19] <mrpro> a doc is made up of key/value pairs
[04:12:24] <mrpro> and values can be other documents or arrays
[04:12:29] <stefancrs> morning
[04:12:32] <richardraseley> Sure
[04:12:34] <richardraseley> Hello
[04:12:40] <mrpro> arrays can contain other values
[04:12:46] <richardraseley> OK, that makes sense.
[04:12:58] <mrpro> which can be docs
[04:13:04] <richardraseley> I think I was thinking of it in terms of (for example) a row containing a single column in cassandra
[04:13:15] <stefancrs> how do I change a specific field in all documents in a collection? I need to change all "punchIn" : 2 to 0 :)
[04:13:15] <richardraseley> There is the row key (which has a value) and then the column (which is a key value pair).
[04:13:41] <richardraseley> mrpro: So, I get what you are saying now - thanks for clarifying.
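
For anyone following along, a minimal sketch of the structure being described, in mongo shell syntax (the collection name "employees" and all field values here are made up for illustration):

    db.employees.insert({
        name: "john smith",              // element: key "name", value "john smith"
        address: {                       // a value can itself be another document
            street: "123 any street",
            city: "anytown"
        },
        skills: ["sql", "mongodb"]       // a value can also be an array
    })
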
[04:15:45] <advorak> stefancrs, http://www.mongodb.org/display/DOCS/Updating ; db.collection_name.update( criteria, objNew, upsert, multi )
[04:15:56] <stefancrs> advorak: so an upsert then?
[04:16:21] <advorak> stefancrs, no, multi
[04:16:37] <stefancrs> advorak: won't that overwrite the entire documents?
[04:17:35] <advorak> stefancrs, upsert will create a document if nothing matches the criteria: """Upsert means "update the document if present; insert (a single document) if missing"."""
[04:17:50] <stefancrs> ah, true
[04:17:54] <advorak> update will update all documents matching your criteria
[04:17:55] <stefancrs> I'm tired, I'm sorry :)
[04:18:00] <advorak> it's okay. :-)
[04:18:08] <stefancrs> it's the "objNew" that confuses me
[04:18:55] <advorak> stefancrs, actually sorry .. you want to use one of the modifier operations ..
[04:19:04] <advorak> $something: ...
[04:19:07] <stefancrs> advorak: the $set, right?
[04:19:12] <advorak> yes.
[04:20:00] <stefancrs> so, eh
[04:20:39] <stefancrs> in my specific case: db.videos.update(true, {$set : {"punchIn" : 0}}, false, true); ?
[04:20:50] <stefancrs> that should set punchIn to 0 on all video documents?
[04:21:02] <advorak> db.vehicles.update( {category: "Cars"} , { $set: { discount: 0.25 } } , false, true)
[04:21:23] <advorak> will update the vehicles collection matching the category Cars and set the discount field in the result to be 0.25 (multi is true)
[04:21:34] <advorak> http://www.mongodb.org/display/DOCS/Atomic+Operations#AtomicOperations-FindandModify%28orRemove%29
[04:22:06] <stefancrs> advorak: and if one wants to match all documents, does having "true" as the criteria work?
[04:22:24] <advorak> replace "Cars" with true
[04:22:40] <stefancrs> category : true? :) seems wrong...
[04:22:43] <advorak> I just saw your query ..
[04:23:18] <stefancrs> and?
[04:23:26] <advorak> stefancrs, do you want it to update merely if the field EXISTS in a document? or do you want to update it if the value of x is true?
[04:24:02] <stefancrs> advorak: the field exists in all documents in the collection. I want to set its value to 0 on all documents.
[04:24:16] <stefancrs> advorak: so "no criteria" so to speak
[04:24:22] <advorak> db.videos.update( { }, {$set : {punchIn: 0}}, false, true)
[04:24:29] <stefancrs> aha
[04:24:39] <stefancrs> just as with find... gotcha.
[04:24:41] <advorak> yes.
[04:24:52] <advorak> Whatever find takes, update takes the same thing as its first 'criteria' argument.
[04:25:13] <stefancrs> got it!
[04:28:36] <stefancrs> works a treat, thanks
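
A recap of the pattern worked out above, as a sketch (collection and field names taken from the conversation): an empty criteria document matches everything, just like with find, $set changes only the named field, and the fourth argument enables multi-update.

    // set punchIn to 0 on every document in the videos collection
    db.videos.update(
        { },                        // criteria: empty document matches all
        { $set: { punchIn: 0 } },   // only modify the punchIn field
        false,                      // upsert: do not insert if nothing matches
        true                        // multi: apply to every matching document
    )
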
[05:29:24] <omid8bimo> im trying to run repair on my database but i get this error: "errmsg" : "clone failed for kookoja with error: query failed kookoja.system.namespaces"
[05:29:40] <omid8bimo> any ideas why? (this server is secondary in a replicaSet)
[06:39:20] <omid8bimo> im trying to run repair on my database but i get this error: "errmsg" : "clone failed for kookoja with error: query failed kookoja.system.namespaces"
[06:39:24] <omid8bimo> any ideas why? (this server is secondary in a replicaSet)
[06:39:29] <Guest42665> we can read this
[06:39:50] <Guest42665> why don't you use the mailinglist instead of waiting here for hours for a response?
[06:40:59] <omid8bimo> i prefer live conversation. but probably knows the issue and can respond here faster
[06:47:32] <Guest42665> yes, you can wait for hours and days or get a more fast response on the list
[06:47:42] <Guest42665> it's night in US, it's early morning in europe
[06:49:16] <justdit> hi, I've a problem with mongo everytime I restart my os. I get Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
[06:49:17] <justdit> exception: connect failed
[06:49:21] <justdit> when I launch mongo
[06:49:36] <justdit> I know how to repair it, but is there a way to avoid that?
[06:51:57] <Guest42665> justdit: because your mongod is obviously not started during the boot phase...
[07:02:16] <justdit> Guest42665: starting manually doesn't help, I call --repair each time after rebooting
[07:04:31] <Guest42665> why do you call --repair? don't you use journaling?
[07:27:58] <[AD]Turbo> yo
[08:21:08] <skot> Guest42665: you should not call --repair without a good reason. It will be time consuming to copy all the database files and rebuild the indexes.
[09:29:07] <gigo1980> hi all, i did a removeShard… now i have "draining": true in my status
[09:29:22] <gigo1980> how long does this take… the shard was only 400 MB
[09:30:00] <gigo1980> and is there an option to move non-sharded collections/databases?
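
For reference, a rough sketch of how draining progress is usually checked and how a database whose unsharded collections live on the draining shard can be moved; run against a mongos, with "shard0000", "shard0001" and "mydb" as hypothetical names:

    use admin
    // re-issue removeShard to see the current draining state and remaining chunks
    db.runCommand({ removeShard: "shard0000" })
    // move the primary shard of a database holding unsharded collections
    db.runCommand({ movePrimary: "mydb", to: "shard0001" })
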
[09:59:48] <guest_34672> Hello. I have a question regarding the java driver. I have a class "MyClass" that extends BasicDBObject. If I call "(MyClass) collection.find().limit(1).next()" everything works fine. It is correctly cast to MyClass. Now I want to use findAndRemove. When I call "(MyClass)collection.findAndRemove(new BasicDBObject())" i get the following Exception: java.lang.ClassCastException: com.mongodb.BasicDBObject cannot be cast to org.example
[10:01:32] <skot> yeah, that doesn't work with commands
[10:14:37] <gigo1980> is it possible to set a disk limit for a database? inside mongodb
[10:15:14] <skot> not really, there is a thing called quotaFiles which limits the number of files created for any database (each database)
[10:15:58] <skot> http://stackoverflow.com/questions/9779923/set-mongodb-database-quota-size
[10:17:15] <gigo1980> at the moment my shard and replica set members use the naming convention "mongo01", "mongo02"… now i want to set the fully qualified domain name for the nodes, how can i do this in production ?
[10:18:40] <gigo1980> can i configure "smallfiles" in my mongo configuration ?
[10:30:10] <Guest42665> smallfiles = ....
[10:37:28] <skot> yes, see the docs on the config file settings
[10:37:35] <skot> it is basically the ini file format
[10:39:17] <skot> http://www.mongodb.org/display/DOCS/File+Based+Configuration
[10:39:53] <skot> http://www.mongodb.org/display/DOCS/Command+Line+Parameters
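
A hedged sketch of what those settings can look like in the ini-style config file linked above (paths and values here are illustrative, not taken from this conversation):

    # /etc/mongod.conf -- ini-style, same names as the command line flags
    dbpath = /var/lib/mongo
    logpath = /var/log/mongo/mongod.log
    journal = true        # avoids needing --repair after an unclean shutdown
    smallfiles = true     # smaller, more numerous data files
    quota = true          # enable per-database quota enforcement
    quotaFiles = 8        # max number of data files per database when quota is on
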
[11:32:29] <omid8bimo> im trying to run repair on my database but i get this error: "errmsg" : "clone failed for kookoja with error: query failed kookoja.system.namespaces"
[11:32:33] <omid8bimo> any ideas why? (this server is secondary in a replicaSet)
[11:41:06] <multiHYP> hi
[11:45:51] <gigo1980> omid8bimo: do you have an arbitter ?
[11:49:57] <omid8bimo> gigo1980: yes
[11:50:30] <gigo1980> on the slave, did you clean the data on your hdd ?
[11:50:58] <omid8bimo> gigo1980: no. should i? i must run the same command?
[11:51:30] <gigo1980> if i have a problem with a node, i stop the node, delete the data on the hdd and then start the node again
[11:52:13] <gigo1980> but this only works if you dont have too much data on your disk
[11:58:11] <omid8bimo> mm i'll try that
[11:58:17] <omid8bimo> thanks
[13:05:03] <Progster> I don't understand. I'm connecting to my db using my user name and password, and yet when I execute show dbs I get a message saying listDatabases failed, "need to login"
[13:05:38] <Derick> are you admin user?
[13:06:49] <Progster> I should be yes
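
A quick sketch of what Derick is getting at: with auth enabled, show dbs (listDatabases) generally needs admin credentials, so authenticate against the admin database first. User name and password here are placeholders:

    use admin
    db.auth("adminUser", "secret")   // returns 1 on success
    show dbs
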
[13:47:56] <infinitiguy> does a journal in mongo grow indefinitely?
[13:48:11] <infinitiguy> currently I have 3 1gb journal files - what causes this to grow and is there any way to limit it?
[13:49:05] <Guest42665> journal is pre-allocated and not growing
[13:49:07] <Guest42665> so you are wrong
[13:49:31] <skot> no, it only has 1-3 files at a time and they roll over as new ones are created and old ones are deleted.
[13:49:39] <dcrosta> infinitiguy: the journal files are rotated as they age out. in general you'll have 3 or 4 gigs of journal files, but possibly a bit more under heavy write load
[13:50:56] <infinitiguy> what's the max age of a journal?
[13:51:04] <infinitiguy> I guess - when do they age out?
[13:51:31] <infinitiguy> it's the "possibly a bit more" I'm interested in under heavy load
[13:54:15] <dcrosta> infinitiguy: I'd have to check the code to be exactly sure, but I believe the case is when lots of data is written between main data file flushes. journal files will be kept around until all the data they represent has been flushed to the main files.
[13:54:23] <infinitiguy> gotcha
[13:54:47] <infinitiguy> one more question: what is the order that a sharded environment should be started? config server - mongos router - mongo databases?
[13:55:00] <infinitiguy> or does it not matter much and they'll converge once everything is online eventually?
[13:58:19] <infinitiguy> actually - one more after that - is there a way to see how much whitespace is allocated compared to actual data within a database? Our mongo has about 6.5-7GB of DB files allocated but looking in the collections it only says about 1.5gb of data
[14:12:58] <infinitiguy> is a capped collection capped by number of records?
[14:13:00] <infinitiguy> or size on disk?
[14:13:12] <infinitiguy> db.createCollection("results", {capped:true, size:20769803776}) for example
[14:13:23] <infinitiguy> 20 billion something (bytes or records?)
[14:13:32] <dcrosta> infinitiguy: both, or rather, whichever is hit first. there's an implicit limit of 2**31-1 documents if you don't specify "max"
[14:13:40] <dcrosta> size is on-disk size, max is number of documents
[14:14:15] <dcrosta> see http://www.mongodb.org/display/DOCS/Capped+Collections#CappedCollections-Options
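
To make the distinction concrete, a small sketch reusing the collection name from the example above (the max value is invented):

    // size caps the on-disk size in bytes; max (optional) caps the document count
    db.createCollection("results", { capped: true, size: 20769803776, max: 5000000 })
    db.results.isCapped()   // true
    db.results.stats()      // shows size, storageSize and max for the capped collection
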
[14:14:21] <infinitiguy> if we have a collection specified for 20gb how could the mongo DB grow to 100gb?
[14:14:44] <dcrosta> how are you measuring 100gb?
[14:15:37] <remonvv> or the 20gb
[14:16:07] <infinitiguy> du -hs
[14:16:11] <infinitiguy> there's 50 2gb files on disk
[14:16:45] <remonvv> Yes but a) How did you conclude your data should actually take 20gb of space and b) why do you think the 100Gb reserved for data files is wrong?
[14:17:02] <dcrosta> infinitiguy: could be other databases (the "local" database can be large if you have a lot of free space and are using replica sets), or other collections in your "main" database
[14:18:20] <infinitiguy> i guess those are good questions - I'm not familiar with how this environment was set up - just throwing a question from over the wall here in the office
[14:18:30] <remonvv> If you want to store X bytes of data in MongoDB it will not result in X bytes of disk space used. Every document includes all field names as well so that alone may double or triple space used. Then there's things like index files, oplog, journaling, preallocation of data files, padding, etc.
[14:18:39] <remonvv> Okay, well there you go ;)
[14:19:13] <remonvv> I can give a more detailed description but most of it is pretty well documented.
[14:19:22] <infinitiguy> nope that's good enough :)
[14:19:28] <remonvv> Great ;)
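
On the earlier question of allocated files versus actual data, a hedged pointer: the shell's stats helpers break the numbers down (the collection name below is hypothetical):

    db.stats()                // dataSize = actual data, storageSize = allocated extents,
                              // fileSize = total size of the data files on disk
    db.mycollection.stats()   // the same breakdown for a single collection
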
[14:19:32] <infinitiguy> since im on a roll… I just noticed something weird
[14:19:37] <remonvv> Shoot.
[14:19:38] <infinitiguy> I restarted my mongos instance
[14:19:44] <infinitiguy> and now I can't seem to connect
[14:19:45] <infinitiguy> Tue Jun 19 10:16:32 error: no args for --configdb
[14:19:50] <infinitiguy> when I use mongos --port 30000
[14:19:57] <infinitiguy> this used to work yesterday
[14:20:10] <remonvv> Unlikely, you can't start mongos without a --configdb parameter
[14:20:23] <remonvv> mongos is a router process, it needs to figure out cluster topology from a config server
[14:20:31] <infinitiguy> the way im starting mongos is..
[14:20:39] <infinitiguy> sudo /usr/bin/mongos -f /etc/mongod_server.conf
[14:20:54] <infinitiguy> configdb = server.domain.dom:20000
[14:20:57] <infinitiguy> specified within
[14:21:10] <infinitiguy> which is why im puzzled - since it worked yesterday
[14:21:28] <remonvv> Ah okay, no errors?
[14:21:34] <infinitiguy> the only difference was today I initially tried to start it using an init script (copied and changed from the default mongo init) and I got that error so I tried my old way.
[14:21:38] <infinitiguy> nope - let me double check the log
[14:22:45] <remonvv> Do that. Also check if your mongos is up or not.
[14:24:16] <infinitiguy> nope - no errors
[14:24:20] <infinitiguy> and mongos is indeed up
[14:24:23] <infinitiguy> very strange
[14:24:32] <infinitiguy> Tue Jun 19 10:14:50 [websvr] admin web console waiting for connections on port 31000
[14:24:32] <infinitiguy> Tue Jun 19 10:14:50 [Balancer] about to contact config servers and shards
[14:24:32] <infinitiguy> Tue Jun 19 10:14:50 [mongosMain] waiting for connections on port 30000
[14:24:32] <infinitiguy> Tue Jun 19 10:14:50 [Balancer] config servers and shards contacted successfully
[14:25:45] <remonvv> looks up
[14:26:06] <remonvv> and mongo localhost:30000 fails?
[14:26:31] <infinitiguy> weird
[14:26:32] <infinitiguy> that works fine
[14:26:33] <infinitiguy> mongo localhost:30000
[14:26:34] <infinitiguy> MongoDB shell version: 2.0.4
[14:26:34] <infinitiguy> connecting to: localhost:30000/test
[14:26:34] <infinitiguy> mongos>
[14:26:44] <infinitiguy> I could've sworn that I was using mongos --port 30000 to connect
[14:26:44] <remonvv> okay, so what doesn't connect? your app?
[14:26:47] <rydgel> Hey guys, just to be sure. If I have to repair a sharded database, I would have to do a db.repairDatabase() on each shard, right?
[14:26:52] <infinitiguy> me - i think im just stupid today
[14:27:08] <remonvv> rydgel, no, do it through mongos
[14:27:16] <infinitiguy> yep - im stupid
[14:27:19] <infinitiguy> looking at wrong history
[14:27:20] <infinitiguy> 150 mongo --port 30000
[14:27:27] <remonvv> rydgel, it'll invoke it automatically on the shards that are required to know about the repair
[14:27:30] <rydgel> remonvv:ok, but what if I want to repair only one shard?
[14:27:33] <remonvv> infinitiguy, i forgive you ;)
[14:27:36] <infinitiguy> instead of obviously mistyped 205 mongos --port 30000
[14:27:37] <infinitiguy> sigh
[14:27:41] <infinitiguy> it will be one of these days
[14:28:04] <remonvv> rydgel, then do it on that shard :) it's usually better to just do it through mongos though. It's pretty quick if you have journaling enabled.
[14:28:21] <rydgel> remonvv: ok thanks guy
[14:28:29] <remonvv> rydgel, no problem sir
[14:29:45] <remonvv> rydgel, shards don't know they're part of a sharded setup so all operations that do not affect cluster topology in some way can be invoked on a local shard.
[14:31:58] <rydgel> remonvv: ok I see. In fact I’ve got a problem because my chunks do not move anymore. I've got a chunk which is somewhat unmovable and mongos tries to move it forever
[14:32:32] <rydgel> remonvv: so I guess something is broken on the source shard, that's why I'm trying to repair it
[14:39:14] <spillere> when I create a new user, i run this query http://pastie.org/4114789
[14:39:41] <spillere> to add another photo, would I do an update/save? what would the new query be to add a new item to photos?
[14:47:28] <remonvv> rydgel, what's the exact error?
[14:48:18] <remonvv> spillere, db.col.update({username:<YOUR USERNAME>}, {$push:{photos:<YOUR PHOTO URL>}})
[14:53:51] <spillere> remonvv inside photos i wanna insert, filename, name, date
[14:54:10] <spillere> i'm trying db.dataz.update({'username':cookie_usr}, {'$push':{'photos': { 'filename': filename_out, 'caption': caption, 'created': date}}})
[14:54:15] <spillere> but i guess its not the right way
[14:55:35] <remonvv> yeah it is, that works
[14:56:14] <remonvv> if it doesn't work it's because {'username':cookie_usr} doesn't match with any data
[14:56:17] <remonvv> try a find({'username':cookie_usr})
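
Putting remonvv's two suggestions together, a sketch in mongo shell syntax (spillere's snippet above is the pymongo equivalent; the username and photo values below are invented, the field names follow that snippet):

    // push one photo sub-document onto the photos array of one user
    db.dataz.update(
        { username: "daniel" },
        { $push: { photos: { filename: "img01.jpg",
                             caption:  "first upload",
                             created:  new Date() } } }
    )
    // sanity check that the criteria actually matches something
    db.dataz.find({ username: "daniel" })
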
[14:56:21] <infinitiguy> in a sharded environment are all deletes done through the mongos service now?
[14:56:39] <infinitiguy> in other words - if I have a collection and im trying to delete all where somenumber=391
[14:56:49] <infinitiguy> should I run it on each mongod instance, or only mongos
[14:57:30] <remonvv> only mongos
[14:57:40] <remonvv> both solutions work but doing it through mongos is better
[14:58:04] <remonvv> doing things through mongos means the whole business of talking to a sharded cluster rather than a single mongod is hidden
[14:58:08] <Guest42665> why would you want to run this locally?
[14:58:22] <remonvv> you don't
[14:58:38] <remonvv> there are a few edge cases where it might be more efficient but that generally doesn't apply to deletes.
[14:58:46] <infinitiguy> and is the process to do something like this db.collectionname.remove({idname:391});
[14:58:55] <infinitiguy> if i want to delete all from collection where idname = 391
[15:00:20] <spillere> remonvv yeah its working,ty!
[15:00:44] <remonvv> ;)
[15:00:53] <remonvv> infinitiguy, yep
[15:00:56] <remonvv> through mongos
[15:01:04] <infinitiguy> cool thanks
[15:01:09] <remonvv> yvw
[15:05:18] <rydgel> remonvv: ok, here is the log from the error I got. Very strange I don’t understand. http://pastebin.com/1YXSmkGF
[15:06:02] <remonvv> rydgel, your data is corrupt.
[15:06:50] <remonvv> rydgel, your data header is malformed. Repair is your only possible route but might fail. Alternative is to find the broken object and delete it.
[15:12:53] <rydgel> remonvv: do you have any hint on how I can find this object?
[15:14:32] <rydgel> remonvv: there is a mention of ObjectId('4f49803dbf30e2a54f000100') but I cant find it with find()
[15:18:49] <Scyllinice> What's the right way to create a unique index on embedded documents? I want a user to have only unique "likes" (category_name, and name) but I want other users to be able to like the same things. So I want the unique index to apply only within the context of the user itself, not the entire user collection. I'm not sure how to go about it.
[15:22:23] <skot> I think you want to only add it if the user doesn't already have it.
[15:22:43] <skot> make the query such that if the user has it already it doesn't return any docs for the update
[15:23:10] <skot> If you provide a sample doc in gist/pastebin/etc I can help you fashion the query.
[15:28:08] <Scyllinice> skot: Scrubbed it to contain only the important bits: https://gist.github.com/34c8b054b3bbd3c2a3dd
[15:28:29] <Scyllinice> You
[15:28:31] <Scyllinice> oops
[15:28:43] <Scyllinice> You'll notice some duplication. That's what I'm trying to prevent
[15:36:25] <remonvv> Scyllinice, why would you want to do that with an index? Just only add it when it's not already there.
[15:36:58] <remonvv> e.g. db.col.update({likes:{$ne:"this one"}}, {$push:{likes:"this one"}})
[15:37:09] <remonvv> or use $addToSet if your usecase allows for it
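
Expanding remonvv's one-liner for the embedded-document case, a sketch with hypothetical values (the category_name and name fields come from Scyllinice's description, everything else is invented):

    // only push the like if this exact sub-document is not already present
    // (note: the comparison is an exact match on the whole sub-document, field order included)
    db.users.update(
        { _id: "user123",
          likes: { $ne: { category_name: "Movies", name: "Blade Runner" } } },
        { $push: { likes: { category_name: "Movies", name: "Blade Runner" } } }
    )
    // or, since the sub-document is the whole value, let $addToSet deduplicate it
    db.users.update(
        { _id: "user123" },
        { $addToSet: { likes: { category_name: "Movies", name: "Blade Runner" } } }
    )
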
[15:59:13] <rydgel> remonvv: seems like a ghost object, found it but can't remove it in shell
[16:00:21] <remonvv> rydgel, tried repairing the db?
[16:00:49] <rydgel> remonvv: not yet, I tried this first
[16:00:58] <remonvv> rydgel, alright, journaling enabled?
[16:01:06] <rydgel> yup
[16:01:31] <remonvv> ok, if the repair doesn't fix it you probably want to report it
[16:01:50] <rydgel> ok
[16:02:02] <rydgel> remonvv: no way to skip a chunk with the balancer yet?
[16:02:29] <remonvv> rydgel, not an official feature but you can remove the chunk from config.chunks collection
[16:02:41] <remonvv> which is **!!!!!! NOT RECOMMENDED !!!!!!***
[16:03:19] <remonvv> try repairing the db, if that doesn't solve it report the issue and wait for a 10gen response
[16:03:39] <remonvv> if that doesn't work try exporting the entire database, set up a new cluster and import it again
[16:03:54] <remonvv> if that doesn't work you can try and fiddle with cluster meta data
[16:04:07] <remonvv> use config; db.chunks.find();
[16:04:27] <remonvv> you can have a look around to get a better understanding of how things work but i'd not update anything there
[16:04:54] <rydgel> remonvv: thanks a lot for all your help
[16:05:10] <remonvv> rydgel, no problem ;)
[16:05:28] <remonvv> i'm off, bb
[16:05:34] <rydgel> rydgel: will try repairing, not tonight, but will let you know another day
[16:05:42] <rydgel> oops
[16:23:30] <Goopyo> Q: I want to write a ranking algorithm that calculates P/L on trades for all users/trades. If I have say 50,000 trades and new trades come in regularly, and I want to read all 50,000 documents and process them in python, I have 2 ways of doing this:
[16:23:49] <Goopyo> 1. Read everything new since the last read into memcache, process from there
[16:24:11] <Goopyo> 2. Perform full 50,000 document read and run calculations
[16:24:30] <Goopyo> the rankings are updated every 5 minutes. Would 2 be too heavy for the database?
[16:44:21] <Guest42665> goopyo: goandmeasureit
[16:46:42] <Goopyo> Guest42665: yeah doing that now.
[17:16:21] <richardraseley> So, is Mongo's query language officially called anything (e.g. Mongo Query Language, MQL) or is that just what people refer to it as?
[17:17:03] <Guest42665> it's JSON
[17:17:15] <Guest42665> and a name does not make it better
[17:18:17] <richardraseley> Well, it isn't JSON, it is written in JSON.
[17:18:25] <richardraseley> Fine distinction - granted.
[17:18:48] <richardraseley> Just wondering if it had a specific name outside the larger standard.
[17:18:56] <Guest42665> hairsplitter
[17:19:04] <Guest42665> call it bob
[17:19:14] <richardraseley> gesundheit
[17:19:35] <richardraseley> So the answer to my question is "no".
[17:19:36] <richardraseley> Thanks.
[17:20:12] <Guest42665> You're welcome
[18:00:11] <zak_> how do i contribute to the mongodb community?
[18:02:04] <alnewkirk> zak_: depends, what are you good at?
[18:02:53] <zak_> i would love to help coding wise
[18:03:51] <alnewkirk> in what language?
[18:04:11] <alnewkirk> im not affiliated with MongoDB at all so you need not answer me :)
[18:04:16] <tystr> https://github.com/mongodb/mongo
[18:04:33] <alnewkirk> I can tell you that MongoDB is a company also and you can apply for a job there
[18:05:05] <tystr> http://www.mongodb.org/display/DOCS/Database+Internals
[19:15:28] <doug> so i've got a few thousand documents in this one db
[19:15:35] <doug> i'd like to turn those into a loadable fixture for integration testing.
[19:15:42] <doug> what's the most straightforward way to do that?
[19:15:53] <doug> i don't think i want to use the dump/restore binaries...
[19:16:19] <doug> just so i don't have to worry about those dependencies when porting this stuff to a new architecture
[19:16:38] <stymo> if you just do import/export, you will just get straight JSON
[19:16:42] <stymo> any reason not to do that?
[19:17:13] <Progster> I'm using mongo as a lookup for a set of values (i.e. distinct). It seems wasteful to create an objectid for each item since each item is already guaranteed to be unique. But when I try to set the ObjectId to my string, I get a message saying, "Illegal ObjectId format". Is what I'm doing a bad idea in general? Or am I just misunderstanding something about ObjectIds?
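
On Progster's question: _id does not have to be an ObjectId; any unique value, including a plain string, can be stored directly, so there is no need to wrap the string in ObjectId() (which only accepts 24-character hex strings, hence the error). A small sketch with made-up names:

    // use the already-unique string itself as _id instead of ObjectId("...")
    db.lookup.insert({ _id: "some-unique-string" })
    db.lookup.find({ _id: "some-unique-string" })
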
[19:17:56] <doug> import/export are handled by separate binaries, not part of the mongo API, right?
[19:18:07] <Progster> doug: yeah, mongoimport and mongoexport
[19:18:09] <doug> i'd like to avoid having to call the binaries if i could
[19:18:20] <doug> so i don't have to worry about those dependencies when porting this stuff to a new architecture
[19:28:36] <stymo> when doing an upsert, is there a way to tell if you did an insert or update?
[19:28:55] <stymo> any flag you can get back from mongo?
[19:34:29] <ranman> stymo: maybe check the oplog?
[19:36:44] <ranman> stymo: do db.getLastErrorObj()
[19:36:50] <ranman> and there is a field that says
[19:36:59] <ranman> "UdateExisting: True/False"
[19:37:22] <ranman> s/udate/Update
[20:06:08] <sklivvz> Hello. Is there any way of using strongly typed c# classes with MongoDB?
[20:06:25] <sklivvz> Maybe via some library or extension?
[20:07:44] <Vile1> Guys, mongo 1.8.4 replica set switches primary node every few minutes. Is there a way to lock one as primary?
[20:08:32] <drudge> how many members?
[20:09:57] <Vile1> drudge: 2 + arbiter
[20:11:18] <drudge> Vile1: you could use freeze() + stepDown
[20:11:50] <drudge> but i'd try to figure out why the arbiter was switching
[20:11:56] <drudge> anything in the logs?
[20:11:57] <Vile1> freeze will only lock it for a while, if i'm not mistaken
[20:14:09] <mediocretes> what happens when you give one more priority?
[20:14:13] <Vile1> nothing suspicious. probably has something to do with load
[20:14:25] <Vile1> mediocretes: does 1.8 support priority?
[20:15:11] <mediocretes> hmm, thought it did, I could be wrong
[20:15:43] <mediocretes> I was wrong
[20:15:50] <mediocretes> 1.8 supports priority 0 or 1 only
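
A rough sketch of the freeze() + stepDown() workaround drudge mentions, run from the mongo shell on the relevant members (the 120-second windows are arbitrary):

    // on the member you do NOT want to become primary:
    rs.freeze(120)        // make it ineligible for election for 120 seconds
    // on the current primary, if it is the wrong one:
    rs.stepDown(120)      // step down and stay ineligible for 120 seconds
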
[20:22:46] <armenb> Is there a straightforward way to merge two mongodb databases?
[21:29:38] <christopherbull> I'm trying to set up a db to load with --replset and --rest options, but it's currently being managed by a service script on AWS linux (redhat variant), can anyone point me to where I can set these config options either in the init script or in the config file?
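
A hedged sketch of the config-file route for christopherbull's question, i.e. the same ini-style file the init script usually passes to mongod with -f (the set name is a placeholder):

    # /etc/mongod.conf -- equivalents of the --replSet and --rest flags
    replSet = myReplicaSetName
    rest = true
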
[23:23:46] <christopherbull> I'm getting this error when I try to add a replica set node on ec2 "need most members up to reconfigure, not ok ". I've init'd the rs on the main node and just trying to add one more, can anyone shed any light?
[23:25:39] <zirpu> is the initial primary able to connect to the 2ndry?