PMXBOT Log file Viewer


#mongodb logs for Monday the 18th of June, 2012

[00:06:53] <toothr> Zelest, i don't follow..
[00:07:19] <toothr> you can still have the domain and path split out in the document
[00:07:40] <Zelest> hmms.. i guess
[00:08:04] <Zelest> i wish to store things like "when was this domain last poked" in order to avoid hammering the same domain/server
[00:08:18] <Zelest> as well as, does the domain use www and/or is this url ssl
[00:08:27] <Zelest> i bet i can store all that, but it feels like a lot of duplicate data
[02:31:03] <owen1> using mongo with node. i get: "A Server or ReplSet instance cannot be shared across multiple Db instances". here is some of my code - https://github.com/oren/Y-U-NO-BIG/blob/master/food.js#L47
[02:31:26] <owen1> it happens only on the second request.
[02:32:03] <owen1> new mongodb.Db is the issue
[05:49:04] <Loch> Can anyone tell me the performance differences between a standard HDD and an SSD? Say I'm reading 1 row from 100000
[05:49:29] <Loch> Will the difference be noticeable to the end user?
[06:54:19] <Kage`> Anyone know of any PHP+MongoDB forums systems?
[09:07:09] <BadDesign> Why do I receive the following error: "assertion: 9997 auth failed: { errmsg: "auth fails", ok: 0.0 }" even though my password and username are correct when I try to import a CSV file using mongoimport?
[09:08:00] <BadDesign> Do I need to enable some kind of remote access to my database, given that it is hosted on a MongoDB provider?
[09:10:15] <carsten_> mongod --auth
[09:11:57] <BadDesign> The command I'm using is: mongoimport -h myhostname:myport -d mydatabase -c mycollection -u myuser -p mypassword --file myfile.csv --type csv --headerline
[09:15:41] <BadDesign> carsten_: that command will run the mongodb daemon, in my case the MongoDB server is hosted remotely on MongoLabs, and I want to import some data into it
[09:16:02] <BadDesign> I don't think I need to run another daemon locally for what I'm trying to do
[09:16:33] <carsten_> check with your hosting provider - i am not jesus in order to know what your provider is doing
[09:17:08] <BadDesign> ok
[09:18:35] <jwilliams_> is it possible for mongodb's mapreduce to read data/docs from another remote mongodb server?
[09:19:05] <jwilliams_> or can it just read the data (collection, db) from the mongo server it resides on?
[09:19:06] <skot> jwilliams_: no, map/reduce is limited to a single collection which it is run on.
[09:19:14] <skot> just like any query
[09:19:29] <skot> BadDesign: can you post the output of the command to gist/pastebin/etc?
[09:19:29] <jwilliams_> skot: get it. thanks.
[09:20:07] <skot> BadDesign: please also show that the same username/password work via mongo (javascript shell)
[09:20:34] <skot> (also, what version is this of mongodump)
[09:21:55] <BadDesign> skot: I created a new user on the database and it worked that way, I don't know why it doesn't let me use the already present user that is the same as the credentials for MongoLabs, nvm
[09:22:36] <skot> There is a difference between a user in the db and an admin user. The one mongolabs gave you was an admin user
[09:22:53] <skot> mongoimport only works with db users currently
[09:23:40] <BadDesign> skot: ok, thanks for the information
[10:27:10] <jwilliams_> i came across reading that python has a lib like ming which can lazily help migrate schema. is there any similar tool in scala/java for mongodb?
[11:29:07] <Fox`> hey, does anyone know how frequent the workshop conferences are in london?
[11:32:17] <Derick> Fox`: you mean "office hours" ?
[11:34:33] <skot> I think he means the workshops before the mongodb uk conferences
[11:35:05] <skot> Fox`: conferences in the UK happen once a year basically.
[11:35:18] <skot> I should say london
[11:35:22] <Derick> the user group is about monthly
[11:36:10] <skot> you can see them all here: http://www.mongodb.org/display/DOCS/Events
[12:05:26] <Fox`> argh really? damn i've missed the one in two days
[12:05:30] <Fox`> thanks. bye
[13:23:14] <stymo> Hi, I'm new to mongodb and nosql databases in general
[13:23:22] <stymo> I'm wondering if it's beneficial to use memcache with mongodb, or if it is unnecessary
[13:27:32] <carsten_> depends on your own usecases - the answer is 42
[13:27:45] <stymo> haha sure, I realize that? :)
[13:28:18] <stymo> basically my app is just a web service that returns config info given an id
[13:28:32] <stymo> so it's mostly reads, and very simple ones
[13:28:48] <stymo> won't be a ton of data either, but will be a ton of reads
[13:28:57] <carsten_> benchmark it yourself and optimize if you have the need to do so
[13:29:10] <adamcom> I would try it without memcache first
[13:29:33] <adamcom> if it meets the requirements, leave out memcache - an extra layer = extra headaches
[13:29:45] <stymo> mongo does store recent or frequent lookups in memory, right?
[13:30:16] <adamcom> assuming you have enough RAM, it will store your whole data set, and indexes in memory (assuming you access it)
[13:31:13] <stymo> my supervisor wants to use memcache, but I think it's just because we saw such improvements with memcache and a mysql db!
[13:31:37] <stymo> could it actually slow things down by having the extra layer?
[13:32:17] <carsten_> your supervisor should shut up if he has no expertise in the thing he is demanding
[13:32:38] <stymo> haha agreed :)
[13:33:29] <stymo> ok, well I think I'll try it, but make sure it's easily removed
[13:33:42] <stymo> and test performance with and without...
[13:33:58] <stymo> should be interesting. Thanks for the feedback!
[14:14:58] <tubbo> hi guys
[14:15:13] <tubbo> i'm trying to write a conversion script to get data out of mongodb and into postgresql, using ruby on rails and db migrations
[14:15:42] <tubbo> the last step that i need is to be able to convert mongo data into CSV, and then import that CSV into the ready-made postgres database (which i have already mapped all of the former mongo models to)
[14:17:58] <tubbo> when i ran the script (which basically runs mongoexport a bunch of times), i got this lovely error
[14:18:01] <tubbo> "Invalid BSON object type for CSV output: 10"
[14:18:05] <tubbo> what does that mean, and how can i remedy it?
[14:25:11] <gigo1980> hi, i have a 2 shard setup / is replicated… what can i do if shard A has 10GB and shard B has 100 GB ?
[14:31:54] <FerchoArg> hi
[14:32:45] <FerchoArg> I'm using the C# driver for mongodb. Is it correct to use the attribute [BsonId] on two different properties? It's a class whose primary key depends on two "columns" or fields
[14:39:11] <skot> No, you cannot do that. It is for a single field. You can create a compound index on two fields though.
[14:39:14] <Zelest> silly question, can I have a unique constraint on multiple fields? e.g, username+age+city together must be unique?
[14:39:24] <skot> es
[14:39:31] <kali> Zelest: yep
[14:39:34] <skot> s/es/yes
[14:39:39] <Zelest> ah, sweet
[14:39:55] <skot> http://www.mongodb.org/display/DOCS/Indexes
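Zelest's multi-field unique constraint is exactly what a compound index with the unique option provides; in the 2012-era shell that is one `ensureIndex` call. A minimal local sketch of the guarantee (collection and field names hypothetical):

```python
# Shell equivalent (hypothetical collection):
#   db.users.ensureIndex({username: 1, age: 1, city: 1}, {unique: true})
# Locally we can model the index as a set of (username, age, city) key tuples.

def insert_unique(index, doc):
    """Refuse the insert if the compound key is already taken."""
    key = (doc.get("username"), doc.get("age"), doc.get("city"))
    if key in index:
        raise ValueError("duplicate key error: %r" % (key,))
    index.add(key)

index = set()
insert_unique(index, {"username": "bob", "age": 30, "city": "Oslo"})
insert_unique(index, {"username": "bob", "age": 30, "city": "Bergen"})  # ok: city differs
try:
    insert_unique(index, {"username": "bob", "age": 30, "city": "Oslo"})
except ValueError:
    print("rejected duplicate")
```

Any one differing field makes the tuple unique; only an exact three-way match is rejected.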
[14:41:26] <Zelest> i might finally have a solution to my crawler issues.. after about 6 years. :D
[14:41:29] <Zelest> <3 mongodb
[14:46:36] <FerchoArg> thanks skot :)
[15:06:41] <infinitiguy> Hello
[15:06:50] <infinitiguy> Do you need to have mongo configured in replica sets to do sharding?
[15:07:13] <skot> no, but it is highly suggested for data redundancy and reliability of your data
[15:07:18] <infinitiguy> example: http://www.mongodb.org/display/DOCS/Amazon+EC2+Quickstart#AmazonEC2Quickstart-DeployingaShardedConfiguration talks about having 9 instances - 3 per shard, whereas 2 are secondaries
[15:08:07] <infinitiguy> currently we're using xfs freezing and ec2 snapshots to backup our mongo server - would this method hold true if we just had 3 sharding servers with no replicas?
[15:08:41] <skot> yes, but during the freeze there would be no writes so probably not good if you don't have a replica set.
[15:09:50] <infinitiguy> hrm - im also wondering how sharding complicates the freeze. We do the freeze/snap at 10:15 via cron and I'm wondering what would happen if the freezes between 3 shard servers were slightly off (like a second)
[15:10:16] <skot> read the docs on a sharded backup, I can lookup the link if you can't find it.
[15:13:14] <opra> How do i find all documents with a field (Array) that is empty
[15:13:31] <NodeX> check out $type
[15:13:39] <NodeX> null might work too
[15:13:58] <opra> no
[15:14:02] <opra> it doesn't
[15:14:07] <opra> i think i have to use $where.
[15:17:22] <opra> db.collection.find( {field : [ ]})
[15:18:16] <skot> NodeX: $type is not what you want, but comparing to the empty array is correct.
[15:18:52] <skot> can you post a sample doc to gist/pastie?
[15:26:17] <opra> skot: i got it to work
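The empty-array comparison that worked for opra matches documents whose field is exactly `[]`. A local sketch of those semantics (documents hypothetical; note a missing field is not matched, unlike a `null` comparison):

```python
# Mirrors the shell query db.collection.find({field: []}),
# which matches documents whose "field" equals the empty array.
docs = [
    {"_id": 1, "field": []},
    {"_id": 2, "field": ["x"]},
    {"_id": 3},                 # field missing entirely -- not matched
]

def find_empty_array(docs, key="field"):
    return [d for d in docs if d.get(key) == []]

print([d["_id"] for d in find_empty_array(docs)])  # -> [1]
```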
[15:26:34] <opra> how about find but only for the last field in an array is that possible
[15:26:42] <NodeX> my bad, I thought $type had an empty proeprty
[15:26:46] <NodeX> property*
[15:28:27] <infinitiguy> how do you actually create the configdb when configuring mongo sharding? I'm pretty new to mongo and I'm just trying to set up a simple 2 node(or 3 node) shard.
[15:28:51] <infinitiguy> I read on the http://www.mongodb.org/display/DOCS/Configuring+Sharding page that you have to run a mongod --configsvr process for each config server. If you're only testing, you can use only one config server. For production, use three.
[15:29:11] <infinitiguy> but when I do, in the logs I get: (/data/configdb) does not exist, terminating
[15:29:46] <infinitiguy> I normally start mongo by pointing to a config file - maybe I should be putting config server stuff inside of that?
[15:30:24] <skot> you need to create the database directory first, as with any instance of mongod
[15:33:08] <skot> you can use a config file but the error will be no different
[15:36:30] <scoates> kchodorow_: FWIW, I am now "26.99GB resident" thanks to your advice, and faulting a whole lot less. Thanks (yet) again.
[15:36:41] <opra> skot: do you know how can find documents where a match only matches the last element in an array
[15:38:18] <infinitiguy> I'm trying to get a shard and config server on the same box - if I were to do it with config files I'd have to have 2 separate config files for the mongod processes - correct?
[15:38:41] <infinitiguy> there's no way to specify config ports and shard/db ports to be separate within the same config?
[15:39:13] <adamcom> infinitiguy: yup - you need separate config files - one for each mongod instance you plan to run (if you wish to use config files)
[15:39:20] <infinitiguy> got it
[15:45:21] <kchodorow_> scoates: awesome!
[15:46:14] <skot> opra: not last but by ordinal position
[15:46:39] <opra> skot: how can i do that
[15:47:01] <NodeX> foo.0
[15:47:07] <NodeX> foo.1 etc etc
[15:47:17] <opra> is foo.-1 possible
[15:47:22] <opra> to find the last?
[15:47:39] <zirpu> anyone have a good pointer to figuring out how much load/writes a 3 node replicaset can take? I've consistently busted mine.
[15:47:41] <NodeX> I dont think so
[15:47:54] <NodeX> try it and see!
[15:47:54] <scoates> opra: that would be very bad for performance
[15:48:17] <opra> db.collection.find( {field.0 : "match"} )
[15:48:21] <opra> is that correct?
[15:48:23] <skot> zirpu: it mostly depends on your disk/mem
[15:48:34] <skot> opra: depends on your doc, but yeah.
[15:48:55] <scoates> zirpu: a 3-node replicaset without sharding? if the nodes are all the same, a N-node replicaset without sharding should be just about the same as a 2-node replicaset without sharding (all writes go to the primary).
[15:49:04] <opra> b.datapaths.find( { ancestor_datapaths.0 : "unit"} )
[15:49:04] <opra> Mon Jun 18 11:46:16 SyntaxError: missing : after property id (shell):1
[15:49:09] <skot> opra: {field:["match", "foo", "bar"]}
[15:49:17] <skot> quotes
[15:49:47] <skot> b.datapaths.find( { "ancestor_datapaths.0" : "unit"} )
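skot's corrected query addresses an array element by ordinal position; the dotted key must be quoted in the shell, and (as NodeX suspected) there is no negative index like `foo.-1` for the last element. A local sketch of the positional match (document contents hypothetical):

```python
# Mirrors db.datapaths.find({"ancestor_datapaths.0": "unit"}):
# "ancestor_datapaths.0" addresses the first element of the array.
docs = [
    {"_id": 1, "ancestor_datapaths": ["unit", "rack"]},
    {"_id": 2, "ancestor_datapaths": ["rack", "unit"]},
    {"_id": 3, "ancestor_datapaths": []},
]

def match_position(docs, key, pos, value):
    out = []
    for d in docs:
        arr = d.get(key, [])
        if len(arr) > pos and arr[pos] == value:
            out.append(d["_id"])
    return out

print(match_position(docs, "ancestor_datapaths", 0, "unit"))  # -> [1]
```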
[15:50:16] <zirpu> scoates: ok. so i can write to the primary, but the 2ndaries can't keep up.
[15:50:42] <FerchoArg> I noticed that using the BSonId attribute, when I instantiate an object, it already has an ObjectId hash. Is there a way to distinguish objects that have been retrieved from mongodb and those that have been just instantiated with "new" ?
[15:50:44] <zirpu> i'm wondering how to figure out what the max write rate would be for this particular disk/mem combo. it's hosted hardware.
[15:50:57] <scoates> what do you mean keep up? like, the replicas can't keep sync?
[15:51:12] <opra> is there any way i can limit the query to a specific array amount
[15:51:16] <opra> basically
[15:51:22] <skot> FerchoArg: null [BsonId] field?
[15:51:28] <opra> my field has: [one, two, three]
[15:51:29] <scoates> FerchoArg: the client/drive always generates the ID
[15:51:34] <zirpu> right. they eventually stopped and got stuck in resyncing. sometimes an rs102, but sometimes they just never finished resyncing.
[15:51:54] <opra> i want to match for all documents that have [one, two, three] and not [one, two, three, four]
[15:52:06] <skot> opra: $size
[15:52:12] <opra> ok
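skot's `$size` suggestion solves opra's "[one, two, three] but not [one, two, three, four]" case when combined with `$all` (or a plain equality match if element order matters). A local sketch of the `$all` + `$size` combination (documents hypothetical):

```python
# Shell equivalent:
#   db.collection.find({field: {$all: ["one", "two", "three"], $size: 3}})
docs = [
    {"_id": 1, "field": ["one", "two", "three"]},
    {"_id": 2, "field": ["one", "two", "three", "four"]},
    {"_id": 3, "field": ["three", "two", "one"]},
]

def all_and_size(docs, key, wanted):
    """Match arrays containing every wanted element and nothing else."""
    wanted = set(wanted)
    return [d["_id"] for d in docs
            if set(d.get(key, [])) >= wanted and len(d.get(key, [])) == len(wanted)]

print(all_and_size(docs, "field", ["one", "two", "three"]))  # -> [1, 3]
```

Document 2 is excluded by the length check even though it contains all three wanted elements.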
[15:52:14] <zirpu> I turned off writes to the primary over the weekend and left them syncing. eventually one of them was killed by the kernel OOM killer. not really sure why.
[15:52:19] <scoates> zirpu: ah, maybe your oplog is too small, then?
[15:52:34] <scoates> zirpu: but if it's a constant stream of too much data, you probably need to think about sharding.
[15:52:35] <zirpu> i have a 4Gb oplog. maybe i should double that?
[15:52:38] <scoates> (spread out the writes)
[15:52:54] <zirpu> right. it's log data.
[15:53:22] <FerchoArg> yes, I thought that the Oid was generated at the time of saving that object into Mongo.
[15:53:27] <scoates> zirpu: do you query the secondaries?
[15:53:50] <zirpu> scoates: not yet. haven't loaded enough data to start working on the actual processing.
[15:54:11] <scoates> zirpu: you might just be able to turn off indexing, then, if it's disk or cpu-bound (not network-bound)
[15:54:54] <scoates> we have a 12h delayed slave on one of our web nodes just in case we do something *really* stupid, and I turned off indexing because we will hopefully *never* have to query it; that helped CPU and flush.
[15:54:56] <zirpu> oh that makes sense. before populating the collections i created the indexes.
[15:54:59] <FerchoArg> I eventually want to save that object, but there is a method that needs to know if it came from the db or if it has just been created. It's not a big deal, I can use an auxiliary var, but I wanted to know if there was some way to figure that out from the instance
[15:55:25] <zirpu> scoates: is there an admin command to turn off indexing? or do i just drop the indicies?
[15:55:38] <scoates> zirpu: it's in the RS config
[15:55:46] <zirpu> cool. thanks
[15:56:09] <scoates> I'm on a train with questionable wifi right now, but it's definitely in the docs; pretty sure under rs config
[15:56:59] <scoates> IIRC you have to remove the node from the set and re-add it with createIndexes = false
[16:00:12] <zirpu> scoates: thanks. i'm heading to bart myself now.
[16:03:46] <adamcom> zirpu: on the OOM killer front (from a while ago) you should configure some swap - give the OS some room to maneuver: http://www.mongodb.org/display/DOCS/The+Linux+Out+of+Memory+OOM+Killer
[16:03:56] <adamcom> and in case you are worried about having data in swap:
[16:04:04] <adamcom> http://www.mongodb.org/display/DOCS/Production+Notes#ProductionNotes-Swap
[16:08:06] <carsten_> OOMs are for wimps
[16:08:51] <ninja_p> carsten *ignore
[16:09:20] <tubbo> what's the best way to convert BSON::ObjectId references into numerical references that can be used inside a relational database
[16:10:03] <skot> It is a 12-byte value (24 hex characters) so probably better to convert to a hex string
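skot's suggestion, for tubbo's migration: an ObjectId is 12 bytes, conventionally written as 24 hex characters, so it fits a `CHAR(24)` column as-is rather than being forced into a numeric key. The round-trip can be sketched with the stdlib (example id hypothetical):

```python
import binascii

oid_hex = "4fdf4e2bf4c7850a64000001"  # hypothetical ObjectId string
oid_bytes = binascii.unhexlify(oid_hex)

assert len(oid_bytes) == 12          # 12 bytes -> 24 hex characters
# Round-trip back to the string form a relational CHAR(24) column can hold:
assert binascii.hexlify(oid_bytes).decode() == oid_hex
```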
[16:11:45] <mitsuhiko> say a config server goes down
[16:11:51] <mitsuhiko> how do you bring it back up?
[16:13:27] <skot> what kind of goes down do you mean?
[16:13:36] <skot> do you need to replace it or just restart?
[16:14:35] <skot> here are some related docs: http://www.mongodb.org/display/DOCS/Changing+Config+Servers
[16:16:55] <mitsuhiko> skot: that sounds scary
[16:17:34] <mitsuhiko> so basically, not really a way to do that without causing major disruptions on the environment
[16:18:02] <skot> it depends, are the names staying the same?
[16:18:33] <skot> if you started up with ip addresses then it is a big deal, if you used aliases or names then it is more manageable
[16:18:49] <mitsuhiko> skot: no, we thankfully use dns
[16:19:58] <skot> then you just need to bring up a new instance and point dns to it, and make sure it has all the data there already, as config servers don't replicate from scratch
[16:40:56] <iamchrisf> anyone know when jira.mongodb.org is coming back online?
[16:42:56] <kchodorow_> no, we're working on it
[16:44:25] <iamchrisf> thx
[16:47:09] <remonvv> exit
[16:47:14] <remonvv> woops
[16:47:16] <remonvv> wrong window
[16:49:34] <rhqq> hey, when i would like to change primary node should i do db.adminCommand( { replSetStepDown : some_time_in_sec } ) ?
[16:50:57] <carsten_> reconfirmation of documentation needed?
[16:51:09] <rhqq> im not sure if i understood it properly ;)
[16:51:40] <rhqq> better to doublecheck than revive production db :D
[16:53:52] <rhqq> how to promote certain node?
[16:54:16] <rhqq> i couldnt find it in docs..
[16:55:38] <CarstenLunch> http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary
[16:56:12] <rhqq> will that automatically make my current primary replica?
[16:57:41] <CarstenLunch> i suggest you read the page once more
[16:58:44] <rhqq> uhh, sure, my bad. thanks!
[17:08:12] <jstout24> meaning, i have an action and want to associate any data to it.. ie, `db.events.insert({ name: 'impression', data: { visitor: { $id: "…" }, template: { $id: "…" }, some_other_dynamic_field_we_want_to_track: { $id: "…" } });`
[17:09:48] <jstout24> oops
[17:10:13] <jstout24> i'm trying to create "dynamic" event tracking and above would be how i want to do it
[17:10:28] <jstout24> but i don't know the best and fastest way to index it
[17:41:17] <stefancrs> morning
[17:42:27] <stefancrs> I'm running mongodb under ubuntu in vmware in windows... If for some reason the host OS (windows) shuts down, mongodb won't get a clean shutdown and will leave its lock file, and I should run a mongod --repair
[17:43:00] <stefancrs> How do I automate that process so that everything starts up again? The virtual machine (ubuntu) starts up properly when windows boots, should I just always remove the lock file and do a --repair before starting the service upon boot?
[17:43:17] <stefancrs> (this is for a system that only will be used a few days)
[17:45:02] <tubbo> how do i apply the results of a mongodump to my database
[17:45:58] <bjori> tubbo: mongorestore
[17:45:59] <tubbo> i have this whole dump/ directory with each table stored as a .bson file
[17:46:00] <tubbo> oh ok
[17:49:47] <stefancrs> hm, maybe I could just enable journaling...
[17:54:21] <Mikero> Hey, how would one create a relation like this (http://mikero.nl/gtkgrab/caps/cbb1bc.png) in CouchDB? I currently have a character document with the junction table "characters_skill" embedded, with a reference to the actual "skill" table.
[17:57:46] <stefancrs> Mikero: couchdb? :)
[18:00:00] <Mikero> stefancrs: Oh, shi- that's supposed to be MongoDB :p I was also doing some research on CouchDB right now.
[18:00:29] <CarstenLunch> please read http://www.mongodb.org/display/DOCS/Schema+Design
[18:02:24] <stefancrs> Mikero: for what it's worth, what you currently do is probably fine. What's the purpose of the reference and the separate skills collection?
[18:02:43] <personal> Hi all, I have a question on a data structure, was wondering if I could get any recommendations… I'm new to mongodb and don't know what to do without a subquery.
[18:03:02] <CarstenLunch> personal: two queries
[18:04:01] <stefancrs> CarstenLunch: the question is fairly broad... :)
[18:04:25] <personal> Cartsen: Is there any sort of "Contains" equivalent? -- I can give you a specific example and what I was thinking to make it quick.
[18:04:28] <stefancrs> personal: that totally depends on what data you needs to store, what you need to query, and how you store it,.
[18:04:38] <stefancrs> personal: give example
[18:04:51] <CarstenLunch> personal: please no high-level blather - please ask something specific one can answer...
[18:05:03] <stefancrs> personal: quick answer is yes, but the answer is useless to your specific case
[18:06:48] <personal> Let's say I'm making a Recipe book. I've got 12,000 recipes. 1,000 unique ingredients. I want to be able to say "Find all recipes that contain only ingredients x, y, z" -- Currently I have three collections: 1 with each recipe to be made, 1 with a table full of the ingredients and their relationship to the recipe and how many parts of each go in, and 1 with a unique list of ingredients.
[18:07:16] <CarstenLunch> please read http://www.mongodb.org/display/DOCS/Schema+Design
[18:07:24] <stefancrs> hehe
[18:07:29] <stefancrs> personal: do you need the second collection there?
[18:07:42] <Mikero> stefancrs: The reference to the skills collection is to get the name and description of the skill it's the same for everyone, the experience is not. So to avoid duplication I'd rather put it in a seperate table.
[18:07:43] <CarstenLunch> back to work -
[18:08:06] <stefancrs> personal: what for? you can do queries like db.recipes.find({"ingredients" : "fish"})
[18:08:28] <stefancrs> Mikero: go for duplication instead
[18:08:37] <stefancrs> Mikero: store data in the way you want to query it
[18:09:07] <personal> stefan, Let's say someone has inputted EVERY ingredient that they have in their kitchen and I want to return EVERY recipe they can make.
[18:09:36] <personal> I'm searching off of perhaps hundreds of ingredients at a time.
[18:12:58] <personal> Select all recipes from the database that contains only eggs, sugar, milk, cheese, salt, pepper, bananas, tomatos, yogurt, spinach, lettuce, white bread, wheat bread….
[18:14:12] <Mikero> stefancrs: Thanks, I'll do that. I'm still thinking in the RDBMS ways of doing things. I hope I'll just "get it" after a week or so, ha. Thanks again for the help, you might see me again later this week. :p
[18:15:39] <infinitiguy> If I'm using a mongo keyfile for authentication - do I just specify keyfile = /path/to/keyfile in my config
[18:15:55] <infinitiguy> or do i need something like keyfile = true and keyfilepath = /path/to/keyfile
[18:19:02] <stefancrs> personal: actually I don't know how to do that from the top of my head in any db...
[18:19:08] <personal> I guess I don't think I fully understand the power of embedded documents for relationships. I'm not seeing a way to filter out recipes that have ingredients that I don't have. It could just be my brain is fried atm, lol. I'm sorry.
[18:19:16] <stefancrs> personal: you want to exclude all recipes that have any ingredient that is NOT in the list
[18:19:22] <personal> Exact;y/
[18:19:28] <personal> Exactly*
[18:20:13] <stefancrs> I mean... basically anything you'd bake would contain water or milk
[18:20:38] <stefancrs> and you don't want all kinds of bread if you want to make a bread with raisin in it
[18:21:06] <stefancrs> tricky one
[18:21:34] <personal> The details of the ingredients such as the types of bread aren't too important.
[18:21:53] <stefancrs> no but I think I get the problem
[18:21:58] <personal> I was thinking there may be a way to make yet another collection as a lookup matrix.
[18:22:10] <stefancrs> say there are three types of ingredients in the world: a,b,c
[18:22:16] <personal> … but I don't fully understand mongo, so I'm not sure if that's the wrong way to attack.
[18:22:21] <stefancrs> if someone has got a,b at home
[18:22:34] <personal> They can make recipe a, b, or a,b or b,a
[18:22:36] <stefancrs> you want to display all recipes containing a and/or b but no other ingredients
[18:22:42] <personal> exactly.
[18:22:48] <stefancrs> ok well good luck! :)
[18:23:43] <personal> That means you don't have a language specific way of doing a subset like that? Uhoh
[18:23:46] <personal> lol
[18:25:52] <stefancrs> I wouldn't know how to do it in SQL either
[18:28:19] <zirpu> is there a way to determine if and how far behind replicas are from the primary? is it just the difference in optime.t between the primary and secondaries?
[18:30:01] <dgottlieb> zirpu: I believe the difference in optime is the only real metric
[18:30:34] <zirpu> good enough. i'm just looking for a way to backoff loading data into a replicaset when it gets too far behind.
[18:33:40] <personal> SELECT * FROM recipes WHERE (SELECT count() FROM recipe_ingredients WHERE recipe_ingredients.recipe_id=recipe.id AND ingredient IN list) = (SELECT count() FROM recipe_ingredients WHERE recipe_ingredients.recipe_id=recipe.id)
[18:34:05] <zirpu> 0 rows.
[18:35:29] <personal> Stefan, that's an example of a way to do it in SQL. You have to count all the ingredients that are in the recipe that are also in the list, and compare that to the total. That's probably the easiest way to check for exclusion of unknown ingredients that are missing (that I can think of)
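personal's SQL counts, per recipe, how many of its ingredients fall inside the pantry list and compares that to the recipe's total; that is a subset test, which is easier to see in isolation. A minimal local sketch (data hypothetical; this per-document subset test is what proved awkward to express as a plain MongoDB find above):

```python
recipes = [
    {"name": "omelette", "ingredients": ["eggs", "milk", "salt"]},
    {"name": "pancakes", "ingredients": ["eggs", "milk", "flour"]},
    {"name": "toast",    "ingredients": ["white bread"]},
]

def cookable(recipes, pantry):
    """Recipes whose every ingredient is in the pantry (subset test)."""
    pantry = set(pantry)
    return [r["name"] for r in recipes if set(r["ingredients"]) <= pantry]

print(cookable(recipes, ["eggs", "milk", "salt", "white bread"]))
# -> ['omelette', 'toast']
```

"pancakes" is excluded because flour is not in the pantry, matching the exclude-any-unknown-ingredient behaviour the channel was circling.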
[18:36:19] <kchodorow_> iamchrisf: should be up again
[18:36:37] <personal> But my brain is freaking fried trying to find an equivalent in mongo, >_>
[18:38:29] <iamchrisf> Yep. I'm in. Thx
[18:41:41] <illsci> In a 3 node replica set, if one node goes down, does the replica set become read only?
[18:44:18] <stefancrs> personal: :)
[19:02:03] <dijonyummy> what do you guys like most about mongodb? is it the easy programming object paradigm, no need for an orm, sql statements, opening/closing result sets, etc? or the scalability?
[19:03:10] <stefancrs> dijonyummy: what's the point of your question? for me it's the ease of programming using something that is very "javascripty", that it's schemaless, and the performance
[19:03:36] <multi_io> how can I persist all documents that were stored in "NORMAL" WriteConcern, i.e. in mamory only?
[19:03:39] <multi_io> *memory
[19:04:00] <personal> I like how easy it is to save records from python with pymongo… coll = db['collection']… coll.save(dictionary)
[19:04:03] <dijonyummy> just want to know whats best about it from a programmers perspective, whats javascripty about it? the json?
[19:04:34] <stefancrs> dijonyummy: the json, the queries and the fact that it uses javascript for certain operations....
[19:04:46] <stefancrs> dijonyummy: you can basically execute javascript code within a query
[19:05:02] <stefancrs> dijonyummy: which then is dealt with by mongo, not your client
[19:05:34] <dijonyummy> i actually like sql, but it's the other stuff, skeletal code, orm, etc that's a pain, and which db syntax, oracle, postgres, etc i don't like. when doing my own personal projects i don't have much time. was thinking maybe nosql like mongo is great for that?
[19:05:40] <stefancrs> dijonyummy: but "what's javascripty about it" makes me think you should probably just read an introductory article about mongodb instead of asking people here
[19:06:06] <stefancrs> dijonyummy: yes mongo is maybe great for that...
[19:06:15] <stefancrs> dijonyummy: we can't answer those questions for you :)
[19:34:54] <infinitiguy> when doing mongo sharding how are application servers configured to talk to the DB? Should it be talking to whatever box runs mongos?
[19:35:00] <infinitiguy> or can it talk to any of the mongo nodes?
[19:35:16] <infinitiguy> im trying to figure out how I go from 1 mongo server to many mongo servers on the app side
[19:38:13] <Derick> infinitiguy: they need to talk to mongos
[19:38:31] <infinitiguy> ok
[19:38:41] <infinitiguy> so there should only ever be 1 mongos server?
[19:39:58] <wereHamster> no, you can have as many of those as you want
[19:42:37] <acidjazz> is it possible to rename a database
[19:43:08] <acidjazz> i guess copy/delete
[19:57:23] <skot> acidjazz: there is no rename db operation, you need to copy/delete or dump/restore
[19:59:46] <acidjazz> skot: got it tnx
[20:27:18] <infinitiguy> what does … mean on the mongo command line
[20:27:24] <infinitiguy> sometimes some commands just return ...
[20:27:28] <infinitiguy> is it a syntax error?
[20:27:52] <infinitiguy> for example: db.runCommand( { shardcollection : "dbname.collectionname", key : {"_id": 1})
[20:27:54] <infinitiguy> returns ...
[20:27:59] <infinitiguy> and i hit enter and I get another ...
[20:28:05] <infinitiguy> then one more enter returns me to mongos>
[20:30:37] <skot> the "…" means it expects you to finish the statement.
[20:30:46] <skot> you are missing a closing }
[20:36:14] <infinitiguy> damn.
[20:36:36] <infinitiguy> i cant believe i missed that
[20:37:27] <macabre> any mongo/django users?
[20:38:21] <sirpengi> macabre: yup
[20:39:20] <macabre> sirpengi: from the 3 different mongo drivers ive found online it seems like each one limits django in one way or another.. sessions, admin, orm, is this true?
[20:39:48] <macabre> its a general question as im just now starting to do some research
[20:41:05] <infinitiguy> can you change chunk size after it's established?
[20:41:09] <infinitiguy> im testing with a 1MB chunk
[20:41:14] <infinitiguy> and I want to move to a larger chunk size
[20:41:30] <sirpengi> oh, when I use django with mongodb it's only for custom parts
[20:41:42] <sirpengi> the auth and session I keep in some rdbms
[20:41:53] <sirpengi> so it's django+postgres+mongo
[20:42:00] <macabre> sirpengi: ahh i see :)
[20:42:56] <sirpengi> the django orm is tied to rdbms world, so the builtin session/admin stuff needs to be too
[20:43:16] <sirpengi> I think there's a project somewhere to enable it for nosql solutions, but I don't believe any of them are production stable
[20:43:27] <sirpengi> (or support the full features of the current orm)
[20:48:15] <infinitiguy> anyone have any thoughts on being able to change the chunk size? I over-rode the default of 64m and set it as 1M. I'd like to put it back to 64m. Can I just remove the specification in the config file and restart mongos?
[22:11:38] <skot> infinitiguy: that setting doesn't have any impact after the initial startup.
[22:12:26] <skot> What docs are in the config db and settings collection? db.settings.find() in the config db connected via mongos.
[23:45:25] <Goopyo> since keys can't contain periods '.', is there a way to replace that . with another character at the connection level?
[23:45:48] <Goopyo> i.e. not have to go back and modify all queries
[23:54:54] <dstorrs> Goopyo: _id.username is a perfectly legal key if your obj is (e.g.) { _id : { username : 'bob', user_id : 7 } ... }
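A common workaround for Goopyo's problem, when keys really do contain dots (e.g. domain names), is to escape the dot on the way in and reverse it on the way out at a driver-wrapper level rather than in every query. A sketch (the replacement character is an arbitrary choice):

```python
def escape_keys(doc, old=".", new="\uff0e"):  # U+FF0E FULLWIDTH FULL STOP
    """Recursively rewrite dict keys so they are legal MongoDB field names."""
    if isinstance(doc, dict):
        return {k.replace(old, new): escape_keys(v, old, new) for k, v in doc.items()}
    if isinstance(doc, list):
        return [escape_keys(v, old, new) for v in doc]
    return doc

def unescape_keys(doc):
    return escape_keys(doc, old="\uff0e", new=".")

doc = {"example.com": {"hits": 3}}
stored = escape_keys(doc)
assert "example.com" not in stored
assert unescape_keys(stored) == doc
```

The escaping must be applied to keys in queries as well, which is why it belongs in one wrapper layer rather than scattered through application code.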