PMXBOT Log file Viewer


#mongodb logs for Monday the 16th of July, 2012

[04:30:06] <circlicious> hi
[04:43:06] <neil__g> hello
[04:55:07] <circlicious> was thinking of using mongodb for my app, still skeptical. googled, read some posts against mongo, some say rdbms is best and what not. :/
[04:56:10] <circlicious> i did a quick test here actually, 10k writes in mysql took 300 seconds, in mongo was 0.1sec
[04:56:30] <circlicious> i think mongo will suit my use case well, but quite confused.
[04:56:38] <neil__g> :)
[04:57:54] <circlicious> some say UPDATEs are harsh with mongo, but i think i just need to read/insert
[04:59:13] <circlicious> reasons i want to go with mongodb? my schema might change frequently, and i think i am going to get a lot of writes. while, i can cache the read.
[05:05:41] <neil__g> sounds good
[05:07:12] <circlicious> neil__g: i think i remember reading that mongodb does not really write to disk, that's why it's so fast. and there is an option to make sure it writes to disk, although then i would lose the benefits. what is that option? i really wanna make sure that i don't lose data.
[05:08:08] <neil__g> safe write, i believe
[05:08:17] <neil__g> what client are you using?
[05:09:19] <circlicious> um, 'client' ?
[05:09:26] <DigitalKiwi> driver
[05:09:27] <circlicious> i just used $ mongo
[05:09:32] <DigitalKiwi> what language driver
[05:09:39] <circlicious> php-mongo ?
[05:10:04] <circlicious> http://www.php.net/manual/en/mongo.tutorial.php
[05:14:00] <circlicious> i guess i need to pass ['safe' => true] as the second argument to ->insert()
[05:17:06] <circlicious> lol ?
[05:17:36] <circlicious> i added ['safe' => true] , and did that make it slow? for 10k writes the speed jumped from 0.1s to 0.7s
[05:20:05] <circlicious> 1 question, i setup 2 webpages, 1 would make 10k writes to mysql, another 10k to mongodb. i execute the mysql one (via apache) and it keeps on processing for 300s, and then right after that i execute the one for mongo. the mongo one does not even start until the mysql one is completed. what could be the reason?
[05:20:45] <circlicious> as soon as mysql writes complete, the mongo also starts and completes in 0.7s (with safe write)
[07:37:46] <[AD]Turbo> hola
[07:50:55] <circlicious> hi
[07:54:23] <circlicious> i am generating timestamps in JS client side, sending to server and trying to save in mongo. values like 1342425122632 becomes -1899641016 - what should i do ?
[08:06:44] <circlicious> really? is this channel so quiet?
[08:09:30] <ron> depends on the time of the day and the day of the week.
[08:10:55] <circlicious> tell me more
[08:13:39] <circlicious> seems like it only happens when i insert from PHP :/
[08:14:05] <circlicious> from mongo shell it seems to insert fine
[08:19:02] <kali> circlicious: you might be hitting this: http://derickrethans.nl/64bit-ints-in-mongodb.html
[08:19:10] <kali> circlicious: note that i don't use the php driver at all
[08:26:51] <circlicious> thanks
[08:35:53] <circlicious> should be enabled by default though
[08:40:32] <circlicious> now when i do db.coll.find() all numbers have NumberLong() ugh
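The mangled value circlicious reported is exactly what you get when a 64-bit millisecond timestamp is squeezed into a signed 32-bit int, which is the failure mode the linked post describes. A quick sketch of the arithmetic, in Python for illustration:

```python
def truncate_to_int32(n):
    """Simulate storing a 64-bit integer in a signed 32-bit slot."""
    n &= 0xFFFFFFFF            # keep only the low 32 bits
    if n >= 0x80000000:        # reinterpret the top bit as the sign bit
        n -= 0x100000000
    return n

ts = 1342425122632             # a JS Date.now()-style millisecond timestamp
print(truncate_to_int32(ts))   # -> -1899641016, the value from the log
```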
[08:41:07] <heywuut> sooo... I guess you get this question all the time.. but...
[08:41:14] <heywuut> I need help choosing a shard key ;)
[08:41:48] <DigitalKiwi> pick the blue one
[08:41:59] <heywuut> shar key: 'blue'
[08:42:01] <heywuut> awesome! ;)
[08:42:01] <circlicious> is there some way to get rid of the NumberLong from the fetched resultset in mongo shell?
[08:42:15] <heywuut> scenario: storing twitter streams, some historical data stored, recently written of course more frequently fetched
[08:42:50] <heywuut> ("everything" written once, a bunch of access, then (slowly over time) removing stuff that isn't interesting)
[08:44:00] <heywuut> so.. "incoming" stuff should be sharded nicely..
[08:44:37] <heywuut> just some random shardkey should work fine I guess.. (perhaps md5 from twitters (ascending) IDs or something)
[08:51:23] <heywuut> (if I reference the _id mostly... I could just do.. _id: x, shardkey: md5(x) ? ;P
[08:51:41] <heywuut> (md5 of objectid as shardkey ;P)
[08:56:01] <heywuut> (seems like there's already plans on doing something like that... I guess I'll add it clientside till then)
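The client-side scheme heywuut describes, hashing an ascending id to use as the shard key so inserts spread out instead of all landing on the highest chunk, might look like this; the field names are just placeholders:

```python
import hashlib

def with_shard_key(doc_id):
    """Pair a document id with an md5-derived shard key, so that
    monotonically increasing ids still distribute across shards."""
    return {
        "_id": doc_id,
        "shardkey": hashlib.md5(str(doc_id).encode()).hexdigest(),
    }

doc = with_shard_key(225053614685032448)  # e.g. an ascending tweet id
```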
[09:59:27] <s0urce> hi
[10:21:02] <s0urce> I can't find any example where user/pass is required for the connection in node.js, does anyone have one for me?
[10:21:41] <ron> umm, what's that got to do with mongo?
[10:22:54] <s0urce> I am not really sure if mongo needs user/pass auth at connection time at all. I am totally new to mongo. Maybe I am on the wrong track.
[10:23:39] <ron> okay, let's try clarifying a few things.
[10:23:41] <algernon> http://www.mongodb.org/display/DOCS/Security+and+Authentication
[10:23:46] <algernon> clarified!
[10:23:47] <algernon> ;)
[10:23:53] <ron> not really.
[10:24:59] <ron> s0urce: mongodb, as a database, has authentication settings if you want to set them up. by default, they're off. if you want to set up your application's authentication to be based off of credentials stored in mongodb, that's a different issue.
[10:26:55] <s0urce> i added a user for my database as in the example ("use projectx; db.addUser("joe", "passwordForJoe")") and i can see my data, my collection and user in "MongoVUE", but i can't find any way to pass my user/pass when connecting in my project code
[10:28:18] <ron> that depends on which client you use, I guess. not really familiar with node.js.
[10:28:38] <s0urce> Is mongo still in beta, or how can i use a database without auth for a productive page?
[10:28:52] <s0urce> off by default?
[10:29:07] <ron> off by default, yeah.
[10:29:31] <ron> and I'm not familiar with many products that go with version 2.x+ and still call themselves beta.
[10:29:35] <s0urce> so everyone can connect to my database by default? this seems very hard to believe :)
[10:29:52] <algernon> everyone on localhost, yes.
[10:29:55] <s0urce> ah
[10:30:05] <algernon> it assumes a trusted network (see my link above)
[10:30:21] <s0urce> but i connected via my windows client to it "MongoVUE"
[10:30:27] <s0urce> this only works if i added a user?
[10:54:27] <circlicious> there's no JOINs in mongo ,right?
[10:59:05] <Derick> right
[11:05:39] <circlicious> no autoincrement IDs, either , right?
[11:05:56] <kali> circlicious: right
[11:06:42] <ron> it's almost as if it's not a relational database.
[11:06:51] <circlicious> ;D
[11:07:10] <NodeX> strange that
[11:07:24] <circlicious> my data will be broken into 2 parts, 20% in mysql, 80% in mongo. is it a good idea if i use mysql's auto increment id and save it in my mongo collection?
[11:07:44] <NodeX> if you need to reference the ID then I would say so :P
[11:07:57] <circlicious> ye then i can just use that ID to fetch data from mongo u know
[11:08:22] <circlicious> what do you guys do to achieve "ID"s in general in mongo?
[11:08:27] <circlicious> generate some random tokens and insert?
[11:09:03] <NodeX> mongo has an ObjectId()
[11:09:24] <NodeX> its a 24bit timestamp "based" hash looking thing
[11:09:43] <circlicious> ObjectId("5003d37aaca51b6f19002ec6")
[11:09:49] <circlicious> i wonder how usable it is, do you guys ever use it?
[11:10:01] <NodeX> http://www.mongodb.org/display/DOCS/Object+IDs
[11:10:08] <NodeX> I use it for everything
[11:10:45] <NodeX> sorry 12byte not 24bit
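As NodeX corrects himself, an ObjectId is 12 bytes (24 hex chars): a 4-byte unix timestamp, 3 bytes of machine id, a 2-byte pid, and a 3-byte counter. The creation time can be pulled straight out of the hex string, sketched here against the id pasted above:

```python
oid = "5003d37aaca51b6f19002ec6"   # the ObjectId from the log

ts      = int(oid[0:8], 16)        # 4-byte creation timestamp (unix seconds)
machine = oid[8:14]                # 3-byte machine identifier
pid     = int(oid[14:18], 16)      # 2-byte process id
counter = int(oid[18:24], 16)      # 3-byte incrementing counter

print(ts)  # -> 1342428026, i.e. the morning of 16 July 2012
```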
[11:12:12] <circlicious> hm, so maybe insert in mongo, get the _id, and store in mysql
[11:12:36] <circlicious> or the other way round, get autoincrement id from mysql and store in mongo
[11:12:46] <Derick> doesn't your data already have a unique key?
[11:13:05] <circlicious> which data exactly? from mysql? yes it has an autoincrement id (primary key)
[11:13:21] <circlicious> and then i store some "related" data in mongodb
[11:22:03] <circlicious> i just hope things go fine :P
[11:23:26] <NodeX> remove SQL out of the equation and I am sure they will lol
[11:23:52] <circlicious> I'll try
[11:24:42] <circlicious> so basically in order to achieve a relation, you store in a collection, take the ObjectID, store it in another collection with other data. then you use the ObjectID to fetch data from the other collection. So basically 1 query per collection read, right ?
[11:25:18] <heywuut> confoocious: also note, you can create a new ObjectId() right away, and use it in your insert (so you don't have to look it up later)
[11:25:19] <NodeX> you can embed instead if it makes more sense to
[11:25:48] <circlicious> embed means nested object?
[11:26:00] <circlicious> {foo: 'bar', baz: {.. embedded data ..}} ?
[11:26:03] <NodeX> yep
[11:26:27] <circlicious> ya i did that just now in 1 use case, didn't want 2 collections :P
[11:27:06] <circlicious> heywuut: um, how is that done? like if i am doing from PHP
[11:28:21] <NodeX> $id=new MongoId();
[11:29:00] <NodeX> $item=array('_id'=>$id.....); ... insert($item); ... return (string)$id; <--- your object id as a string
[11:29:16] <heywuut> NodeX: he disconnected ;P
[11:29:18] <circlicious> oh cool
[11:29:25] <heywuut> oh.. or? wait? what? ;P
[11:29:29] <heywuut> nevermind xD
[11:29:29] <circlicious> man
[11:29:32] <NodeX> now I'm confused!
[11:29:39] <heywuut> I'm too tired. he didn't ;)
[11:29:39] <circlicious> i wish php mysql's API was as cool as the Mongo API
[11:29:41] <heywuut> -
[11:29:42] <NodeX> or should I say confooosed!
[11:29:42] <heywuut> $data = array();
[11:29:43] <heywuut> $data['_id'] = new MongoId(); // <-- this generates a new unique ID, which you can use right away
[11:29:43] <heywuut> $data['foo'] = 'bar';
[11:29:43] <heywuut> $mongo->something->insert($data);
[11:29:43] <heywuut> -
[11:29:47] <heywuut> NodeX: :D
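heywuut's snippet shows the key trick: generate the id yourself, then you know it before the insert returns and can hand it to other systems (e.g. MySQL) immediately. A rough Python equivalent, with a hand-rolled 12-byte id so the sketch stays self-contained (a real application would use the driver's ObjectId class instead):

```python
import os
import time

def new_object_id():
    """Generate a 12-byte, ObjectId-like hex string: a 4-byte unix
    timestamp plus 8 random bytes (a real driver uses machine/pid/counter
    for the tail, but the shape and uniqueness goal are the same)."""
    return format(int(time.time()), "08x") + os.urandom(8).hex()

doc = {"_id": new_object_id(), "foo": "bar"}
# collection.insert(doc)  # hypothetical driver call; the point is that
# doc["_id"] is known before the write, so it can be stored elsewhere too
```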
[11:30:21] <circlicious> do you guys use any wrapper over Mongo in PHP or just use it straightaway?
[11:30:23] <NodeX> circlicious : always remember with Mongo .. cast EVERYTHING!
[11:30:33] <NodeX> I use a wrapper
[11:30:51] <circlicious> cast everything? example?
[11:30:55] <circlicious> link to wrapper?
[11:31:00] <NodeX> my own
[11:31:05] <circlicious> ok
[11:31:08] <heywuut> host it and link it!
[11:31:37] <heywuut> btw..
[11:31:41] <NodeX> and .. $integer=(int)2;... $string=(string)'foo'; .. $arr=(array)array('foo'); <--- not really needed
[11:31:53] <heywuut> anyone familiar (semi-okay at?) sharding+replica sets? ;P
[11:32:43] <circlicious> tell me something, I had to do ini_set('mongo.native_long', 1); to make it save 64bit ints, but now db.coll.find() ends up showing NumberLong(".....")
[11:32:51] <circlicious> not clean you know, if i have to examine something
[11:33:05] <circlicious> can i get rid of that extra texts from mongo shell somehow?
[11:33:25] <Derick> circlicious: javascript doesn't support 64bit ints at all
[11:33:29] <Derick> so it can't show them
[11:33:37] <NodeX> ^^ that's part of the reason I wrote my own wrapper
[11:33:38] <Derick> if you use native long in the PHP driver, then that will work
[11:33:57] <NodeX> my wrapper casts 64 bit ints to a string to store them and re-casts them on the way out to ints
[11:34:09] <Derick> NodeX: that's not good for an index or ordering at all
[11:34:26] <NodeX> I never index or order on them so it doesnt matter
[11:34:30] <circlicious> actually, i am sending a timestamp from JS to PHP and need to save that in mongo as it is
[11:34:40] <Derick> fair enough, just don't suggest it as a generic solution then NodeX ;-)
[11:34:43] <NodeX> and I rarely use them
[11:34:56] <NodeX> at what point did I suggest anything as a generic solution
[11:34:56] <NodeX> ?
[11:35:03] <NodeX> I said "what I did"
[11:35:05] <Derick> I am just saying that you shouldn't :P
[11:35:16] <circlicious> so the timestamp becomes -1842151... , do i make sense Derick ?
[11:35:33] <NodeX> but please enlighten me: how is indexing a string different from indexing the same int?
[11:35:42] <ron> NodeX: you know there's a simpler solution to all that ;)
[11:36:00] <NodeX> ron : let me guess ... dont use php ? :P
[11:36:53] <kali> NodeX: strings are also much less efficient space-wise
[11:37:24] <NodeX> less efficient than storing "NumberLong("...")" with the integer?
[11:37:36] <kali> NodeX: mongo does not store NumberLong
[11:37:37] <circlicious> are you sure JS does not support 64bit ints?
[11:37:40] <kali> NodeX: it stores the int
[11:37:55] <kali> NodeX: it's the javascript driver that sets the wrapper
[11:37:55] <NodeX> the php driver does with those settings kali
[11:37:56] <circlicious> I mean i can do this, `var a = 18446744073709551610` right?
[11:38:07] <Derick> NodeX: takes up more space in the index, and "12" will sort before "2"
[11:38:13] <kali> NodeX: i very much doubt it.
[11:38:21] <NodeX> kali : ok ;)
[11:38:25] <NodeX> thanks Derick
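Derick's objection to casting ints to strings can be seen directly: an index on strings orders them lexicographically, not numerically, so "12" sorts before "2":

```python
nums   = [2, 12, 100]
as_str = [str(n) for n in nums]

print(sorted(nums))    # numeric order:        [2, 12, 100]
print(sorted(as_str))  # lexicographic order:  ['100', '12', '2']
```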
[11:38:55] <Derick> NodeX: the php driver doesn't set the NumberLong wrapper
[11:39:13] <NodeX> 12:33:04] <circlicious> tell me something, I had to do ini_set('mongo.native_long', 1); to make it save 64bit ints, but now db.coll.find() ends up showing NumberLong(".....")
[11:39:15] <NodeX> really?
[11:39:19] <Derick> really
[11:39:26] <Derick> the javascript shell does that on output
[11:39:37] <circlicious> i don't want it to output that
[11:39:46] <circlicious> less clarity, can't examine things
[11:40:01] <Derick> circlicious: then you need to store 32bit ints, and use MongoInt32("yournumber") from the PHP side
[11:40:03] <circlicious> as it pads ALL ints with that
[11:40:18] <circlicious> I am just taking value from client-side (ajax) and saving in Mongo
[11:40:22] <circlicious> nothing in between you know
[11:40:48] <Derick> mongo doesn't speak javascript directly...
[11:40:57] <Derick> so how do you add things to mongo?
[11:41:14] <heywuut> okay.. scenario: I have 5 servers, and intend to run 1 app-server, and 4 DB-servers (two shards + replicas of them).. http://i.imgur.com/Aq1xM.png
[11:41:30] <heywuut> okay? retarded? wtf? gotchas? ;p
[11:42:07] <heywuut> (a bunch of extra there for the config servers and arbiters)
[11:42:31] <circlicious> Derick: i get json string from clientside, i json_decode it and send in ->insert()
[11:44:24] <heywuut> yah.
[11:45:42] <circlicious> Derick: var_dump() on what i am inserting is going to dump the array
[11:45:58] <circlicious> also i read a tut and set this above, ini_set('mongo.native_long', 1);
[11:46:03] <Derick> circlicious: yes
[11:46:05] <NodeX> it'll dump php's representation of the array
[11:46:17] <NodeX> i.e. tell you what PHP thinks of your vars
[11:46:29] <Derick> with native_long=1 , the driver will *always* store in 64bit ints, if your platform is 64bit (which it is I think)
[11:46:54] <circlicious> ya i set that cuz of the problem i have mentioned
[11:46:55] <ron> NodeX: I can tell you what I think of your PHP.
[11:47:10] <NodeX> tell me
[11:47:22] <ron> NodeX: it's magical.
[11:47:25] <heywuut> Derick: storing a bunch of data to it, with no good "natural sharding".. md5(_id) as shardkey?
[11:47:27] <ron> And now, enough about that :)
[11:47:31] <NodeX> haha
[11:47:35] <Derick> heywuut: sometimes, depends on your data ;-)
[11:47:36] <NodeX> not what I was expecting :P
[11:47:45] <heywuut> Derick: tweets! ;P
[11:47:55] <ron> NodeX: not all magic is good. especially when it does unexpected crappy things ;)
[11:48:02] <Derick> heywuut: why not twitter handle then?
[11:48:24] <NodeX> ron : then where would the fun be :P
[11:48:25] <heywuut> for the..
[11:48:27] <heywuut> hm..
[11:48:38] <heywuut> :O
[11:48:43] <ron> NodeX: k, k, nuff trolling ;)
[11:48:56] <NodeX> :P
[11:48:57] <heywuut> Derick: of course.. that should work out fine I assume!
[11:49:03] <diegok> kchodorow: hello. have you looked around to implement secondary reads when slave_okay() on the perl driver?
[11:49:24] <ron> NodeX: you'd be surprised, but I believe that even crappy languages should be supported as long as they're widespread enough.
[11:49:34] <heywuut> Derick: (write to one shard for X MBs, then split that based on the various screen_names seen in the twitter streams)
[11:49:43] <heywuut> Derick: (automatically ofc)
[11:49:43] <NodeX> ron : does that include java?
[11:49:49] <NodeX> lmao :P
[11:50:03] <ron> NodeX: No. Java doesn't need support. it just works ;)
[11:50:03] <Derick> heywuut: mongos will handle that for you
[11:50:12] <diegok> kchodorow: I think get_database should return a "delayed" db and resolv on run_command()
[11:50:13] <circlicious> Derick: you're not getting me, but ok, lets start again - http://pastie.org/private/savfcpg7rjplnsvnsoj7iw
[11:50:16] <NodeX> except when it doesn't of course lol
[11:50:30] <circlicious> now when i save that, it saves -185... for the [ca]
[11:50:41] <Derick> circlicious: yes
[11:50:44] <circlicious> so i had to set native_long to 1. and then it saves fine but
[11:50:54] <circlicious> in mongoshell i get all my ints padded with NumberLong
[11:51:05] <Derick> circlicious: it's garbage, have you read: http://derickrethans.nl/64bit-ints-in-mongodb.html ?
[11:51:14] <Derick> it's a BC thing...
[11:51:15] <circlicious> its the same if i use MongoInt64
[11:51:19] <Derick> the NumberLong is not a problem
[11:51:22] <Derick> ignore it
[11:51:23] <circlicious> yes i read that, and that's why i did WHAT I HAVE DONE
[11:51:32] <heywuut> Derick: I know! I've been staring myself blind at the various things to use as a shardkey.... (twitter IDs are ascending numbers so not too awesome)
[11:51:36] <circlicious> sorry for caps lock
[11:51:43] <Derick> it's just the shell's way of representing a 64bit int - you can sort and query normally against it
[11:51:47] <circlicious> Derick: oh, you wrote that post?
[11:51:54] <Derick> circlicious: yes :-)
[11:52:05] <Derick> (and the code to go with it)
[11:52:05] <circlicious> Derick: i know, but i need to examine some things, and it makes it hard to read. so i wanna get rid of that. or maybe there is a nice mongodb client ? :/
[11:52:07] <heywuut> Derick: but.. still new to mongo, so... not used to thinking of "such things" (names, etc) as shardkeys...
[11:52:16] <circlicious> ya so you wrote php-mongo? cool
[11:52:22] <Derick> circlicious: well, a little bit of it
[11:52:26] <circlicious> did a good job, i wish php-mysql php-pdo were like it
[11:52:44] <circlicious> anyway
[11:52:44] <Derick> kchodorow wrote most of it, with bjori and me taking over now
[11:53:18] <circlicious> so anyway, can't get rid of it basically :P
[11:53:41] <Derick> the only way is to wrap your numbers in MongoInt32() when storing
[11:53:58] <circlicious> ye not cool
[11:54:01] <circlicious> btw, what do you think about taking mysql autoincrement id and saving in mongo to make a relation, so that i can fetch with ease, Derick ?
[11:54:08] <Derick> but IMO, we need to change the JavaScript shell to not show that
[11:54:16] <Derick> circlicious: that should work
[11:54:21] <circlicious> ok cool
[11:54:25] <circlicious> and yeh change the JS shell
[11:54:28] <circlicious> maybe have a setting
[11:54:34] <circlicious> see its my first day using mongo :P
[11:54:52] <Derick> changing the shell is not something I can do :P
[11:54:57] <Derick> file a feature request if it's not there yet
[11:55:43] <circlicious> well... i'll manage
[11:55:56] <circlicious> ;D
[11:56:26] <heywuut> Derick: thanks for the eye-opener :D
[11:57:02] <heywuut> Derick: and the reaffirming glance on the server layout :) <3
[11:58:01] <heywuut> (I've been reading about using usernames as shardkey "everywhere", but.. I just read it as "this is a placeholder shardkey in our simple example" ;P)
[11:58:19] <heywuut> but, it definately makes sence ;D
[11:58:31] <heywuut> and I kan spel!
[13:10:31] <DinMamma> Hiya. I'm in the process of tweaking the read-ahead option for my RAID.. For some reason it was set to 2048; setting it to 256 increased query performance 3x!
[13:10:35] <DinMamma> Neat.
[13:11:23] <DinMamma> The documents are 4kb big, there is one index called product_hash, there are roughly between 20-80 documents that share the same product_hash. And all in all 75 million documents in the collection.
[13:12:04] <DinMamma> My question is, should I have a read-ahead that's around 4kb, or try to have it 4kb*~60 (for an average of 60 documents per product_hash), for best performance?
[13:47:43] <ahri> hi guys, can someone explain why my 2nd query returns nothing? https://gist.github.com/3122813
[13:49:25] <heywuut> ahri: because there is no such path? ;p
[13:52:44] <ahri> ugh. yes, i just noticed that, i must've run a test in the background that swapped the data out
[13:52:57] <ahri> silly me :(
[13:56:46] <heywuut> hehe :)
[14:25:36] <mw44118> Hi -- I'm running a query from a cron job every minute to check my mongo db for recent changes. the query looks like this: db.oplog.rs.find({'ts.t': {$gte : 1342199800}}). The idea is to get recent changes. Is this the right way to get recent changes? Is there some index I can add so that this query runs more quickly?
[14:28:11] <mediocretes> mw44118: I've never done this myself, but take a look at this: http://www.mongodb.org/display/DOCS/Tailable+Cursors
[14:45:50] <Bartzy> I don't understand the use of findAndModify
[14:45:55] <Bartzy> Why is it atomic ?
[14:46:16] <Bartzy> What's the difference between just fetching the document and updating it, so that you have the document on your client and the updated document in mongo ?
[14:48:55] <mediocretes> find and modify allows you to issue an update and retrieve the state of the document just before you updated it, and guarantees that no one else touched it in between those two things happening.
[14:50:52] <Bartzy> OK. What is it good for ?
[14:51:13] <Bartzy> A second after that someone could touch it... so I got the document and the update was atomic.. so ?
[15:02:11] <mediocretes> Bartzy: let's say you are using mongo to store background worker jobs. you need to get the document once and only once, but have many workers.
[15:02:25] <Bartzy> ok
[15:02:31] <mediocretes> find and modify lets you set the document to {working: true} and retrieve it, to see if anyone else was already working on it
[15:02:53] <mediocretes> any system without atomic find and modify risks multiple dispatches or a job getting skipped
[15:02:55] <Bartzy> "to see if anyone else was already working on it" ? What does that mean ?
[15:03:04] <mediocretes> to see if another worker had already fetched the job
[15:03:13] <Bartzy> multiple dispatches I understand. How a job can get skipped ?
[15:03:29] <Bartzy> How another worker has fetched the job if you query on working: false in find and modify ?
[15:03:57] <mediocretes> what if two workers query for working: false at exactly the same time, and then both update it to working: true
[15:04:09] <mediocretes> without FAM, both will work the job. with FAM, only one will.
[15:05:07] <mediocretes> it's the kind of thing you don't need until you do, and then you REALLY do.
[15:05:09] <mediocretes> lunchtime
[15:06:13] <Bartzy> how come only one will? it first finds the first working: false... so both are getting that... or is the first one to fetch it locking, and then the 2nd one looks to see if there is a lock ?
[15:06:20] <Bartzy> mediocretes: ^
[15:07:31] <mediocretes> find and modify guarantees that no one else has touched the document between your find and your update, so only one will get it
[15:07:36] <mediocretes> I'll be back later :)
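mediocretes' worker-queue scenario is the canonical use: the find and the update happen as one step, so two workers can never both claim the same job. An in-memory sketch of the same "claim" semantics, with a lock standing in for the server-side atomicity and jobs as plain dicts:

```python
import threading

_lock = threading.Lock()

def find_and_modify(jobs):
    """Atomically find an unclaimed job, mark it working, and return it.
    Mirrors the shape of findAndModify with query {working: false} and
    update {$set: {working: true}}."""
    with _lock:  # stands in for the server doing find+update as one step
        for job in jobs:
            if not job["working"]:
                job["working"] = True
                return job
        return None  # no unclaimed job left

jobs = [{"id": 1, "working": False}, {"id": 2, "working": False}]
first  = find_and_modify(jobs)   # claims job 1
second = find_and_modify(jobs)   # claims job 2, never job 1 again
```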
[15:46:39] <mw44118> need help constructing an index so that a query runs more quickly
[15:47:22] <mw44118> I have a collection of people, and each person document has a field "work" that is a list of embedded documents. Each embedded document has a reference field pointing to another document "company"
[15:47:43] <mw44118> so, the query I am running is "give me all the current employees of company X"
[15:48:26] <mw44118> this goes really slow, because mongo scans all 2 million people with a basic cursor.
[15:48:50] <mw44118> How do I build an index so that queries for a field in a list of embedded docs are fast?
[15:51:52] <NodeX> index the embedded doc
[15:52:18] <NodeX> db.foo.ensureIndex({'foo.bar':1});
[15:53:52] <mw44118> it is a reference field, so does that matter in how i ensure the index?
[15:54:12] <mw44118> foo.bar points to a document in the Bar collection
[16:01:12] <NodeX> no
[16:01:26] <NodeX> foo.bar points to {foo : {bar : "value" }}
[16:25:22] <Bartzy> Mongo flushes to disk only every 60 seconds ?
[16:39:26] <Bartzy> Why does it make sense to have both indexes for a and for a-b if b is a multikey field ?
[17:08:45] <mw44118> what does it mean when I do a query like "db.collectionName.getIndexes()" and mongo just hangs?
[17:10:04] <mw44118> I imagine doing something like asking for the indexes on a collection ought to be really fast, so maybe the mongo box is overwhelmed
[17:10:07] <mw44118> is that right?
[17:49:37] <rockets> I just had an unclean mongo shutdown. Can i restore from backup instead of doing a repair, to get it to start again?
[17:54:30] <rockets> also, why does an unclean shutdown also crash my other mongo nodes in the cluster?
[17:54:38] <rockets> doesn't that defeat the purpose of having a cluster?
[18:20:27] <chubz> How can I test if my failover is working correctly? Is there a way I can make my primary node fail?
[18:24:25] <diegok> chubz: rs.stepDown(10)
[18:24:49] <diegok> chubz: 10 is how many seconds it will stay stepped down
[18:25:15] <chubz> diegok: thanks, that's exactly what I was looking for
[18:25:42] <diegok> ;)
[18:56:04] <falu_> I have a long running for-loop with a lot of pymongo calls, and I see how available connections drop from 817 to 0 (error: too many connections). Is it correct that the connections are kept because the GC does not kill them quickly enough?
[19:20:37] <jstout24> does anyone do pre aggregations with daily, hourly, and minutely data in one document?.. if so, how are you querying a range by the minute
[19:20:48] <jstout24> trying to see if anyone has developed a clean way to do it
[19:43:36] <alonhorev_> hi all. assuming i have a large cluster and someone runs a very long find() that runs on all shards. how can i kill the query from mongos (and propagate the kill to the mongods)? can i disallow queries that run on all shards?
[20:00:09] <jmorris> does anyone know why mongoose would be storing ObjectId("500472bbc5f1c8ee3a000014") in the _id field?
[20:16:13] <owen1> i try to insert a json key with . (dot) as part of a key and i get an exception - key turkey %0.5 must not contain '.' - why can't i have . ?
[20:40:31] <algernon> owen1: because . is used for dotted notation, to reach into objects
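algernon's point is that '.' is reserved for the dotted path notation ({'a.b': 1} reaches into subdocuments), so it is rejected in keys even though JSON itself allows it. A common client-side workaround is to escape dots before inserting; this sketch uses a full-width dot as the stand-in character, which is only one possible choice:

```python
def escape_keys(doc, dot="\uff0e"):
    """Recursively replace '.' in dict keys with a substitute character
    so the document is storable; apply the reverse mapping when reading."""
    if isinstance(doc, dict):
        return {k.replace(".", dot): escape_keys(v, dot) for k, v in doc.items()}
    if isinstance(doc, list):
        return [escape_keys(v, dot) for v in doc]
    return doc

safe = escape_keys({"turkey %0.5": 1.23})  # key no longer contains '.'
```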
[20:41:32] <chubz> After a failover occurs and a new primary is chosen, is the former primary node being attempted to be fixed? Or does one have to manually restart the node and fix it?
[21:15:38] <ahri_> hi, how would i get a list of path strings out of this data structure? https://gist.github.com/3125098
[21:16:14] <ahri_> i'm still thinking in relations :\
[22:21:50] <chubz> Any idea why I can connect "mongo localhost:27017" but not ports 27018 or 27019? I started mongod processes under those ports but it's not working. how can i check whether or not those processes are really being used by those ports? (i'm on linux)
[22:47:18] <owen1> algernon: but it's valid in js and json, i think.