[04:55:07] <circlicious> was thinking of using mongodb for my app, still skeptical. googled, read some posts against mongo, some say rdbms is best and what not. :/
[04:56:10] <circlicious> i did a quick test here actually, 10k writes in mysql took 300 seconds, in mongo was 0.1sec
[04:56:30] <circlicious> i think mongo will suit well in my use case, but quite confused.
[04:57:54] <circlicious> some say UPDATEs are harsh with mongo, but i think i just need to read/insert
[04:59:13] <circlicious> reasons i want to go with mongodb? my schema might change frequently, and i think i am going to get a lot of writes, while i can cache the reads.
[05:07:12] <circlicious> neil__g: i think i remember reading mongodb does not really write to disk thats why its so fast. and there is an option to make sure it writes to disk although then i would lose benefits. what is that option ? i really wanna make sure that i dont lose data.
[05:17:36] <circlicious> i added ['safe' => true], and did that make it slow? for 10k writes the time jumped from 0.1s to 0.7s
[05:20:05] <circlicious> 1 question, i setup 2 webpages, 1 would make 10k writes to mysql, another 10k to mongodb. i execute the mysql one (via apache) and it keeps on processing for 300s, and then right after that i execute the one for mongo. the mongo one does not even start until the mysql one is completed. what could be the reason?
[05:20:45] <circlicious> as soon as mysql writes complete, the mongo also starts and completes in 0.7s (with safe write)
[07:54:23] <circlicious> i am generating timestamps in JS client side, sending to server and trying to save in mongo. values like 1342425122632 becomes -1899641016 - what should i do ?
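[editor's note] The mangled value is exactly what a JS millisecond timestamp looks like after truncation to a signed 32-bit integer (the PHP driver stored PHP ints as BSON int32 by default in this era, which is why `mongo.native_long` comes up below). A minimal Python illustration of the truncation:

```python
import struct

ts = 1342425122632  # millisecond timestamp sent from the JS client

# Keep only the low 32 bits and reinterpret them as a signed int32,
# which is what storing the value in a 32-bit BSON field does.
low32 = struct.unpack('<i', struct.pack('<q', ts)[:4])[0]

print(low32)  # -1899641016, the garbled value from the log
```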
[08:06:44] <circlicious> really? is this channel so quiet?
[08:09:30] <ron> depends on the time of the day and the day of the week.
[10:21:02] <s0urce> I find no example where the user/pass is required for connection in node.js, does anyone got one for me?
[10:21:41] <ron> umm, what's that got to do with mongo?
[10:22:54] <s0urce> I am not really sure if mongo needs a user/pass auth at connection time at all. I am totally new to mongo. Maybe i am on wrong way.
[10:23:39] <ron> okay, let's try clarifying a few things.
[10:24:59] <ron> s0urce: mongodb, as a database, has authentication settings if you want to set them up. by default, they're off. if you want to set up your application's authentication to be based off of credentials stored in mongodb, that's a different issue.
[10:26:55] <s0urce> i added a user for my database like in the example "use projectx db.addUser("joe", "passwordForJoe")" and i can see my data, my collection and user in "MongoVUE", but i don't find any way to pass my user/pass when connecting in my project code
[10:28:18] <ron> that depends on which client you use, I guess. not really familiar with node.js.
[10:28:38] <s0urce> Is mongo still in beta, or how can i use a database without auth for a productive page?
[11:07:24] <circlicious> my data will be broken into 2 parts, 20% in mysql, 80% in mongo. is it a good idea if i use mysql's auto increment id and save it in my mongo collection?
[11:07:44] <NodeX> if you need to reference the ID then I would say so :P
[11:07:57] <circlicious> ye then i can just use that ID to fetch data from mongo u know
[11:08:22] <circlicious> what do you guys do to achieve "ID"s in general in mongo?
[11:08:27] <circlicious> generate some random tokens and insert?
[11:24:42] <circlicious> so basically in order to achieve a relation, you store in a collection, take the ObjectID, store it in another collection with the other data. then you use the ObjectID to fetch data from the other collection. So basically 1 query per collection read, right ?
[11:25:18] <heywuut> circlicious: also note, you can create a new ObjectId() right away, and use it in your insert (so you don't have to look it up later)
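[editor's note] heywuut's point works because an ObjectId is just 12 bytes the client can build itself (4-byte unix timestamp plus machine/process/counter bytes), not something the server hands out. A rough pure-Python imitation of the layout for illustration only (not the real `bson.ObjectId`; the field contents here are simplified assumptions):

```python
import itertools
import os
import time

# Counter starts at a random offset, like real driver implementations.
_counter = itertools.count(int.from_bytes(os.urandom(3), 'big'))

def fake_object_id() -> str:
    """Build a 12-byte id shaped like a BSON ObjectId:
    4-byte unix timestamp + 5 random bytes + 3-byte counter."""
    ts = int(time.time()).to_bytes(4, 'big')
    rand = os.urandom(5)
    count = (next(_counter) % 0xFFFFFF).to_bytes(3, 'big')
    return (ts + rand + count).hex()

oid = fake_object_id()
print(len(oid))  # 24 hex characters, like a real ObjectId
```

Because the timestamp leads, ids generated this way sort roughly by creation time, which is also why they come up later in the channel as shard-key candidates.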
[11:25:19] <NodeX> you can embed instead if it makes more sense to
[11:25:48] <circlicious> embed means nested object?
[11:32:43] <circlicious> tell me something, I had to do ini_set('mongo.native_long', 1); to make it save 64bit ints, but now db.coll.find() ends up showing NumberLong(".....")
[11:32:51] <circlicious> not clean you know, if i have to examine something
[11:33:05] <circlicious> can i get rid of that extra texts from mongo shell somehow?
[11:33:25] <Derick> circlicious: javascript doesn't support 64bit ints at all
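[editor's note] Derick's point: JavaScript numbers are IEEE-754 doubles, so integers are only exact up to 2**53; that is why the shell has to wrap true 64-bit values in NumberLong rather than print bare numbers. The precision cliff can be shown with Python floats, which are the same double type:

```python
# JavaScript numbers are IEEE-754 doubles: integers are exact only up to 2**53.
exact_limit = 2 ** 53  # 9007199254740992

# Above the limit, adding 1 is silently lost to rounding.
assert float(exact_limit) + 1 == float(exact_limit)

# Just below the limit, arithmetic is still exact.
assert float(exact_limit - 1) + 1 == float(exact_limit)
```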
[11:38:55] <Derick> NodeX: the php driver doesn't set the NumberLong wrapper
[11:39:13] <NodeX> [12:33:04] <circlicious> tell me something, I had to do ini_set('mongo.native_long', 1); to make it save 64bit ints, but now db.coll.find() ends up showing NumberLong(".....")
[11:40:57] <Derick> so how do you add things to mongo?
[11:41:14] <heywuut> okay.. scenario: I have 5 servers, and intend to run 1 app-server, and 4 DB-servers (two shards + replicas of them).. http://i.imgur.com/Aq1xM.png
[11:51:23] <circlicious> yes i read that, and thats why i did what i have done
[11:51:32] <heywuut> Derick: I know! I've been staring myself blind at the various things to use as a shardkey.... (twitter IDs are ascending numbers so not too awesome)
[11:52:05] <circlicious> Derick: i know, but i need to examine some things, and it makes it hard to read. so i wanna get rid of that. or maybe there is a nice mongodb client ? :/
[11:52:07] <heywuut> Derick: but.. still new to mongo, so... not used to thinking of "such things" (names, etc) as shardkeys...
[11:52:16] <circlicious> ya so you wrote php-mongo? cool
[11:52:22] <Derick> circlicious: well, a little bit of it
[11:52:26] <circlicious> did a good job, i wish php-mysql php-pdo were like it
[11:54:01] <circlicious> btw, what do you think about taking mysql autoincrement id and saving in mongo to make a relation, so that i can fetch with ease, Derick ?
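[editor's note] Reusing the MySQL auto-increment id as a field in the Mongo document is a workable join key, as NodeX said earlier. When the sequence has to live in Mongo itself, the usual pattern of the era was a "counters" collection bumped atomically with findAndModify and $inc. A server-free Python sketch of that pattern's logic (a dict plus a lock standing in for the collection and the server-side atomicity):

```python
import threading

# Stand-ins for a "counters" collection and findAndModify's atomic $inc.
_counters = {}
_lock = threading.Lock()

def next_id(name: str) -> int:
    """Atomically increment and return the sequence value for `name`,
    like findAndModify({_id: name}, {$inc: {seq: 1}}) would server-side."""
    with _lock:
        _counters[name] = _counters.get(name, 0) + 1
        return _counters[name]

first = next_id("photos")
second = next_id("photos")
print(first, second)  # 1 2
```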
[11:54:08] <Derick> but IMO, we need to change the JavaScript shell to not show that
[11:56:26] <heywuut> Derick: thanks for the eye-opener :D
[11:57:02] <heywuut> Derick: and the reaffirming glance on the server layout :) <3
[11:58:01] <heywuut> (I've been reading about using usernames as shardkey "everywhere", but.. I just read it as "this is a placeholder shardkey in our simple example" ;P)
[11:58:19] <heywuut> but, it definitely makes sense ;D
[13:10:31] <DinMamma> Hiya. Im in the process of tweaking the read-ahead option for my RAID.. For some reason it was set to 2048; setting it to 256 increased query-performance 3x!
[13:11:23] <DinMamma> The documents are 4kb big, there is one index called product_hash, there are roughly between 20-80 documents that share the same product_hash. And all in all 75 million documents in the collection.
[13:12:04] <DinMamma> My question is, should I have a read-ahead thats around 4kb, or try to have it 4kb*~60 (for an average of 60 documents per product_hash) for best performance?
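[editor's note] Putting numbers on this: `blockdev` read-ahead is counted in 512-byte sectors (the usual assumption on Linux), so the old setting of 2048 dragged 1 MiB into cache per seek, while 256 pulls 128 KiB; the documents sharing a product_hash add up to roughly 240 KB if contiguous. A quick sanity check of those sizes:

```python
SECTOR = 512  # blockdev --setra counts in 512-byte sectors (assumed)

old_ra = 2048 * SECTOR  # 1 MiB read ahead per seek
new_ra = 256 * SECTOR   # 128 KiB read ahead per seek

doc = 4 * 1024          # ~4 KB documents
group = doc * 60        # ~60 docs per product_hash, IF laid out contiguously

print(old_ra // 1024, new_ra // 1024, group // 1024)  # 1024 128 240
```

Since the 20-80 documents sharing a hash are not guaranteed to be contiguous on disk, a huge read-ahead mostly evicts useful pages; that is consistent with the 3x win DinMamma saw from shrinking it.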
[13:47:43] <ahri> hi guys, can someone explain why my 2nd query returns nothing? https://gist.github.com/3122813
[13:49:25] <heywuut> ahri: because there is no such path? ;p
[13:52:44] <ahri> ugh. yes, i just noticed that, i must've run a test in the background that swapped the data out
[14:25:36] <mw44118> Hi -- I'm running a query from a cron job every minute to check my mongo db for recent changes. the query looks like this: db.oplog.rs.find({'ts.t': {$gte : 1342199800}}). The idea is to get recent changes. Is this the right way to get recent changes? Is there some index I can add so that this query runs more quickly?
[14:28:11] <mediocretes> mw44118: I've never done this myself, but take a look at this: http://www.mongodb.org/display/DOCS/Tailable+Cursors
[14:45:50] <Bartzy> I don't understand the use of findAndModify
[14:46:16] <Bartzy> What's the difference between having the document, updating it, and then you have the document on your client, and the document updated on mongo ?
[14:48:55] <mediocretes> find and modify allows you to issue an update and retrieve the state of the document just before you updated it, and guarantees that no one else touched it in between those two things happening.
[14:51:13] <Bartzy> A second after that someone could touch it... so I got the document and the update was atomic.. so ?
[15:02:11] <mediocretes> Bartzy: let's say you are using mongo to store background worker jobs. you need to get the document once and only once, but have many workers.
[15:06:13] <Bartzy> how come only one will? It first finds the first working: false... so both are getting that... or the first one to fetch that is locking and then the 2nd one is looking if there is a lock ?
[15:07:31] <mediocretes> find and modify guarantees that no one else has touched the document between your find and your update, so only one will get it
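[editor's note] The worker-queue scenario can be simulated without a server. The essential property of findAndModify is that the find (`working: false`) and the update (`working: true`) happen as one atomic step, so two workers can never claim the same job. A Python sketch, with a lock standing in for the server-side atomicity (names are illustrative):

```python
import threading

jobs = [{"_id": i, "working": False} for i in range(100)]
_lock = threading.Lock()
claimed = []  # (worker, job_id) pairs

def find_and_modify():
    """Atomically find the first job with working=False and mark it True,
    mimicking findAndModify({working: false}, {$set: {working: true}})."""
    with _lock:
        for job in jobs:
            if not job["working"]:
                job["working"] = True
                return job
    return None

def worker(name: str):
    while True:
        job = find_and_modify()
        if job is None:
            return
        claimed.append((name, job["_id"]))

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ids = [job_id for _, job_id in claimed]
print(len(ids) == len(set(ids)))  # True: every job claimed exactly once
```

Without the atomicity (find, then a separate update), two workers could both see `working: false` in between and process the same job twice.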
[15:46:39] <mw44118> need help constructing an index so that a query runs more quickly
[15:47:22] <mw44118> I have a collection of people, and each person document has a field "work" that is a list of embedded documents. Each embedded document has a reference field pointing to another document "company"
[15:47:43] <mw44118> so, the query I am running is "give me all the current employees of company X"
[15:48:26] <mw44118> this goes really slow, because mongo scans all 2 million people with a basic cursor.
[15:48:50] <mw44118> How do I build an index so that queries for a field in a list of embedded docs are fast?
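[editor's note] The question goes unanswered in the log, but dot notation in ensureIndex handles exactly this shape: indexing a field inside an array of embedded documents creates a multikey index, turning the query into an index lookup instead of a 2-million-document scan. A sketch in the mongo shell of the era (collection and field names assumed from mw44118's description; not runnable as-is):

```javascript
// Multikey index on the reference field inside the embedded "work" array.
db.people.ensureIndex({"work.company": 1});

// "all current employees of company X" can then use the index:
db.people.find({"work.company": companyXId});
```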
[16:25:22] <Bartzy> Mongo flushes to disk only every 60 seconds ?
[16:39:26] <Bartzy> Why does it make sense to have both indexes for a and for a-b if b is a multikey field ?
[17:08:45] <mw44118> what does it mean when i do a query like "db.collectionName.getIndexes()" and mongo just hangs?
[17:10:04] <mw44118> I imagine doing something like asking for the indexes on a collection ought to be really fast, so maybe the mongo box is overwhelmed
[18:56:04] <falu_> I have a long running for-loop with a lot of pymongo calls, and I see how available connections drop from 817 to 0 (error: too many connections). Is it correct that the connections are kept because the GC does not kill them quickly enough?
[19:20:37] <jstout24> does anyone do pre aggregations with daily, hourly, and minutely data in one document?.. if so, how are you querying a range by the minute
[19:20:48] <jstout24> trying to see if anyone has developed a clean way to do it
[19:43:36] <alonhorev_> hi all. assuming i have a large cluster and someone runs a very long find() that runs on all shards. how can i kill the query from mongos (and propagate the kill to the mongods)? can i disallow queries that run on all shards?
[20:00:09] <jmorris> does anyone know why mongoose would be storing ObjectId("500472bbc5f1c8ee3a000014") in the _id field?
[20:16:13] <owen1> i try to insert a json key with . as part of the key and i get an exception - key turkey %0.5 must not contain '.' why can't i have .?
[20:40:31] <algernon> owen1: because . is used for dotted notation, to reach into objects
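[editor's note] Because `.` is reserved for path traversal in queries (e.g. `{"a.b": 1}` reaches into subdocuments), keys containing dots must be rewritten before insert. A common workaround is replacing the dot with a look-alike character; sketched in Python (the replacement character is an arbitrary convention, not anything the driver mandates):

```python
def sanitize_keys(doc: dict, replacement: str = "\uff0e") -> dict:
    """Recursively replace '.' in keys, since BSON key names may not
    contain it. \uff0e (fullwidth full stop) is one common stand-in."""
    clean = {}
    for key, value in doc.items():
        if isinstance(value, dict):
            value = sanitize_keys(value, replacement)
        clean[key.replace(".", replacement)] = value
    return clean

doc = {"turkey %0.5": 1, "nested": {"a.b": 2}}
print(sanitize_keys(doc))
```

The same substitution has to be applied on the way out (and to query keys), so it is best kept in one choke point of the data-access layer.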
[20:41:32] <chubz> After a failover occurs and a new primary is chosen, is the former primary node being attempted to be fixed? Or does one have to manually restart the node and fix it?
[21:15:38] <ahri_> hi, how would i get a list of path strings out of this data structure? https://gist.github.com/3125098
[21:16:14] <ahri_> i'm still thinking in relations :\
[22:21:50] <chubz> Any idea why I can connect "mongo localhost:27017" but not ports 27018 or 27019? I started mongod processes under those ports but it's not working. how can i check whether or not those processes are really listening on those ports? (i'm on linux)
[22:47:18] <owen1> algernon: but it's valid in js and json, i think.