[03:05:52] <brandon-zacharie> I'm using mongodb (2.4.8_1) and Node.js (0.10.22) from MacPorts with the official Node.js mongodb driver (1.3.20) from NPM (1.3.15)
[03:06:09] <brandon-zacharie> this is freaking me out
[03:06:21] <ranman> and you can run operations fine through the shell?
[03:06:29] <brandon-zacharie> i've even tried rebuilding everything
[03:06:47] <ranman> if you start up the node shell and run that snippet you get no output besides connect ?
[03:06:48] <brandon-zacharie> terminal? node shell? or?
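For reference, the kind of connect-and-query snippet being discussed would look roughly like this with the 1.3.x Node.js driver; the URL, database and collection names here are placeholders, not taken from the conversation:

    // minimal sketch: connect, run a find, print the results
    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://127.0.0.1:27017/test', function (err, db) {
        if (err) throw err;
        console.log('connect');                // the only output brandon-zacharie reports seeing
        db.collection('things').find().toArray(function (err, docs) {
            if (err) throw err;
            console.log(docs);                 // if this never prints, the find callback is not firing
            db.close();
        });
    });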
[13:11:18] <ts33kr_> Hey everyone. Where can I find some sort of guidelines on building effective cluster topology? I need numbers like: recommended max number of connections per node (or per computational core), and how much memory is consumed by one connection, etc.
[13:23:49] <liquid-silence> Hi guys, a schema design question
[13:28:56] <cheeser> i would store the users' group IDs as part of the user documents.
[13:29:49] <liquid-silence> yeah, the groups are an array on the user
[13:37:28] <Nodex> liquid-silence : it really depends on your access patterns and what you want to do with the data
[13:38:02] <liquid-silence> nah my schema will work this way
[13:47:50] <Nodex> only you can answer that as only you know what you're trying to achieve with it
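A sketch of the embedded-IDs approach cheeser describes, in mongo shell syntax; the field names and the ObjectId value are made up for illustration:

    // hypothetical user document: group _ids embedded as an array
    var adminGroupId = ObjectId("528f5e1f0000000000000001");
    db.users.insert({ name: "alice", groups: [adminGroupId] });

    // find every user that belongs to that group (matches any element of the array)
    db.users.find({ groups: adminGroupId });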
[14:01:56] <synesp> I'm having a small issue.. it seems like a few times a day, mongo stops reading new docs out of a collection of mine. new docs are definitely being written, but I have a process that "monitors" a collection (grabs the last three records using sort and limit) every 2 seconds and sometimes this process just stops reading the last few docs
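The kind of polling query synesp describes would look something like this in the shell; the collection name is illustrative and it assumes _id order tracks insertion order:

    // newest three documents, re-run every couple of seconds
    db.events.find().sort({ _id: -1 }).limit(3);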
[16:13:47] <ron> you're asking whether it would be useful for your application to have a caching layer?
[16:14:08] <asker> yap caching layer for mongo, like 1 call vs 1 cache call?
[16:14:15] <asker> mongo seems to be pretty fast alrdy
[16:14:45] <ron> do you.... have any performance issues at the moment?
[16:16:21] <asker> yap my request takes ~2s, never cached stuff before, so is it worth caching the individual mongodb calls or should i cache bigger sections under 1 cache key?
[16:17:20] <ron> well, that's a complicated question. caching has its costs too. have you tried figuring out what's taking it 2s?
[16:18:23] <asker> ofc the mongo queries, but memcache seems to be as fast as mongo to call
[16:23:55] <asker> well, i benchmarked wrong: memcache is 20-30 times faster than mongodb in my case
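A cache-aside sketch of the trade-off being weighed here, in Node.js; it assumes the npm "memcached" module, and the function and collection names are hypothetical:

    var Memcached = require('memcached');
    var memcached = new Memcached('127.0.0.1:11211');

    // try the cache first, fall back to mongodb and repopulate on a miss
    function getReport(db, key, callback) {
        memcached.get(key, function (err, cached) {
            if (!err && cached) return callback(null, cached);        // cache hit
            db.collection('reports').find({ report: key }).toArray(function (err, docs) {
                if (err) return callback(err);
                memcached.set(key, docs, 60, function () {});         // cache for 60s, ignore set errors
                callback(null, docs);
            });
        });
    }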
[16:52:13] <ambushsabre> hey guys, I have a quick question about an update I'm trying to run with the ruby driver. It performs the update perfectly, but still throws an error about type conversion. It's weird, because it isn't crashing the server or anything, just throwing the error and then moving on
[18:49:46] <meagain> Hello, can anyone please advise me on the best way to import or convert a .sql dump file to json
[18:50:41] <kali> meagain: it's often easier to export the data in json or csv from your sql engine
[18:53:13] <meagain> Oh okay, but for this particular case, I was just sent the .sql dump file. I have little knowledge about the contents except that it contains four tables, and I have the sql queries for two of them which I am supposed to be using
[18:55:55] <kali> well, you have two possible roads: load the data back into an sql engine (safe, easy, but potentially slow), or parse the dump yourself (certainly faster to run, but much messier and much longer to code / debug / test)
[18:57:07] <meagain> Okay thanks, number one won't be possible at this time, I would have to do number 2. Can you kindly tell me how to do this?
[18:58:18] <kali> open your text editor or ide, and code.
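For the first road, once the tables are back in an SQL engine and exported to CSV, mongoimport can load them; the database, collection and file names below are placeholders:

    mongoimport --db mydb --collection users --type csv --headerline --file users.csv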
[19:19:17] <darius93> question: why use mongodb over mysql, and why is it so hard to understand how it query data or obtain it compared to mysql?
[19:24:24] <harttho> darius93: In regards to the second part of your question… because you haven't taken the time to actually learn it
[19:25:09] <darius93> well i have tried but it all looks too complex and confusing because how everything is stored is so different from mysql
[20:31:19] <astropirate> I want to run two queries and get the UNION of the two queries
[20:33:52] <kali> if it's on the same collection, you can use $or. if it's across two, you cannot do it in mongodb, you'll have to perform the two queries and merge them yourself
[20:37:30] <astropirate> kali, how is the performance of using $or vs multiple (2+) separate queries and merging them in application?
[20:37:46] <astropirate> kali, all results being in the same collection
[20:38:45] <astropirate> there are likely to be multiple parameters to find against
[20:41:34] <kali> astropirate: it's usually better as you only pay the network round trip once, and the optimizer should be able to cope with it
[20:42:01] <kali> astropirate: if you're put in a situation where you need a huge number of queries, though, you may have to rethink your schema
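A sketch of the single-round-trip form kali suggests, in the mongo shell; the collection and field names are made up for illustration:

    // server-side union of two conditions on the same collection
    db.items.find({ $or: [ { status: "pending" }, { owner: "astropirate" } ] });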
[20:42:32] <tastycode> There is no way to sort by the object id's getTimestamp() is there?
[20:42:51] <kali> tastycode: just sort by the _id, the timestamp is at the beginning
[20:43:46] <kali> tastycode: note that the timestamp only has second precision, so documents generated within the same second may not be in order
[20:44:03] <tastycode> that's fine.. only needs to be accurate down to the minute
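A shell sketch of kali's suggestion; the collection name is illustrative. ObjectIds start with a 4-byte creation timestamp, so sorting on _id orders documents by insertion time to roughly one-second precision:

    // newest documents first, using the timestamp embedded in the ObjectId
    db.logs.find().sort({ _id: -1 }).limit(10);

    // the embedded creation time can also be read directly
    db.logs.findOne()._id.getTimestamp();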
[20:44:22] <tastycode> Thanks for getting back quickly…. i miss seeing you guys at epicenter!
[20:45:13] <tastycode> (that is if you are part of mongodb's team… they used to have "office hours" at epicenter cafe )