[01:12:29] <Gavilan2> Hi! What's the best ORM for JavaScript? And which is most "transparent" to the business model? I'll be trying to port it to meteor...
[01:19:07] <bizzle> just asked this really simple question on SO about casbah/scala http://stackoverflow.com/questions/12327269/is-there-a-more-idiomatic-way-to-use-casbah-to-check-a-password
[02:40:18] <ecksit> (just fyi, i have zero mongo experience. i inherited this project and they no longer want to use mongo.)
[02:41:47] <UForgotten> sorry :( too hard for them? ;)
[02:42:49] <futini> what is the best approach for working with ids in mongo? for example, the _id for a defect is too long, and it's hard to work with in a URL
[02:43:02] <ecksit> the dev they had used mongo for an internal app, and now that he's left no one can manage it.
[02:43:09] <futini> is it a good approach to create another, incremental id?
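A common answer to futini's question is the counter-collection pattern: a findAndModify that atomically increments a sequence document. A sketch, with `counters` and `defects` as illustrative names:

    // Counter-collection pattern for short, URL-friendly incremental ids.
    // The `counters` and `defects` collection names are illustrative.
    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query:  { _id: name },
            update: { $inc: { seq: 1 } },
            new:    true,   // return the post-increment document
            upsert: true    // create the counter on first use
        });
        return ret.seq;
    }

    // Use the short id as _id (or store it alongside the default ObjectId).
    db.defects.insert({ _id: getNextSequence("defect"), title: "..." });

One caveat: the single counter document serializes id allocation, so it can become a write hotspot under heavy load.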
[02:43:25] <ecksit> so, they are getting me to change it back to mysql for their devs
[02:46:46] <timeturner_> what's the best way to store stuff in a subdocument that grows without limit?
[02:47:04] <timeturner_> I want to store them in a subdocument for the purposes of querying speed
[02:47:24] <timeturner_> and then individually grab the rest of the entries that couldn't fit in the document from another collection
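One way to sketch timeturner_'s idea is to cap the embedded array at a fixed size and spill older entries into a side collection. The names below are assumptions, and the read-then-trim step is not atomic:

    // Keep at most MAX_EMBEDDED recent items embedded for fast queries;
    // overflow older ones into `items_archive`. All names are illustrative.
    var MAX_EMBEDDED = 100;

    function addItem(docId, item) {
        db.things.update({ _id: docId }, { $push: { recent: item } });
        var doc = db.things.findOne({ _id: docId }, { recent: 1 });
        if (doc.recent.length > MAX_EMBEDDED) {
            // Not atomic: another writer could interleave between these steps.
            db.items_archive.insert({ owner: docId, item: doc.recent[0] });
            db.things.update({ _id: docId }, { $pop: { recent: -1 } }); // drop oldest
        }
    }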
[02:55:56] <mrpoundsign> hello everyone. I am trying to figure out how to do an atomic findAndModify that updates a field on all subdocuments to be the same thing. For example:
[02:56:31] <mrpoundsign> I want to set the status to 'ready' and the status on both sub objects to 'ready' in one command. I tried the following:
[02:57:10] <mrpoundsign> db.foo.findAndModify({query: {status: 'waiting'}, update: {$set: {status: 'ready', "thingies.status": 'ready'}}}) which I suspected wouldn't work. Do I need to iterate over every sub object in the array and specify them all?
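mrpoundsign's suspicion was right for servers of this era: the positional operator only resolves the first matching array element, so each element has to be addressed by index. A sketch assuming exactly two subdocuments in `thingies`:

    // Set the top-level status and every element's status in one atomic
    // command by naming each array index explicitly (assumes two elements).
    db.foo.findAndModify({
        query:  { status: 'waiting' },
        update: { $set: {
            status: 'ready',
            'thingies.0.status': 'ready',
            'thingies.1.status': 'ready'
        }}
    });

Much later server versions added the all-positional operator, 'thingies.$[].status', which updates every element in one $set without knowing the array length.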
[04:35:42] <phatduckk> hey guys - anyone here have experience identifying causes of high lock % ?
[04:46:00] <mrpoundsign> phatduckk: what do you mean?
[04:52:22] <phatduckk> mrpoundsign: i see a bunch of slow inserts and updates due to lock waits
[04:52:43] <phatduckk> i also see my lock% in mongostat spike high (almost 200%) from time to time
[04:53:29] <phatduckk> trying to track down what my code is doing that mongo doesn't like - or find out if I need beefier hardware
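One way to attribute lock spikes to specific operations is the shell's currentOp helper. A sketch, with field names as they appear in 2.x-era output:

    // Print in-flight operations that are waiting on a lock or have been
    // running for more than a second.
    db.currentOp().inprog.forEach(function (op) {
        if (op.waitingForLock || op.secs_running > 1) {
            printjson({ opid: op.opid, op: op.op, ns: op.ns,
                        secs: op.secs_running, query: op.query });
        }
    });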
[04:54:16] <mrpoundsign> well more hardware always helps. haha
[05:08:49] <phatduckk> lemme get iozone installed real quick
[05:08:56] <mrpoundsign> also do you know if the filer is on the same switch as the host?
[05:09:36] <mrpoundsign> might be going through a 100mb hub somewhere. haha
[05:09:45] <mrpoundsign> have had those kinds of issues in the past.
[05:10:43] <mrpoundsign> Again, I am no expert, but I would personally go for a more distributed mongo cluster spread over more, smaller machines with local storage. I think remotely mounted storage is not recommended. At least make sure you're using iscsi, if you can.
[05:36:31] <mrpoundsign> is that much higher than what you saw while mongodb was churning? if it's not much higher, it means, for whatever reason (CPU, network, mount options), you're not able to get more performance out of the filer from that server.
[05:37:30] <mrpoundsign> if it's significantly (hundreds of %) higher, then it could be something else (but you can probably still get better performance with things like mounting with noatime, or other tweaking).
[05:38:36] <phatduckk> mongo doesn't seem to creep over 25% io
[05:40:14] <mrpoundsign> looks like it's around the 25% mark, which is what I saw on the mongo graphs as well. First thing I would try is mounting noatime, see if that helps and how much. That's basically a locked write every time you access the file (read and/or write). It's often turned off for remote mounts.
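For reference, noatime is a mount option; an illustrative /etc/fstab entry for an NFS filer, with host, export, and mountpoint as placeholders:

    # placeholders: filer01, /export/mongo, /data/mongo
    filer01:/export/mongo  /data/mongo  nfs  rw,noatime,hard,intr  0 0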
[05:44:14] <mrpoundsign> mexlex: but that's subjective. I like it. *shrug
[05:44:24] <mrpoundsign> mexlex: yeah it's good too.
[05:44:31] <mexlex> well in terms of updates and patches and stuff like that
[05:47:38] <mrpoundsign> mexlex: they're both actively supported. Generally I use ubuntu because I am not afraid of newer versions of software. CentOS tends to lag. But again, it's subjective -- what you know best you'll do the best with.
[05:48:45] <mrpoundsign> my current employer loves CentOS so we use that for all our stuff. But I laugh every time they spend 2 days getting something that is an apt-get away in ubuntu. Again, my statements are my opinion, and I could be wrong ( before I get yelled at :P )
[06:20:09] <phatduckk> Sat Sep 8 06:18:58 [conn105] problem detected during query over config.system.profile : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
[06:25:03] <phatduckk> app's super fast with 1 server dead
[15:32:01] <Antaranian> ron: I have a known value for a property of a nested doc. now i want to find all docs containing an embedded doc with that attribute value
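What Antaranian describes is plain dot notation, or $elemMatch when the embedded docs sit in an array and several fields of one element must match together. A sketch with made-up names:

    // Single nested doc: dot notation reaches into it.
    db.posts.find({ 'author.name': 'alice' });

    // Embedded docs in an array, matching two fields of the *same* element:
    db.posts.find({ comments: { $elemMatch: { author: 'alice', flagged: true } } });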
[16:06:44] <fg3> need help with this question: http://pastebin.com/uwG9kNjq
[16:10:09] <fg3> oops posted that last question to wrong channel
[16:17:22] <ron> fg3: sorry, not familiar with the language in the sample.
[16:20:39] <fg3> ron, my fault posted to wrong channel
[16:23:19] <fg3> ron, correct me if I'm wrong -- it's not possible to edit documents if they have nested arrays of 2 or more levels, because the positional operator cannot handle it.
[16:27:19] <ron> fg3: not sure, honestly. embedded docs within arrays, and arrays within arrays, are limited in mongo.
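fg3's reading matches the docs of the era: the positional operator resolves only one array level, so doubly-nested arrays were usually edited client-side and written back. A non-atomic sketch with illustrative names (someId is a placeholder):

    // Pull the document out, mutate the nested arrays in JS, save it back.
    var doc = db.boards.findOne({ _id: someId });
    doc.rows.forEach(function (row) {
        row.cells.forEach(function (cell) { cell.checked = true; });
    });
    db.boards.save(doc);   // overwrites the whole document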
[16:33:41] <darklrd> hello, I am using mongodb to log chat messages for multiple websites, and for each user of a particular website I allocate a collection to store his messages, so I keep hitting the namespace limit
[16:33:59] <darklrd> how do I solve this problem? Any suggestion?
[16:53:49] <vsmatck> darklrd: I don't know if/how you want to query the data. But have you considered append only files?
[16:54:30] <darklrd> vsmatck, for querying I just need to extract the latest, say, 50 msgs first
[16:54:48] <darklrd> vsmatck, and then later on keep repeating this procedure
[16:55:12] <darklrd> vsmatck, what do you mean by append only files?
[16:56:24] <darklrd> vsmatck, are you suggesting using system files?
[16:56:50] <vsmatck> Yeah. Appending to files on the filesystem. But it doesn't sound like that fully meets your needs.
[16:57:33] <vsmatck> Redis would be perfect for keeping the last 50 messages in memory. Then you could keep a total history in append only files, or mongo.
[16:58:17] <darklrd> vsmatck, yes I can use append only files, and I am using redis at the moment for recent messages, but I was hoping to use mongo somehow
[17:01:08] <darklrd> vsmatck, mongo will enable me to perform some basic query operations and allow me to set up multiple servers :)
[17:01:58] <darklrd> vsmatck, I thought mongo would be best for this kind of scenario
[17:02:22] <vak> for my directory-per-db storage I am getting something new for no clear reason: [initandlisten] exception in initAndListen: 14043 clear tmp files caught exception exception: boost::filesystem::is_directory: Permission denied: "/var/brain-storage/mongodb-storage-pdb/mybd"
[17:03:13] <vsmatck> I'm not sure how well it would work but you could put all chat logs in one collection. Then build a secondary index on username and post timestamp. A descending index.
[17:03:34] <vsmatck> You could shard on username (I think) *reads*.
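vsmatck's suggestion spelled out in 2012-era shell syntax; collection and field names are assumptions:

    // One collection for all messages instead of one per user, with a
    // compound index serving "latest 50 for this user" queries.
    db.messages.ensureIndex({ user: 1, ts: -1 });

    db.messages.find({ user: 'alice' }).sort({ ts: -1 }).limit(50);

The compound index covers both the equality on user and the descending sort, so the query walks at most 50 index entries instead of scanning.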
[17:04:16] <ron> vsmatck: for a minute there I thought this is #redis ;)
[17:12:42] <darklrd> ron, I am using mongodb to log chat messages for multiple websites, and for each user of a particular website I allocate a collection to store his messages, so I keep hitting the namespace limit
[17:13:07] <ron> you keep a collection per user? o_O
[17:20:16] <vsmatck> Oh, sounds like this is a lot of traffic too. You may want to divide websites into different databases. Mongo has a write lock at the database level in the latest version.
[17:20:53] <darklrd> I see, yes I was planning to use different db per website
[17:20:54] <vsmatck> Unless the users for all websites are the same. In which case it seems like they'd need to be together *shrugs*.
[17:26:07] <darklrd> what beats me is that if I store all messages in a single collection, and they are already increasing at an alarming rate, won't it become a problem later, or would sharding take care of this?
[17:27:53] <vsmatck> Partitioning data is the only way to increase write performance. Sharding accomplishes that.
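Enabling that partitioning looks roughly like this from a mongos; the database, collection, and shard key are illustrative:

    sh.enableSharding('chat');
    sh.shardCollection('chat.messages', { user: 1 });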
[17:34:26] <vsmatck> For example if you want any type of decent performance your indexes should be in memory. Disks are so huge now relative to main memory.
[17:34:53] <darklrd> Yes, indexes should be in RAM
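A quick shell check for whether indexes still fit in RAM (collection name is an assumption; sizes are in bytes):

    db.messages.totalIndexSize();   // bytes used by all indexes on `messages`
    db.stats();                     // per-database totals, including indexSize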
[17:35:16] <vsmatck> I was watching a talk by jeremy zawodny over at craigslist. He was talking about how they don't fill up their disks because they get to a point where their indexes start falling out of memory.
[17:35:42] <darklrd> so, that becomes the deciding factor then, I will search the net for that video, thank you again :)
[17:35:54] <vsmatck> I know it's on the 10gen website somewhere.
[17:36:17] <darklrd> sweet, now I am definitely heading in right direction :)
[18:23:00] <taf2> hey, i'm trying to use addToSet with timestamps and it's not working…
[18:23:19] <taf2> these are all the same: "tsv" : [ ISODate("2012-09-08T18:20:00.842Z"), ISODate("2012-09-08T18:20:00.985Z"), ISODate("2012-09-08T18:20:00.583Z") ]
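Those three ISODates differ in their milliseconds, so $addToSet is behaving correctly: it only collapses exact duplicates. Deduplicating means truncating timestamps to the granularity that matters before the update. A sketch with assumed names (someId is a placeholder):

    var now = new Date();
    now.setUTCSeconds(0, 0);   // truncate seconds and milliseconds

    // Now equal-to-the-minute timestamps compare equal and are added only once.
    db.visitors.update({ _id: someId }, { $addToSet: { tsv: now } });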
[18:30:18] <taf2> oh.. question, so i have a sharded mongodb… on a collection visitors… using the default _id field… if i changed that to something a bit more meaningful to my domain… what do i need to consider in terms of the mongodb sharding?
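The usual considerations when replacing _id on a sharded collection: the shard key pattern cannot be changed once declared, unique indexes must be prefixed by the shard key, and _id uniqueness is enforced cluster-wide only if _id itself is the shard key (otherwise each shard enforces it locally). A sketch with assumed names:

    sh.shardCollection('mydb.visitors', { _id: 1 });   // _id doubles as the shard key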