PMXBOT Log file Viewer


#mongodb logs for Sunday the 30th of November, 2014

[00:15:38] <zanes> What does the “v” field in the oplog mean?
[11:24:45] <styles> hey guys, using the aggregation framework, can I count all the stats between now and 12 hours ago (I can do this already), but if there are some hours with no stats (like nothing happened) it returns like 10-12-13-14 and 11 is missing
[11:24:50] <styles> is there a way to have mongo put a 0 for those hours?
[11:25:11] <styles> http://privatepaste.com/6f55b37366
[11:25:13] <styles> is my current query
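styles' pasted query isn't preserved here, but the gap he describes is inherent to $group: it only emits buckets for hours that actually contain documents, so the zeros for empty hours have to be filled in by the client. A minimal sketch in Python/pymongo, assuming an "events" collection with a "ts" datetime field:

```python
# Count documents per hour for the last 12 hours, then fill missing hours with 0.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient()
events = client.test.events  # assumed database/collection names

now = datetime.utcnow()
start = now - timedelta(hours=12)

pipeline = [
    {"$match": {"ts": {"$gte": start, "$lte": now}}},
    {"$group": {"_id": {"$hour": "$ts"}, "count": {"$sum": 1}}},
]
counts = {doc["_id"]: doc["count"] for doc in events.aggregate(pipeline)}

# $group produced nothing for empty hours, so add the zeros client-side.
hours = [(start + timedelta(hours=i)).hour for i in range(13)]
print([(h, counts.get(h, 0)) for h in hours])
```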
[11:43:30] <layke> I only want to store two fields.. id, JSON document.
[11:43:35] <layke> Is mongodb a solution for me?
[11:44:02] <layke> I'm happy turning my complex model into a JSON document and just saving that.
[11:44:08] <layke> I'll then just access it by the ID??
[11:53:37] <kali> layke: you're describing a key value store. mongodb can do that, but it does much more, so it may be less convenient/efficient than a pure key/value store
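As a rough illustration of the key/value usage kali describes, storing a whole JSON document under an id and fetching it back is just an upsert and a find_one. Database, collection, and field names below are assumptions:

```python
# Minimal key/value-style usage of MongoDB: one JSON document per id.
from pymongo import MongoClient

client = MongoClient()
docs = client.appdata.documents  # assumed names

def put(doc_id, payload):
    # Upsert the whole payload under the given id.
    docs.replace_one({"_id": doc_id}, dict(payload, _id=doc_id), upsert=True)

def get(doc_id):
    return docs.find_one({"_id": doc_id})

put("property-123", {"title": "Beach house", "bedrooms": 3})
print(get("property-123"))
```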
[11:56:51] <layke> To be honest, I may as well just continue to use mysql for it
[11:57:28] <kali> layke: yeah. same remark applies to mysql
[11:57:57] <layke> I will still be replicating my relational data in mysql.... I'll still be storing the results in my cache server. I just will be doing what most people will question and store JSON as a text in a mysql table
[11:58:43] <layke> It's not like I'm starting out with saving it in mysql. I already have the data backing it in mysql, if I find that I can continue down that route, I'll just stop saving my data over the 70 tables I currently use.
[11:59:01] <layke> It's obscene and getting to the point where I question why I decided to use a relational database in the first place.
[11:59:42] <kali> well, i guess the question is... do you have (or expect) issues with storing json data in your mysql ?
[12:00:54] <layke> 1) Writes will be so infrequent. I don't expect any transactional issues. Even at scale.. It's for vacation rental properties... so writes are so infrequent.
[12:01:31] <layke> 2) The JSON will be relatively large... Maybe 80kb~ average. Lots of foreign descriptions? No problems there though.
[12:02:09] <layke> 3) I don't ever need to query a "property" by anything other than the "ownerId" and the "propertyId", both of which I can just create a column for alongside the document.
[12:02:32] <layke> I think mongodb would just be another large hammer added to my architecture that I don't really need to worry about
[12:02:59] <layke> And well.. mysql... I don't think I will have any issues in storing JSON objects....
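If the data ever did move out of mysql, the access pattern layke describes maps to one document per property, with the JSON blob as a subdocument and ownerId/propertyId as indexed top-level fields. A sketch with assumed names:

```python
# One document per property: the ~80 KB JSON blob as a subdocument,
# with ownerId and propertyId indexed for the only two lookups needed.
from pymongo import MongoClient, ASCENDING

client = MongoClient()
properties = client.rentals.properties  # assumed names

properties.create_index([("ownerId", ASCENDING)])
properties.create_index([("propertyId", ASCENDING)], unique=True)

properties.replace_one(
    {"propertyId": 42},
    {"propertyId": 42, "ownerId": 7, "doc": {"title": "Villa", "descriptions": {}}},
    upsert=True,
)

by_property = properties.find_one({"propertyId": 42})
by_owner = list(properties.find({"ownerId": 7}))
```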
[12:05:10] <kali> imho, segregate the code dealing with this json cache as much as practical, and keep using mysql for now
[12:05:27] <layke> Yep. I'll just create a replication of the data I think
[12:05:41] <layke> And slowly migrate until I decide that I've made the right decision
[12:06:27] <kali> and if you're only considering key/value access, riak might be a better option than mongodb (which should be marginally better than mysql because of the absence of transaction overhead)
[12:06:43] <kali> YMMV :)
[12:06:51] <layke> This for example: https://gist.github.com/Kalyse/05a25c888dd9a364f69e spans so many tables to build. The only query I think I've ever made for mysql is to just query by the ID. :)
[12:06:59] <layke> Cheers for the sounding board :)
[12:08:33] <layke> I save to mysql. Then query and build a large JSON object to move it around my application. It's hilarious :)
[15:09:44] <chetandhembre> what is the best way to optimize write queries? I am getting about 7000 connections open for writes only..
[15:10:03] <chetandhembre> can someone help me?
[15:10:51] <chetandhembre> I am using the java mongodb driver, and using the bulk insert method
[16:02:49] <chetandhembre> is an insert on a collection atomic?
[16:03:36] <chetandhembre> i mean if i try to insert a list of DBObjects and an error occurs while adding one document, will the inserts of the other documents be affected?
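A single-document insert is atomic on its own; whether the rest of a batch keeps going after an error depends on whether the bulk write is ordered. chetandhembre is using the Java driver, but the same distinction in Python/pymongo looks like this (a sketch; collection name and data are assumptions):

```python
# Ordered vs. unordered bulk inserts: with ordered=False the server keeps
# inserting the remaining documents even if one of them fails.
from pymongo import MongoClient
from pymongo.errors import BulkWriteError

client = MongoClient()
coll = client.test.items  # assumed names

docs = [{"_id": 1}, {"_id": 1}, {"_id": 2}]  # the second is a duplicate key

try:
    coll.insert_many(docs, ordered=False)
except BulkWriteError as err:
    # With ordered=False, _id 2 is still inserted; with ordered=True the
    # batch stops at the first error and _id 2 never makes it in.
    print(err.details["writeErrors"])

print(coll.count_documents({}))  # 2
```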
[20:46:57] <tim_t> Part of my code requires emailing. Do you think it is better practice to put messages into a mongodb "outbox" collection and have a daemon thread check for new messages, pick them up, send them and clear the outbox, or should I implement some sort of queue somewhere else?
[20:47:28] <cheeser> i'd probably use mongo for that
[20:47:39] <tim_t> like… polling the mongo outbox every minute or something like that
[20:47:46] <cheeser> you'd be implementing a queue either way. unless you used something like rabbitmq.
[20:48:14] <cheeser> or a capped collection with a tailable cursor ...
[20:48:47] <tim_t> right. i just was wondering about how much mongodb would like/dislike polling like that
[20:48:59] <cheeser> wouldn't mind it a bit.
[20:49:00] <tim_t> or what downsides there might be
[20:49:07] <tim_t> okay great
[20:51:11] <tim_t> thanks for the pointers
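cheeser's capped collection with a tailable cursor, as an alternative to timed polling, could look roughly like this in Python/pymongo; the collection name, size, and send_email helper are assumptions:

```python
# A capped "outbox" consumed with a tailable cursor: the consumer iterates
# new documents as they arrive instead of polling on a fixed timer.
import time
from pymongo import MongoClient, CursorType

client = MongoClient()
db = client.mailer  # assumed database name
if "outbox" not in db.list_collection_names():
    db.create_collection("outbox", capped=True, size=1024 * 1024)
outbox = db.outbox

def send_email(message):          # hypothetical mail-sending helper
    print("sending to", message.get("to"))

def consume():
    while True:
        cursor = outbox.find(cursor_type=CursorType.TAILABLE_AWAIT)
        while cursor.alive:
            for message in cursor:
                send_email(message)
        time.sleep(1)             # cursor died (e.g. empty collection); reopen it
```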
[21:21:20] <japhar81> hey guys, what's the recommended model for 'joining' unbounded lists? I have a collection of questions, and a collection of users that may answer one or more, and i need to pull unanswered questions by user.. i'm headed down the path of storing answered question ids as a list on the user, but that seems like unbounded-list hell, the questions list will be >1k
[23:42:43] <styles> japhar81, what's up?
[23:42:53] <styles> You have a questions collection and an answers collection?
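One common shape for japhar81's problem, and presumably where styles is heading, is a separate answers collection keyed by (userId, questionId); "unanswered questions for a user" is then a lookup of the answered ids followed by a $nin query. A sketch with assumed names (with >1k answers per user the $nin list itself grows, which is the unbounded-list trade-off japhar81 is worried about):

```python
# Questions and answers in separate collections; unanswered = $nin over
# the ids this user has already answered.
from pymongo import MongoClient, ASCENDING

client = MongoClient()
db = client.quiz  # assumed database name

db.answers.create_index([("userId", ASCENDING), ("questionId", ASCENDING)], unique=True)

def unanswered_questions(user_id):
    answered = [a["questionId"]
                for a in db.answers.find({"userId": user_id}, {"questionId": 1})]
    return list(db.questions.find({"_id": {"$nin": answered}}))
```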