[10:37:34] <Milos_> of course, when inserting new comments, I first have to count whether there are enough comments embedded within the post, and then if needed create a new comment document in the comments collection
[10:38:06] <Milos_> but I think it's justified, because reading is by far more frequent operation than commenting
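The "bucketed comments" pattern Milos_ describes can be sketched in a few lines. This is a minimal pure-Python illustration, not his actual code: plain dicts stand in for documents, the in-memory list stands in for the comments collection, and `BUCKET_SIZE`, `add_comment`, and the 100-comment cap are all assumptions made up for the example (a real implementation would use an upsert/findAndModify so the count-and-append step is atomic).

```python
# Illustrative sketch of bucketing comments: embed comments in a bucket
# document until it is full, then start a new bucket for the same post.

BUCKET_SIZE = 100  # assumed max comments embedded per bucket document

def add_comment(buckets, post_id, comment):
    """Append a comment to the newest bucket for post_id, creating a
    fresh bucket document when the last one has reached BUCKET_SIZE."""
    post_buckets = [b for b in buckets if b["post_id"] == post_id]
    if post_buckets and len(post_buckets[-1]["comments"]) < BUCKET_SIZE:
        post_buckets[-1]["comments"].append(comment)
    else:
        buckets.append({"post_id": post_id, "comments": [comment]})
    return buckets

buckets = []
for i in range(250):
    add_comment(buckets, "post-1", {"n": i})
```

As he notes, the extra bookkeeping on write is the price for cheap reads: rendering a page of comments fetches one bucket document instead of one document per comment.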
[11:35:37] <wieshka> Will the MongoDB instance where rs.initiate() is called act as the data source for the replica set?
[12:13:49] <wieshka> I am trying to migrate from master->slave to a replica set with 3 members. When I reconfigure the master for replset and try to connect from the application, I get: MongoError: Error: not master and slaveOk=false
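The error wieshka sees appears whenever a driver sends a read to a member that is not (or not yet) primary while secondary reads are disabled: until rs.initiate() has run and an election has completed, no member is primary. The routing rule the error message reflects can be sketched as a tiny simulation (the `route_read` function and `NotMasterError` class are illustrative stand-ins for driver internals, not a real PyMongo API):

```python
# Sketch of the driver-side check behind "not master and slaveOk=false":
# reads are refused on a non-primary member unless slaveOk / secondary
# reads are explicitly enabled.

class NotMasterError(Exception):
    pass

def route_read(node_is_primary, slave_ok):
    """Decide whether a read may be served by this replica set member."""
    if not node_is_primary and not slave_ok:
        raise NotMasterError("not master and slaveOk=false")
    return "read routed"
```

In practice the fix is either to wait until the reconfigured member has become primary, or to opt in to secondary reads in the application's connection settings.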
[13:58:56] <algernon> try running /etc/init.d/mongodb start by hand, perhaps with bash -x. also, checking the logs (/var/log/mongodb/mongodb.log) may help figuring out what the problem is
[15:16:17] <richthegeek> I have a system which will regularly (like, once every 20-100ms) need to get the first row from a sorted cursor... would it be ok to keep a cursor "open" for as long as possible, or would that not work?
[15:16:40] <richthegeek> eg, each time I need an object just call cursor.nextObject()
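A caveat on richthegeek's plan: a server-side cursor is forward-only, so repeated nextObject() calls walk through the result set rather than re-reading the current first row, and idle cursors are eventually timed out by the server. If what he wants each tick is "the current head of the sorted collection", re-issuing a small findOne-with-sort backed by an index is the usual pattern. A pure-Python sketch of that semantics (the list stands in for the collection; `find_one_sorted` is an illustrative helper, not a driver API):

```python
# Mimic db.coll.find().sort({field: 1}).limit(1), re-issued on each tick.

def find_one_sorted(docs, field, ascending=True):
    """Return the first document under the given sort, or None if empty."""
    if not docs:
        return None
    return sorted(docs, key=lambda d: d[field], reverse=not ascending)[0]

queue = [{"priority": 3}, {"priority": 1}, {"priority": 2}]
head = find_one_sorted(queue, "priority")
```

With an index on the sort field, such a query touches only one document per call, so issuing it every 20-100ms is typically cheap.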
[17:10:53] <hems> Hello, when looking at my "database" size, it seems waaaay bigger than the sizes of its collections summed together.. is there a way to reduce that? seems like it keeps a history of all old records?
[17:13:30] <Nodex> you can compact but it's not advisable on a live database
[17:13:45] <hems> Nodex, but wouldn't compact reduce my read/write speed?
[17:13:52] <hems> Nodex, am still on staging, so its OK
[17:14:19] <Nodex> why on earth would compacting reduce read/write speed?
[17:13:52] <hems> but its weird, like it says my DB size is 192MB, and if I look inside the database all the collections summed up aren't even 10MB
[17:17:37] <hems> Nodex, cool, thanks for the tips. i have seen the compact and repairDatabase commands but yeah, they seem a bit like overkill, i'll let mongo do whatever it wants, and yeah, hope it doesn't become a problem in the future
[17:17:47] <Nodex> not a lot, just got back from Holiday :D
[17:23:13] <hems> well, its probably on their agenda already. put a tower down, blame the internet and push it to be super legal and convince the americans its good for them
[17:23:18] <Nodex> if they want to waste CPU cycles decrypting my uploaded SSH keys then go right ahead, I got nothing to hide LOL
[17:23:33] <Zelest> Nodex, usually they do sociograms
[17:32:55] <double_p> Zelest: ah right.. just reminds me of some 1+TB oracle about flight data over germany.. mhmm.. damn those USB sticks havent been quite big enough back then - lol
[18:00:56] <hems> Nodex, have you ever cleaned your opLog ? http://docs.mongodb.org/manual/tutorial/change-oplog-size/
[18:01:12] <hems> Nodex, sounds like a lot of my data storage might actually be coming from the oplog
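hems's suspicion is plausible: the oplog is a capped collection whose disk space is preallocated at a fixed size regardless of how much data is actually in it, which is exactly how a small database can report a much larger size. A capped collection behaves like a ring buffer — once full, the oldest entries are overwritten. That behaviour can be mimicked with a bounded deque (the cap of 3 entries here is purely illustrative; real oplogs are sized in gigabytes):

```python
from collections import deque

# A capped collection as a ring buffer: fixed capacity, oldest-first eviction.
oplog = deque(maxlen=3)  # pretend the cap is 3 operations

for op in ["insert a", "insert b", "update a", "delete b"]:
    oplog.append(op)  # once full, each append silently drops the oldest entry
```

So "cleaning" the oplog is never needed for space reclamation; only resizing it (per the linked tutorial) changes its footprint.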
[18:28:05] <ghost1> Hi everyone, I was wondering if there is anyone who can answer a quick question for me. I am currently working on a project to replace an existing legacy system. The legacy system has well over 6 TB of data; how would we benefit from using something like Mongo? Can Mongo be the primary storage location for the data? And in which scenarios would SQL be better suited? Thanks.
[18:37:46] <ghost1> Ok, maybe a simpler question … what scenarios is mongodb better suited for?
[18:54:43] <TommyCox> ghost1: Is the data tied together with relationships and foreign keys? If so, there would be a de-normalization process you'd have to go through that could get pretty daunting
[18:55:39] <ghost1> Hmm, yea there are a total of 136 tables across 16 schemas
[19:22:38] <ghost1> Is there a limit to the amount of data that mongo db can handle???
[19:23:10] <mediocretes> yes, but it's not super practical
[19:32:22] <Hochmeister> how do you project an ObjectId value when using a $group stage? I want each group to have its own ObjectId.
[19:41:16] <konr_revmob> How can I query for, say, "animal" matching either "dog", "cat", "mus" etc? Is there a better way than {$or [{"animal":"dog"}, ...]}?
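Yes — `$in` is the idiomatic, shorter equivalent of that `$or` chain: `{"animal": {"$in": ["dog", "cat", "mus"]}}`. A pure-Python sketch of the matching semantics (the `matches_in` helper is illustrative; plain dicts stand in for documents):

```python
# {"animal": {"$in": values}} matches when the field equals any listed value.

def matches_in(doc, field, values):
    """Mimic a top-level $in filter on a scalar field."""
    return doc.get(field) in values

docs = [{"animal": "dog"}, {"animal": "fox"}, {"animal": "mus"}]
hits = [d for d in docs if matches_in(d, "animal", ["dog", "cat", "mus"])]
```

Besides being shorter, `$in` expresses the intent in one operator, which the query planner can satisfy with a single index on the field.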
[19:47:26] <bmcgee> hey guys, I'm having difficulty figuring out how to put together an aggregation. I have a boolean field in my doc, I want to count the number of documents in which it is true. Any thoughts?
[19:49:38] <mediocretes> is there a reason you need it to be an aggregation?
[20:08:12] <diegows> is it possible to perform queries doing comparison between fields of the same document?
[20:08:28] <diegows> I want to get the documents where field1 < field2
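A plain find filter compares a field against a constant, not against another field. At the time of this log the workaround was a `$where` JavaScript expression; modern servers (3.6+) support this directly with `{"$expr": {"$lt": ["$field1", "$field2"]}}`. A pure-Python sketch of what that expression evaluates per document (`matches_expr_lt` is an illustrative helper, not a driver API):

```python
# Semantics of {"$expr": {"$lt": ["$field1", "$field2"]}}: the document
# matches when its own field1 value is less than its own field2 value.

def matches_expr_lt(doc, f1, f2):
    return doc[f1] < doc[f2]

docs = [{"field1": 1, "field2": 5}, {"field1": 9, "field2": 2}]
hits = [d for d in docs if matches_expr_lt(d, "field1", "field2")]
```

Note that, unlike a constant comparison, this kind of predicate cannot be answered from a single-field index alone, so it generally implies scanning the candidate documents.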
[20:08:43] <bmcgee> Hochmeister: thx but i went with count in the end, seems like aggregate is overkill
[20:09:42] <Hochmeister> perhaps, unless you are making a bunch of queries independent of each other.
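bmcgee's conclusion is the usual one: counting documents where a boolean is true needs only a filtered count, e.g. `db.coll.count({"flag": true})`, with the aggregation pipeline (`[{"$match": {"flag": true}}, {"$group": {"_id": null, "n": {"$sum": 1}}}]`) reserved for when several such counts are combined in one pass. A pure-Python sketch of the count's semantics (`count_true` and the field name `flag` are illustrative):

```python
# Equivalent of db.coll.count({"flag": True}) over in-memory documents.

def count_true(docs, field):
    """Count documents whose field is exactly the boolean True."""
    return sum(1 for d in docs if d.get(field) is True)

docs = [{"flag": True}, {"flag": False}, {"flag": True}, {}]
n = count_true(docs, "flag")
```

With an index on the boolean field, the server can answer such a count without examining full documents.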
[20:09:50] <jblack> Hi. I'm trying to figure out why "yard doc" doesn't seem to be using yard-mongoid when generating docs. Can anyone suggest a useful place for me to go to get help figuring out what's wrong?
[20:59:19] <Vishnevskiy> Hello, I am wondering why, when I upgrade to PyMongo 2.5, I get a connection leak (climbed to 3000 open in 30 minutes), while if I downgrade back down to 2.3 it's stable at 140. (using gevent)
[20:59:28] <Vishnevskiy> Any ideas would be greatly appreciated
[21:00:54] <yosyp> Is it preferable to have a lot of articles with the same schema, or one large article containing multiple sections with similar data? (think one large forum with subforums)?
[21:01:28] <konr_revmob> So I've got {"location": [{"code": "US"}, {"code": }]}
[21:02:59] <konr_revmob> So I've got items like {"location": [{"code": "US"}, {"code": "UK"}]}. How can I query for elements whose code (inside location) is in a list? will {"location.code": {"$in": ["UK", "BR", "DE"]}} work?
[21:03:17] <kali> Vishnevskiy: ha... better stay on the stable versions (2.2 and 2.4) for a start
[21:03:44] <kali> Vishnevskiy: ha, my apologies :)
[21:05:05] <konr_revmob> No, of course not, there is an array between the fields. Is there any workaround?
[21:05:38] <kali> yosyp: you need to pick a model that will fit your query patterns.
[21:05:54] <kali> konr_revmob: i'm pretty sure it does work. have you tried it ?
[21:07:52] <konr_revmob> kali: oops, that was a typo, then :)
[21:19:13] <konr_revmob> And finally, dear friends, I do have a foreign key at {:product "bla", :owner_id "42"}, referencing {:_id 42, :species "capybara"}. Can I search for products whose owners are capybaras in a single expression?
[21:23:44] <harenson> konr_revmob: MongoDB is not relational
[21:24:22] <konr_revmob> harenson: just as I feared!
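As harenson says, a single find cannot follow konr_revmob's reference server-side (newer servers do offer `$lookup` inside an aggregation pipeline, but that postdates this discussion). The standard workaround is an application-level join in two round trips: fetch the matching owner `_id`s, then filter products with `$in`. A pure-Python sketch of the pattern (in-memory lists stand in for the two collections, and the data is made up for the example; note it assumes `owner_id` and `_id` share a type, which the snippet above mixes):

```python
# Two-query "application-level join": owners first, then products by $in.

def products_owned_by_species(owners, products, species):
    owner_ids = {o["_id"] for o in owners if o["species"] == species}
    # Second query would be: db.products.find({"owner_id": {"$in": [...]}})
    return [p for p in products if p["owner_id"] in owner_ids]

owners = [{"_id": 42, "species": "capybara"}, {"_id": 7, "species": "dog"}]
products = [{"product": "bla", "owner_id": 42}, {"product": "bone", "owner_id": 7}]
capybara_products = products_owned_by_species(owners, products, "capybara")
```

Alternatively, if this lookup is frequent, denormalizing the species onto the product document trades write-time bookkeeping for a single-query read, much like the comment-bucketing trade-off earlier in the log.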
[22:00:30] <Hochmeister> how do I give a unique identifier to the result of an aggregation pipeline? I want each group to have a unique identifier that I can pass to client-side JavaScript. http://pastebin.com/smC1FyJ0
[22:02:06] <Hochmeister> the problem is, in my real app (not the simplified example) I'm grouping by a url. Then in my JavaScript I'm building DOM elements using that _id value that comes back from the aggregation pipeline. However, I cannot decorate links and such with the url value (it also gets injected into the window.location.hash).
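One way around Hochmeister's problem, since the `_id` of each group is the URL itself: derive a stable, DOM-safe identifier from the group key instead of using the raw URL in element ids and `location.hash`. Hashing the key gives the same identifier for the same group on every request without storing anything extra. A sketch of the idea (the `group_dom_id` helper, the `grp-` prefix, and the 12-character truncation are all assumptions for illustration):

```python
import hashlib

# Map an aggregation group key (e.g. a URL) to a stable, URL/DOM-safe id.

def group_dom_id(url):
    """Return a deterministic hex identifier derived from the group key,
    safe to embed in DOM element ids and in location.hash."""
    return "grp-" + hashlib.sha1(url.encode("utf-8")).hexdigest()[:12]

a = group_dom_id("http://example.com/page?x=1")
b = group_dom_id("http://example.com/other")
```

The same hashing could equally be done client-side in JavaScript; the key property is determinism, so the DOM id and the aggregation group can always be matched back up.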