[00:00:32] <iNick> trying to find a way back out. guess I could recreate the replset info if needed
[00:01:04] <redsand> you can import the conf you backed up using the command line
[00:01:11] <redsand> best i got for you, maybe someone else knows more
[00:03:35] <iNick> redsand: understood. so you're not sure where the replset info is physically stored. no problem :) I can recreate the replset, and import the main db
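To answer where the replset info is physically stored: it lives in the `local` database, in the `system.replset` collection. A sketch (shell commands as of the 2.x era; the set name and hostnames below are invented):

```javascript
// Where the replica set config is persisted: local.system.replset.
// In the mongo shell:
//
//   use local
//   db.system.replset.find()   // the stored replset config document
//   rs.conf()                  // the same document via the shell helper
//
// Recreating the set from scratch means rs.initiate(cfg) with a config
// document shaped like this:
const cfg = {
  _id: "rs0", // must match the --replSet name mongod was started with
  members: [
    { _id: 0, host: "db1.example.net:27017" },
    { _id: 1, host: "db2.example.net:27017" },
  ],
};
```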
[06:32:52] <TTilus> a db design consideration: say i've got a book, which has embedded chapters, and i now want to add themes, which would reference the book and have a many-to-many with pages
[06:34:15] <TTilus> how do i reference an embedded document?
[08:51:38] <remonvv> Any opinions on the ideal (max) number of connections from app servers to mongos's?
[08:52:24] <remonvv> We're seeing a rather spectacular degradation in performance when configuring a high (100) number of connections to mongos
[08:55:54] <TTilus> a db design consideration: say i've got a book, which has embedded chapters, and i now want to add themes, which would reference the book and have a many-to-many with pages
[08:58:46] <Nodex> is there a question in there somewhere?
[08:59:01] <Nodex> remonvv : is it read degradation or write?
[08:59:55] <TTilus> what possibilities do i have to extend what i already have to include themes?
[09:02:09] <TTilus> coming from rdbms world my first reaction would be to extract chapters from book and introduce many-to-many between chapters and themes
[09:02:37] <TTilus> and sorry, i said "pages" earlier when i meant chapters
[09:03:25] <TTilus> but i assume there surely are other options to implement this
[09:05:55] <TTilus> i would never need to look up themes or chapters on their own, but would always find a book, then a chapter inside it, and then the themes of that chapter
[09:06:14] <TTilus> maybe sometimes chapters of a theme
[09:08:17] <Nodex> it really depends on your access patterns
[09:08:34] <Nodex> what you query for, how often you query it
[09:09:29] <TTilus> i query for books by a couple of unique indices
[09:10:17] <crudson> TTilus: if you're dealing with books and chapters, it sounds like separate entities may be worthwhile due to size. Your themes seem to be independent of books; keep them separate too if they can apply to many books. Not totally sure I understand your data model...
[09:10:46] <TTilus> this is made-up terminology, i can't use the real terms, sorry for that :(
[09:11:19] <TTilus> crudson: its only metadata, not the actual content
[09:11:19] <Gargoyle> TTilus: And how big are your books, chapters and themes?
[09:11:28] <Gargoyle> A mongo doc can be up to 16MB
[09:11:59] <Gargoyle> For reference, the entire works of Shakespeare in plain text = 5 meg.
[09:12:01] <TTilus> Gargoyle: one book has 10-100 chapters and 5-30 themes
[09:13:12] <Gargoyle> crudson: That comes down to the app. I would hope it goes without saying that if you want a list of just titles, you don't load everything from every doc!
[09:35:37] <kali> jgorset: i think it actually performs these checks since 2.4
[09:36:19] <jgorset> I see, okay. I've found two keys by the same name in my document, and it's wreaking havoc on my ORM (Mongoid).
[09:37:01] <kali> yeah. with ruby, there was an issue where you could write { :a => 12, "a" => 42 } and save this
[09:37:31] <kali> the driver was converting symbols to strings, and so storing a document with two keys named "a"
[09:37:31] <jgorset> I'm not sure how they came to be, though, because I can't seem to be able to create a document with duplicate keys in the mongo shell.
[09:37:44] <kali> but i think the issue has been fixed in the driver also
[10:00:51] <TTilus> Gargoyle, crudson, Nodex: i inherited a codebase which already has books and chapters embedded inside them
[10:02:13] <TTilus> what i need is "themes" or "tags" (which judging by guidance in http://docs.mongodb.org/manual/core/data-modeling/ should be embedded inside book too) which would have many-to-many with chapters
[10:03:53] <TTilus> having a themes field (with list of theme ids) in chapter document would do the job for me
[10:04:24] <TTilus> does that sound awkward/convoluted/bad?
[10:06:15] <Nodex> it depends what you're trying to accomplish to be honest
[10:07:09] <Nodex> if each book is a single document with embedded chapters and each "theme" relates to that chapter then you should probably embed the theme or tag with the chapter
[10:07:31] <Nodex> if you need a list of all "themes" then you should keep a separate document with an array in it that has all the chapters
[10:08:45] <Gargoyle> Although, you can also very quickly build such a list in "application land" if it makes the DB easier to manage.
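To make the embed-vs-reference discussion above concrete, here is a sketch of the shape TTilus describes: chapters embedded in the book, each chapter carrying a list of theme ids, themes kept in their own collection so they can be shared across chapters. All field names are invented, and plain JS objects stand in for the stored documents:

```javascript
// A book document with embedded chapters; each chapter references themes
// by id (the "themes field with a list of theme ids" approach).
const book = {
  _id: "book1",
  title: "Example Book",
  chapters: [
    { n: 1, title: "Intro", themeIds: ["t1", "t2"] },
    { n: 2, title: "Body", themeIds: ["t2"] },
  ],
};

// Themes as a separate collection, so one theme can apply to many chapters.
const themes = [
  { _id: "t1", name: "history" },
  { _id: "t2", name: "travel" },
];

// "themes of a chapter": resolve the id list against the themes collection.
// In the shell this would be db.themes.find({ _id: { $in: chapter.themeIds } }).
function themesOfChapter(chapter) {
  return themes.filter((t) => chapter.themeIds.includes(t._id));
}

// "chapters of a theme": scan the embedded array the other way.
// In the shell: db.books.find({ "chapters.themeIds": "t2" }).
function chaptersOfTheme(themeId) {
  return book.chapters.filter((c) => c.themeIds.includes(themeId));
}
```

This matches TTilus's access patterns: the book is always fetched first, so the chapter-to-theme hop is cheap, and the occasional "chapters of a theme" query can still be answered with a multikey index on `chapters.themeIds`.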
[10:14:21] <majoh> hey, I'm trying to do an aggregate on 2.2.3, I have an array that I want to project, but only the first item in that array. I've tried {$project: {"interesting": "$thearray.0"}} etc... any ideas?
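A dotted numeric path like `"$thearray.0"` isn't a valid `$project` expression in 2.2 (the `$arrayElemAt` operator only arrived in later releases). A sketch of two common workarounds from that era, untested against 2.2.3 itself:

```javascript
// Workaround 1: for a plain read, the $slice projection in find() returns
// just the first array element:
//
//   db.coll.find({}, { thearray: { $slice: 1 } })
//
// Workaround 2: inside the pipeline, $unwind the array and take the $first
// value per original document:
const pipeline = [
  { $unwind: "$thearray" },
  { $group: { _id: "$_id", interesting: { $first: "$thearray" } } },
];

// What that pipeline computes, emulated over plain objects:
function firstOfArray(docs) {
  return docs.map((d) => ({ _id: d._id, interesting: d.thearray[0] }));
}
```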
[11:21:29] <Nodex> I must say, windows 8 is a pile of sh**
[11:35:32] <remonvv> NodeX, sorry, was afk. It's both degraded and drivers occasionally complain about not being able to establish connections.
[11:36:50] <remonvv> It seems that the more connections you have to mongos (and thus the shard mongod's) the worse performance gets. Might be a Java driver issue as well, since the network code in that won't win any prizes either.
[11:40:17] <`3rdEden> Nodex: why use window 8 in the first place?
[11:54:08] <Nodex> `3rdEden : New laptop = don't have a choice as I am not paying for another winblows licence
[13:23:16] <richthegeek> I'm having something of an issue with read speeds - I have a collection with ~200k rows in it, and I want to know the ID of every row in the set. Unfortunately, the list is only building at about ~500 rows per second. This seems very slow?
[13:42:10] <richthegeek> hmm, that shouldn't be hitting the track table that much anyway
[13:42:51] <richthegeek> ok, performance has picked up now (was at ~300rows/sec, now at ~900)
[13:43:06] <richthegeek> just passed 1000, no idea what's going on...
[13:43:26] <richthegeek> max out at 1382 towards the end
[13:44:23] <richthegeek> which is 1382 (read from orders, do some processing, write to orders_big, write to track) per second
[13:44:27] <richthegeek> not too shabby in the end
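An aside on richthegeek's original question: when all you need is every `_id`, projecting just that field cuts down how much data each document contributes to the result. A sketch (collection name invented):

```javascript
// Fetching only the ids of ~200k documents: project just _id so the full
// documents never have to cross the wire. In the shell:
//
//   db.orders.find({}, { _id: 1 })
//
// What that projection leaves you with, over plain objects:
function idsOnly(docs) {
  return docs.map((d) => d._id);
}
```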
[13:54:24] <ThePrimeMedian> morning everyone. i've googled myself out - thought I would ask someone in here: Anyone done an activity collection? (like facebook's feed) or something of the sort? I am having a brain-fart and cannot wrap my head around the db design (like do I enter raw data or references, etc)
[14:24:41] <theRoUS> i have a mongodb instance with dbpath=/srv/mongo and directoryperdb=true. /srv/mongo is a mounted filesystem, so it has a 'lost+found' directory. mongod is crapping out on startup because it's trying to treat that as a database directory.
[14:25:12] <theRoUS> is there any way to tell mongod to ignore lost+found? or do i need to go to directoryperdb=false (suboptimal) ?
[14:47:08] <starfly> kfox1111: you could use a "driver" collection that the tailers check regularly and persist which collection they should use. When you want them to switch to a new capped collection, update the driver collection to indicate that.
[14:49:34] <theRoUS> is there any way to tell mongod to ignore lost+found? i mean, having its dbpath be the top of a filesystem can't be that uncommon..
[14:50:37] <starfly> theRoUS: can't you mkdir -p /srv/mongo/data and use that instead for the mongod dbpath?
[14:51:04] <double_p> theRoUS: ignore? i have such a setup and i don't have problems with it
[14:52:12] <double_p> just to check.. nothing about lost+found mentioned in the logfile either
[14:52:24] <orospakr> data model question: in order to avoid classic N+1 problems, is doing aggregation of (similar in notion to a SQL join) the related documents together by means of a incremental map/reduce to a "join" collection reasonable? eg., in couchdb, I'd do it with a view in my design document.
[14:54:08] <double_p> oh, but i dont have directoryperdb. sorry :]
[15:01:51] <theRoUS> starfly: i *can*, but it's going to mean making some puppet changes..
[15:15:08] <starfly> theRoUS: maybe just create a symbolic link to such a directory, e.g. mkdir -p /srv/mongo/data (as mongo user); cd /; ln -s /srv/mongo/data mongodata (as root user) -- then use /mongodata as dbpath and change puppet once. When/if you need to relocate your MongoDB data directory in the future: take an outage, move the MongoDB files to the new location, modify the symlink, and restart mongod.
[15:15:54] <double_p> is mongod failing hard, if you make that dir unreadable for mongod?
[15:15:57] <starfly> theRoUS: modify the symlink means dropping existing and creating a new one
[17:50:29] <bhangm> I was just hoping to find some way to work around the sort + $or limitation
[17:52:28] <kali> bhangm: you just can't optimize this with a btree
[17:53:18] <kali> this does not mean you don't have options. can you show me one example document and the query you're trying to optimize ?
[18:15:49] <bhangm> kali: The query itself is not the problem, with an index on each of the $or expressions it's able to use them in parallel and comes back pretty quickly
[18:16:22] <bhangm> however, tacking a sort onto that kills the performance, since it then essentially does a full scan
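The shape bhangm describes, with one workaround (a sketch; field names are invented): each `$or` branch can use its own index, but the sort forces the server to order the combined branch results, which is where the scan comes from. One option from that era is to run each branch as its own indexed, sorted query and merge the already-sorted streams in the application:

```javascript
// The problematic shape: indexed $or branches plus a sort.
//
//   db.items.find({ $or: [{ status: "open" }, { owner: "bhangm" }] })
//           .sort({ created: -1 })
//
// Application-side merge: issue one sorted query per branch, then combine.
// Documents matching both branches are de-duplicated by _id.
function mergeSorted(branchA, branchB, sortKey) {
  const combined = [...branchA, ...branchB].sort(
    (x, y) => y[sortKey] - x[sortKey] // descending, like sort({ key: -1 })
  );
  const seen = new Set();
  return combined.filter((d) => !seen.has(d._id) && seen.add(d._id));
}
```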
[19:20:12] <baniir> I'm getting "WARNING: Readahead for /home/mongod/data is set to 1024KB" though I'm not sure why as I followed the steps at http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-amazon-ec2/
[19:27:36] <ehershey> doesn't the warning give you a URL?
[19:53:46] <baniir> it does: http://dochub.mongodb.org/core/readahead
[23:12:22] <llakey> is there a way to connect to mongo somehow and see all of the commands/queries that are being issued against mongo, something similar to `redis-cli monitor`
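The closest analogue to `redis-cli monitor` at the time was the database profiler (plus `mongosniff`, which ships with the server and watches wire traffic live). A sketch, with shell commands as of the 2.x era:

```javascript
// Turn on the profiler, which records every operation in the capped
// system.profile collection of the current database:
//
//   db.setProfilingLevel(2)                   // 2 = log all operations
//   db.system.profile.find().sort({ ts: -1 }) // most recent ops first
//
// Each profile document carries fields like op, ns, ts, and millis.
// Picking slow operations out of fetched profile docs looks like:
function slowOps(profileDocs, thresholdMs) {
  return profileDocs.filter((d) => d.millis >= thresholdMs);
}
```

Note that level 2 logs everything and has measurable overhead, so it is usually switched on only while investigating.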
[23:37:54] <federated_life> possible to force initial sync from a specific member of a replica set ?