[01:11:39] <macwinner> hi, any pointers to a good example of how to organize mongoose model code when some of the models have subdocuments?
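One common layout, as a hedged sketch (the Post/Comment models and file names here are hypothetical, not macwinner's actual schema), is to give each subdocument schema its own module and embed it from the parent model:

    // models/comment.js -- subdocument schema exported on its own so it can be reused
    var mongoose = require('mongoose');

    var commentSchema = new mongoose.Schema({
        author: String,
        body:   String,
        posted: { type: Date, default: Date.now }
    });

    module.exports = commentSchema;

    // models/post.js -- parent model embeds the subdocument schema
    var mongoose = require('mongoose');
    var commentSchema = require('./comment');

    var postSchema = new mongoose.Schema({
        title:    String,
        comments: [commentSchema]   // array of embedded subdocuments
    });

    module.exports = mongoose.model('Post', postSchema);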
[09:27:27] <macwinner> does wiredtiger compress gridfs collections?
[09:32:57] <nixstt> I keep getting "session.commit_transaction: memory allocation: Cannot allocate memory" on a 4GB RAM VPS. I set wiredtiger to only use 2GB of RAM, and nothing else runs on this server
[09:34:04] <nixstt> Even if I set it to only use 1GB (which seems to be the min.) it still happens
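For reference, the WiredTiger cache cap nixstt describes is set like this in the 3.0 YAML config (the 2GB value and dbPath are just the figures from the discussion):

    storage:
      dbPath: /var/lib/mongodb
      engine: wiredTiger
      wiredTiger:
        engineConfig:
          cacheSizeGB: 2   # caps only the WiredTiger cache; the OS filesystem cache
                           # and other allocations still use additional RAM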
[10:12:41] <pamp> I'm getting low performance from a cluster on Azure. Should I move the cluster to AWS?
[10:12:53] <fontanon> Hi everybody ... is there a way to keep my unique keys in a collection when sharding it? Mongo complains that in order to keep the keys unique, those keys must be part of the shard key.
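For context, the restriction fontanon is hitting is that a unique index on a sharded collection must be prefixed by the shard key, so the usual approach is to shard on the unique key itself; a sketch with a hypothetical mydb.users collection:

    // shard on the field that must stay unique; the third argument asks mongo to
    // enforce uniqueness on the shard key ("mydb.users" and "email" are placeholders)
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.users", { email: 1 }, true)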
[10:13:19] <pamp> will performance improve in the AWS cloud?
[10:30:44] <joannac> pamp: um, no. figure out why the performance is bad.
[10:31:01] <joannac> i don't think it'll be an azure vs aws problem
[10:53:58] <arussel> I have a replica set with M1 (primary), M2 (secondary) and M3 (arbiter). Can I assume that if my application connects exclusively to M1, then it will work as expected,
[10:54:12] <arussel> but if it connects exclusively to M2 it will fail?
[11:14:05] <cheeser> arussel: if you connect directly to M2, the driver will find its primary and connect to that one as well.
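In connection-string terms, the difference looks roughly like this (hostnames and the rs0 set name are placeholders):

    // seed list with the replicaSet option: the driver discovers the whole set and
    // sends writes to whichever member is primary, even if only M2 is listed
    mongodb://m2.example.com:27017/mydb?replicaSet=rs0

    // no replicaSet option: a direct connection to that single member, so writes
    // fail if m2 is a secondary
    mongodb://m2.example.com:27017/mydb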
[13:27:14] <cheeser> perhaps this is what you want: http://docs.mongodb.org/v2.6/reference/configuration-options/#storage.journal.enabled
[13:28:31] <fatmcgav> afternoon all... I'm trying to set up MongoDB using Puppet, however I think I've found an issue with the mongo shell which is making life difficult...
[13:29:26] <fatmcgav> i'm trying to use 'db.runCommand' to createUser, but am getting an 'auth failed' response... however the exit code is 0
[13:31:12] <nixstt> cheeser: how much memory would I need for wiredtiger in 3.0.2? I always ran mongodb on 512mb instances with no problem
[13:31:35] <nixstt> now when I try to add a replica set member (4GB RAM) it fails after STARTUP2
[13:33:52] <fatmcgav> @cheeser: I've had a read of the db.createUser doc already, and I know the create is correct... I've got some permission issue that I'm trying to track down...
[13:34:09] <fatmcgav> however the issue at hand is around what appears to be an incorrect exit-code from mongo shell...
[13:37:08] <fatmcgav> i'd expect a non-zero exit code on the last command...
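One way to get the exit code fatmcgav expects is to check the command result in the script and quit with an explicit status, since db.runCommand() reports failures in its return document rather than by exit code; a sketch (user name, password and roles are placeholders):

    // create-user.js -- run with: mongo admin create-user.js
    var res = db.runCommand({
        createUser: "deployUser",
        pwd: "secret",
        roles: [ { role: "readWrite", db: "mydb" } ]
    });

    if (res.ok !== 1) {
        print("createUser failed: " + tojson(res));
        quit(1);   // non-zero exit so Puppet's exec resource can see the failure
    }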
[13:43:08] <[diecast]> hey everyone, i'm new to mongodb and have a question about how we're currently setting up our database
[13:43:27] <[diecast]> the collections are deleted, then new ones are created and documents are inserted
[13:44:05] <[diecast]> it occurred to me that this might not be the best approach but wanted to ask if that is a common practice?
[13:45:49] <StephenLynx> why are the collections deleted?
[13:46:10] <StephenLynx> what are your requirements that lead to this practice?
[13:49:09] <[diecast]> from what I understand the collections are deleted for simplicity on the mongo management scripts
[13:49:35] <[diecast]> so that the people who set it up only needed to have lists of db/collections/documents and run shell scripts to iterate over them
[13:50:05] <[diecast]> so all applications are stopped first, then the database is basically re-installed
[13:50:38] <StephenLynx> yeah, whoever designed that was on crack.
[13:50:42] <StephenLynx> that is not a common practice.
[13:51:23] <StephenLynx> you don't use a db as a temporary file.
[13:51:56] <[diecast]> i've created some ansible tasks that instead check whether the collection exists and then inspect the documents to see if they differ from what the new document contains
[13:52:27] <[diecast]> if the collection doesn't exist then it will be created, does that sound right?
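A rough shell-level sketch of that check (collection and document names are made up), for comparison with the drop-and-recreate approach:

    // create the collection only if it is missing
    if (db.getCollectionNames().indexOf("inventory") === -1) {
        db.createCollection("inventory");
    }

    // an upsert brings an existing document in line with the desired state instead
    // of dropping and re-inserting everything
    db.inventory.update(
        { _id: "widget-1" },
        { _id: "widget-1", qty: 100, status: "A" },
        { upsert: true }
    );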
[14:01:03] <carif> i recently upgraded the ubuntu 'mongodb-org' meta package from 2.8 to 3.0.2 using mongodb's package repo 'http://repo.mongodb.org/apt/ubuntu/dists/trusty/'. As I understand it, the mongo configuration file format is now yaml, but my /etc/mongod.conf is still in the older ini format.
[14:01:43] <deathanchor> hmm... mongo cfg servers don't need much resources right? What's usually the first bottleneck for a cfg?
[14:01:53] <carif> Does 3.0.2 use the yaml format?
[14:02:16] <nixstt> carif: both formats worked for me but I just changed to yaml
[14:12:21] <carif> nixstt, by "change to yaml" do you mean you transcribed the contents of /etc/mongod.conf into a new mongod.conf that was in yaml format? In other words, you did it yourself?
[14:14:05] <carif> vg, ty, good pointer; I'm still hoping to confirm if the mongodb guys did it for me
[14:19:24] <carif> i just broke apart the .deb, it arrives in the old format
[14:20:45] <nixstt> I was wondering as well; it didn't ask me to replace the config file when I upgraded
[14:21:04] <nixstt> I replaced it myself; I like the yaml format, and it wasn't that hard to do
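For anyone doing the same conversion by hand, a minimal before/after (the paths are the stock Ubuntu ones; adjust as needed):

    # old ini-style /etc/mongod.conf
    dbpath=/var/lib/mongodb
    logpath=/var/log/mongodb/mongod.log
    logappend=true
    port=27017

    # equivalent YAML
    storage:
      dbPath: /var/lib/mongodb
    systemLog:
      destination: file
      path: /var/log/mongodb/mongod.log
      logAppend: true
    net:
      port: 27017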
[15:16:03] <naiquevin> Hi, I am facing a problem where on dropping a database, the result says that it's dropped but the db is still there although it's shown as (empty). Mongodb version 2.4.12. Any ideas?
[15:24:56] <naiquevin> not sharded. To provide more context, dropDatabase is issued in the "teardown" of a test suite.
[15:39:09] <cheeser> naiquevin: are you issuing any commands against that database after? what are you doing to see that it's still there?
[16:23:10] <dberry> what is the correct order for upgrading from 2.4 to 2.6 in a sharded replica set?
[16:23:59] <dberry> can I upgrade the secondaries first or do I need to go mongos, configsvrs, then mongod
[17:14:00] <gothos> Hello, anyone here familiar with casbah? I'm trying to extract data from an array that has some object in it, it looks like this: "perfData" : [ { "pi" : "3.1" , "minute" : "3"}, ... ]
[17:14:31] <gothos> I'm getting a MongoDBList but can't find a way to get the object within, any recommendations? :)
[17:39:02] <itisit> does anyone have a good example of puppet modules for deploying mongo that allow multiple instances to be added to a replica set?
[17:50:47] <wavded> I have three mongo 2.6.x boxes that we are going to retire and use three new 3.0.x boxes. Do I need to upgrade the old ones to 3.x or can I replicate and make the new 3.x boxes the PRIMARY and then shut down the others?
[18:01:08] <deathanchor> eh, it's a rolling upgrade that will take a while
[18:06:50] <cheeser> if you can get to 2.4 on your own, mms can take you the rest of the way.
[18:49:36] <revoohc> anyone have their mongodb.conf file renamed to mongod.conf during the 2.4->2.6 upgrade?
[19:00:10] <int3nsity> hi guys =), got a question. I'm designing the model for a db. Employers have jobs. I don't know whether to nest them in the same collection or create another collection for Jobs, with an id pointing to the employer
[19:06:47] <StephenLynx> I had forums and threads and posts.
[19:06:47] <cheeser> a document can only grow to 16MB so you want to make sure you won't outgrow that if you embed documents.
[19:07:00] <cheeser> there are also querying and usage to consider.
[19:07:17] <StephenLynx> because of the limit, and because querying sub-arrays has its own limitations, I created separate collections for threads and posts.
[19:07:32] <StephenLynx> but a forum also has a list of mods, settings and other things.
[19:08:04] <StephenLynx> since I don't need to read just a few of these, and I don't expect to have more than 10 or so of each
[19:08:10] <StephenLynx> I kept these as sub-arrays on forums.
[19:08:40] <GothAlice> StephenLynx: The only limit you'll encounter on those sub-arrays is when there are more than 3 million words of replies. (That's a lot of words.) Embedding replies within the thread is one of the most interesting cases of denormalization and optimization for data locality, I feel.
[19:08:55] <StephenLynx> that was just one of my reasons.
[19:09:00] <GothAlice> (And when you hit that limit, it's pretty trivial to add a special type of reply to the end that links it to a new thread.)
[19:09:11] <StephenLynx> I usually want to just read some of the data
[19:09:19] <StephenLynx> and with threads I want to sort them too.
[19:09:38] <StephenLynx> doing that with sub-documents seemed to be worse than with separate collections.
[19:09:57] <GothAlice> Except suddenly you have way more queries to issue in order to render any given page.
[19:10:37] <StephenLynx> I worked with that in mind so I wouldn't have to perform any additional queries. I didn't take that decision lightly.
[19:11:09] <StephenLynx> it was after some deliberation that I refactored it to use separate collections instead of sub-arrays.
[19:11:55] <StephenLynx> keeping in mind that I didn't use an ODM or mongo's field references, so there are no hidden queries.
[19:12:01] <GothAlice> User requests page three of the replies to thread X in forum Y. To render the page you need to: load the forum, load the thread, load the paginated replies. If you've denormalized the details that will be rendered for any given thread into the replies you've duplicated a fair amount of data, and must issue additional updates to keep those "caches" updated. If you didn't, well, three queries.
[19:14:04] <StephenLynx> and since the forum and thread identification never change, I don't have to update anything to keep track of that.
[19:14:20] <StephenLynx> want to see my code where I list posts?
[19:14:59] <StephenLynx> ah, there's another detail you are forgetting.
[19:15:18] <StephenLynx> my back-end is purely MVC. It just outputs json.
[19:15:45] <StephenLynx> I will never output both the forum along with the posts.
[19:17:05] <StephenLynx> Ah, indeed I do a second query for the thread data. Because when the user refreshes it, thread data like time of last post changes.
[19:18:30] <StephenLynx> but using unwind to handle the thread posts was not possible because a thread can have zero posts.
[19:21:13] <StephenLynx> so anyway I would need to make a second query to get the thread data, since I don't have a guarantee it will have posts and will output its own data.
[19:22:12] <StephenLynx> so yeah, while using fake relations for everything is suicidal with mongo, there are pretty valid cases for it.
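To make the embed-vs-reference trade-off discussed above concrete for int3nsity's employers/jobs question, two hedged sketches (collection and field names are invented):

    // 1. embedded: jobs live inside the employer document -- a single read gets
    //    everything, but the employer plus all its jobs must stay under 16MB
    db.employers.insert({
        name: "Acme",
        jobs: [
            { title: "Plumber", posted: new Date() },
            { title: "Welder",  posted: new Date() }
        ]
    });

    // 2. referenced: jobs get their own collection and point back at the employer,
    //    which makes querying, sorting and paginating jobs on their own easier
    var acmeId = ObjectId();
    db.employers.insert({ _id: acmeId, name: "Acme" });
    db.jobs.insert({ employerId: acmeId, title: "Plumber" });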
[19:44:06] <int3nsity> is it better to get to know mongo or go for a DB solution like DynamoDB?
[19:51:52] <StephenLynx> yeah, int3nsity, if you don't want to be amazon's bitch, don't use it.
[19:52:45] <StephenLynx> for a rule of thumb, if you depend on a certain company's service to use a certain product, you should not use the product. It is designed just to keep you hooked to the company's service.
[19:53:15] <StephenLynx> and then obviously the product is proprietary.
[19:53:25] <StephenLynx> so they could literally do anything to it and you wouldn't even know.
[19:56:20] <StephenLynx> not to mention it will never be as good as software that is created by the community's contribution.
[20:31:01] <pjammer> can you control resident memory?
[20:31:47] <sadmac> pjammer: like... with his mind? XD
[20:33:02] <pjammer> maybe i'm too far down the rabbit hole for what i need to do.
[20:38:52] <PedroDiogo> what's your guys' input on Azure? I'm eligible for Microsoft's BizSpark program, so I'm thinking of using one of their VPSes to deploy MongoDB using MMS
[20:49:47] <StephenLynx> my forumhub is up on google play :3 https://play.google.com/store/apps/details?id=com.cheshire.lynxhub
[20:49:56] <StephenLynx> if you guys want to check it out, I use mongo for it.
[21:32:16] <krisfremen> PedroDiogo: more problems than it's worth... at least in my experience
[21:32:37] <krisfremen> PedroDiogo: AWS is much more stable and straightforward
[21:38:57] <PedroDiogo> what kind of problems? did you use it with MMS ?
[21:47:20] <krisfremen> PedroDiogo: not with mongo, with couchdb and mysql, it doesn't really deliver on performance. this was about 6-7 months back. stuff could've changed by now though. cloud is moving faster than I can keep up
[21:47:51] <krisfremen> performance was subpar compared to gce and ec2
[21:49:00] <krisfremen> ram and cpu were the issue. for disk the difference wasn't as great
[21:49:11] <PedroDiogo> hm, ok! tks for the input! ;)
[21:49:25] <PedroDiogo> i think it is actually cheap when it comes to GB/$
[21:49:47] <krisfremen> it was the cheapest at the time, iirc
[21:57:40] <Streemo> Is there a way to query a denormalized collection and have only the desired elements come back?
[21:59:00] <Streemo> example: {field: [{don't want},{want},{want},{don't want},{don't want}]}. Do a query so that i get back {field: [{want},{want}]}. like projecting out the ones that don't match my query
[22:01:42] <Streemo> I don't think a basic projection is what i am talking about
[22:02:20] <Streemo> I'm thinking more of a projection of array elements, kind of like elemMatch but allowing all array elements which match my query, not just the first one
[22:05:03] <Boomtime> Streemo: you can't select "all" matching array elements at this time, i think there is a server feature request open for that
[22:05:29] <Streemo> would you recommend using aggregation as a workaround?
[22:05:42] <Boomtime> your choices are basically to select only the first element that matched, or the whole array
[22:05:57] <Boomtime> yes, aggregation can do exactly what you want
[22:06:47] <Boomtime> i recommend using two $match stages, use the first one to filter to just those documents which contain a match, then $unwind and $match again
[22:07:08] <Boomtime> the reason is that only the first $match in this case can use an index
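Putting Boomtime's suggestion into a concrete pipeline (collection and field names are made up):

    db.things.aggregate([
        { $match:  { "items.color": "red" } },   // can use an index; drops documents
                                                 // with no matching element at all
        { $unwind: "$items" },                   // one document per array element
        { $match:  { "items.color": "red" } },   // keep only the matching elements
        { $group:  { _id: "$_id",                // optional: fold them back into an array
                     items: { $push: "$items" } } }
    ]);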
[23:17:57] <harttho> Boomtime: Twas a caching issue, thanks
[23:18:49] <Gevox> Boomtime: Can you please look at the result of db.Events.find() - do you see? the record I'm looking for with db.Events.find({title: "Title #1"}) exists, even though you said it does not return anything
[23:19:28] <Boomtime> how do you know the record exists?
[23:19:41] <Gevox> because its there at the list of results i got from the .find()
[23:19:47] <Gevox> it showed me all the records in the db already
[23:32:56] <Boomtime> are you trying to learn mongodb and java simultaneously?
[23:33:37] <Boomtime> what you printed looks about right, but i'm not big on java sorry
[23:35:00] <Gevox> Boomtime: only mongodb, but i don't use exceptions usually, the compiler handles them for me
[23:35:10] <Gevox> i know them, but i don't practice them much
[23:36:00] <Boomtime> there is no compiler in the world that can handle exceptions for you, that is a contradiction in terms; exceptions are exceptional, you need to handle them or your program will simply crash when they happen
[23:47:31] <d4rklit3> if i have a collection, Project, Project.categories = [ObjectID(abc1234),...], How do i query Where(project.categories contains category.slug === 'digital') or something to that effect?
[23:52:30] <d4rklit3> it seems like i need 2 steps for this
[23:53:02] <Gevox> <d4rklit3>: I'm a day-one mongoDB guy, but i think you can do it by doing a search for the category key first, then taking that as output for a 2nd method that looks up the slug in that category. I can write some code if you want
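A sketch of that two-step lookup for d4rklit3, with collection and field names assumed from the question:

    // 1. find the category document by its slug
    var digital = db.categories.findOne({ slug: "digital" });

    // 2. find projects whose categories array contains that category's _id;
    //    matching a single value against an array field matches any element
    var projects = db.projects.find({ categories: digital._id });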