[02:38:36] <hackel> Why does the positional $ operator only return one result? How can I filter the contents of an array and include *all* the matching elements?
[02:40:38] <joannac> hackel: that's the way it's implemented
[02:41:03] <joannac> if you need multiple results from a single array, your array elements are better suited to be top level documents
[02:41:17] <hackel> Yes, I get that, but why? What is the use case? It just seems so arbitrary.
[02:42:40] <hackel> joannac: I am attempting to filter out subdocuments in an array when deleted_at is not null. I do want them to remain subdocuments, though.
[02:44:21] <joannac> hackel: why do they need to be subdocs?
[02:45:08] <hackel> joannac: Because I'm using Mongo, that's the whole point.
[02:45:52] <hackel> Yes, I could make everything a separate collection, but in that case I might as well use SQL and not have to deal with all this headache to accomplish simple tasks.
[02:55:48] <haole> where do I get a list of the supported events in MongoDB's Node.js driver? like 'fullsetup' and 'close'
[03:21:31] <morenoh149> make sure you use the right docs. I got really pissed off when I started out working with mongod from the node driver. I was looking at the 2.0 docs the whole time -.-
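On haole's question: in the 2.x Node.js driver the Db object is an EventEmitter and its class reference lists the supported events; a minimal sketch (connection string and handlers are only illustrative, and the exact event set should be checked against the driver version in use):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
      if (err) throw err;
      // topology events surface on the Db object
      db.on('close', function () { console.log('connection closed'); });
      db.on('reconnect', function () { console.log('reconnected'); });
      db.on('fullsetup', function () { console.log('all replica set members connected'); });
    });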
[04:40:03] <benson> is it possible to install just mongodump? I want to periodically back up a remote database but don't want to install mongodb on the server doing the backing up
[04:43:33] <Boomtime> you could just copy the binary from your existing install..
[04:44:17] <Boomtime> but i don't think there is a way to install a single particular tool from the set
[04:44:37] <benson> there's no issue with paths/dependencies doing that?
[04:45:41] <Boomtime> there might be, that would depend on the status of your destination machine - if you want to be sure you can do the fake-install trick and determine if it would pull in other packages
[04:47:35] <benson> you mean the simulate apt-get install?
[04:48:00] <Boomtime> yeah, i can't remember the option, but you know the one
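The option being half-remembered here is apt-get's simulate/dry-run mode; a sketch of the check (the mongodb-org-tools package name is an assumption about how the tools are packaged on the destination machine):

    # show what would be installed, without installing anything
    apt-get install --dry-run mongodb-org-tools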
[08:01:14] <queretaro> Hi, does anyone of you use MongoDB as a backend for Adobe AEM?
[10:30:57] <brokenpipe> I just installed MongoDB and configured the /etc/mongo.config file with the correct permissions, but when I start it the app does not read this file
[10:32:59] <brokenpipe> the error every time is that dbpath=/srv does not work
[10:33:25] <brokenpipe> I created it with the right permissions and changed to another path, and it's the same error
[11:14:04] <guest9999> hello. just wondering. why doesn't mongodb have different yum repositories for previous releases? e.g. 2.4, so when I create and bootstrap a new server it doesn't automatically go to 2.6?
[13:36:32] <drager> when I have some more space; /dev/vda1 20G 7.8G 11G 42% /
[13:38:35] <kali> you can use the smallfiles option to reduce the preallocation quantum
[13:38:47] <kali> it's not necessarily a good idea for production
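For reference, a minimal sketch of the option kali mentions, in the classic ini-style config (the file path is only an example; the same thing is available as the --smallfiles command-line flag):

    # /etc/mongod.conf
    smallfiles = true    # preallocate data files in smaller increments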
[13:53:49] <Sticky> re mongo preallocation, can mongo give any indication of how full its preallocated files are and when it is likely/close to growing?
[13:58:39] <Sticky> mongo's inability to defrag when you remove records, without doing a full resync, is quite annoying. A few times when we have run out of disk space on servers and then cleaned the db it was an issue
[14:53:46] <valera> hello, how much overhead does initial sync have compared to pre-seeded copying?
[15:24:35] <jiffe> this is why authentication needs to be turned on by default
[15:26:16] <cheeser> no, this is why people need to think when putting software into production.
[15:27:39] <Zelest> putting? i write my software in production :D
[15:27:51] <jiffe> cheeser: that is never going to happen and the software ultimately gets blamed for it
[15:28:20] <cheeser> consider it digital darwinism.
[15:29:57] <jiffe> cheeser: I do, but not in the same context as you, I'm guessing
[15:31:35] <StephenLynx> the default configuration makes it secure so only local connections are accepted.
[15:31:50] <StephenLynx> and they didn't even test all those databases to see if they could actually access them.
[15:32:09] <StephenLynx> they were just "oh, it is open, let's assume it is open for anyone to access"
[15:33:50] <jiffe> the only barrier to stop that access is auth and I'm willing to bet most of them don't run auth
[15:34:38] <StephenLynx> that and not accepting external connections, which is the default on install.
[15:35:28] <jiffe> but all those 40000 were accepting those connections. I'm sure they tried to access the db from another machine, and when they couldn't they listened on all interfaces, and then everything was working so no need to go further
[15:35:58] <StephenLynx> I don't know, I'm yet to run the test they ran on a fresh install of mongo.
[15:36:19] <StephenLynx> did their test just check for an open port or did it actually try to connect?
[15:36:24] <StephenLynx> I lost the link to the pdf.
[15:39:21] <Derick> I do think we should only bind to 127.0.0.1 by default though
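For context, a sketch of the two settings this discussion keeps coming back to, in the classic ini-style config (values are only an example of a locked-down setup):

    # /etc/mongod.conf
    bind_ip = 127.0.0.1   # listen on loopback only
    auth = true           # require authentication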
[15:42:18] <Tausen> Hey! I'm a bit puzzled with some timing and hope someone can help me shed some light. I'm using aggregate through pymongo and filtering with $match to find documents where a field is in a list of strings. If I for example have 3 entries in the list and a lot of data in the collection, but no data matching those 3 entries, it is *much* faster to do three separate requests with only one element in the list than doing one request with all three in the list. Can
[15:42:18] <Tausen> mongodb not use the index I have on the field as efficiently in this case or something?
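A shell-syntax sketch of the two shapes Tausen is comparing (collection and field names are hypothetical); comparing the .explain() output of the equivalent find() for each form is one way to see how the index is being used:

    // one request carrying all three values
    db.events.aggregate([{ $match: { tag: { $in: ["a", "b", "c"] } } }])

    // versus three separate requests, one value each
    db.events.aggregate([{ $match: { tag: "a" } }])
    db.events.aggregate([{ $match: { tag: "b" } }])
    db.events.aggregate([{ $match: { tag: "c" } }])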
[15:43:23] <Sticky> having a sane default that non-localhost connections require auth would probably have prevented a huge number of those misconfigurations. And it should not be terribly difficult to implement
[15:45:18] <valera> what triggers the switch of a replica from stale to fatal state?
[15:54:13] <Sticky> the argument that it is safe since it does not bind the public port is a bit off; people are used to existing dbs that are secured by default (at least the dbs I have used). Similarly, web servers asked to bind a public port will not expose their admin consoles unauthed to the world
[15:55:46] <Sticky> there is an expectation of safe-by-default that goes beyond just binding the right ip; breaking that expectation exposes your users to risk
[16:01:54] <StephenLynx> that is, indeed, a valid argument.
[16:02:26] <StephenLynx> doesn't change the fact that the user has to screw up to make their db unsecured, though.
[16:05:22] <Sticky> yeah, but for very little extra effort and inconvenience you could protect the majority of users. If you want to add an enableunauthedPublicAccess config param, fine; then the few people who do want it have to explicitly request it, for little effort
[16:07:40] <StephenLynx> again, a good argument and I'm sure they had a reason not to do that. You could try and open a ticket on jira about it.
[16:09:33] <Sticky> tbh the fact that mongo does not make it easy to obtain an ssl'ed mongo is almost as bad as this issue
[16:09:49] <cheeser> that's been fixed in the latest releases, iirc
[16:09:57] <valera> what would be the correct way to re-sync replica in FATAL state ?
[16:10:12] <Sticky> cheeser: are they shipping an ssl'ed mongo now?
[16:10:25] <cheeser> i believe so. at least on the nonwindows builds.
[16:10:28] <StephenLynx> yeah, it was an issue with cross compiling or something.
[16:33:59] <AnnaGrey> StephenLynx: I thought from what I read it's a good ORM
[16:34:13] <StephenLynx> personally I find it useless and bloated.
[16:35:17] <AnnaGrey> Will try everything out without mongoose
[16:36:56] <StephenLynx> do that, it is best to learn with the very minimum necessary.
[16:55:14] <hmsimha> I've been finding the documentation on TTL indexes a bit incomplete. If a TTL is set, say, for 3600 seconds (1 hour) on a 'lastUpdated' field that may get updated with some frequency, does that TTL countdown reset every time the field is updated?
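For reference, the TTL index described would be created as below (collection name is hypothetical). Documents become eligible for removal once the indexed field is more than expireAfterSeconds in the past, so bumping lastUpdated does push the expiry back:

    // expire documents one hour after their lastUpdated value
    db.sessions.ensureIndex({ lastUpdated: 1 }, { expireAfterSeconds: 3600 })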
[17:34:08] <hmsimha> If I'm getting documents from a collection in chunks of 1000 (`.limit(1000)`) and, in between performing `collection.find().limit(1000)` and `collection.find().limit(1000).skip(1000)`, one of the documents returned in the first query is deleted from the db, how can I prevent skipping over a document?
[17:35:33] <jiffe> Sticky: did you ticket default auth?
[17:51:52] <jiffe> hmsimha: instead of skipping you could sort by an ascending field and add a $gt filter
[17:54:02] <appledash> Hello... Is there any way to have a "Standalone" MongoDB server? Perhaps I am using the wrong term, but what I want to do is have an application that uses MongoDB as a database, but it spins up its own internal copy of MongoDB to use just for itself, with the data being stored in a subfolder of the dir the application is in. Is this possible?
[17:55:29] <hmsimha> jiffe: thanks, but let's say I have 10000 documents numbered 1-10000 and I request `collection.find({someField: {$gte: 0}}).limit(1000)` to start off with, I guess on the server I need to store the last value of someField somewhere?
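A sketch of jiffe's suggestion in shell syntax, reusing someField from the example above (assumed indexed; if it isn't unique, paging on _id with the same pattern is the usual variant): remember the last value returned and use it as the lower bound of the next batch instead of skip():

    // first batch
    var batch = db.collection.find({}).sort({ someField: 1 }).limit(1000).toArray();
    var lastSeen = batch[batch.length - 1].someField;

    // next batch: everything strictly after the last value already seen
    db.collection.find({ someField: { $gt: lastSeen } }).sort({ someField: 1 }).limit(1000)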
[17:55:31] <jiffe> the term you're looking for is embedded
[17:55:36] <appledash> If it helps, my application is Python
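There is no in-process MongoDB; it always runs as a separate mongod. But an application can spawn its own private instance against a subfolder and connect to it locally, which is probably the closest thing to what appledash wants. A sketch of such an invocation (paths and port are only an example):

    # started by the application, data kept in a subfolder of the app directory
    mongod --dbpath ./data/db --port 27017 --bind_ip 127.0.0.1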
[18:15:54] <hmsimha> the docs list an example for index creation of a compound index: `db.products.ensureIndex( { item: 1, quantity: -1 } )`. How does index creation preserve the order of the fields if they're passed as object keys?
[18:19:35] <StephenLynx> I suppose it just uses the sequence of the keys.
[18:19:42] <StephenLynx> objects still have an order for their keys
[18:20:47] <StephenLynx> if you get to print an index name, you will see it is something like field_1_anotherfield_1 or something. I don't remember too well. I know it would be in the error.err or something when you try to insert something that violates the compound index.
[18:26:05] <hmsimha> ah, found the answer: http://stackoverflow.com/questions/18514188/how-can-you-specify-the-order-of-properties-in-a-javascript-object-for-a-mongodb
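Illustrating what StephenLynx describes: the key order passed to ensureIndex is preserved and shows up in the generated index name (using the docs' example collection from above):

    db.products.ensureIndex({ item: 1, quantity: -1 })
    db.products.getIndexes()
    // the new entry has "key": { "item": 1, "quantity": -1 } and "name": "item_1_quantity_-1"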
[18:48:16] <ezakimak> I just did db.coll1.copyTo(db.coll2) and now when i show collections there's an entry "[object Object]", and the coll2 is still empty
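The likely cause, going by the documented shell helper: copyTo() takes the target collection name as a string, so passing a collection object gets stringified into "[object Object]" and a collection by that name is created instead:

    // pass the name, not the collection object
    db.coll1.copyTo("coll2")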
[18:57:03] <ezakimak> ok, so schema question: I have one Person collection, and each person can have 0 or more roles. I implement the roles as subdocuments. would it possibly be better to split the roles out into their own collection? (classic collection vs subdocument question)
[19:03:24] <StephenLynx> you have this field that may have one structure or another
[19:03:29] <ezakimak> i didn't want two mechanisms for representing roles, but didn't want to make a new collection for each role (because eventually i want to allow custom roles)
[19:20:48] <fewknow> okay you said a role = https://dash.metamarkets.com/magnetic_audience/explore_audience_searches#e=2015-01-19&p=custom&s=2015-01-12&zz=3
[19:31:32] <fewknow> will get bad query performance
[19:31:36] <ezakimak> i am using the terms from my ubiquitous language from the business model in my design
[19:31:56] <fewknow> then make a profile with properties
[19:32:18] <ezakimak> that is my original question: how much worse or better is it to leave these as subdocuments vs splitting them out into their own collection, at the expense of now having to do joins and complicating searches?
[19:34:45] <ezakimak> my "role" is a subset of the person's profile, with data specific to that role, if they have it, it's *still* part of the profile
[19:37:11] <ezakimak> a) keep it as one collection with subdocuments, b) split out each role into its own collection, c) your idea of lumping all roles into a 2nd "roles" collection
[19:37:34] <ezakimak> i think (c) just adds complication w/o making anything better than (a)
[19:37:35] <fewknow> I have built something very similar at scale
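A minimal sketch of option (a) from the list above, with purely hypothetical role fields: roles stay subdocuments on the person, and role-based lookups stay single-collection:

    // one Person document with zero or more embedded roles
    {
        _id: ObjectId("54c0ffee54c0ffee54c0ffee"),
        name: "Alice",
        roles: [
            { type: "buyer",  since: ISODate("2014-06-01T00:00:00Z") },
            { type: "seller", rating: 4.8 }
        ]
    }

    // find every person holding a given role
    db.people.find({ "roles.type": "seller" })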
[22:07:58] <timmytool> Can someone here help me with a performance question about mongo?
[22:12:02] <timmytool> We recently upgraded our mongo servers from 2.4.10 to 2.6.7. We have been running fine for almost a year on the older version. After the upgrade we are having some serious performance problems. Based on what I can tell by looking at the currentOp statistics, we have some queries that are taking a long time to run. The queries are fully indexed; I have verified this by doing an explain. Looking at the ops, some show up to an hour in
[22:12:03] <timmytool> timeLockedMicros. The numYields is also very high for these queries. Is there any way I can get more information about what is going on? Thanks
[22:13:49] <fewknow> timmytool: do you use MMS? There are a lot of ways to see what is going on.
[22:14:00] <fewknow> How do you know your queries are fully indexed? Did you check the logs?
[22:14:13] <fewknow> unless you are hinting all queries you can't be certain
[22:14:48] <fewknow> you can use explain to see if the index is being used correctly
[22:14:52] <timmytool> fewknow: thanks. I’ve done an explain on the query and it is using an index. It is the same query that is taking a long time. There are many instances of it running.
[22:16:08] <fewknow> you can run mongostat --discover
[22:16:14] <fewknow> and see how the replica set is performing
[22:17:27] <fewknow> are you sure it wasn't some code that was released that is inserting something that is causing the lock?
[22:17:39] <timmytool> fewknow: Thanks again. How do I determine replica set performance from the output of mongostat?
[22:17:53] <timmytool> Yes we did not release any code 1 week before and after the release
[22:18:03] <timmytool> 1 week before and after the upgrade ^^
[22:18:44] <timmytool> We do have a high number of inserts in our database, but we always have.
[22:19:15] <timmytool> I’m thinking that something in the newer version of mongo is yielding those locks more easily than the old versions.
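A couple of shell-side checks that go with the advice above (the 60-second threshold and the collection/query are only placeholders): currentOp accepts a filter document, and explain() re-verifies the plan on the upgraded version:

    // operations that have been running for more than a minute
    db.currentOp({ active: true, secs_running: { $gt: 60 } })

    // re-check the plan of the slow query on 2.6
    db.mycollection.find({ indexedField: "value" }).explain()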
[22:52:09] <AnnaGrey> Hey guys do you find this schema correct? http://pastie.org/9943076
[23:05:39] <jiffe> if you run a replicated or sharded setup on separate machines then you need to configure mongod to bind on an interface other than localhost right?