[06:27:24] <svm_invictvs> jezeniel: Isn't that in my paste
[06:42:07] <Boomtime> jezeniel: have you tried your query with .explain() to see what it is doing?
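(A minimal sketch of what Boomtime suggests, using pymongo against a local mongod; the "app.users" collection and filter are hypothetical:)

```python
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017").app.users

# explain() asks the server how it would run the query; the winning
# plan shows whether an index is used (IXSCAN) or not (COLLSCAN)
plan = coll.find({"status": "active"}).explain()
print(plan["queryPlanner"]["winningPlan"])
```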
[09:20:08] <mbuf> what can cause this message, "Nothing to do. Either the server detected the possibility of another monitoring agent running, or no Hosts are configured on the MMS group."?
[09:20:46] <mbuf> I have only one monitoring agent running in a three-node replica set
[09:29:04] <mbuf> joannac, is it not possible to do it through a configuration file, or CLI, or using the monitoring or automation agents?
[09:30:36] <mbuf> joannac, at present, I see the three servers in the dashboard, but they need to be associated with Hosts?
[09:31:51] <mbuf> joannac, which boils down to the question - is it possible to deploy mongod servers in AWS and just use the dashboard for monitoring, without setting anything up through the UI?
[09:42:12] <iszak__> How do people back up mongo? mongodump will be too slow; is there any incremental solution that is commonplace? Most projects I found were unmaintained
[09:48:20] <joannac> mbuf: I don't understand the question.
[09:49:13] <mbuf> joannac, when I installed the automation and monitoring agents using Ansible (deployment), I was able to see the servers listed in the dashboard
[09:49:37] <mbuf> joannac, but I don't see the mongod process information in the UI because there is no host-to-server mapping (I guess)
[09:50:10] <mbuf> joannac, without using the UI and creating this mapping, would it be possible to specify this mapping through a configuration file so that I don't need to use the UI?
[09:50:30] <joannac> mbuf: the MMS API e.g. https://docs.cloud.mongodb.com/reference/api/hosts/#create-a-new-host ?
[10:04:03] <mbuf> joannac, https://docs.cloud.mongodb.com/tutorial/add-hosts-to-monitoring/ asks for login credentials to communicate with the mongod instance? But the mongod instances running in AWS don't have any credentials
[10:04:10] <iszak__> daslicht: echo $PATH; does it include /usr/bin ?
[10:04:29] <joannac> mbuf: then don't put any credentials in?
[10:04:37] <joannac> Pretty sure it says "optional"
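(A hedged sketch of the API route joannac links, using Python's requests library; the group ID, API key, and hostname are placeholders, and the optional credential fields are simply omitted when the mongods run without auth:)

```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<mms-group-id>"
AUTH = HTTPDigestAuth("user@example.com", "<api-key>")

# Create a new host for monitoring; username/password fields are
# optional and can be left out entirely for auth-less mongods.
resp = requests.post(
    "https://cloud.mongodb.com/api/public/v1.0/groups/%s/hosts" % GROUP_ID,
    json={"hostname": "ec2-xx.compute.amazonaws.com", "port": 27017},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())
```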
[12:22:35] <ito> any eta for debian jessie support in the mongodb repository?
[12:25:51] <pokEarl> ok so potentially stupid question, but (in Java) when do you actually 'go into the database', so to speak? E.g. if you get a DB collection, that's just a handle to the collection in the database, and the database is not affected by you creating collection objects, right? But I guess if you create a cursor, that goes into the database and gets all the objects for that cursor, and then you do what you want with the cursor without going into the database again? and
[12:25:51] <pokEarl> getting out DBObjects or single values from the collection again would mean 'connecting' to the db again? hmm.
[12:44:20] <pokEarl> actually I guess a Cursor does not actually hold the objects themselves, just an open reference to where they are sort of hm.
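(pokEarl's question is about the Java driver, but the laziness is easiest to show in a few lines of pymongo, and the Java driver behaves analogously; the collection names here are made up:)

```python
from pymongo import MongoClient

client = MongoClient()         # no server round trip yet
coll = client.test.posts      # just a local handle; nothing is created

cursor = coll.find({"author": "pokEarl"})  # still nothing sent
doc = next(cursor, None)       # NOW the first batch is fetched
# further iteration drains that batch locally, then asks the server
# for the next batch via getMore; the cursor never holds everything
```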
[14:23:44] <dj3000> hi, i'm having some real trouble installing mongodb 3.0.x on ubuntu 15.04. can someone please help me?
[14:24:08] <dj3000> I've followed the steps here: https://jira.mongodb.org/browse/SERVER-17742
[14:29:20] <pamp> Hi, why can't I create users? What does this error mean: "Error: couldn't add user: Could not lock auth data update lock"
[15:53:42] <StephenLynx> myArrayOfIds goes into the value of $nin
[15:55:42] <Epichero> https://gist.github.com/Tyler-Anderson/798aa9e412237a453c35 line 15 i explain my problem better there.. i've been stumped on this for days
[15:56:22] <deathanchor> doesn't $nin take an array?
[15:57:01] <Epichero> i tried [id] to see if it would solve it but it doesn't
[15:59:42] <Epichero> thanks, i thought i tried that
[16:07:58] <Epichero> this shouldn't be possible... i'll come back after i work out what's going on here...
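(A runnable sketch of the shape StephenLynx and deathanchor describe; the "test.things" collection is hypothetical:)

```python
from pymongo import MongoClient

coll = MongoClient().test.things

# $nin's value must be an array/list, never a bare id
exclude_ids = [d["_id"] for d in coll.find({}, {"_id": 1}).limit(3)]

for doc in coll.find({"_id": {"$nin": exclude_ids}}):
    print(doc)
```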
[17:06:42] <synthmeat> hey, a sort of general question - i intend to bring up one mongo instance for some sort of logging. i expect it to grow large, with fairly frequent writes and basically no reads. reliability isn't that important, but ideally i'd like to never cap it, just have it there to grow
[17:07:15] <synthmeat> long time ago i made the mistake of not asking for advice on mongo modelling/setup and that cost me a lot :)
[17:07:27] <synthmeat> (i've used it as a relational db more or less, many populates, ugh)
[17:08:20] <synthmeat> so, any advice off the top of your heads?
[17:11:23] <preaction> why mongo if you don't intend to query? why not something less, like syslog/rsyslog/syslog-ng?
[17:12:05] <synthmeat> preaction: i intend to do all sorts of queries, but they would not be user-facing; i'd just investigate dumps of it occasionally
[17:12:46] <synthmeat> i like fairly powerful queries plus i get to link to them from user collection if need be
[17:13:08] <synthmeat> if it's such a bad choice for this, sure, do advise me against
[17:13:35] <synthmeat> i don't like to grow my stack unless absolutely necessary though
[17:13:38] <preaction> it really depends on if you need all that if you're only going to do basic querying. are they already documents, or just lines of text?
[17:16:44] <synthmeat> preaction: it's ok, that's my expected answer. i just wanted to double-check. thank you :)
[17:16:58] <preaction> okay, why do you need another parser? it seems like this is a niche use-case, not a general json tool, which is already provided in the python core library
[17:18:20] <Petazz> preaction: I'm looking to speed things up by using a C-based json parser in python. I was wondering if anyone knows if that could be used with the other parser. Is the json module backing it replaceable with cjson, for example?
[17:20:02] <preaction> it uses python's json module, as the docs say
[17:20:19] <preaction> but i'm still not clear on why you must specifically use the bson.json_util
[17:20:22] <Petazz> preaction: That I already know from reading the source :)
[17:20:45] <Petazz> Hmm, well ofc because I'm encoding and decoding extended json
[17:21:01] <preaction> it wasn't ofc. if it was ofc, i wouldn't have asked.
[17:21:21] <preaction> my guess is you'll need to monkeypatch
[17:21:29] <Petazz> Why else would anyone use another layer on top of the built in json module?
[17:21:56] <Petazz> Yea I guess that must be it. Seems like it's using the module's object hook to extend it
[17:22:05] <Petazz> Not sure if any library supports the same API
[17:22:11] <preaction> ignorance? incompetence? cargo-cult? existing code already uses it and they don't know why so they don't want to change it because it could affect everything everywhere?
[17:22:18] <preaction> lots of reasons to not change something that's already working
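(A sketch of the layering Petazz and preaction are discussing: bson.json_util rides on the stdlib json module via an object hook, which is exactly the part any drop-in C parser would have to support:)

```python
import json
from bson import ObjectId, json_util

doc = {"_id": ObjectId(), "n": 1}

s = json_util.dumps(doc)        # extended JSON: {"_id": {"$oid": "..."}, ...}
roundtrip = json_util.loads(s)

# loads() is essentially json.loads plus an object hook, so a
# replacement parser would need an equivalent hook mechanism
same = json.loads(s, object_hook=json_util.object_hook)
assert roundtrip == same == doc
```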
[17:32:58] <blizzow> Anyone here have a working logrotate config for mongo 3.x on ubuntu? copytruncate isn't working for me (mongo just writes to the rotated logfile name) and whenever I run a kill -SIGUSR1 $mongoPID, I get these long timestamped files that mongo creates on top of my regular log rotation.
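(One possible approach for blizzow's problem, hedged: MongoDB 3.0 added systemLog.logRotate: reopen, which makes SIGUSR1 reopen the log file in place instead of renaming it to a timestamped copy; the paths and pid-file location below are assumptions:)

```
# mongod.conf -- reopen requires logAppend
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
  logRotate: reopen
```

```
# /etc/logrotate.d/mongod
/var/log/mongodb/mongod.log {
  daily
  rotate 7
  compress
  missingok
  notifempty
  create 0600 mongodb mongodb
  postrotate
    kill -SIGUSR1 "$(cat /var/run/mongodb/mongod.pid)"
  endscript
}
```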
[17:40:13] <svm_invictvs> I asked last night, but it was late
[17:40:21] <svm_invictvs> So, I'm having trouble with Morphia and references to other objects.
[17:40:30] <svm_invictvs> Specifically the query it's generating to find objects by db ref
[17:41:55] <svm_invictvs> The reference query just seems to not work.
[18:26:34] <grinndsw> I keep reading that I shouldn't use MongoDB with relational data, but then people will say it's great for a model where an author has posts that have comments... isn't that relational?
[18:30:50] <preaction> it is relational, but it's not expected that a comment must be attached to multiple posts. "belongs to" or "one -> many" relationships are generally fine
[18:31:23] <preaction> it just means that if you have relations, you need to deal with them yourself
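(A sketch of the "one -> many" shape preaction describes, with comments embedded in their single parent post and the author kept as a reference; the field names are made up:)

```python
from bson import ObjectId

author_id = ObjectId()

post = {
    "_id": ObjectId(),
    "author_id": author_id,   # reference: resolved with a second query
    "title": "Why documents?",
    "comments": [             # "belongs to": each comment lives in one post
        {"user": "grinndsw", "text": "isn't that relational?"},
    ],
}
```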
[18:32:21] <grinndsw> Are there any benchmarks comparing the need for 2 queries vs a relational join?
[18:32:44] <grinndsw> And that makes a lot of sense - especially the part about "expecting"
[18:34:12] <preaction> how can you benchmark 2 entirely different things?
[18:34:29] <preaction> at least, and expect the results to make any kind of sense
[18:34:51] <grinndsw> Well - mongo constantly advertises speed so I'm just curious if having to do two queries to simulate a join is slower/faster than the traditional join
[18:35:14] <preaction> too many variables would make the results meaningless. benchmarking mysql against mongo is weird. do you take the SQL parser out of the equation? how? if so, how do you join?
[18:37:11] <svm_invictvs> grinndsw: Just because you're using Mongo doesn't mean you have to abandon all principles of relational data.
[18:37:18] <svm_invictvs> That's why we have references etc.
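(And the two-query pattern grinndsw asks about, resolving such a reference by hand; the database and field names are hypothetical:)

```python
from pymongo import MongoClient

db = MongoClient().blog

# "join" by hand: one query for the post, one for its author
post = db.posts.find_one({"title": "Why documents?"})
author = db.users.find_one({"_id": post["author_id"]})
```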
[18:42:19] <svm_invictvs> Nobody has an answer for my query issue?
[18:45:16] <grinndsw> svm_invictvs: I'm just trying to wrap my head around the valid use cases. a lot of the resources say to be document-minded... but it's difficult to see relational data as documents, and I can't reconcile the speed differences (isn't that a massive motivating factor, along with the impedance mismatch?)
[18:48:11] <grinndsw> preaction: That's a valid point - I'm not sure how that could be properly benchmarked. I guess that's where I just need to make sure to pay attention and moderate what I'm doing to see where I can normalize data for better read speeds.
[18:48:34] <preaction> yes. or denormalize it, if needed
[18:48:50] <preaction> or get first-level caches involved, like memcached
[18:48:57] <grinndsw> Sorry - meant denormalize ( I always switch that up)
[18:51:42] <Epichero> i've been trying to figure this out for days