[02:18:03] <topwobble> how can I tell if a secondary is up to date with the primary? I just had it offline for 3 hrs and need to know when it is synced
[02:19:10] <topwobble> it seems like it is serving read requests, which I am worried about since I know it can't be up to date
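A quick way to check this from the mongo shell (a sketch; rs.printSlaveReplicationInfo() and rs.status() are the standard replica-set helpers):

    // prints how far each secondary is behind the primary's oplog
    rs.printSlaveReplicationInfo()
    // or compare optimes per member yourself
    rs.status().members.forEach(function (m) {
        print(m.name + "  " + m.stateStr + "  " + m.optimeDate);
    })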
[04:06:05] <cphoover> Hello I'm working with the node.js mongo driver. I am operating on a cursor. What I want to do is skip an arbitrary number of documents in a collection then read the next object and then skip a number of documents then read the next object... When I attempt to do this I get the error "MongoError: Cursor is closed" maybe I'm misunderstanding 'skip' but can it not be used after the cursor has been moved?
[04:11:45] <Boomtime> cphoover: yeah, that isn't very clear, but skip() is actually part of the command that went to the server - you only get to use it at the start; once you start consuming the results, that method is not available
[04:12:03] <pylua> cheeser: I only need one doc returned at each find
[04:12:10] <Jonno_FTW> I have a problem, I can connect to a local instance of mongodb, but I can't connect to it remotely even though I used the same command line params
[04:12:19] <Boomtime> treat skip() as an addendum to the query (because it is), you can only issue it once
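So "skip, read one, skip again" means issuing a fresh query per position. A minimal Node.js sketch (the collection handle and the pickAt name are just for illustration; cursor.next() is the 2.x driver method, older versions call it nextObject()):

    // read the single document at a given position, skipping `step` docs
    // between reads, by building a new cursor each time
    function pickAt(collection, step, position, callback) {
      collection.find({})
        .skip(position * step) // part of the query, set before consuming
        .limit(1)
        .next(callback);       // hand back that one document
    }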
[04:13:28] <Boomtime> heh, yeah :p , 'auth failure' not connect failure... might be the localhost exception which cares not about auth attempts with no users present
[04:13:31] <Jonno_FTW> I only have security.auth enabled in the config
[04:14:33] <Boomtime> and remotely, with the mongo shell and the same command line?
[04:14:33] <pylua> Jonno_FTW: what does the mongo log say?
[04:15:00] <Jonno_FTW> Failed to authenticate penny@pennyParking with mechanism MONGODB-CR: AuthenticationFailed MONGODB-CR credentials missing in the user document
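That error usually means the client is attempting MONGODB-CR against a user that only has SCRAM-SHA-1 credentials (the MongoDB 3.0 default for newly created users). A hedged way to check and work around it from the shell:

    // authSchema currentVersion 5 = SCRAM-SHA-1, 3 = MONGODB-CR
    db.getSiblingDB("admin").system.version.find({ _id: "authSchema" })
    // connecting with a 3.0 shell, or forcing the mechanism, avoids the CR attempt:
    //   mongo host/pennyParking -u penny -p --authenticationMechanism SCRAM-SHA-1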
[07:21:26] <nickenchuggets> having a problem with mongodb, I gathered some details from syslog, https://gist.github.com/brlafreniere/cd7b291ac9ece484e48f
[07:24:19] <nickenchuggets> oh, well, I may have discovered the problem... I'm on Debian 8 :P
[07:25:36] <nickenchuggets> that... kinda sucks, the apt repo that mongodb provides appears to be compatible only with Debian 7
[07:34:59] <nickenchuggets> is mongodb 2.4.10 horribly out of date?
[07:37:17] <nickenchuggets> I tried to install mongodb via the 3rd party apt repos, which is said to be for debian 7 wheezy, yet I'm using debian 8 jessie... and I'm getting all kinds of errors and stuff, so I'm guessing that's not gonna work...
[07:39:00] <nickenchuggets> alright, so I downloaded the binaries instead
[08:01:54] <tejasmanohar> normal insert ok? coudenysj
[08:02:29] <coudenysj> you can just insert any _id value you want
[08:02:39] <coudenysj> if you do not, mongo will create one for you
[08:03:43] <Boomtime> tejasmanohar: what is not recommended? if you have a value in your document structure that is unique and already exists, you should totally use it as the _id field
[08:04:04] <Boomtime> _id is populated for you, only if you do not supply one
[08:04:44] <Boomtime> if you don't supply it (or use the value that is supplied for you), then there's a whole indexed field, of 12 bytes per document that exists for no reason
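A minimal sketch of both cases in the shell (collection name made up):

    // no _id supplied: the driver/server adds a 12-byte ObjectId for you
    db.things.insert({ name: "foo" })
    // _id supplied from an existing unique value: no extra field or index entry
    db.things.insert({ _id: "order-20150707-0001", name: "bar" })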
[08:04:59] <nickenchuggets> yay, I solved my issue
[08:06:28] <the_voice_> users.update({column6: { $elemMatch: { _id: id } } }, {$set: { "friends.$.name" : name } })
[08:07:13] <the_voice_> I am new to mongo and trying to understand if searching a subdocument like that would have any performance implications
[08:08:12] <Boomtime> the_voice_: nothing is slow if it has a covering index; the further a query gets from being covered by an index, the slower it gets
[08:16:35] <the_voice_> Boomtime you spoke about indexing previously
[08:17:01] <the_voice_> if you index an array, does it automatically index both fields of the subdocument, or do you index both the array and the _id part of the subdocument?
[08:19:17] <the_voice_> as one of the columns, but I would want to index that array and be able to search by _id
[08:19:36] <the_voice_> I haven't yet; I am working in dev so none of my tables have more than 10 entries right now :) I want to know how I should index that
[08:20:19] <Boomtime> if you want to search by the _id field inside the subdocuments, you need to index that -> ensureIndex( { "friends._id" : 1 } )
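Putting that together with the earlier update (a sketch, assuming the array field really is called friends; explain() output differs between server versions):

    db.users.ensureIndex({ "friends._id": 1 })
    // the $elemMatch lookup can then use the multikey index rather than a collection scan
    db.users.find({ friends: { $elemMatch: { _id: id } } }).explain()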
[08:47:49] <synthmeat> (i'm trying to predict my memory requirements for mongo, presuming known concurrent load and document sizes)
[08:58:29] <synthmeat> ok, the numbers are in bytes, found that. and objects are indeed documents
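Presumably those numbers came from db.stats(); for reference, it also takes a scale factor if bytes are awkward (a sketch):

    db.stats()            // objects = document count; dataSize/storageSize/indexSize in bytes
    db.stats(1024 * 1024) // same numbers scaled to megabytes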
[09:18:50] <iszak> How do I explain an aggregate in Mongo 2.4? .aggregate().explain() doesn't work
[11:54:14] <deathanchor> iszak: why are you explaining an aggregation?
[12:06:58] <iszak> deathanchor: so I know where the bottleneck is
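For what it's worth, from 2.6 onwards the explain flag is passed to the aggregate call itself, which may simply not be available on 2.4 (a sketch with made-up pipeline fields):

    db.coll.aggregate(
      [ { $match: { status: "A" } }, { $group: { _id: "$type", n: { $sum: 1 } } } ],
      { explain: true }
    )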
[12:10:53] <the_voice_> Is there a way to set a value of a field in a document as the result of an aggregate of one of the subdcuments?
[12:12:36] <the_voice_> so for instance if I had a document user= {_id:1, name: bob, friends: [{_id:2, name:dave},{_id:3, name:jo}]}
[12:13:14] <the_voice_> is there a way I could do an update that would create a third column from the subdocuments that was called friendsList: "dave,jo"
[12:13:20] <StephenLynx> I wouldn't use _id in subdocuments.
[12:13:36] <the_voice_> In this case I need it because I am referencing another document
[12:13:38] <StephenLynx> it would be confusing in the application code.
[12:13:56] <StephenLynx> since _id from documents has special behavior
[12:14:38] <StephenLynx> I would make it more explicit.
[12:14:44] <StephenLynx> but w/e floats your goat.
[12:15:00] <StephenLynx> "is there a way I could do an update that would create a third column from the subdocuments that was called friendsList: "dave,jo"'
[12:23:12] <StephenLynx> what I actually do is to have a displayable name as its unique id.
[12:24:03] <StephenLynx> so I don't have to do even that. But I reckon that is not a viable solution when you may have multiple objects with the same readable name.
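Back to the friendsList question: a normal update can't set one field from other fields of the same document (as of 3.0), so the usual approach is to compute the value in application code and $set it. A minimal Node.js sketch using the example document above (the users collection handle is assumed):

    // build "dave,jo" from the embedded friends array, then write it back
    db.collection('users').findOne({ _id: 1 }, function (err, user) {
      if (err) throw err;
      var friendsList = user.friends.map(function (f) { return f.name; }).join(',');
      db.collection('users').update(
        { _id: user._id },
        { $set: { friendsList: friendsList } },
        function (err) { if (err) throw err; }
      );
    });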
[12:26:44] <adsf> is this a normal way to do multiple in's?
[13:05:23] <pylua> cheeser: Actually, I mean how do I create an ISO date with python datetime?
[13:05:47] <pylua> or what's the diff between an ISO date and a datetime?
[13:06:00] <cheeser> i'm not sure (i'm not a python guy) but you should just be able to put a python date in your document and the driver should convert it for you
[13:07:13] <pylua> cheeser: I usually need to run queries at the mongo shell command line, so I need to use ISO dates instead
[13:07:49] <pylua> but the mongo shell cannot recognize a datetime
[13:07:59] <cheeser> in java, we just use java.util.Date and the driver converts. pymongo should do the same.
[13:16:08] <pylua> what is the format of mongo's ISO date?
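In the shell, BSON dates print and parse as ISODate(...) wrapping an ISO-8601 UTC string; a python datetime.datetime inserted through pymongo comes back as that same BSON date type. A shell sketch (collection and field names are made up):

    db.events.find({ created: { $gte: ISODate("2015-07-07T00:00:00Z") } })
    ISODate()   // with no argument, the current time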
[13:27:51] <jecran> Hi guys. How can I wait for all my mongo processes to finish so I can exit my program? It currently just hangs, and I don't have an option to manually close, because I have no idea when it's done lol
[13:28:35] <jecran> I have read that node closes the mongodb connections manually. But mine are not closing, and my node application does not exit when done
[13:29:24] <jecran> how about just mongo, is there maybe something that can tell me when the connections are closed, or when they are done connecting online?
[13:41:49] <jecran> 2 ---- the tricky one. I can tell when I am done reading the rss feeds, at which point mongodb is still a few records back, and I don't know how to tell when it's done
[13:43:10] <jecran> I use wtfnode currently, which lets me know that I have the connections still open. Handy little tool
[13:43:15] <basiclaser> I'm making a dictionary web application. It serves a list of words, based on a string search, and the word definitions along with synonyms. I want the user to be able to request 'tiger' and then also receive 50 words before and after that word, alphabetically. The user can request two different versions of the dictionary, one with certain words removed.
[13:43:16] <basiclaser> Which, if any, database and distribution system could/should I use in the backend express app to store and distribute the dictionaries? Is there an optimal way of loading subsections of a list of words like this? I currently have the words stored in files in javascript arrays.
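If the word list went into a MongoDB collection with an index on the word field, the "50 before and after" lookup is just two range queries (a sketch; collection and field names are made up):

    db.words.ensureIndex({ word: 1 })
    // 50 words alphabetically before "tiger" (walked backwards)
    db.words.find({ word: { $lt: "tiger" } }).sort({ word: -1 }).limit(50)
    // "tiger" itself plus the 50 words after it
    db.words.find({ word: { $gte: "tiger" } }).sort({ word: 1 }).limit(51)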
[13:43:44] <StephenLynx> then I would have all my incoming requests and outgoing responses funnel through a single place, keep a simple counter of active requests, and if the shutdown flag is true and no requests are still pending, shut the worker down.
[13:44:37] <StephenLynx> there is even an event for when a request is completed.
[13:44:53] <StephenLynx> one for when it is killed before response and another for when you finish it.
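A minimal sketch of that counter idea in Node.js (the names are made up; the point is just to close the mongo connection once nothing is in flight, so the process can exit):

    var pending = 0, shuttingDown = false;

    function taskStarted() { pending++; }

    function taskFinished(db) {
      pending--;
      if (shuttingDown && pending === 0) {
        db.close(); // releases the driver's sockets so node can exit
      }
    }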
[13:54:52] <jecran> StephenLynx: my simple solution is: check the error, if the error tells me the sockets are closed, reopen the connection. Not efficient, but no counting and only used on the last few results
[14:02:00] <jecran> StephenLynx: worked like a charm! 100% results as expected
[14:21:46] <puppeh> does this info still stand for the version 2 ruby driver? https://github.com/mongodb/mongo-ruby-driver/wiki/Replica-Sets#recovery
[14:49:14] <makzen> Hello, I have a last_active date and a created date in my collection documents. I want to find all entries where last_active and created are not on the same day. Can somebody give me a hint?
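One hedged way to express that with the aggregation framework (assumes MongoDB 3.0+ for $dateToString; the collection name is made up):

    db.users.aggregate([
      { $project: {
          doc: "$$ROOT",
          sameDay: { $eq: [
            { $dateToString: { format: "%Y-%m-%d", date: "$created" } },
            { $dateToString: { format: "%Y-%m-%d", date: "$last_active" } }
          ] }
      } },
      { $match: { sameDay: false } }
    ])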
[16:37:15] <lmatteis> hi all. i have inherited a codebase which seems to run aggregate functions every day or so
[16:37:31] <lmatteis> i'm wondering though, can i have mongodb generate them automatically for me?
[16:37:46] <lmatteis> like for example in couchdb, all you need is the view, and it updates automatically when the data changes
[16:42:11] <cheeser> there's no scheduler built in to mongodb
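So the usual pattern is a cron job that reruns the pipeline and materialises the result with $out (2.6+), rather than anything event-driven like a CouchDB view. A sketch with made-up names:

    // e.g. run nightly:  mongo mydb refresh_summary.js
    db.events.aggregate([
      { $group: { _id: "$type", total: { $sum: 1 } } },
      { $out: "event_summary" } // replaces the event_summary collection with the new results
    ])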
[16:43:21] <joelkelly> anyone have any experience with populate on mongoose? Trying to figure out if it needs to be established when the data is passed in or if it can be used on existing documents
[17:05:15] <symbol> I'm still trying to wrap my head around proper use cases and schema design in Mongo...it seems to be that Mongo would not be a good choice for something like twitter, correct?
[17:10:28] <StephenLynx> it would grow really large if you were to put it in an embedded array
[17:10:38] <StephenLynx> in general, yeah, mongo wouldn't fit it very well.
[17:10:55] <StephenLynx> but it's not the worst scenario, either :v
[17:11:19] <symbol> I feel like every resource I've been studying talks about mongo and its speed with big data...so it seems that it's mostly good with big data that can be embedded.
[18:24:00] <ab2qik> StephenLynx: Perfect, it works, thks!
[18:48:17] <a|i> in a one-to-many relationship where an array of references to ‘many’ is stored on the ‘one’ side, how would pagination work? keep deserializing the reference array from db and pick the next page ids?
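One option is to page through the reference array itself with a $slice projection and only resolve that page of ids (a sketch; names are made up):

    // ids 20..29 from the parent's refs array
    var parent = db.parents.findOne({ _id: parentId }, { refs: { $slice: [20, 10] } })
    // then fetch just those children
    db.children.find({ _id: { $in: parent.refs } })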
[18:56:47] <a|i> I mean some of the replicas are not updated during the query time and querying for a parent may not return all children.
[18:58:42] <StephenLynx> I am not experienced with replicas and clusters, but yes, if it takes some time for that change to be replicated on the children, that would be a possibility.
[18:59:08] <StephenLynx> someone that knows that better might confirm or deny that.
[19:15:59] <EXetoC> it's very popular is all I can say, but you've probably noticed already :p
[19:16:14] <ab2qik> StephenLynx: Yes this - var db = mongoclient.db('course');
[19:17:26] <ab2qik> StephenLynx: ok, trying without expressCrap!
[19:27:15] <akoustik> unimportant question: anyone know why the "old" config format is the one used in /etc/mongod.conf in fresh installs, instead of the yaml format? backwards compatibility?
[19:37:10] <cheeser> probably depends on the packager
[19:38:31] <akoustik> probably. just interesting that i've seen it in 4 different distros. (don't ask why i have to use 4 different distros, it's a sad story :)
[19:41:10] <EXetoC> so the server sometimes replies with the key "ok" where the value is of type double instead of int? odd
[20:06:35] <dbclk_> hey guys..need some assistance on an issue i'm trying to figure out with my mongo clusters -> http://pastie.org/10278260
[20:08:21] <akoustik> dbclk_: what exactly is the issue? can't get a replica set running at all, or what?
[20:40:06] <BeaverP2> what is the irc chat for talking to some devs?
[20:45:09] <acidjazz> hey all.. lets say in a document i have a key=>value object, say { objs: { orange: {}, black: {}, blue: {} } }, what would be the ideal query to pull documents where objs.orange exists?
[21:46:22] <akoustik> check out the various query operators
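For that shape, the usual answer is $exists on the dotted path (a sketch using the example document above):

    db.coll.find({ "objs.orange": { $exists: true } })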
[21:58:17] <greyTEO> GothAlice, I am trying to start up a worker.py and am receiving this error "got an unexpected keyword argument 'slave_ok'". Any hints?
[21:58:27] <greyTEO> Could that be a version issue with pymongo?
[21:59:24] <akoustik> greyTEO: are you trying to set slave_okay=True?
[22:03:50] <GothAlice> greyTEO: Yeah, sounds like you're running the latest Pymongo. You need to pin your dependency <3.0, currently, until MongoEngine is updated.
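The slave_ok/slave_okay keyword argument was removed in PyMongo 3.0, which is why MongoEngine at that point still needed the 2.x driver; the pin is just a requirements constraint, e.g.:

    # requirements.txt
    pymongo<3.0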