[00:02:37] <damoncasale> StephenLynx: Reading over what I can find on MongoDB performance vs. Mongoose performance. It seems like Mongoose can still work, but one would need to be careful to use it correctly.
[00:03:25] <damoncasale> For instance, one of the slowdowns appears to be the validation prior to writing out a record. Mongoose provides a bare query method, tho.
[06:01:05] <pEYEd> how would I query all of the usernames from this collection? http://paste.ee/p/o9xGf it's inside the subarray of 'data' and I can't get to it. :(
[07:25:45] <preaction> SQL is a data store? i thought it was a structured query language
[09:01:18] <jebis> Hello, is there a way in php to make an ordered insert of multiple documents? you know, i want to have them in an ordered manner http://pastebin.com/EmwSsztD i don't know why, but they always come back mixed, even when i fetch them with sort "create_date" : -1
[09:14:16] <coudenysj> jebis: can't you just use the "natural" order ?
[09:21:58] <jebis> coudenysj this natural works, but why doesn't a normal sort by create_date work? i mean, is mongo so fast that it creates 2 identical dates?
[09:22:46] <coudenysj> jebis: look at http://docs.mongodb.org/manual/reference/object-id/
[09:24:05] <coudenysj> the first part of the id is the time, but there are other values needed (mostly to solve multi server problems), but you can insert multiple docs in a given msec
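The layout coudenysj describes can be checked by decoding an ObjectId by hand; a minimal sketch in plain JS (the sample hex string is made up, not from jebis's data):

```javascript
// An ObjectId's first 4 bytes (8 hex chars) are a big-endian Unix timestamp
// in seconds; the remaining bytes (machine/process/counter) disambiguate
// documents created within the same second.
function objectIdTimestamp(hex) {
  const seconds = parseInt(hex.slice(0, 8), 16);
  return new Date(seconds * 1000);
}

// hypothetical ObjectId
console.log(objectIdTimestamp("507f1f77bcf86cd799439011").toISOString());
```

Two inserts in the same second get the same leading timestamp, so sorting on a seconds-granularity `create_date` (or on the id's time part alone) cannot order them; `$natural` falls back to on-disk insertion order instead.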
[09:33:02] <jebis> coudenysj and $natural uses that part of the id? or how does it work, if i can ask :P
[11:07:36] <agis-> Isn't something wrong with my 2.4 server logs? https://gist.github.skroutz.gr/agis/212d40ee53e5ac7e3006
[11:07:48] <agis-> (I don't have any authentication enabled)
[11:11:32] <mprelude> Hi, sorry if this seems like a noob question. I'm updating one of the mongo drivers to use SCRAM-SHA1. I was wondering the easiest way to get into a communication with the server so I can try throwing requests at it and look at the responses.
[12:30:28] <mprelude> cheeser or Derick (or anyone else): If you can help me get pointed in the right direction to do a mongo auth over telnet or similar, that'd be great. I can't seem to find any tcp-level documentation.
[13:52:18] <corentin> any clue how to remove all documents from a capped collection?
[13:55:36] <corentin> I've seen about {emptycapped: "collection"} but it seems to only be a test command
[14:05:42] <Derick> mprelude: let me see if I can find something
[14:06:21] <mprelude> Derick: The main thing I'm struggling with is just how to format my requests to the server. Usually I do this kind of stuff over telnet so that I understand what I'm implementing, then write it in code.
[14:06:56] <Derick> mprelude: this is what PHP does: https://github.com/mongodb/mongo-php-driver/pull/730/files
[14:08:34] <Derick> it implements http://tools.ietf.org/html/rfc5802
[14:08:52] <Derick> but not sure if you can do it with telnet, as it requires binary info to talk to mongodb
[14:09:41] <Derick> mprelude: and https://github.com/mongodb/specifications/tree/master/source/auth is our spec for it
[14:12:27] <Derick> https://github.com/mongodb/specifications/blob/master/source/auth/auth.rst#scram-sha-1 is the part about scram-sha-1
[14:15:08] <crispy_beef> Hi. Wondered if somebody here could shed some light on why we are getting the error message "no primary server available" now and then when connecting to our replica set?
[16:13:20] <corentin> cheeser: I think it's mostly useful when using mongodb for web stuff, such as django or so
[16:17:08] <cheeser> "web stuff" doesn't really imply the need for this kind of feature, though.
[16:19:59] <corentin> cheeser: well, in our case we have limited hard disk space, so we have to use capped collections and rotate the documents in our collection. This means that when we display a document to a user, the document might have expired by the time the user requests more data about it
[16:20:21] <corentin> our solution was to copy the current collection to another collection with a name containing the user session ID
[16:21:19] <corentin> when the user session expires, we want to delete this tmp collection
[16:23:41] <StephenLynx> if you are using dynamic collection creation
[16:23:53] <StephenLynx> your system is beyond salvation.
[17:00:15] <synthmeat> if you know of a more convenient workaround than that, i'd be glad to refactor my stuff, since there's a lot of this ugly stuff around
[17:00:25] <StephenLynx> in the end you are passing the same object.
[17:00:51] <StephenLynx> the issue you have is probably because you don't use $set
[17:00:53] <synthmeat> no, there's already stuff in the document under facebook., and if i fill up the facebook object and pass it as the update, what was there disappears
[17:01:10] <StephenLynx> so it overwrites the whole thing
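The distinction StephenLynx is pointing at, sketched as the two update documents (the collection and field names are from synthmeat's example; the token value is made up):

```javascript
// An update document with no operators is a full replacement: the matched
// document is replaced wholesale, so sibling keys under `facebook` vanish.
const destructive = { facebook: { token: "abc" } };

// $set with dot notation touches only the named path; everything else in
// the document, including other keys under `facebook`, survives.
const safe = { $set: { "facebook.token": "abc" } };

// e.g. db.users.update({ _id: userId }, safe)
console.log(JSON.stringify(safe));
```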
[17:07:19] <StephenLynx> Mongoose assigns each of your schemas an id virtual getter by default which returns the documents _id field cast to a string, or in the case of ObjectIds, its hexString.
[18:14:26] <synthmeat> that's helpful still, thank you
[18:20:34] <flok420> in an aggregate I want to group records by their timestamp per hour. timestamp is in seconds, so I thought I'd just do ts - (ts % 3600). Am I right that that should be written like this: $group: { '_id': { name : "$name", ts : { $subtract : [ "$ts", { $mod : [ "$ts", 3600 ] } ] } } }
[18:20:56] <flok420> because the mongodb shell says "SyntaxError: Unexpected string" (with no hint about where this faulty string is)
[18:21:57] <StephenLynx> you put the name along in the _id
[18:22:12] <StephenLynx> so it will always be different even if the part relating to time is the same
[18:22:25] <StephenLynx> and are you storing dates as actual dates?
[18:22:28] <StephenLynx> that would be optimal.
[18:23:07] <flok420> my bad, I meant that i want to group on the name field together with the timestamp. yes, storing names together with epoch times (seconds since 1970)
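For reference, flok420's stage with the braces balanced, and the bucketing arithmetic checked in plain JS (the field names are his; the sample timestamp is made up):

```javascript
// Group per name per hour: ts - (ts % 3600) floors an epoch-seconds
// timestamp to the start of its hour.
const group = {
  $group: {
    _id: {
      name: "$name",
      ts: { $subtract: ["$ts", { $mod: ["$ts", 3600] }] }
    }
  }
};

// the same arithmetic the $subtract/$mod pair performs, on a sample value
const ts = 1350508407;
console.log(ts - (ts % 3600)); // start of the hour containing ts
```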
[18:23:55] <synthmeat> StephenLynx: fyi, yes, it was me. query value was undefined. not sure why it acted like it found something and created still, but that was it
[18:25:14] <StephenLynx> the thing is that mongo just tries to parse the javascript of the query.
[18:25:31] <StephenLynx> it can't tell you very well where the error is because of how json is formatted.
[18:25:39] <StephenLynx> try formatting it somewhere
[18:25:51] <StephenLynx> that should allow you to see better where you made a mistake.
[18:27:45] <flok420> oh hang on, it indeed has nothing to do with that math I added. if I remove it (and only group on name) it still complains about a string! ok, thanks so far, I now have something to work with
[19:31:38] <najah> Hi all, I'm a young sysadmin and I use mongodb with docker. I would like to know some things about the journal in mongodb: what are the pros and cons of the journal for a small application? Does it work like an RDBMS? I mean, can we restore to a specific point in the journal from before a human error? Thanks very much
[19:33:42] <treta> I'm getting this message all the time, it's not fun anymore "Caused by: com.mongodb.MongoTimeoutException: Timed out while waiting for a server that matches AnyServerSelector{} after 1000 ms"
[20:10:40] <frenchiie> might someone be able to help me with this question? http://stackoverflow.com/questions/32444803/pymongo-parameters-of-function-find-one-and-update
[20:53:58] <sqram> Is it possible to prepend a value to a document's field if it's an array?
[20:56:05] <sqram> for instance, if i have a document like {id:4, children: [4,3] } ... can i update the values of children in a single update command? or do i have to findOne({id:4}), alter the children array, and then update
[21:00:15] <StephenLynx> you can update elements in arrays.
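Since MongoDB 2.6 a prepend can be done in one update with $push + $each + $position, no findOne round-trip needed. A sketch of the update document for sqram's example (the value 9 being prepended is made up):

```javascript
// Prepend 9 to `children` in a single call:
// db.coll.update({ id: 4 }, update) turns children [4, 3] into [9, 4, 3].
const update = {
  $push: {
    children: { $each: [9], $position: 0 }
  }
};
console.log(JSON.stringify(update));
```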
[21:01:25] <sqram> now going a step further... say i want to update multiple documents - determined by their unique ids. is it possible?
[21:02:29] <sqram> for instance, toupdate = [3,7,8,5] <-- these would be id fields of different documents. must i run a loop to update each one, or can a single update call do it?
[21:02:51] <sqram> i did see multi: true in the docs, but it seems like it would have to be a field value shared among the documents. these are unique
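The values don't have to be shared: $in matches any document whose field is in the list, and multi: true makes the update apply to every match. A sketch using sqram's ids (the $set payload is hypothetical):

```javascript
// One call covers all four distinct ids.
const toupdate = [3, 7, 8, 5];
const query = { id: { $in: toupdate } };
const update = { $set: { flagged: true } };

// e.g. db.coll.update(query, update, { multi: true })
console.log(JSON.stringify(query));
```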
[21:48:35] <Derick> i believe we have a difference to the RFC...
[21:49:22] <Derick> mprelude: pasting that in a PM, if you don't mind?
[21:49:29] <mprelude> Literally the only part I'm really stuck on is what format these hashes should be in. I have to do logical XORs and re-hashes, which could be on binary, decimal, octets, or hex, idk.