[00:23:57] <jgornick> StephenLynx: I finally found a solution to what I was trying to do: https://gist.github.com/jgornick/eace131a44a2b594d51f1b3d9f94e6ca
[00:26:28] <jgornick> Because $pull is pulling items from the history array that match a condition.
[00:26:38] <jgornick> Meaning it pulls all items that match that condition, not just the first.
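[Editor's note] jgornick's point is that `$pull` removes every array element matching the condition. A minimal plain-JavaScript sketch of that semantics; the `history` field name comes from the discussion, the element shape is assumed:

```javascript
// $pull: { history: { status: "failed" } } removes EVERY matching element.
// Plain-JS equivalent of that behavior (element fields are illustrative):
const doc = {
  history: [
    { status: "failed", at: 1 },
    { status: "ok", at: 2 },
    { status: "failed", at: 3 },
  ],
};

// Keep only elements that do NOT match the pull condition.
doc.history = doc.history.filter((item) => item.status !== "failed");

console.log(doc.history); // only the non-matching element remains
```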
[00:45:29] <klaxa> hi, i must be stupid: https://gist.github.com/klaxa/04cb745b9955ab45a85464f19cc8da3d
[00:45:41] <klaxa> i'm getting duplicate key errors for non-duplicate keys?
[00:46:26] <klaxa> the first file demonstrates how i get the duplicate key error, the second file is the index i created, and the third file is the python code (pymongo) i used to create said index (not that it should really matter)
[01:03:19] <Boomtime> mongodb doesn't care what the value is, it just indexes whatever is there - when you say 'text' you are asking mongodb to explicitly inspect the type and deconstruct it as a tokenized string field
[01:05:02] <klaxa> yeah, i get it now, text is a special case of string (which is just a special case of data)
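[Editor's note] Boomtime's point is that a `"text"` index doesn't index the raw value; it tokenizes the string. A crude plain-JavaScript stand-in for that tokenization (MongoDB's real analyzer also stems words and drops stop words, so this is a deliberate oversimplification):

```javascript
// Crude stand-in for text-index tokenization: lowercase, split on non-letters.
function tokenize(s) {
  return s.toLowerCase().split(/[^a-z]+/).filter(Boolean);
}

// Two entirely different strings can still share index tokens:
const a = tokenize("Duplicate key errors");
const b = tokenize("A key to the city");

const shared = a.filter((t) => b.includes(t));
console.log(shared); // both strings produce the token "key"
```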
[01:48:30] <sector_0> I have a database that stores user information, and I have need for a user id (or something to that effect), should I use the _id field for this, rather than make a separate userID field?
[01:49:31] <sector_0> are there any reasons not to do this?
[01:49:52] <sector_0> ..keeping in mind this field gets exposed to the client
[01:51:21] <StephenLynx> if you are fine with the _id format
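[Editor's note] the "_id format" caveat: default _ids are 12-byte ObjectIds, serialized as 24 hex characters. If that value is exposed to clients, a common convention (not a driver API) is to sanity-check client-supplied ids before querying with them:

```javascript
// A default ObjectId serializes to exactly 24 hexadecimal characters.
function looksLikeObjectId(id) {
  return typeof id === "string" && /^[0-9a-f]{24}$/i.test(id);
}

console.log(looksLikeObjectId("507f1f77bcf86cd799439011")); // true
console.log(looksLikeObjectId("12345")); // false
```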
[08:30:05] <shayla> Hi guys. I've got the following query: http://pastebin.com/BpR1GNPu What I need is to make the count ($sum: 1) apply only to documents that satisfy some $match condition, but I don't want to add it to the query's general $match, just to the count. Is there a way to do it in the same query, or do I need two distinct queries?
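[Editor's note] the usual answer to shayla's question is a conditional count inside `$group` via `$sum` + `$cond`, which avoids a pipeline-wide `$match`. A sketch of the stage plus a plain-JS model of what it computes; the `$family`/`$product` field names are illustrative, taken loosely from her later message:

```javascript
// Conditional count inside $group, without filtering the whole pipeline.
// Field names here are assumptions, not from the pastebin.
const groupStage = {
  $group: {
    _id: "$family",
    matched: { $sum: { $cond: [{ $eq: ["$product", 99] }, 1, 0] } },
    total: { $sum: 1 },
  },
};

// Plain-JS model of what $sum + $cond computes over one group:
const docs = [{ product: 99 }, { product: 7 }, { product: 99 }];
const matched = docs.reduce((n, d) => n + (d.product === 99 ? 1 : 0), 0);
console.log(matched, docs.length); // 2 of 3 documents satisfy the condition
```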
[09:18:28] <alias_> Hi is there any way to have mongo log in gelf or json format?
[09:44:58] <jelle> I have an issue with showing some stats data to customers, the initial query is slow, but doing it the second time it's super fast. Any tips how I can make the initial query faster? (I'm using an HDD-backed mongodb)
[09:45:23] <jelle> I haven't checked if the first query has a lot of page faults, but I guess it does
[10:03:27] <jelle> nevermind, seems I did set some wrong indexes
[11:46:37] <kknight> I have made an app. Now I want to make a new collection for a model. How do I do that?
[13:23:21] <shayla> Hi guys, I'm having a problem working with mongodb. This is what I do for the family field: {$sum: { $cond: [ { $eq: ["$content.products.product", 99] }, 1, 0 ] } } . I need to do that query in php, so I do something like 'family' => ['$sum' => ['$cond' => ['$eq' => ['$content.products.product', 99]], 1, 0]]
[13:37:27] <jayjo_> I have a field in my document that is a UTC timestamp in seconds (since the epoch). Is there a way to interact with dates in a query document, or do I have to work out what certain date marks are in seconds and go from there? For a concrete example: getting all records from the last day, or something similar
[13:41:27] <StephenLynx> store as an actual date object.
[13:41:34] <StephenLynx> instead of a string representing a date.
[13:44:06] <jayjo_> This data is from a provider that exports to json. Is there a way to have it automatically convert? Or do I run something periodically to convert it?
[13:47:33] <jayjo_> But at least the field is an integer
[13:49:16] <StephenLynx> that depends on your driver.
[13:49:33] <StephenLynx> with the node.js driver you just give it a date object and the driver handles the conversion.
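[Editor's note] the conversion itself is a one-liner: JavaScript's `Date` numeric constructor expects milliseconds, so a seconds-since-epoch value (as in jayjo_'s data) needs a `* 1000` before it can be stored as a real date:

```javascript
// Convert a seconds-since-epoch value to a Date object.
// Date's numeric constructor expects milliseconds, hence the * 1000.
function epochSecondsToDate(seconds) {
  return new Date(seconds * 1000);
}

const d = epochSecondsToDate(0);
console.log(d.toISOString()); // 1970-01-01T00:00:00.000Z
```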
[14:08:12] <m1dnight_> when I do db.collection.find().sort(..);, does that do the sort in the db engine, or does it first fetch the entire table and then sort in memory?(outside of the db)
[14:32:02] <kurushiyama> m1dnight_ That depends on your setup and your indices. (and order in which you give the sort params)
[14:54:26] <jayjo_> So, the data is already inserted using mongoimport, and the _t field is an integer. I can use .forEach() in the mongo console. Is there a way to automate this within mongo or do I just set a cronjob to execute periodically? Like in sql I could use triggers or view functions - is there an analog?
[14:54:50] <jayjo_> to be clear- still trying to get an integer into ISODate()
[14:59:21] <StephenLynx> you will have to handle that on application code.
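[Editor's note] for a one-off backfill like jayjo_'s, the usual shell idiom is to walk the collection with a cursor `forEach` and rewrite the field per document. The conversion step, sketched over an in-memory array so the logic is checkable; the `_t` field name is from the discussion, the documents are made up:

```javascript
// In-memory model of the backfill: rewrite integer _t fields as Dates.
// In the real mongo shell the same logic runs per document inside
// find().forEach(), issuing an update that sets _t to new Date(_t * 1000).
const collection = [
  { _id: 1, _t: 0 },
  { _id: 2, _t: 86400 },
];

for (const doc of collection) {
  if (typeof doc._t === "number") {
    doc._t = new Date(doc._t * 1000); // seconds -> Date
  }
}

console.log(collection[1]._t.toISOString()); // 1970-01-02T00:00:00.000Z
```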
[15:10:00] <jayjo_> Can I use anonymous functions in the javascript shell? like this: { _t : { $gt: function() { var d = new Date(); d.setDate(d.getDate() - 2); return d.getTime(); } } }
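[Editor's note] a query document is plain data; a function used as a comparison value is not evaluated server-side (`$where` aside). The usual approach is to compute the cutoff first, then embed the resulting number, which is what jayjo_'s snippet was reaching for:

```javascript
// Query values are plain data, so compute the cutoff before building the query.
// _t is stored in seconds, so the cutoff is expressed in seconds too.
const cutoff = Math.floor(Date.now() / 1000) - 2 * 24 * 60 * 60; // two days ago

const query = { _t: { $gt: cutoff } };

console.log(typeof query._t.$gt); // "number", not a function
```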
[15:59:55] <scmp> Hi, what does it mean when explain() returns a winningPlan with "EOF" ?
[16:05:29] <scmp> end of what stream? the docs could be a little bit more specific. And stream is not mentioned anywhere else.
[16:06:05] <scmp> it's a large collection, a simple query checking that one field exists and another does not, and a combined index on both fields, not sparse.
[16:06:12] <hardwire> look at what the false description says
[16:11:04] <hardwire> The docs DO cover it. But I think you may want the functionality to meet some expectations that aren't described.. and I'm not sure there's more to say about that flag.
[16:12:17] <scmp> well, there is no limit() on that query
[16:13:27] <scmp> what i expect is that winningPlan mentions the index and i don't understand why it's not able to select the index
[16:15:27] <scmp> does mongo have an default limit() ?
[16:15:57] <scmp> it is a large collection, but it's also a very simple query and index
[16:21:26] <idioglossia> StephenLynx, he means if the collection is super big, does mongo limit the query to return, say, the first N items (where N is extremely large but not infinite) instead of returning all of them
[19:20:51] <t3chguy> Hello, I'm just getting acquainted to Mongo, and am wondering whether I can use db.collection.save with $currentDate to be able to track a modifiedDate value with the document;- Cheers
[19:20:59] <StephenLynx> so you just $push it and that's it.
[19:21:09] <StephenLynx> if you want it sorted, you will have to do it on application code.
[19:21:28] <jgornick> StephenLynx: Yeah, that's what I'll have to do because there are situations where I want it sorted.
[19:21:49] <StephenLynx> had the same thing one time.
[19:22:36] <StephenLynx> t3chguy, that sounds a lot like a standard upsert.
[19:26:45] <t3chguy> StephenLynx: so would I just shove the whole document as the $set and its _id as the query?
[19:29:40] <StephenLynx> hm, here's the tricky part
[19:29:54] <StephenLynx> for you to use an _id you would have to generate an ObjectId
[19:30:06] <StephenLynx> and no, you can also use $setOnInsert
[19:47:12] <t3chguy> StephenLynx: all my documents are pre-existing, so I wouldn't even need to upsert. I have a task that needs to go through each and update some, so .save would have been ideal for its syntax of just being the document itself, but I also need to update the modifiedDate
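[Editor's note] since `save`-style full-document replacement can't be combined with update operators, the usual pattern for t3chguy's case is an `updateOne` that pairs `$set` with `$currentDate`. A sketch of the update document; `fieldsToChange` and its contents are illustrative, `modifiedDate` is his field name:

```javascript
// Update document combining the changed fields with a server-maintained
// timestamp. $currentDate tells the server to stamp modifiedDate at
// update time; fieldsToChange stands in for whatever actually changed.
const fieldsToChange = { title: "updated" };

const update = {
  $set: fieldsToChange,
  $currentDate: { modifiedDate: true },
};

console.log(Object.keys(update)); // [ '$set', '$currentDate' ]
```

In the real call this would be something like `db.collection.updateOne({ _id: doc._id }, update)`.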
[19:52:23] <sequence> Hey channel, is there an existing solution for resilience in the face of a replica set primary being stepped down? Something like a smart driver that will reconnect and reissue queries, or a MITM proxy (like dvara)?
[19:55:26] <GothAlice> sequence: AFAIK every official driver already does this.
[19:56:37] <sequence> GothAlice: looking at something like https://emptysqua.re/blog/save-the-monkey-reliably-writing-to-mongodb/, it seems like reconnecting and reissuing is something the (Python in this case) client has to take care of.
[19:57:10] <sequence> Even though that is quite old, I'm still seeing Exceptions with our use of the Java driver (3.2.2)
[19:57:51] <sequence> `rs.stepDown()` triggers a `com.mongodb.MongoSocketReadException: Prematurely reached end of stream`
[19:58:26] <GothAlice> sequence: http://api.mongodb.com/python/current/api/pymongo/mongo_client.html < note the paragraph under the MongoClient() constructor section at the top starting with "The client object is thread-safe and has connection-pooling built in."
[19:58:56] <GothAlice> You *do not* want the driver automatically re-trying every query. The results could be disastrous.
[19:59:39] <Derick> GothAlice: no driver should attempt to re-issue a write when a connection breaks down, or a new node gets elected as primary
[20:01:33] <GothAlice> Derick: Ah, found it. Indeed, I was referring primarily to the "reconnect" part of the query, not the "reissue" part.
[20:03:50] <GothAlice> sequence: At work we divide our queries into a matrix of risk. One dimension revolves around write concern, another around pattern of retry. From "no need to retry, failure is OK" through "retry is trivial, just repeat the operation" (a la $set), to "yowza, we need to run some custom code to determine what to do next".
[20:04:33] <GothAlice> (Such as checking a document version number and calculating a new atomic operation to "catch up".)
[20:16:40] <tesjo> Is there a db query called offset? Could not find any in the docs.. But I don't get an error when using offset. I am using mongoid
[20:37:57] <pokEarl> do you lose the ordering of documents or something when you do a compressed mongodump?
[20:40:38] <pokEarl> i have some script i'm running to generate test data; it generates the test data into a database, then creates a dump of that test data and uploads it somewhere. When I run tests against the generated database they all pass, but when i download the uploaded dump, restore it and run against that, they fail. but guess its something else weird thats going on then :(
[20:48:42] <pokEarl> does something else change then? just ran tests against a database and everything passes, then did a manual mongodump, dropped the database, did a mongorestore, and now they fail
[20:49:51] <pokEarl> http://pastebin.com/J7y5qfL9 the dump/restore commands
[20:52:18] <pokEarl> ah there's a --maintainInsertionOrder flag, maybe that helps
[21:16:11] <charnel> how can I set created_at to created_at.dd ?