[00:16:31] <jaraco> ^ that's what I did, and it seemed to work.
[02:41:55] <Guest90> hello, I have a question about the usage of the libmongoc driver: when I update a nonexistent document with an "$inc" operator, mongod returns ok and "nModified" is zero, while the driver reports success. Is that correct?
[02:54:40] <xdotcommer> can I upsert with an $out from the aggregation framework?
[03:16:29] <Boomtime> "xdotcommer: can I upsert with an $out from the aggregation framework?"
[03:16:37] <Boomtime> what do you want to actually do?
[03:16:47] <xdotcommer> Boomtime: i realize that $out just replaces the collection
[03:16:57] <xdotcommer> I was going to use it for aggregation
[03:17:09] <xdotcommer> also realized it can't update a capped collection
[03:17:19] <Boomtime> hmm. i thought $out did an update, but i could be wrong
[03:17:37] <xdotcommer> nope, according to the docs it replaces the existing collection if it exists
[03:20:25] <cheeser> yeah. no updates on $out (yet)
[03:20:58] <xdotcommer> trying to do something naughty... I want to take "new Date()", subtract 24 hours (in milliseconds), and use that as a limit in an aggregation search
[04:15:17] <xdotcommer> Boomtime: something like this
[04:24:06] <Boomtime> "{ "$subtract": [new Date(), 86400000] }" <- this is a constant, why not just give the value you mean rather than make the database work it out?
[04:24:38] <Boomtime> the client can as easily calculate it as the server, but.. meh, i suppose
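What Boomtime suggests can be sketched in plain JavaScript: compute the cutoff once on the client and pass the constant into the pipeline, instead of asking the server to evaluate $subtract. The `createdAt` field name and the pipeline shape are hypothetical, just for illustration:

```javascript
// Compute "24 hours ago" on the client instead of using $subtract server-side.
// 86400000 ms = 24 h * 60 min * 60 s * 1000 ms
const DAY_MS = 24 * 60 * 60 * 1000;
const cutoff = new Date(Date.now() - DAY_MS);

// Hypothetical pipeline using the precomputed constant in a $match stage:
const pipeline = [
  { $match: { createdAt: { $gte: cutoff } } },
];
```

The server then compares against a fixed value rather than recomputing the expression per evaluation.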
[06:13:48] <cheneydeng> hello, I have a question about the usage of the libmongoc driver: when I update a nonexistent document with an "$inc" operator, mongod returns ok and "nModified" is zero, while the driver reports success. Is that correct?
[06:18:37] <Boomtime> cheneydeng: what is the query portion?
[06:19:14] <Boomtime> "nModified" tells you how many documents were affected; your command ran fine (it successfully modified all matching documents, which was zero)
[06:20:40] <cheneydeng> so zero means the update was successful? I'm wondering how I can get a failure when the number of updated records is zero
[06:22:08] <Boomtime> cheneydeng: there is an ok field too
[06:24:01] <cheneydeng> @Boomtime yeah, I know; it seems the response is normal/expected. But I need a way to tell, through the libmongoc driver, that no record was updated
[06:24:51] <cheneydeng> Boomtime: is there a way to achieve that, or should I do a pull request to fix it if this is the normal behaviour?
[06:26:29] <Boomtime> cheneydeng: you can tell the status of the command from the "ok" field, and you can tell how many documents were affected with the nModified field.. what don't you understand?
[06:27:43] <cheneydeng> Boomtime: I'm querying mongod through the libmongoc driver; the driver is third-party code to me, you get it? The driver can see the status and nModified, while I can't
[06:28:59] <Boomtime> do you mean this: http://api.mongodb.org/c/current/
[06:33:40] <Boomtime> if it returns false then something went wrong and the command may not have (or may have, who knows) made it to the server
[06:34:44] <cheneydeng> Boomtime: Oh, yes, it returns true. So you understand it? I don't know if the author of that lib is around, which is why I asked the question here.
[06:36:24] <Boomtime> right, so to find out what the command actually resulted in at the server (more than just "true" meaning whether it was executed correctly) then you need to call mongoc_collection_get_last_error
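The decision logic Boomtime describes can be sketched as a small function (in JavaScript here for brevity; in the C driver the same `ok`/`nModified` fields are read from the reply retrieved via mongoc_collection_get_last_error). The function name and the returned labels are hypothetical:

```javascript
// Interpret a MongoDB write reply: "ok" says whether the command executed,
// "nModified" says how many documents actually changed.
// Returns one of three hypothetical labels: "error", "no-match", "modified".
function classifyUpdateReply(reply) {
  if (!reply || reply.ok !== 1) return "error"; // command itself failed
  if (reply.nModified === 0) return "no-match"; // ran fine, touched nothing
  return "modified";                            // documents were changed
}
```

cheneydeng's case is `classifyUpdateReply({ ok: 1, nModified: 0 })`: the command succeeded, but no document matched, and the application decides whether to treat that as a failure.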
[07:22:46] <fatih> how can I distribute those 30,000 documents among 10 workers
[07:22:59] <kali> fatih: well, you have several options
[07:23:07] <fatih> that means each will have 3,000 documents, and there will be no race condition
[07:24:04] <kali> fatih: or at least 2 :) 1/ you can partition the collection beforehand and associate each part to one worker
[07:25:31] <kali> 2/ you can use a document field to act as a document lock: in its main loop, a worker will findAndModify a document that does not yet have the "magic" field and set it to something like "running"
[07:25:39] <kali> then process it, and finally set it to "done"
[07:26:16] <kali> if the findAndModify returns nothing, you exit the worker loop
[07:26:26] <fatih> kali: I was thinking about the second approach. you mean all 10 should start to iterate over the collection and each pick a document that is free?
[07:41:17] <kali> in the findAndModify, you look for jobs that are unassigned (locked_by:null) or stale (done_at:{$lt:now()-5min})
[07:41:20] <fatih> "Eventually I hit a point where Mongo did become a bottleneck (the consumer contention got too much) and I moved to a combination of Beanstalkd and MongoDB. MongoDB now holds the data and Beanstalkd holds the IDs in queue."
[07:41:55] <kali> fatih: i have run a million-user site with mongodb as the queue system. don't bloat the infrastructure for nothing. 30k docs is nothing.
[07:42:16] <fatih> you are right, really I don't want to over engineer too
[07:42:26] <fatih> I'm just asking because I'm curious and don't want to do the wrong thing
[07:43:02] <fatih> because an additional stack causes its own problems too; that's why I asked here, to find out how to solve it
[07:43:23] <fatih> but thanks! i've got some clues now, let me implement a prototype
[07:43:50] <kali> fatih: there is one more thing you may want to consider
[07:44:03] <kali> fatih: if a worker crashes, it will keep the lock forever
[07:45:34] <kali> if that's of interest, one of my implementations lives here: https://github.com/kali/extropy/blob/master/core/src/main/scala/MongoLock.scala
[07:45:57] <fatih> ok, actually all I need is to use my existing assignee logic for the queueing
[07:46:28] <fatih> > one is "locked_by" (or assignee) and the other will be "done_at"
[07:46:40] <fatih> I have currently: "assignee.name": and "assignee.time"
[07:46:50] <kali> yeah, if the done_at gets too old, you can assume a timeout
[07:47:04] <fatih> assignee.name is the current holder, and assignee.time is the time the worker started on that
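kali's lock pattern can be simulated over an in-memory array to show the semantics (a sketch only; a real implementation would use findAndModify so the claim is atomic on the server, as in the linked MongoLock.scala). The `claimJob` name and `STALE_MS` constant are hypothetical; `locked_by`/`done_at` follow the conversation, read as "never claimed, or claimed but stale":

```javascript
const STALE_MS = 5 * 60 * 1000; // a lock older than 5 minutes is assumed dead

// Claim one unassigned-or-stale job for `workerId`, mimicking the filter
// {$or: [{locked_by: null}, {done_at: {$lt: now - 5min}}]}.
function claimJob(jobs, workerId, now) {
  const job = jobs.find(
    (j) => j.locked_by == null || now - j.done_at > STALE_MS
  );
  if (!job) return null;     // nothing claimable: the worker loop can exit
  job.locked_by = workerId;  // in MongoDB this set happens atomically
  job.done_at = now;         // refresh the timestamp so the lock is fresh
  return job;
}
```

The stale check is what handles kali's crashed-worker case: a dead worker's lock is reclaimed once `done_at` falls behind the timeout.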
[10:04:00] <T3h_N1k0_> Hi, I've got a problem with a mongo cluster; the Django application throws this error: "OperationFailure: database error: socket exception [SEND_ERROR] for 10.5.8.83:27017"
[10:04:13] <T3h_N1k0_> but I can't find anything in the mongo node log
[10:53:56] <phaz3r> '\0' not allowed in key: \0...
[10:54:04] <phaz3r> what's that, and how do I avoid this error message?
[11:22:17] <rspijker_> phaz3r: null character is not allowed in keys, is what it looks like
[11:22:28] <rspijker_> you can avoid it by not putting that character in keys...
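If the NUL character is coming from data you don't control, the keys can be sanitized before the document reaches the driver. A minimal sketch; `sanitizeKeys` is a hypothetical helper name:

```javascript
// BSON forbids the NUL character (\0) in field names.
// Strip it from every top-level key before handing the document to the driver.
function sanitizeKeys(doc) {
  const clean = {};
  for (const [key, value] of Object.entries(doc)) {
    clean[key.replace(/\0/g, "")] = value;
  }
  return clean;
}
```

A nested-document version would recurse into object values; this flat form is enough to show the idea.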
[11:52:22] <Hyperking> would mongodb be better than writing a REST api? both use json to structure data... not sure, but I want a slim stack
[11:53:36] <jordana> Hyperking, wouldn't you write a REST API on top of your MongoDB database?
[11:53:50] <jordana> Either way there'll be some processing needed
[11:53:50] <Hyperking> I'm developing a news website. Data will be dynamically fed into the site either by a JSON REST output or by connecting to a mongodb server.
[11:58:28] <Hyperking> jordana: I'm not completely sure. is MongoDB serving data to a rest service and then to the client? Looking for something as slim as angularjs
[12:00:23] <jordana> Hyperking: Yes, like I said, MongoDB -> REST -> client
[12:00:36] <jordana> your AngularJS app is the client in this instance
[12:07:25] <Hyperking> jordana: REST endpoints would be urls that output data? if so, would this be the same as writing a php page that outputs formatted json, like an html page?
[12:43:14] <jordana> Hyperking: Yes. Your PHP endpoint would query your MongoDB and output the documents as JSON (probably with a little processing PHP side to convert things like dates etc)
[12:46:40] <Hyperking> thanks jordana. Is the REST endpoint needed, or can it just be MongoDB -> client (php or angularjs)?
[12:47:30] <Hyperking> it's a read only site with no CRUD operations
[12:47:51] <jordana> Hyperking, your angularJS app would not directly query the database; it would request data from your API
[12:48:03] <jordana> so you need to have something that angular can talk to
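The MongoDB -> REST -> client shape jordana describes boils down to a serialization step in the endpoint: query the database, convert driver-specific types (like the dates jordana mentions), return plain JSON. A sketch of that step in JavaScript rather than PHP; `toApiJson` is a hypothetical helper name:

```javascript
// Convert a MongoDB document into the plain JSON a REST endpoint returns:
// Date objects become ISO-8601 strings the client (e.g. AngularJS) can parse.
function toApiJson(doc) {
  const out = {};
  for (const [key, value] of Object.entries(doc)) {
    out[key] = value instanceof Date ? value.toISOString() : value;
  }
  return out;
}
```

The endpoint would then emit `JSON.stringify(docs.map(toApiJson))`; the client never talks to the database directly.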
[13:23:52] <mjburgess> hi. i have two collections A,B . In A documents have a field refering to a field in B, eg. in A {"x": "SOME-WORDS", "y": "DATA"} and in B {"x":"SOME-WORDS", "q": 1, "p": 2} i would like a result which replaces SOME-WORDS in A with the full document in B, ie. {"x": { document from B... }, "y"...}
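What mjburgess describes is a join on the shared "x" field; with no server-side join operator available, it is typically done client-side (MongoDB later added $lookup for this). A sketch in plain JavaScript; `joinOnX` is a hypothetical name:

```javascript
// Replace A's "x" value with the full matching document from B,
// joining on the shared "x" field. Unmatched values are left as-is.
function joinOnX(aDocs, bDocs) {
  const byX = new Map(bDocs.map((b) => [b.x, b])); // index B by its "x" field
  return aDocs.map((a) => ({
    ...a,
    x: byX.has(a.x) ? byX.get(a.x) : a.x,
  }));
}
```

Building the Map once keeps the join at one pass over each collection instead of a nested scan.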
[13:55:38] <Frozenfire_> Hello all. I was wondering about $limit for the aggregation pipeline. Is there a way to specify no limit, i.e. give me all documents? http://docs.mongodb.org/manual/reference/operator/aggregation/limit/
[15:45:36] <__nesd> i wanted to try the compact command today and i was quite surprised when i saw that it actually made my collection take up more space (see http://pastebin.com/rchjrv7W). have you ever heard of such a case, or have an idea of why such a thing could happen?
[15:46:55] <__nesd> the indices seem to be much smaller (~50%) but the total size got bigger (~20%)
[15:57:57] <asd3syd2> i need to store about 100k vocabulary entries - is mongodb up to the task, and does it offer an equivalent to SQL LIKE queries?
[15:58:52] <cheeser> mongodb has a query language, yes...
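To flesh out cheeser's answer: the usual MongoDB equivalent of SQL LIKE is the $regex operator, and an anchored prefix regex can use an index (an unanchored one forces a scan). The `headword` field and sample words below are illustrative:

```javascript
// SQL: SELECT * FROM words WHERE headword LIKE 'foo%'
// MongoDB query document equivalent (anchored regex can use an index):
const query = { headword: { $regex: /^foo/ } };

// The same regex applied client-side, just to show what it matches:
const matches = ["foobar", "food", "barfoo"].filter((w) => /^foo/.test(w));
```

100k documents is small for MongoDB; with an index on the queried field this is well within its comfort zone.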
[17:00:51] <s2013> anyone here used dynamodb? compared to mongo?
[17:26:30] <oblio> what's the best way to handle two operations being performed on a document at the same time
[17:26:56] <oblio> e.g. i'm playing with an API that uses mongo, and 2 updates come in for the same document; the document is locked by one update so the other fails
[17:28:55] <oblio> i'm guessing this is specifically why people will hybridize mongo with a sql database, but i'm just wondering if i'm missing some terminology: is there a way to have mongo wait for one of the operations to complete and queue the other?
[17:40:38] <oblio> jordana: well, it goes a little deeper. i guess i didn't have a full scope of it.
[17:43:48] <jordana> oblio: I can't really tell from what you've said, but if you need to perform this as a transaction, from the sounds of it an application-level message queue might be better suited
[19:11:29] <wc-> hi all, i'm using the native mongodb driver for node; i can't seem to get an aggregate query to respect my cursor batchSize
[19:11:37] <wc-> i always see batchSize: 1 in the mongod log
[19:12:10] <wc-> i'm creating the cursor with the aggregate command, then calling cursor.get(err, results) and iterating over each result
[19:12:26] <wc-> has anyone ever seen this before?
[19:13:43] <wc-> this is killing performance for me, been banging my head against it pretty hard
[19:34:05] <wc-> anyone know if this is an issue worth sending an email to the nodejs mongo driver mailing list about?
[19:45:12] <derek-g> so if I need to search for some documents in mongodb by a field - what exactly happens behind the scenes? does mongo scan the entire collection?
[19:53:22] <whaley> derek-g: without an index in place, yes
[19:54:42] <derek-g> whaley: so if I have ~40,000 json documents I want to store and search by a headword field - is search going to be fast?
[19:55:13] <derek-g> whaley, (given I have an index on that field).
[19:57:03] <whaley> derek-g: that depends entirely on your definition of fast..
[19:57:34] <whaley> derek-g: but for reads it will be more performant than without an index
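The difference whaley describes can be sketched with an in-memory analogy (a simplification; MongoDB's real indexes are B-trees, not hash maps): without an index, every document is examined; with one, lookup goes straight to the match. Function names and sample data are hypothetical:

```javascript
// Unindexed: scan every document until the headword matches (O(n)).
function scanFind(docs, headword) {
  return docs.find((d) => d.headword === headword);
}

// "Indexed": build a lookup structure once, then fetch directly.
// (O(1) for a Map; a real B-tree index is O(log n), but the idea holds.)
function buildIndex(docs) {
  return new Map(docs.map((d) => [d.headword, d]));
}
```

In the shell, the actual index for derek-g's case would be created with something like `db.words.ensureIndex({ headword: 1 })` (`createIndex` in later versions); the server then maintains the lookup structure on every write.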
[19:59:26] <derek-g> whaley, honestly it's just dictionary data that will be used by a handful of people editing entries via a PHP frontend.
[19:59:57] <derek-g> whaley, i'm just thinking about the frontend searching by headword, sorting, etc.
[20:00:05] <whaley> derek-g: measure first before optimizing
[20:04:25] <whaley> derek-g: btw, I wouldn't worry much about 40k documents unless it's growing rapidly... my production system has a collection whose .count() function just returned 843,800,975 for me :)