PMXBOT Log file Viewer

#mongodb logs for Friday the 22nd of August, 2014

[00:16:31] <jaraco> ^ that's what I did, and it seemed to work.
[02:41:55] <Guest90> hello, I have a question about the usage of the libmongoc driver: when I update a non-existent item with a "$inc" operator, mongod returns ok and "nModified" is zero, while the driver reports success to me. Is that correct?
[02:54:40] <xdotcommer> can I upsert with an $out from the aggregation framework?
[03:16:29] <Boomtime> "xdotcommer: can I upsert with an $out from the aggregation framework?"
[03:16:37] <Boomtime> what do you want to actually do?
[03:16:47] <xdotcommer> Boomtime: i realize that $out just replaces the collection
[03:16:57] <xdotcommer> I was going to use it for aggregation
[03:17:09] <xdotcommer> also realized it can't update a capped collection
[03:17:19] <Boomtime> hmm. i thought $out did an update, but i could be wrong
[03:17:37] <xdotcommer> nope according to docs it replaces existing collection if it exists
[03:17:45] <Boomtime> bugger
[03:20:25] <cheeser> yeah. no updates on $out (yet)
[03:20:58] <xdotcommer> trying to do something naughty... I want to take "new Date()", subtract 24 hours, and use that as a limit in an aggregation search
[03:21:17] <xdotcommer> useful for 24 hour stats
[03:21:38] <Boomtime> map-reduce "out" is update, i think that's what i was confused by
[03:23:52] <Boomtime> xdotcommer: why is that naughty? by limit, do you mean via a "$match greater-than yesterday"?
[03:38:30] <talntid> could not obtain connection within 5.0 seconds. The max pool size is currently 1; consider increasing the pool size or timeout.
[03:38:37] <talntid> where do I change this setting at?
[03:38:49] <talntid> using ruby
[03:38:57] <talntid> mongoid
[04:15:12] <xdotcommer> db.hourly.aggregate(
[04:15:12] <xdotcommer> {$match: minute: {'$gte' : { "$subtract": {new Date(), 86400000 } } } ,
[04:15:17] <xdotcommer> Boomtime: something like this
[04:24:06] <Boomtime> "{ "$subtract": {new Date(), 86400000 } }" <- this is a constant, why not just give the value you mean rather than make the database work it out?
[04:24:38] <Boomtime> the client can as easily calculate it as the server, but.. meh, i suppose
[04:30:44] <xdotcommer> Boomtime not sure
[04:30:48] <xdotcommer> {$match: {minute: {'$gt' : { "$subtract": [new Date(), 86400000] } } } },
[04:31:28] <xdotcommer> if there is a better way then great...
[04:31:36] <xdotcommer> this does not seem to work anyway
[04:31:48] <xdotcommer> probably because I am comparing against a non-MongoDate
[04:32:02] <xdotcommer> will have to make a new MongoDate of the result first, probably
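A minimal shell sketch of the client-side approach Boomtime suggested above: compute the cutoff once in JavaScript and hand $match a plain Date, so no $subtract (and no date-type mismatch) is needed (collection and field names follow the snippet above):

    // "24 hours ago", computed on the client
    var since = new Date(Date.now() - 24 * 60 * 60 * 1000);   // 86400000 ms

    db.hourly.aggregate([
        { $match: { minute: { $gte: since } } }
        // ... further stages ($group, etc.) would follow here
    ]);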
[04:56:04] <xdotcommer> so many oddities... exception: Unrecognized pipeline stage name: 'allowDiskUse'
[04:56:45] <xdotcommer> 2.6.3
[06:13:48] <cheneydeng> hello, I have a question about the usage of the libmongoc driver: when I update a non-existent item with a "$inc" operator, mongod returns ok and "nModified" is zero, while the driver reports success to me. Is that correct?
[06:18:37] <Boomtime> cheneydeng: what is the query portion?
[06:19:14] <Boomtime> "nModified" tells you how many documents were affected; your command ran fine (it successfully modified all matching documents, which was zero)
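For reference, the same behaviour can be seen from the 2.6 shell (a sketch; the collection name is made up): an update whose query matches nothing still reports ok, just with nModified 0.

    db.items.update({ _id: "does-not-exist" }, { $inc: { counter: 1 } })
    // the shell reports something like:
    // WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })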
[06:20:40] <cheneydeng> so zero means the update was successful? I'm wondering how I can get a failure if the number of updated records is zero?
[06:21:30] <xdotcommer> Unrecognized pipeline stage name: 'allowDiskUse'
[06:21:40] <xdotcommer> its not getting passed
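That error usually means allowDiskUse ended up inside the pipeline array; in the 2.6 shell it belongs in the options document passed as the second argument, roughly (reusing the `since` cutoff from the earlier sketch):

    db.hourly.aggregate(
        [ { $match: { minute: { $gte: since } } } ],   // the pipeline array: stages only
        { allowDiskUse: true }                         // options go here, not in the pipeline
    )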
[06:22:08] <Boomtime> cheneydeng: there is an ok field too
[06:24:01] <cheneydeng> Boomtime: yeah, I know, it seems the response is expected/normal, but I need a way to tell that no record was updated through the libmongoc driver
[06:24:51] <cheneydeng> Boomtime: is there a way to achieve that or should i do a pull request to fix it if it is normal?
[06:26:29] <Boomtime> cheneydeng: you can tell the status of the command from the "ok" field, and you can tell how many documents were affected with the nModified field.. what don't you understand?
[06:27:43] <cheneydeng> Boomtime: I'm querying mongod through the libmongoc driver; this driver is third-party code for me, you get it? The driver can know the status and nModified, while I can't
[06:28:59] <Boomtime> do you mean this: http://api.mongodb.org/c/current/
[06:29:03] <Boomtime> or something else?
[06:29:46] <cheneydeng> Boomtime: yes,that's it.
[06:32:08] <Boomtime> ok, and you call mongoc_collection_update?
[06:33:10] <cheneydeng> Boomtime: yes, and this API just returns 0 in this situation, and I can't tell if it updated some records or nothing did.
[06:33:23] <Boomtime> you mean it returns true
[06:33:40] <Boomtime> if it returns false then something went wrong and the command may not have (or may have, who knows) made it to the server
[06:34:44] <cheneydeng> Boomtime: Oh, yes, it returns true. So you understand it? I don't know if anyone here is the author of that lib, so I asked the question here.
[06:36:24] <Boomtime> right, so to find out what the command actually resulted in at the server (more than just "true" meaning whether it was executed correctly), you need to call mongoc_collection_get_last_error
[06:36:38] <Boomtime> http://api.mongodb.org/c/current/mongoc_collection_get_last_error.html
[06:36:57] <Boomtime> that will give you the last "write concern document" that was returned by the server
[06:37:09] <Boomtime> for the collection object you just used
[06:37:25] <Boomtime> you should only call that function after the update returns true
[06:37:51] <Boomtime> that will give you access to the nModified field which will tell you how many documents were affected
[06:38:03] <Boomtime> (among a plethora of other details)
[06:39:19] <cheneydeng> Boomtime: wow, great!
[06:39:50] <cheneydeng> Boomtime: thanks, let me test it
[06:41:05] <Boomtime> yeah, it's not real nice, mongodb is document-oriented which translates to objects in code, but objects in C are kind of clunky
[07:17:27] <cheneydeng> Boomtime: I tested it, it works fine now, thanks so much
[07:19:23] <fatih> hi
[07:20:13] <fatih> i have a collection with approx 30,000 documents. I need a way to iterate over them and do a certain task.
[07:20:17] <fatih> the problem I have is
[07:20:28] <fatih> currently I'm running 10 workers each accessing this collection
[07:21:00] <fatih> when I put the logic to iterate over those documents in the worker, then each one will try to access it and iterate
[07:21:35] <fatih> that means duplicated work, because once the iteration starts there is a lot of side work too, like creating files, etc.
[07:21:55] <fatih> I was thinking of creating a lock so that only one worker could access it at any given time
[07:22:27] <fatih> but the problem here is, the collection size is increasing and I have 10 workers which are capable of doing it
[07:22:32] <fatih> but only one is going to work on it
[07:22:35] <fatih> now my questions is
[07:22:46] <fatih> how can I distribute those 30,000 documents among 10 workers
[07:22:59] <kali> fatih: well, you have several options
[07:23:07] <fatih> that means each will have 3000 documents, and there will be no race condition
[07:24:04] <kali> fatih: or at least 2 :) 1/ you can partition the collection beforehand and associate each part to one worker
[07:25:31] <kali> 2/ you can use a document field to act as a document lock: in its main loop, a worker will findAndModify a document that does not yet have the "magic" field and set it to something like "running"
[07:25:39] <kali> then process it, and finally set it to "done"
[07:26:16] <kali> if the findAndModify returns nothing, you exit the worker loop
[07:26:26] <fatih> kali I was thinking about the second approach, you mean all 10 should start to iterate over all collections and pick one that is free ?
[07:26:36] <fatih> *all documents
[07:26:50] <kali> yes
[07:26:57] <fatih> but is the data going to be distributed evenly ?
[07:27:21] <fatih> what if one worker picks up 20,000 and the rest is distributed among the other 9
[07:27:55] <kali> well, each worker will pick one and start working on it. during this time the other will pick some
[07:28:08] <kali> it should be roughly even
[07:28:44] <fatih> hm
[07:28:46] <kali> you may want to set up an index on the "magic" field to avoid repetitive scans of the 30k docs
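A rough sketch of option 2 as kali describes it (collection and field names are illustrative), run in each worker's main loop:

    // claim one unprocessed document and mark it "running"
    var doc = db.docs.findAndModify({
        query:  { magic: { $exists: false } },
        update: { $set: { magic: "running" } },
        new:    true
    });

    if (doc === null) {
        // nothing left to claim: exit the worker loop
    } else {
        // ... do the real work (create files, etc.) ...
        db.docs.update({ _id: doc._id }, { $set: { magic: "done" } });
    }

    // the index kali mentions, so the claim query doesn't scan all 30k docs
    db.docs.ensureIndex({ magic: 1 });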
[07:29:21] <fatih> I have already a field like that
[07:29:24] <fatih> called "assignee"
[07:29:36] <fatih> which I'm using already with findAndModify
[07:30:26] <fatih> actually what I need is a task queue, but the data is in mongodb
[07:30:37] <fatih> which makes it hard to put it into say rabbitmq
[07:31:18] <kali> you don't need rabbitmq there
[07:32:02] <kali> 30k docs, and it sounds like your jobs are on the heavy side. mongodb will be fine
[07:32:59] <fatih> I'm just thinking of evenly distributing the task, workers are going to access randomly
[07:33:04] <kali> https://blog.serverdensity.com/replacing-rabbitmq-with-mongodb/
[07:33:18] <fatih> the 10 workers might start the iteration at 5-second intervals
[07:33:28] <fatih> which makes distributing the tasks evenly hard
[07:33:38] <fatih> or I need to let them coordinate so that they start at the same time
[07:34:08] <kali> do you really need a perfect partition among workers ?
[07:34:10] <Boomtime> why does the distribution have to be perfectly even?
[07:34:14] <Boomtime> *snap*
[07:34:17] <kali> Boomtime: i was first.
[07:34:37] <fatih> they are going to have a network connection for each of those documents
[07:34:45] <fatih> means if one just gets more than the others
[07:34:53] <fatih> that box will exhaust network connections
[07:35:21] <fatih> I've seen that before and it makes things hard, otherwise you are right
[07:35:35] <Boomtime> lolwut?
[07:35:37] <fatih> there can be differences, but what is the difference ?
[07:35:50] <kali> fatih: you mean each worker will work on several documents at the same time ?
[07:36:06] <fatih> kali: no because of the lock there will be no data race
[07:36:35] <fatih> kali: let me say it again, each worker will work at the same time, but they will not work on the *same* documents
[07:36:46] <fatih> what you described above fixes the data race which is nice
[07:36:54] <fatih> basically by using a field as a lock
[07:37:14] <fatih> let me explain just one case:
[07:37:38] <fatih> say I have one document with the id 123, this was picked by worker 1 and the work on that is finished, the field is cleaned
[07:37:54] <fatih> now imagine that worker 2 gets it immediately after worker 1 is done with it
[07:38:03] <kali> you need to set the field to "done",
[07:38:06] <kali> not clean it
[07:38:21] <fatih> what I'm saying is (or trying to explain), I need a 5-minute interval there
[07:38:29] <fatih> kali: yeah saying done
[07:38:30] <fatih> hmm
[07:38:42] <fatih> maybe I can add also a timestamp and I can look at it
[07:38:51] <fatih> if it's done and it has been already 10 minutes
[07:38:56] <fatih> one can pick it up again right ?
[07:39:00] <kali> you want the docs to be processed once every 10 minutes ?
[07:39:06] <fatih> yeah
[07:39:09] <fatih> or say 5
[07:40:25] <kali> then yes, use two fields: one is "locked_by" (or assignee) and the other will be "done_at"
[07:40:54] <Boomtime> .. and compound index
[07:40:56] <fatih> alright, sounds like a plan
[07:40:57] <fatih> nice
[07:41:02] <fatih> kali: https://blog.serverdensity.com/replacing-rabbitmq-with-mongodb/
[07:41:15] <fatih> one of the commenters said:
[07:41:17] <kali> in the findAndModify, you look for unassigned jobs (locked_by:null) and done_at:{$lt:now()-5min}}
[07:41:20] <fatih> "Eventually I hit a point where Mongo did become a bottleneck (the consumer contention got too much) and I moved to a combination of Beanstalkd and MongoDB. MongoDB now holds the data and Beanstalkd holds the IDs in queue."
[07:41:55] <kali> fatih: i have run a million-user site with mongodb as a queue system. don't bloat the infrastructure for nothing. 30k docs is nothing.
[07:42:16] <fatih> you are right, really I don't want to over engineer too
[07:42:26] <fatih> I'm just asking because I'm curious and don't want to do the wrong thing
[07:43:02] <fatih> because an additional stack just causes problems too; that's why I asked here, to see how to solve it
[07:43:23] <fatih> but thanks! i've now got some clues, let me implement a prototype
[07:43:50] <kali> fatih: there is one more thing you may want to consider
[07:44:03] <kali> fatih: if a worker crashes, it will keep the lock forever
[07:44:13] <fatih> yeah I've thought that
[07:44:35] <fatih> I have an assignedAt field and a timeout
[07:44:39] <kali> ok.
[07:44:52] <fatih> say if it's more than 1 minute, I assume that worker is dead
[07:45:04] <fatih> and using that documentation
[07:45:34] <kali> if that's of interest, one of my implementations lives here: https://github.com/kali/extropy/blob/master/core/src/main/scala/MongoLock.scala
[07:45:57] <fatih> ok actually all I need is, to use my existing assignee logic for queueing
[07:46:28] <fatih> > one is "locked_by" (or assignee) and the other will be "done_at"
[07:46:40] <fatih> I have currently: "assignee.name": and "assignee.time"
[07:46:50] <kali> yeah, if the done_at gets too old, you can assume a timeout
[07:47:04] <fatih> assignee.name is the current holder, and assignee.time is the time the worker started on that
[07:47:06] <kali> no need for two fields
[07:47:18] <fatih> yeah why ?
[07:48:21] <kali> because if done_at is old, it hints at a worker crash, without requiring the assignedAt
[07:48:57] <fatih> ah ok, but `done_at` is also another field
[07:49:22] <fatih> therefore, I need two fields, one is called "locked_at" and one is called "done_at"
[07:49:25] <fatih> is that correct ?
[07:49:34] <fatih> *locked_By
[07:50:19] <kali> i think so, yeah
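Putting the two-field scheme together, a hedged sketch (the 5-minute window and the field names come from the discussion; "worker-1" and the collection name are placeholders):

    var fiveMinAgo = new Date(Date.now() - 5 * 60 * 1000);

    // claim a job that is unassigned and hasn't been processed in the last 5 minutes
    var job = db.docs.findAndModify({
        query:  { locked_by: null, done_at: { $lt: fiveMinAgo } },
        update: { $set: { locked_by: "worker-1" } },
        new:    true
    });

    // ... do the work, then release the lock and stamp completion
    db.docs.update({ _id: job._id }, { $set: { locked_by: null, done_at: new Date() } });

    // the compound index Boomtime mentions keeps the claim query cheap
    db.docs.ensureIndex({ locked_by: 1, done_at: 1 });

    // crash recovery, per kali: if locked_by is still set but done_at is very old,
    // the worker probably died and the document can be reclaimed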
[07:50:32] <fatih> alright
[07:51:08] <fatih> thanks a lot, I'm going to check those options
[07:51:59] <kali> you're welcome
[10:04:00] <T3h_N1k0_> Hi, I've got a problem with a mongo cluster; the Django application throws this error "OperationFailure: database error: socket exception [SEND_ERROR] for 10.5.8.83:27017"
[10:04:13] <T3h_N1k0_> but I can't find anything in the mongo node log
[10:04:39] <T3h_N1k0_> does anyone have an idea ?
[10:04:39] <jordana> Was it ever working?
[10:04:43] <T3h_N1k0_> yes
[10:04:51] <T3h_N1k0_> and it is still working
[10:05:00] <T3h_N1k0_> but some requests throw this error
[10:05:37] <jordana> Has that always happened?
[10:05:44] <T3h_N1k0_> no
[10:05:55] <jordana> What's changed since?
[10:05:57] <T3h_N1k0_> it started early this morning
[10:06:03] <rspijker> if there is nothing in the mongo log, it might be a network issue?
[10:06:21] <rspijker> does that show as underlined for you guys as well? :/
[10:06:25] <jordana> Yeah
[10:06:27] <jordana> no
[10:06:27] <jordana> what?
[10:06:31] <jordana> one message was
[10:06:39] <jordana> that last one doesn't
[10:07:12] <rspijker> hmm, might have hit a weird key combo on my client then
[10:07:24] <jordana> T3h_N1k0_, yeah it could be a network issue but it could be a driver issue or issue with the host running the django instance
[10:07:41] <T3h_N1k0_> rspijker: we thought it could be a network issue too, but the network seems all good
[10:07:44] <jordana> whats your django host running?
[10:07:48] <jordana> distro?
[10:07:58] <T3h_N1k0_> all our servers are Ubuntu
[10:08:30] <T3h_N1k0_> 10.04.03
[10:09:32] <jordana> You don't run iptables do you?
[10:09:45] <T3h_N1k0_> yes I do
[10:10:20] <rspijker> what’s the connection count like on your mongo instance?
[10:10:42] <T3h_N1k0_> around 400
[10:10:47] <T3h_N1k0_> little less
[10:11:18] <rspijker> hmmm, and if you do ulimit -a on the server?
[10:12:36] <jordana> if you login to mongo and do
[10:12:37] <jordana> db.serverStatus().connections
[10:12:42] <jordana> what are your max available?
[10:13:24] <T3h_N1k0_> http://pastebin.com/u79DNrJX
[10:13:53] <rspijker> open files is a bit low at 1024
[10:14:30] <T3h_N1k0_> http://pastebin.com/rUrYmQ77
[10:16:01] <rspijker> it might be your open files T3h_N1k0_ http://docs.mongodb.org/manual/reference/ulimit/
[10:16:25] <rspijker> open files is misleading, since it’s actually file descriptors which, I think, also govern things like sockets...
[10:17:21] <T3h_N1k0_> OK
[10:17:32] <T3h_N1k0_> I'll keep that in mind if the problem comes back
[10:17:36] <T3h_N1k0_> thank you !
[10:53:53] <phaz3r> hi there
[10:53:56] <phaz3r> '\0' not allowed in key: \0...
[10:54:04] <phaz3r> what's that, and how do I avoid this error message?
[11:22:17] <rspijker_> phaz3r: null character is not allowed in keys, is what it looks like
[11:22:28] <rspijker_> you can avoid it by not putting that character in keys...
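If the keys come from external input, one low-tech option (a sketch, not a library feature) is to strip the NUL characters before building the document:

    function cleanKey(k) {
        return k.replace(/\0/g, "");   // drop embedded NUL characters
    }

    var doc = {};
    doc[cleanKey(userSuppliedKey)] = someValue;   // userSuppliedKey / someValue are placeholders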
[11:52:22] <Hyperking> would mongodb be better than writing a REST api? both use json to structure data... not sure, but I want a slim stack
[11:53:36] <jordana> Hyperking, wouldn't you write a REST API on top of your MongoDB database?
[11:53:50] <jordana> Either way there'll be some processing needed
[11:53:50] <Hyperking> I'm developing a news website. Data will be dynamically fed into the site either by a json rest output or by connecting to a mongodb server.
[11:54:54] <jordana> MongoDB -> REST API (input streams, output streams) -> clients
[11:58:28] <Hyperking> jordana: Im not completely sure. is MongoDB serving data to a rest service and then to the client? Looking for something as slim as angularjs
[12:00:23] <jordana> Hyperking: Yes, like I said, MongoDB -> REST -> client
[12:00:36] <jordana> you're AngularJS app is the client in this instance
[12:00:40] <jordana> your*
[12:01:25] <jordana> Your REST API can be written in NodeJS, if you're thinking about keeping your stack simply all you need there is JS
[12:01:46] <jordana> simple* - jeez my typing is all over theplace
[12:01:48] <jordana> ..
[12:03:33] <Hyperking> jordana: Ok i see now, before I was using a flat json file which is different from a server rendered json output
[12:03:55] <Hyperking> my REST endpoint could be in any language or framework, correct?
[12:04:11] <jordana> Hyperking: Yes
[12:07:25] <Hyperking> jordana: REST endpoints would be urls that output data? if so, would this be the same as writing a php page that outputs formatted json, like an html page
[12:43:14] <jordana> Hyperking: Yes. Your PHP endpoint would query your MongoDB and output the documents as JSON (probably with a little processing PHP side to convert things like dates etc)
[12:46:40] <Hyperking> thanks jordana, Is the REST endpoint needed? MongoDB -> client (php or angularjs)
[12:47:30] <Hyperking> it's a read only site with no CRUD operations
[12:47:51] <jordana> Hyperking, your angularJS would not directly query the database, it would request data from your API
[12:48:03] <jordana> so you need to have something that angular can talk to
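A minimal sketch of the MongoDB -> REST -> client shape jordana describes, using Express and the Node.js native driver of that era (database, collection and field names are made up; error handling is minimal):

    var express = require('express');
    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/news', function (err, db) {
        if (err) throw err;
        var app = express();

        // GET /articles -> latest 20 articles as JSON, read-only
        app.get('/articles', function (req, res) {
            db.collection('articles')
              .find({})
              .sort({ published: -1 })
              .limit(20)
              .toArray(function (err, docs) {
                  if (err) return res.status(500).send(err.message);
                  res.json(docs);   // the AngularJS client consumes this
              });
        });

        app.listen(3000);
    });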
[13:22:32] <mjburgess> hi. i have two collections a,b in are documents which have a field refering to a field in b, eg. in a {"x": "SOME-WORDS", "y": "DATA"} and in b {"x":"SOME-WORDS", "q": 1, "p": 2} i would like a result which replaces SOME-WORDS in a with the full document in b, ie. {"x": { document from b... }, "y"...}
[13:23:52] <mjburgess> hi. i have two collections A,B . In A documents have a field refering to a field in B, eg. in A {"x": "SOME-WORDS", "y": "DATA"} and in B {"x":"SOME-WORDS", "q": 1, "p": 2} i would like a result which replaces SOME-WORDS in A with the full document in B, ie. {"x": { document from B... }, "y"...}
[13:24:06] <mjburgess> *(sorry A/a was confusing)
[13:55:38] <Frozenfire_> Hello all. I was wondering about $limit for the aggregation pipeline. Is there a way to specify no limit, i.e. give me all documents? http://docs.mongodb.org/manual/reference/operator/aggregation/limit/
[13:55:51] <Derick> don't use limit
[13:56:02] <Derick> in that case...
[15:43:18] <__nesd> hi
[15:45:36] <__nesd> i wanted to try the compact command today and i was quite surprised when i saw that it actually made my collection take more space (see http://pastebin.com/rchjrv7W). have you ever heard of such a case, or an idea of why such a thing could happen?
[15:46:55] <__nesd> the indices seem to be much smaller (~50%) but the total size got bigger (~20%)
[15:57:57] <asd3syd2> i need to store about 100k vocabularies - is mongodb up for the task and does it offer an equivalent to SQL LIKE queries?
[15:58:52] <cheeser> mongodb has a query language, yes...
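For what it's worth, the closest analogue to SQL LIKE is a regular-expression query; an anchored prefix regex can even use an index on the field (collection and field names below are illustrative):

    db.vocab.ensureIndex({ word: 1 })
    db.vocab.find({ word: /^inter/ })                              // roughly: word LIKE 'inter%', index-assisted
    db.vocab.find({ word: { $regex: "ness$", $options: "i" } })    // roughly: word LIKE '%ness', case-insensitive, full scan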
[17:00:51] <s2013> anyone here used dynamodb? compared to mongo?
[17:26:30] <oblio> what's the best way to handle two operations being performed on a document at the same time
[17:26:56] <oblio> e.g. im playing with an API that uses mongo and 2 updates come in for the same document, and the document is locked by one update so the other fails
[17:28:55] <oblio> i'm guessing this is specifically why people will hybridize with mongo and a sql database, but i guess i'm just wondering if i'm missing something with terminology, if there is a way to have mongo wait for one of the operations to complete and queue the other
[17:29:13] <oblio> or if i need to code around it
[17:29:43] <jordana> oblio: my understanding is that internally operations do queue
[17:30:01] <jordana> oblio: how long does the update take?
[17:30:22] <oblio> not very long, just i cant guarantee that the two ops wont happen at the same time
[17:30:48] <jordana> does it matter what order they happen?
[17:33:09] <oblio> jordana: nope
[17:33:48] <jordana> I don't really understand why it would fail, they should just wait for the other to complete. Is it a timeout of some form?
[17:33:59] <jordana> The failure?
[17:34:02] <oblio> right so, what's actually happening is the document doesn't exist yet
[17:34:06] <oblio> with one of them
[17:34:17] <oblio> and the record is locked
[17:34:29] <jordana> Ahh
[17:34:45] <jordana> Are you not using upsert?
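For reference, an upsert tells the update to create the document when nothing matches, which sidesteps the "document doesn't exist yet" case (a sketch with placeholder names):

    db.things.update(
        { _id: someId },                                    // match on the natural key
        { $set: { status: "pending" }, $inc: { hits: 1 } },
        { upsert: true }                                    // insert if no document matches
    )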
[17:40:38] <oblio> jordana: well, it goes a little deeper. i guess i didn't have a full scope of it.
[17:43:48] <jordana> oblio: I can't really tell from what you've said but if you need to perform this as a transaction from the sounds of it an application level message queue might be better suited
[19:11:29] <wc-> hi all, I'm using the native mongodb driver for node, I can't seem to get an aggregate query to respect my cursor batchSize
[19:11:37] <wc-> i always see batchSize: 1 in the mongod log
[19:12:10] <wc-> im creating the cursor with the aggregate command, then calling cursor.get(err, results) then iterating on each result
[19:12:26] <wc-> has anyone ever seen this before?
[19:12:40] <wc-> im running mongodb 2.6.4
[19:13:43] <wc-> this is killing performance for me, been banging my head against it pretty hard
[19:34:05] <wc-> anyone know if this is an issue worth sending an email to the nodejs mongo driver mailing list about?
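Roughly the shape of the call being described, for context (a sketch; collection and pipeline are placeholders, and the exact cursor methods vary between driver versions):

    var cursor = collection.aggregate(pipeline, { cursor: { batchSize: 1000 } });

    cursor.toArray(function (err, results) {
        if (err) throw err;
        results.forEach(function (doc) {
            // ... process doc ...
        });
    });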
[19:45:12] <derek-g> so if I need to search for some documents in mongodb by a field - what exactly is happening behind the doors ? Does mongo scans entire collection?
[19:53:22] <whaley> derek-g: without an index in place, yes
[19:54:42] <derek-g> whaley: so if I have ~40000 json documents i want to store and search by a headword field - is search gonna be fast?
[19:55:13] <derek-g> whaley, (given I have an index on that field).
[19:57:03] <whaley> derek-g: that depends entirely on your definition of fast..
[19:57:34] <whaley> derek-g: but for reads it will be more performant than without an index
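For example, in the 2.6-era shell (collection name assumed):

    db.entries.ensureIndex({ headword: 1 })               // one-time index build
    db.entries.find({ headword: "aardvark" })             // now served by the index
    db.entries.find({ headword: "aardvark" }).explain()   // "cursor" should show a BtreeCursor on headword_1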
[19:59:26] <derek-g> whaley, honestly it's just dictionary data that will be used by a handful of people editing entries via PHP frontend.
[19:59:57] <derek-g> whaley, im just thinking about frontend searching by headword, sorting etc.
[20:00:05] <whaley> derek-g: measure first before optimizing
[20:04:25] <whaley> derek-g: btw, I wouldn't worry much about 40k documents unless it's growing rapidly... my production system has a collection whose .count() function just returned 843,800,975 for me :)
[20:09:37] <wc-> i found the issue
[20:09:38] <wc-> https://github.com/mongodb/node-mongodb-native/blob/master/lib/mongodb/command_cursor.js#L20
[20:09:46] <wc-> anyone know why batchSize is being hardcoded to 1 for command cursors?
[20:10:11] <wc-> if i do an aggregate query with a cursor: {batchSize: 1000} its going to be overridden by this
[20:10:21] <wc-> but im afraid i am overlooking something
[20:11:31] <cheeser> iirc, command results are single documents
[20:13:56] <wc-> so am i misinterpreting the batchSize: 1 i see in the mongod log
[20:14:02] <wc-> when im running an aggregate query?
[20:14:09] <cheeser> i dunno
[20:14:22] <wc-> does it send a command cursor, then that returns a regular cursor or something like that
[20:15:18] <cheeser> "command cursor" isn't really a thing
[20:16:48] <wc-> so it looks like collection.aggregate creates an AggregationCursor
[20:16:55] <wc-> which then creates a CommandCursor
[20:20:24] <wc-> cant seem to get make test to work
[20:20:47] <wc-> or run i mean
[20:37:34] <benjwadams> I am having trouble updating an array of subdocuments. I am very confused by the documentation.
[20:38:21] <benjwadams> I want to update a single field within an array of subdocuments. here is what i have attempted, to no avail
[20:38:50] <benjwadams> db.datasets.update({"services.asset_type": "RGRID"}, {$set: {"services.$.asset_type": "Grid"}}, false, true)
[20:55:54] <benjwadams> wow, so this isn't even possible: https://jira.mongodb.org/browse/SERVER-831
[20:56:13] <benjwadams> i presume i need to do a foreach over each element then?
[20:56:38] <benjwadams> It's mind boggling that this functionality doesn't exist. This would be trivial in SQL
[20:56:58] <benjwadams> especially 4 years after this request has been made
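A rough sketch of that forEach workaround in the shell (field names taken from the update attempt above):

    db.datasets.find({ "services.asset_type": "RGRID" }).forEach(function (doc) {
        doc.services.forEach(function (svc) {
            if (svc.asset_type === "RGRID") {
                svc.asset_type = "Grid";                  // fix every matching array element
            }
        });
        db.datasets.update({ _id: doc._id }, { $set: { services: doc.services } });
    });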
[21:13:20] <andrewrk> the array update operator $ acts as a placeholder to update the first element that matches the query document
[21:13:27] <andrewrk> what if there is more than one query condition
[21:13:50] <andrewrk> example, $elemMatch and also a $not: { $elemMatch }
[21:14:01] <andrewrk> I want the $ to represent the one that matched
[23:07:18] <faeronsayn_> Hey guys, i'm having some trouble with mongodump
[23:17:02] <MacWinner> if a node in a replset is down, will the working node slow down on responding to requests?