PMXBOT Log file Viewer


#mongodb logs for Thursday the 18th of July, 2013

[00:56:46] <sharondio> Hi all…anybody around? Just need to confirm that our schema isn't going to work before going through the trouble of changing it.
[02:50:15] <leekaiwei> hi guys, i'm getting the error Class 'MongoClient' not found. i've scoured the internet, updated things and edited php.ini but still not working. doesn't really warrant a pastebin but here it is http://pastebin.com/kpDLyFuC
[03:13:10] <ranman> leekaiwei: I think that probably means you don't have the library installed correctly ?
[03:13:39] <leekaiwei> i've already ran sudo pecl install mongo
[03:13:44] <leekaiwei> and mongo.so exists
[03:14:03] <ranman> have you tried the manual installation?
[03:14:44] <leekaiwei> no i haven't
[04:40:04] <pplcf> I want to sort after a limit after a sort, something like db.col.find().sort({timestamp: -1}).limit(10).sort({timestamp: 1})
[04:41:05] <pplcf> but it seems like last 'sort' just overrides first
[04:41:21] <pplcf> basically I just want last 10 docs sorted from oldest to newest
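Chaining two sorts does let the last one win; the usual workaround is to sort descending, limit, and reverse the result client-side. A minimal sketch of that reversal step (the mongo-shell form is shown in a comment; `col` and `timestamp` are from pplcf's example):

```javascript
// Ask the server for the 10 newest docs (descending sort + limit),
// then reverse client-side so they read oldest-to-newest.
// In the mongo shell this would look like:
//   db.col.find().sort({ timestamp: -1 }).limit(10).toArray().reverse()
function oldestToNewest(newestFirstDocs) {
  // slice() copies first so the input array is left untouched
  return newestFirstDocs.slice().reverse();
}
```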
[07:27:32] <rbento> Hello. Is there a way to count DBRefs for a collection?
[07:29:34] <rbento> For instance, I have an Author collection. I shouldn't be able to delete an author if it is referenced say, in a Book collection.
[07:31:21] <[AD]Turbo> hola
[07:33:06] <kali> rbento: you need to implement this behaviour in your app, mongodb will not enforce this kind of constraint
[07:35:19] <rbento> kali Thanks. I see that. I was thinking about having any method that helps in this case. Like collection.stats() etc...
[07:36:21] <rbento> kali Any helpful property available, etc...
[07:36:49] <rbento> kali I read the docs but couldn't find anything.
[07:37:14] <rspijker> rbento: because it doesn't exist. Mongo is not a relational db
[07:37:35] <kali> rbento: as for dbref, it's just a convention. MongoDB does not do anything clever with it
[07:37:40] <rbento> kali Sure, I know that. If I cannot find anything built-in I'll surely do it myself
[07:37:55] <rbento> kali Thanks
[07:39:02] <Nodex> you can probably do a find() and search for reference type
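The check kali and Nodex describe has to live in the application: before deleting an author, count the documents that still reference it. A minimal sketch of that app-side guard, assuming books store a DBRef in a hypothetical `author` field (in the shell this would be a query along the lines of `db.books.count({"author.$id": authorId})`):

```javascript
// App-side referential check: refuse to delete an author while any
// book still references them via a DBRef-style { $ref, $id } value.
// `books` here stands in for the result of querying the Book collection.
function canDeleteAuthor(books, authorId) {
  return !books.some(b => b.author && String(b.author.$id) === String(authorId));
}
```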
[07:39:56] <jbackus> Hey everyone. my friend and I are trying to figure out if MongoDB would be the right way to solve a database problem in our application. Would you all mind if I gave you a few sentence description of the problem so I can get your input(s) on whether Mongo would be the right way to go?
[07:40:10] <Nodex> referencing Authors in books is also not the greatest idea .... how often does an authors name change?
[07:40:29] <Nodex> jbackus : just ask the question is usually best
[07:41:21] <jbackus> Yeah I thought so, just wanted to check since I sometimes join programming IRC channels and somehow get yelled at in the first minute
[07:41:22] <rbento> Nodex This was just a silly example
[07:42:08] <Nodex> indeed :)
[07:49:40] <jbackus> so our product is designed so that companies can verify customer identities. The verification criteria differs depending on the type of customer (i.e. U.S. Citizen vs. Non-U.S.). Some verification variables have multiple components (address->street,city,etc), some verification methods can be done one of many ways (identity through SSN, TIN, or License), and if a customer updates a variable (like address) we want to track the revision history for that.
[07:49:53] <jbackus> So basically the model in question (call it CustomerVerification) doesn't have any constant keys, some of the verification variables have multiple components (like a python dictionary), and some variables only need one verification from an array of options to pass. Does this sound right for Mongo to you guys?
[07:50:22] <jbackus> (My friend thinks Single table inheritance or multi table inheritance is the way to go)
[07:50:55] <puppeh> I have a collection which has a hash field
[07:51:51] <puppeh> and I want to update a document by inserting some key-value pairs into that hash field
[07:52:02] <puppeh> not replace the entire field
[07:52:06] <puppeh> just insert values
[07:53:10] <Nodex> jbackus : that sounds perfect for mongodb
[07:53:21] <Nodex> puppeh : look at $set
[07:53:59] <puppeh> $set replaces the field
[07:55:10] <puppeh> let's say the value of that field is { "13": "abc" }. I want to make it { "13": "abc", "14": "ddd" }
[07:55:15] <jbackus> That's what I thought! Thank you @Nodex.
[07:55:48] <jbackus> I have to ask though: Seeing as this is the mongodb room, you don't just say that for every case, right? :P
[07:56:02] <Nodex> you really shouldnt use numbers as keynames but if you must then ...{$set : { "foo.14" : "ddd" }}
[07:56:09] <Nodex> lol
[07:56:24] <puppeh> thx
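Nodex's dot-notation answer works because `$set` with a dotted path touches only the addressed nested key, leaving sibling keys intact. A pure-JS sketch of those semantics (the `applyDotSet` helper is hypothetical, for illustration only):

```javascript
// Sketch of what {$set: {"foo.14": "ddd"}} does to a document:
// only the addressed nested key changes; siblings under "foo" survive.
function applyDotSet(doc, path, value) {
  const keys = path.split(".");
  let cur = doc;
  for (let i = 0; i < keys.length - 1; i++) {
    // create intermediate subdocuments as needed, like $set does
    if (typeof cur[keys[i]] !== "object" || cur[keys[i]] === null) cur[keys[i]] = {};
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
  return doc;
}
```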
[07:57:06] <Nodex> jbackus : I use mongo for everything from Job boards to CRM, CMS and Social networks. There is nothing I cannot make it do - admittedly sometimes with the help of external tools but mongo IS my primary data store
[07:59:13] <jbackus> Haha ok so a tiny bit biased :P
[08:00:07] <Nodex> not at all, I believe in the right tool for the job
[08:00:31] <jbackus> Yeah just poking fun
[08:00:35] <Nodex> why use relational when you dont need to and why use non relational when you dont need to :)
[08:00:37] <jbackus> Alright, I'll probably stick around for a bit as I think I'll try and port some of my SQL join-insanity to Mongo and see how it feels
[08:00:48] <Nodex> forget JOINS - lesson #1
[08:01:08] <Nodex> if you need to join it then you're probably modelling your data wrong
[08:01:55] <jbackus> Yeah… You would probably cry at the weird multi table abstraction I have solving the problem now then
[08:02:25] <jbackus> I'm going to give Mongo a try
[08:51:06] <Nodex> why does github always go down when I need it :/
[09:16:03] <puppeh> is there the possibility to increment multiple values in keys of a hash field with one query?
[09:24:49] <puppeh> ex. I have a field called actions
[09:24:53] <puppeh> which is of type Hash
[09:25:00] <puppeh> and right now is like this
[09:25:12] <puppeh> { "a": 1, "b": 2 }
[09:25:42] <puppeh> and I want to increment the "a" and "b" values by 1, is this possible to be done in a single operation?
[09:31:34] <rspijker> puppeh: afaik, Hash is not a possible type for a field
[09:33:35] <rspijker> do you mean nested document instead of hash?
[09:35:03] <puppeh> i basically use Mongoid
[09:35:06] <puppeh> the ruby ORM for mongo
[09:37:19] <rspijker> ok...
[09:37:35] <rspijker> well, if you mean what I think you mean, it is possible
[09:38:34] <rspijker> db.collection.update({},{$inc:{"actions.a":1,"actions.b":1}},{multi:1})
[09:38:59] <rspijker> that will increase the a and b fields in the action field for _all_ documents in the collection called collection
[09:42:06] <puppeh> hm
[09:42:12] <puppeh> nice will try it
[09:42:13] <puppeh> thx
[09:42:23] <rspijker> np
[09:53:31] <remonvv> Hola. Anyway know what to do with { "ok" : 0, "errmsg" : "clone failed" } when invoking movePrimary?
[09:56:20] <Derick> is it a partitioned collection?
[09:56:27] <remonvv> Yes
[09:56:32] <Derick> can you double check?
[09:56:34] <rspijker> replicated?
[09:56:57] <remonvv> Are you asking if the database we're moving has sharded collections?
[09:57:07] <Derick> yes
[09:57:39] <remonvv> { "_id" : "core", "partitioned" : true, "primary" : "shard0003" }
[09:57:46] <rspijker> there is a jira issue for this in a sharding+replication situation. But it's fairly old...
[09:57:52] <remonvv> No repset
[09:57:54] <remonvv> Just sharding
[09:57:55] <Derick> yeah, I just saw
[09:58:08] <remonvv> It's a spectacularly uninformative error too.
[09:58:27] <rspijker> aren't they all?
[09:58:46] <remonvv> Usually. People need to learn that error messages need to be descriptive. Make it a page long if you have to.
[09:59:16] <Derick> one of my pet peeves and something I tried fixing with the PHP driver
[09:59:25] <remonvv> Amen sir.
[10:00:02] <Derick> now it sometimes says too much…
[10:00:14] <remonvv> No such thing.
[10:00:49] <remonvv> Well, with the possible exception of error messages like "Gee, something went wrong here. Let me explain. Back in the day when I started this project there was a null check here but I removed it hence this NullPointerException. Sorry."
[10:01:39] <Derick> :D
[10:01:40] <remonvv> Derick, any idea which ticket this is? Can't find it and we're currently having a $25k/month cluster refusing to scale down :s
[10:01:57] <Derick> remonvv: ticket?
[10:02:07] <remonvv> Oh I thought you said there was an open jira issue for this
[10:02:09] <Derick> remonvv: you were watching it the same time as I was
[10:02:12] <remonvv> Ah sorry, rspijker said it
[10:02:20] <rspijker> it's 1703
[10:02:24] <Derick> 1708
[10:02:29] <rspijker> but like I said, very old and dependent on replication being a factor
[10:02:46] <rspijker> o, 1708, sorry :)
[10:03:24] <remonvv> Yeah that doesn't seem it. There are no errors in the logs of either the current owner or the intended new primary owner
[10:03:41] <remonvv> Checking mongos log
[10:04:03] <rspijker> I presume you have the preconditions listed here satisfied? http://docs.mongodb.org/manual/reference/command/movePrimary/
[10:05:16] <remonvv> It should, it's part of the automated downscale process. Maybe something went wrong with draining. Let me check.
[10:06:31] <remonvv> Hm, shard is empty and drained and no longer part of the set.
[10:07:02] <remonvv> As in, doesn't have chunks anymore (it's still in shards collection).
[10:07:14] <remonvv> That should satisfy movePrimary prereqs afaik
[10:08:12] <remonvv> Let me connect to it directly and check.
[10:09:46] <rspijker> silly question, you sure this shard is the primary for any of the dbs?
[10:10:13] <remonvv> Yep, see paste above. It's primary assigned to "shard0003"
[10:11:00] <remonvv> I just checked and it has data for a non-sharded collection on both the source and target shard.
[10:11:03] <rspijker> and the movePrimary command you are using?
[10:11:12] <remonvv> Which means the original movePrimary did something wonky
[10:11:26] <rspijker> yeah.. target shouldn't have that
[10:11:59] <remonvv> I know, but it does. It shouldn't have that. During the upscale we automatically distribute primaries across shards.
[10:12:15] <remonvv> Hm, are movePrimary ops blocking or async?
[10:12:31] <rspijker> dunno, sorry
[10:15:46] <remonvv> Alright well I'm gonna tackle this after lunch. brb.
[10:36:13] <jpfarias> remonvv: backup / restore done
[10:36:23] <jpfarias> got the 2 machines up and sharded
[10:36:24] <jpfarias> :)
[10:45:38] <remonvv> jpfarias, great ;)
[10:57:25] <rahim> exit
[10:57:28] <rahim> doh
[11:40:42] <merpnderp> I just moved my dev to my local mac to use webstorm, but just realized I'll need to run mongo locally. I really don't want to do that. Has anyone had luck using mongo through an ssh tunnel?
[11:46:28] <rspijker> merpnderp: I haven't tried but I see absolutely no reason why it shouldn't work
[11:49:05] <Avish> Hey guys, anyone who knows about the C# driver around?
[11:49:54] <merpnderp> rspijker: using the same -L ssh flag that works for mysql I'm getting a can't connect error on 27017….bizarre
[11:50:14] <Avish> I'm trying to use Aggregation Framework from C#. I know LINQ support for it is not yet ready but I'd like to use something similar to Query<T> and the other builders to be able to use MemberExpressions on my document type instead of hard-coded strings.
[11:51:23] <rspijker> merpnderp: let me give it a go real quick :)
[11:52:35] <Avish> e.g. I want to do something like collection.Aggregate(Match<T>.Eq(x => x.SomeProperty, "someValue"), Group<T>.By(x => x.GroupProp).Value("count", Group<T>.Sum(x => 1))
[11:53:34] <Avish> Or even simpler, I'd like a simple way to translate a member expression like `x => x.SomeProperty.SomeInnerProp` to its named equivalent `someProperty.someInnerProp` using the mapping defined on T. Then I'll build the aggregation operator docs from that. Is this possible?
[11:54:10] <rspijker> merpnderp: works like a charm here...
[11:54:41] <rspijker> merpnderp: could it be that your ports are firewalled on the mongo host?
[11:55:45] <merpnderp> rspijker: I'm sshing in to the mongo host so connecting to 127.0.0.1:27017 should just be tunneled to mongohost:27017. Shouldn't it look like a localhost connection?
[11:56:34] <Avish> Basically, I want to do what QueryBuilder<T>.EQ does when it translates the expression to an element name. Anyone>
[11:56:35] <Avish> ?
[11:57:05] <rspijker> merpnderp: I used: ssh -L 22222:mongohost:27017 localhost
[11:57:14] <rspijker> then mongo --port 22222
[11:57:22] <merpnderp> rspijker: I'll try that
[11:57:43] <rspijker> you can use whatever port numbers, it's just that I have a mongos running locally for testing, so 27017 was out of the question
[11:58:12] <rspijker> -typos :/
[12:03:34] <rspijker> merpnderp: need to restart, let me know if it worked
[13:05:52] <shmoon> NOt ok for storage during an update WHYY??
[13:10:31] <remonvv> Yes.
[13:10:35] <remonvv> Or No.
[13:10:46] <remonvv> Or 42.
[13:12:13] <rspijker> sorted the primary move remonvv ?
[13:12:50] <remonvv> Nope. It ended up in a state that wouldn't allow it. Didn't have time to figure out why but I'm assuming an ill-timed mongos restart or something.
[13:14:28] <shmoon> dude
[13:14:41] <rspijker> sweet
[13:14:41] <shmoon> seems like i am unable to store keys that contain '.' anymore
[13:14:42] <shmoon> huh?
[13:14:49] <remonvv> dude!
[13:14:54] <remonvv> like, what?
[13:15:49] <shmoon> wait i will show
[13:16:43] <shmoon> remonvv: http://pastie.org/8152366
[13:16:53] <shmoon> i had a php parser that stored that
[13:17:05] <shmoon> now when i do an update, for some reason when there's '.' it doesn't work
[13:17:06] <shmoon> huh?
[13:17:15] <rspijker> it's not allowed anymore
[13:17:19] <Derick> you could never do this really.
[13:17:25] <remonvv> You could never do this.
[13:17:26] <remonvv> Right.
[13:17:26] <rspijker> I think it used to be driver dependent…
[13:17:32] <remonvv> Some drivers allowed it
[13:17:41] <Derick> rspijker: maybe, but it always caused problems
[13:17:45] <remonvv> Since it's technically allowed by the wire protocol/BSON
[13:17:58] <shmoon> dude
[13:18:02] <shmoon> its there in my DB trust me
[13:18:03] <remonvv> In any case, 2 points : 1) never use periods, 2) use better field names
[13:18:04] <shmoon> I can add
[13:18:07] <shmoon> i cannot update
[13:18:07] <Derick> shmoon: please stop using "dude"
[13:18:07] <shmoon> :|
[13:18:11] <shmoon> sorry
[13:18:24] <rspijker> Derick: I completely agree that it should not be done, just trying to explain to this "dude" why it could happen :)
[13:18:27] <remonvv> spaces in your fieldnames are a nightmare
[13:18:43] <shmoon> sorry about it. actually I can add, but not update when field has '.'
[13:19:16] <remonvv> shmoon, that's because the query validation is somewhat inconsistent between the "criteria" and "update" parameters of the update operation.
[13:19:26] <remonvv> In any case, possible or no. Just fix it properly and remove the period.
[13:19:30] <remonvv> And rename your fields.
[13:19:32] <remonvv> In general.
[13:19:44] <remonvv> I need to stop pressing enter too soon.
[13:20:02] <shmoon> hm ok
[13:21:03] <shmoon> oh now i see
[13:21:13] <remonvv> If you want my advice; all fieldnames should be predictably cased (lower, camel, etc. but be consistent) and have no spaces or other non alphanumeric characters.
[13:21:14] <shmoon> using name.middle you can set the middle key inside the name subdocument
[13:21:40] <shmoon> true man
[13:21:43] <remonvv> Righto. That's why it's not allowed and why you shouldn't get a period in there even if your driver/import allows it.
[13:21:58] <shmoon> but i am kind of solving a complex problem and needed to have the app up asap, basically all field names are defined by the user (comes from html form)
[13:22:08] <shmoon> in a subdocument (embedded document) inside the main documents
[13:22:39] <remonvv> shmoon, I understand but you'll never be able to get this schema to work. Your user input -> field name conversion should convert it to something that follows the rules mentioned above.
[13:23:19] <shmoon> hm
[13:23:32] <remonvv> And frankly allowing end users to determine your field names is on the dodgy end of things as well. You should probably go for a {field: "No. of pages", value: "64"} sort of schema then
[13:23:33] <shmoon> actually lets discuss this a bit, maybe i can find another way to solve my problem (requirement) - 1 sec
[13:23:39] <remonvv> Because yay indexes and all that.
[13:23:45] <shmoon> hm
[13:24:56] <remonvv> So, good luck cowabunga-ing your way out of that one ;)
[13:25:33] <shmoon> remonvv: so I have tables like this http://puu.sh/3FRz9.png - user can put in keys (top row) and then values, add rows/columns. I want to save it in a proper way and then be able to properly search later and do further operations like filter/comparison, etc. stuff. such tables belong to each product, and each product can have multiple such tables
[13:26:16] <remonvv> Right, so see my schema suggestion above ;)
[13:26:25] <remonvv> Which is indexable (word?) and searchable
[13:26:35] <Derick> shmoon: those are values for something, not keys. Keys should never be arbitrary (as they are now,as the user can use whatever they want)
[13:26:50] <shmoon> remonvv: hm i see
[13:26:57] <Derick> shmoon: remonvv's schema suggestion is what you should be doing
[13:27:44] <shmoon> ok let me give it a thought for a while
[13:33:45] <remonvv> Sure, that's a good alternative to listening to a couple of brilliant e-people
[13:34:37] <kali> +1 for remonvv's suggestions. never use variable key names in mongodb if you want to do anything with the data
[13:35:31] <shmoon> hm you seem right wtf did i do
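The schema remonvv proposes replaces user-supplied keys (which may contain `.` or `$`) with a fixed-shape array of field/value pairs. A minimal sketch of that conversion (the helper name is hypothetical):

```javascript
// Sketch: turn one user-supplied table row, whose column names may
// contain "." or "$", into a safe array of {field, value} pairs
// instead of using the column names as document keys.
function toFieldValuePairs(row) {
  return Object.entries(row).map(([field, value]) => ({ field, value }));
}
```

With the pairs stored under an array field such as `specs`, a compound index on `specs.field`/`specs.value` can then cover searches over any user-defined column.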
[14:20:36] <Michael_> Hi channel
[14:21:08] <Michael_> i see a very weird behaviour on our servers that chunks are no longer split automatically
[14:22:10] <Michael_> usually we have a very uniform number of documents per chunk
[14:22:39] <Michael_> the average document size did not change
[14:23:00] <Michael_> but chunks are no longer split automatically, for two days now:
[14:23:01] <Michael_> [249890126 - 249903126) ; 12941
[14:23:01] <Michael_> [249903126 - 249916126) ; 12860
[14:23:01] <Michael_> [249916126 - 249929126) ; 12931
[14:23:01] <Michael_> [249929126 - 249942126) ; 12984
[14:23:01] <Michael_> [249942126 - 249955126) ; 12987
[14:23:02] <Michael_> [249955126 - 249968126) ; 12746
[14:23:02] <Michael_> [249968126 - 1000000005) ; 23681
[14:23:03] <Michael_> [1000000005 - { "$maxKey" : 1 }) ; 0
[14:23:37] <Michael_> i did some manual splitting
[14:24:32] <Michael_> but unfortunately, the second top most chunk is again getting more documents and not being split
[14:24:32] <Michael_> [249968126 - 1000000005) ; 23681
[14:24:42] <Michael_> any ideas anyone?
[14:53:08] <remonvv> pastebin/pastie are your friend ;)
[14:53:09] <remonvv> Errors in your mongos or mongod log?
[14:53:42] <remonvv> And dump the verbose version of your sharding status in a pastie as well please
[15:31:07] <Nodex> boob
[16:08:03] <novice35> Hi all, I am new to mongodb and I tried to follow the tutorial "http://docs.mongodb.org/manual/tutorial/deploy-replica-set/" but when I tried to execute the command rs.add("vm_name") the following error occurred: "exception: set name does not match the set name host vm_name:27017 expects"
[16:08:40] <novice35> please can someone help me to figure out the origin of the problem
[16:20:30] <novice35> can someone help me please ?
[16:31:20] <remonvv> rs.add("vm_name:27017")
[16:31:59] <remonvv> Assuming vm_name is your actual hostname
[16:44:43] <FuZion755> hi all, i'm trying to reformat a date within an aggregate... when I try to concat the different components it yells at me because some of them are integers.. is there a way to cast integers to strings? I can't for the life of me find any way to do that in an aggregate
[17:00:19] <Chrishas> hi, the following gives me an error about argument 5: mongo_create_index(conn, "testdb.Users", key, 0, out);
[17:01:00] <Chrishas> although it's the same as the C Mongodb api tutorial, and it also complains about too few arguments
[17:06:10] <Chrishas> algernon: which is the alternative c driver?
[17:06:50] <algernon> Chrishas: https://github.com/algernon/libmongo-client
[17:26:57] <JeremyKendall> I have a question about updating multiple mongo collections in the same js file if anyone has a minute
[17:27:49] <Guest5537> I need to construct a query that only matches documents that have an array field whose first element in the array matches X. Is it possible to do positional stuff like this with queries?
[17:28:01] <sharondio> JeremyKendall: What do you mean the "same js file"?
[17:29:01] <JeremyKendall> I have a js file with multiple commands. I run it like 'mongo dbname file.js'
[17:30:56] <Chrishas> algernon: thx but the system I'm trying to run it on has an older version of automake tools and autoreconf shows an error, is there another way to compile this?
[17:31:10] <kali> JeremyKendall: http://docs.mongodb.org/manual/core/server-side-javascript/#running-js-scripts-in-mongo-on-mongod-host maybe that will help
[17:31:55] <sharondio> Guest5537: It is possible to do positional stuff in arrays when you know the index of the item you're looking for.
[17:32:15] <JeremyKendall> kal1: Thanks for the link. I'm not actually having an issue running the js files. My problem is with the commands *in* the files :-)
[17:32:39] <sharondio> Guest5537: If you don't know the item, you can do it with $ variables, but only one level deep. No arrays in arrays. #askmehowIknow
[17:33:05] <kali> JeremyKendall: well, you need to be more specific, then
[17:33:27] <algernon> Chrishas: you can run autoreconf -i on a newer system, copy the results over and compile away.
[17:34:23] <JeremyKendall> Here's the file: http://pastie.org/private/27g3umsrxvbymyni6xaata When I run it via the command line, the remove() works as expected, and so does the first update(). The rest of the commands don't seem to run.
[17:34:41] <JeremyKendall> If I run the file again, the second update() runs, but again the rest of them don't seem to run.
[17:34:45] <JeremyKendall> kal1: ^^
[17:35:35] <JeremyKendall> (I'm basing "don't seem to run" on the results of counting the collections to see if the fields I unset are actually unset)
[17:35:35] <kali> JeremyKendall: can you try with $unset : { <blah>: true } instead of these empty strings ?
[17:36:23] <kali> JeremyKendall: and show me the count queries you run to check too
[17:36:31] <JeremyKendall> kal1: Will do. I'll ping you once I try that.
[17:36:49] <Guest5537> sharondio: I do know the index I'm interested in. Is it as simple as doing a regular query with the first part of the array? {'array[0].field': {$eq: 'value'}}
[17:36:50] <JeremyKendall> kal1: Grabbing the count queries now . . .
[17:37:34] <sharondio> JeremyKendall: Is it possible you have an asynchronous issue? I'm still pretty new to server-side stuff, but finding async.js has changed my life.
[17:38:02] <JeremyKendall> sharondio: That's entirely possible, but I wouldn't know how to diagnose that.
[17:38:29] <sharondio> Guest5537: http://docs.mongodb.org/manual/core/update/#update-arrays The first code example seems to support that, but I haven't tried it.
[17:39:32] <sharondio> JeremyKendall: Well, when I find myself bashing my head against the wall with JS not behaving like I expect, I've learned to suspect Asynchronous processes. It's also nice to have async.js just be able to tell me when everything has processed, or run an error function if something is misbehaving.
[17:39:58] <jmar777> the mongo shell is synchronous
[17:40:06] <jmar777> (including when its scripted)
[17:40:08] <sharondio> JeremyKendall: The way your stuff is laid out, there is no error capture at all so you wouldn't even know if something were bombing.
[17:40:35] <JeremyKendall> kal1: Here are the counts I'm running http://pastie.org/private/kaoys9qe927i0oejhqina I snipped two of them because they're pretty much all the same
[17:40:45] <JeremyKendall> sharondio: How would you recommend laying out those commands?
[17:40:47] <sharondio> Sorry, you are doing try/catch. So that should catch it. Hmmm
[17:40:51] <JeremyKendall> :-)
[17:41:40] <sharondio> JeremyKendall: And if you run each command in the command-line individually in the same order, they work?
[17:41:49] <sharondio> (I'm assuming yes.)
[17:41:51] <JeremyKendall> They do, yes
[17:43:46] <sharondio> JeremyKendall: I'd probably put a simple callback on each call, logging that it ran just to see where it's actually stopping.
[17:44:50] <kali> sharondio: this is the mongo shell, not node.js, everything runs synchronously
[17:45:09] <Guest5537> sharondio: It looks like the dot syntax works for arrays in queries. {'array.0.field': 'value'} -- works as expected.
[17:45:09] <JeremyKendall> kal1: I replaced the "" with true for each $unset. The behavior remains unchanged.
[17:47:28] <sharondio> Guest5537: Glad to hear it. :-)
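Guest5537's finding matches dot-index behaviour: `array.0.field` addresses the `field` key of the first array element only. A pure-JS sketch of the predicate that query expresses (the helper name is hypothetical):

```javascript
// What the query {"array.0.field": value} matches: documents whose
// FIRST element of `array` has `field` equal to value. Later elements
// are not considered.
function matchesFirstElement(doc, value) {
  return Array.isArray(doc.array) && doc.array.length > 0 &&
    doc.array[0].field === value;
}
```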
[17:48:12] <JeremyKendall> kal1: If it makes a difference, I'm running 2.0.7. Still waiting on sysadmin to upgrade mongo on our dev vms
[17:48:20] <JeremyKendall> :-(
[17:48:59] <sharondio> kali: Even synchronously, one of these calls is stopping everything. It might help to see which ones are actually finishing. Is there some kind of command to make the shell more verbose? I only do test queries in the shell. I do everything else in node (obviously).
[17:49:41] <kali> JeremyKendall: add print statements after each op to see if it's running till the end
[17:51:27] <sharondio> JeremyKendall: Check this out: http://stackoverflow.com/questions/9457368/inserting-data-to-mongodb-no-error-no-insert
[17:52:01] <sharondio> It's possible the updates are error-ing and not firing off an error.
[17:53:35] <JeremyKendall> kal1: Will do.
[17:53:43] <JeremyKendall> sharondio: Thanks for the link. Looking now.
[17:56:22] <sharondio> JeremyKendall: http://docs.mongodb.org/manual/reference/command/getLastError/ This might be useful.
[17:57:23] <JeremyKendall> Thanks. Looking.
[18:04:26] <JeremyKendall> kal1: I added print statements after all of the updates. It made it to each of them, but the script behaves the same as before.
[18:07:51] <kali> JeremyKendall: yeah. i'm not surprised
[18:08:11] <kali> JeremyKendall: can you show me what you do in the interactive session that does work ?
[18:08:31] <JeremyKendall> I simply copy and paste each command. Nothing special.
[18:10:07] <kali> JeremyKendall: can you add a print(db.c1.count()); at the top of your file, to check if we are really working on the right db ?
[18:11:35] <JeremyKendall> kal1: OK, this is weird. I added printjson(db.runCommand({ getLastError: 1, w: 1, wtimeout: 5000 })); at the bottom of the script (inside the try) and it worked the first time through.
[18:11:56] <JeremyKendall> Everything got deleted just as I expected it to originally.
[18:12:08] <kali> mmmmm...
[18:12:10] <kali> ok
[18:12:29] <JeremyKendall> kal1: I know.
[18:12:37] <sharondio> JeremyKendall: That is weird. And the getLastError didn't show any errors?
[18:12:46] <JeremyKendall> No errors.
[18:12:58] <JeremyKendall> "err": null
[18:13:05] <JeremyKendall> "ok": 1
[18:14:06] <kali> i'm confused about what's synchronous and what is not in the shell
[18:14:21] <kali> it may also have changed between 2.0 and 2.4
[18:14:36] <JeremyKendall> True. I certainly don't know.
[18:15:36] <JeremyKendall> I'm adding that getLastError after each command just to see what happens. I'll let you know.
[18:18:01] <JeremyKendall> kal1 && sharondio: Added getLastError after each command, got no errors from any of the commands, and everything worked as expected.
[18:18:57] <JeremyKendall> Without those checks, the script runs in milliseconds. If it were actually doing the work, it would take much longer (there's a lot to delete).
[18:19:09] <JeremyKendall> That's diagnostic, but I have no idea what it indicates.
[18:21:10] <sharondio> JeremyKendall: I'm not sure leaving in those logs is the real answer, even if it does work.
[18:21:45] <kal1> kal1
[18:24:33] <pwelch> could someone answer a question about mongodb master slave?
[18:30:11] <Gargoyle> pwelch: Not if you don't ask it!
[18:30:19] <pwelch> Gargoyle: =)
[18:31:01] <pwelch> so I want to do what I currently do with mysql. I have a master and a slave. when I do a switch over I make a new master and connect it to the old slave to update the data
[18:31:18] <pwelch> I think point my app servers to the new master (that is connected to the slave)
[18:31:48] <pwelch> can I start a mongodb instance as a master and slave so I can just point to the box?
[18:32:03] <pwelch> *I then point
[18:34:52] <JeremyKendall> kali && sharondio: Wild goose chase. Turns out those commands were all running. It takes 10-ish seconds for them all to finish. The js file was just queuing them up. I was counting too quickly.
[18:35:10] <JeremyKendall> This time I waited about 15 secs, ran my count script, and everything that should have been zeroed out was.
[18:35:14] <JeremyKendall> /headdesk
[18:35:29] <Gargoyle> pwelch: The mongo driver takes care of failover.
[18:35:55] <pwelch> for replica sets correct? Im talking about master/slave replication
[18:36:43] <Gargoyle> pwelch: Not sure what mongodb master/slave is if you are not talking about a RS.
[18:37:33] <pwelch> same as MySQL master/slave. read/writes go to master and they get sent to the slave. You can "promote" the slave by pointing the writes to it
[18:38:34] <Gargoyle> Didn't know it did that. been out of the loop since Feb.
[18:39:30] <pwelch> form the docs mongodb master/slave is deprecated. Im trying to use it for when I build new nodes and need to point to the new one
[18:39:34] <pwelch> *from the docs
[18:39:43] <pwelch> I dont need 3 mongo nodes running
[18:40:03] <pwelch> just one. however, I want to chain them to keep the new one updated and then swap it out
[18:40:40] <eka> hi all, mongo is crashing with this message "warning: DR102 too much data written uncommitted 315.318MB" my setup is 4 mongodb in shard with 3 configs… it was running for a very long time, today started doing this
[18:41:10] <eka> version is 2.4.3 any clue?
[18:41:54] <Gargoyle> pwelch: I'd use a replica set instead of the deprecated stuff. Just run an additional mongod process on one box as an arbiter.
[18:42:25] <pwelch> Gargoyle: then I would have to force the one I want to become primary correct?
[18:42:54] <pwelch> and I would have to reboot my apps so the client lib (driver) knows about the new node
[18:42:58] <Gargoyle> pwelch: You just step down the Primary - the secondary will take over within a few seconds.
[18:43:09] <sharondio> JeremyKendall: It happens. I learned a new shell command out of it, so that's cool. :-)
[18:43:11] <Gargoyle> pwelch: Don't think you have to reboot.
[18:43:26] <pwelch> doesnt the applications mongo driver know about all of the nodes?
[18:43:41] <pwelch> if I add one then I have to add it to the config and reload/restart the app
[18:43:45] <JeremyKendall> sharondio: :-) Thanks for your help. You too, kali
[18:45:11] <Gargoyle> pwelch: Yeah, you can add more than one node address to your connection params. to make connecting easier when you don't know which is the master from a "cold start", but the client should also be able to figure out the other nodes once it has connected to one.
[18:47:03] <pwelch> Gargoyle: sorry, I dont fully understand how the client lib knows about all nodes. does it connect to the one you give it and then pull in info about the entire cluster?
[18:47:17] <Gargoyle> pwelch: Yup.
[18:47:47] <pwelch> if I add 3 more nodes but only told the client app about a single IP/FQDN it queries the info from that single node and learns about the others?
[18:48:22] <Gargoyle> Obviously, if the node you have configured is down when the app starts, it fails. So I think you would normally specify 2 or 3 in your connection params, and let the rest be discovered.
[18:49:01] <Gargoyle> pwelch: Set up some VMs and have a play.
[18:49:40] <pwelch> Gargoyle: ok, I will. Was trying to get some info to deploy a solution today but this helps. thx
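The seed-list behaviour Gargoyle describes above can be sketched as a small helper that builds a replica-set connection URI from a few seed hosts. The host names and replica-set name below are made-up for illustration; the point is that a driver only needs some of the members listed, because once one seed answers it discovers the rest of the topology from that node.

```python
def build_seed_uri(hosts, replica_set=None, port=27017):
    """Build a mongodb:// URI from a list of seed hosts.

    Only a subset of the replica set needs to appear here; the
    client learns the full member list after connecting to any
    one seed that is up.
    """
    seeds = ",".join(h if ":" in h else "%s:%d" % (h, port) for h in hosts)
    uri = "mongodb://" + seeds + "/"
    if replica_set:
        uri += "?replicaSet=" + replica_set
    return uri

# Hypothetical hosts: listing two or three seeds guards against one
# of them being down at app startup ("cold start").
print(build_seed_uri(["db1.example.com", "db2.example.com"], replica_set="rs0"))
```

Passing the resulting URI (or an equivalent host list) to the driver's client constructor is all the configuration the app needs; newly added members do not have to be appended to the seed list for the client to find them.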
[18:52:51] <artdaw> hi guys, I need help in mongodb production setup
[18:53:55] <artdaw> I already have mongodb in prod but today it slowed down completely and I'm really stuck
[18:54:24] <eka> artdaw: check mongostat
[19:01:59] <artdaw> My mongostat says: http://pastebin.com/V1dhswmL
[19:02:37] <artdaw> I have a huge amount of inserts and my count queries are very slow
[19:02:58] <artdaw> What about sharding + replica sets?
[19:03:35] <eka> artdaw: you are low on memory… look at the flush count… sharding could mitigate that
[19:04:01] <eka> artdaw: sorry… fault count
[19:04:32] <artdaw> sorry, what does it mean?
[19:04:44] <artdaw> I see 0 everywhere
[19:10:31] <artdaw> oh, I see, thanks, eka
[19:11:42] <eka> artdaw: I meant the fault count; it means that mongo is reading from disk, and that makes it slow
[19:15:12] <eka> artdaw: it's a very big DB
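eka's point about the faults column can be illustrated with a toy check: in mongostat, faults counts page faults per second, i.e. reads that had to go to disk because the data was not in RAM, so sustained non-zero faults suggest the working set no longer fits in memory. The threshold and sample values below are arbitrary illustrations, not tuning advice.

```python
def working_set_pressure(fault_samples, threshold=100):
    """Given per-second fault counts sampled from mongostat, return
    True if the average suggests the working set has outgrown RAM.
    The threshold is illustrative only; what matters is sustained
    faulting, not an occasional blip."""
    if not fault_samples:
        return False
    return sum(fault_samples) / len(fault_samples) >= threshold

# A healthy node mostly shows 0 faults; a thrashing one shows
# hundreds per second, which is when sharding (spreading the working
# set across more RAM) starts to help.
print(working_set_pressure([0, 0, 1, 0]))          # low pressure
print(working_set_pressure([250, 310, 190, 400]))  # sustained faulting
```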
[19:17:02] <Ontological> So, am I not allowed to use the collection name of 'auth'? If not, where might I find a list of all such reserved names?
[19:29:32] <leander> Hi. When exactly does the primary of a replica set become unavailable? For example, if a sector of a hard disk becomes corrupt, will mongodb mark the machine as unavailable?
[19:59:58] <kevino> anyone know when the replSetMaintenance command was added?
[20:00:03] <kevino> is it not available in 2.0.4?
[20:06:12] <dimas> hi
[20:06:20] <dimas> where is oplog file located on replica set?
[20:10:10] <dimas> hi anybody
[20:10:13] <dimas> please help me
[20:12:35] <kevino> not much support here
[20:13:44] <eka> this is not tech support, this is users helping users….
[20:14:02] <eka> and if someone knows he will tell… be patient
[20:14:29] <eka> you can always try the mailing list
[20:14:49] <dimas> i understand
[20:15:03] <dimas> so, nobody knows about the oplog in here?
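For dimas's question: the oplog is not a standalone file you can open. On a replica set member it is a capped collection, local.oplog.rs, stored inside the local database's data files under the server's dbpath. One thing people usually want from it is the replication "window": how far apart the oldest and newest entries are, i.e. how long a secondary can be offline before it must resync. The sketch below computes that from fabricated oplog-shaped entries (real entries use BSON timestamps, not Python datetimes).

```python
from datetime import datetime

def oplog_window_hours(entries):
    """entries: oplog-like dicts with a datetime under 'ts'.
    Returns the span between the oldest and newest entries in hours."""
    ts = sorted(e["ts"] for e in entries)
    return (ts[-1] - ts[0]).total_seconds() / 3600.0

# Fabricated entries: 'op' is the operation type (i=insert, u=update)
# and 'ns' the namespace, mirroring real oplog document fields.
fake_oplog = [
    {"ts": datetime(2013, 7, 18, 8, 0), "op": "i", "ns": "test.books"},
    {"ts": datetime(2013, 7, 18, 20, 0), "op": "u", "ns": "test.books"},
]
print(oplog_window_hours(fake_oplog))  # 12.0
```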
[20:15:11] <eka> I don't
[20:24:43] <kevino> any idea why i can connect to a node with the mongo client but in pymongo MongoReplicaSetClient it doesn't connect at all?
[20:31:36] <eka> kevino: http://api.mongodb.org/python/current/examples/high_availability.html#id1
[20:31:48] <kevino> i've read that
[20:32:03] <kevino> sometimes it works, sometimes it doesn't. makes no sense
[20:32:08] <kevino> i can always connect with the mongo client
[20:32:35] <eka> I think it's the way to go, with the MongoClient API… why do you use MongoReplicaSetClient?
[20:32:42] <eka> let MongoClient take care of that
[20:32:54] <eka> don't know much about it…
[20:33:05] <kevino> uh because it says to? http://api.mongodb.org/python/current/examples/high_availability.html#mongoreplicasetclient
[20:33:12] <kevino> you get the monitoring in the background and secondary reads
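The secondary-read behaviour kevino mentions can be caricatured as a member-selection step: given the members the client has discovered and a read preference, it picks which node serves a query. This is a deliberately tiny model with fabricated member dicts, not pymongo's actual selection logic.

```python
import random

def select_member(members, prefer_secondary=True):
    """Toy secondaryPreferred-like selection: pick a random secondary
    if any exists, otherwise fall back to the primary."""
    secondaries = [m for m in members if not m["primary"]]
    if prefer_secondary and secondaries:
        return random.choice(secondaries)
    return next(m for m in members if m["primary"])

# Fabricated topology: one primary, two secondaries.
members = [
    {"host": "db1:27017", "primary": True},
    {"host": "db2:27017", "primary": False},
    {"host": "db3:27017", "primary": False},
]
print(select_member(members)["host"])  # one of the secondaries
```

The replica-set-aware client keeps this member table fresh with a background monitor, which is why it can route reads to secondaries while a plain single-node connection cannot.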
[20:33:43] <eka> I see… and your nodes are all working fine?
[20:34:08] <eka> I had the problem of many socket connections in the kernels… do you have many connections or not?
[20:34:14] <eka> any error in the log?
[20:36:13] <alexr2> i'm trying to remove all items from an array of objects (product_limits) using an array of ids... any suggestion on how to get this working? -- something like $pullAll: {"product_limits._id": id_array}
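For alexr2's case: $pullAll removes only exact whole-value matches from an array, so it cannot match on a subdocument field like product_limits._id. The usual approach is $pull with an $in condition, i.e. an update document like {"$pull": {"product_limits": {"_id": {"$in": id_array}}}}. A stdlib sketch of what that update does to one document (field name taken from the question, ids fabricated):

```python
def pull_by_ids(doc, field, ids):
    """Mimic {$pull: {field: {_id: {$in: ids}}}} on a single document:
    drop every subdocument whose _id appears in ids."""
    ids = set(ids)
    doc[field] = [sub for sub in doc[field] if sub.get("_id") not in ids]
    return doc

doc = {"product_limits": [{"_id": 1, "max": 5},
                          {"_id": 2, "max": 9},
                          {"_id": 3, "max": 1}]}
pull_by_ids(doc, "product_limits", [1, 3])
print(doc["product_limits"])  # [{'_id': 2, 'max': 9}]
```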
[20:37:18] <kevino> eka: no, everything is working fine otherwise. all of my clients from the web servers connect without a problem
[20:37:26] <kevino> i'm just trying to run utility scripts from my machine
[20:37:45] <eka> kevino: did you try writing to the mailing list? now it seems like lunch time
[20:37:47] <eka> here
[20:37:56] <kevino> yeah, already have one thing up
[20:37:58] <kevino> we'll see
[20:38:25] <eka> kevino: I have my mongo shard crashing on me :P
[20:38:44] <kevino> awesome
[20:38:46] <kevino> mongo sounds great
[20:39:35] <eka> yes… lol… but I can't find something like it… I will have to wait for rethinkdb to get serious about sharding and speed
[21:27:13] <Guest5537> exit
[21:56:23] <codeoclock> Anyone installing via macports have experience with this error?
[21:56:23] <codeoclock> ---> Computing dependencies for mongodb
[21:56:23] <codeoclock> Error: Cannot install mongodb for the arch(s) 'i386 ppc' because
[21:56:23] <codeoclock> Error: its dependency v8 only supports the arch(s) 'i386 x86_64'.
[21:56:23] <codeoclock> Error: Unable to execute port: architecture mismatch
[21:56:35] <codeoclock> I'm using a retina display macbook pro :/
[21:56:47] <codeoclock> definitely not powerpc
[23:09:45] <EmmEight> Anyone going to the thing in Denver on the 31st?