PMXBOT Log file Viewer


#mongodb logs for Thursday the 13th of September, 2012

[00:13:02] <gustonegro> _m: yeah, it only creates objects. I do: db.things.update({_id: ObjectId("5048ccc355d87a0178000003") }, { "$set" : { "content.foo.3.bar" : "5" } } )
[00:13:12] <gustonegro> and I get: { "__v" : 0, "_id" : ObjectId("5048ccc355d87a0178000003"), "content" : { "foo" : { "3" : { "bar" : "5" } } } }
[00:13:51] <gustonegro> is there no way in dot notation to specify an array? I tried "foo[4].bar", but it just creates a field called "foo[4]"
[00:20:11] <_m> gustonegro: Have you tried something like db.things.update({_id: ObjectId('123asd'), 'content' : { 'foo' : { '$' : {'bar' : 5 } } } })
[00:20:23] <_m> Relevant: http://bit.ly/RJYFLs
[00:20:36] <_m> I'm not 100% certain that addresses your issue
[00:24:03] <gustonegro> _m: I get: assert failed : need an object
[00:25:51] <gustonegro> _m: when I fix up the objects with: db.things.update({_id: ObjectId("5048ccc355d87a0178000003")}, {'content' : { 'foo' : { '$' : {'bar' : 5 } } } })
[00:26:01] <gustonegro> it says: uncaught exception: field names cannot start with $ [$]
[00:27:03] <gustonegro> if I do: db.things.update({_id: ObjectId("5048ccc355d87a0178000004")}, {'content' : { 'foo' : { 4 : {'bar' : 5 } } } })
[00:27:10] <gustonegro> it also creates an object, not an array
[00:56:21] <_m> gustonegro: Have a solution for you (I think)
[00:57:50] <_m> It's a bit verbose, have a gist: https://gist.github.com/3711089
[00:58:06] <_m> If I've missed the mark on what you're trying to do, please let me know.
[01:09:59] <gustonegro> _m: yeah, that works, but is, like you say, verbose... in which case I wouldn't need dot notation
[01:10:59] <gustonegro> I would _assume_ that dot notation using an update with { $set : { 'content.foo.0' : {bar: 8} } } would create an array at "foo" and not an object (if the object or array doesn't already exist)
[01:11:38] <gustonegro> but the dot notation doesn't really have the syntax to declare that.
[01:12:02] <gustonegro> even better, I think would be if dot notation had a concept of array ...like: content.foo[0]
[01:21:00] <_m> Yeah… good luck in the hunt. Sorry I wasn't able to shed more light.
[01:25:00] <gustonegro> _m: no worries...thanks for the help! I'm guessing it's just not possible what I want
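The behavior gustonegro describes can be sketched outside of mongo: a dotted $set path creates plain objects for every missing segment, numeric-looking or not. A minimal Python model of that rule (the helper name and sample document are invented for illustration):

```python
def set_dotted(doc, path, value):
    """Mimic MongoDB's $set behavior for a dotted path on a plain dict.

    Missing intermediate segments are created as objects (dicts), even when
    the segment looks like an array index -- which is why
    {"$set": {"content.foo.3.bar": "5"}} yields nested objects, not an array.
    """
    parts = path.split(".")
    node = doc
    for part in parts[:-1]:
        # setdefault creates a dict for any missing segment, "3" included
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return doc

doc = {"_id": "5048ccc355d87a0178000003"}
set_dotted(doc, "content.foo.3.bar", "5")
# "foo" ends up as a dict keyed by the string "3", not a 4-element array
```

This mirrors the result gustonegro pasted above: the server has no way to tell from the path alone that "foo" was meant to be an array.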
[01:31:02] <tomlikestorock> I'm trying to add a new secondary to my mongo replica set, and I ran rs.initiate and now I have two primaries. How do I set the new node as a secondary?
[02:09:52] <ejcweb> Is it possible to search for documents where one field is greater than another, for example?
[02:14:15] <crudson> ejcweb: you can do this with http://docs.mongodb.org/manual/reference/aggregation/#aggregation-pipeline-operator-reference
[02:23:12] <ejcweb> Thanks crudson. Are you able to point me towards a specific example for comparing fields of the same document?
[02:23:47] <crudson> ejcweb: I can write one, one sec
[02:26:05] <ejcweb> Thanks!
[02:29:08] <tomlikestorock> what php driver version do I need to connect to mongo2.2? I keep getting "could not determine master" even though all the mongos say there's a master (and it's reported as the same on all of them)
[02:42:04] <crudson> ejcweb: you have the option using $where, like this: http://pastie.org/4711504
[02:42:48] <ejcweb> Ok. That won't run in MongoLab, right?
[02:44:37] <crudson> ejcweb: why not?
[02:45:06] <ejcweb> "MongoLab does not support JavaScript via the UI"
[02:45:26] <ejcweb> (Sorry, I should have been specific to the UI)
[02:45:56] <crudson> ejcweb: oh I don't know anything about it. A quick search showed a mongo hosting company. They don't give you a shell to use?
[02:54:42] <crudson> ejcweb: oh just REST
[02:55:10] <crudson> oh it says you can connect with one of the drivers
[02:55:13] <crudson> you'll be ok
[02:56:22] <crudson> unless you only want to use the rest api in your application
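The pastie link above has since expired; a $where query comparing two fields of the same document looks like db.coll.find({"$where": "this.a > this.b"}). Here is the same predicate applied client-side in Python (the documents and field names a/b are invented for illustration):

```python
# Client-side equivalent of db.coll.find({"$where": "this.a > this.b"}):
# keep each document where field "a" is greater than field "b".
docs = [
    {"_id": 1, "a": 5, "b": 3},
    {"_id": 2, "a": 2, "b": 9},
    {"_id": 3, "a": 7, "b": 7},
]

matches = [d for d in docs if d["a"] > d["b"]]
```

As crudson notes later in the log, server-side $where runs the JavaScript interpreter against every document, so it scans the whole collection.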
[08:02:57] <[AD]Turbo> hol
[08:02:58] <[AD]Turbo> a
[08:05:59] <crudson> hell
[08:05:59] <crudson> o
[08:11:51] <Gargoyle> Can you match on a partial sub doc ?
[08:12:09] <Gargoyle> so if a nested doc had say 3 elements, match on just two of them?
[08:12:25] <Gargoyle> Eg. http://pastie.org/private/sn4odza6jfgapplncm5hfa
[08:15:33] <Gargoyle> could I then query and fetch docs using find({newsletters: {type: B2B, enabled: true}})
[08:15:47] <Gargoyle> Without needing to specify the value of foo ?
[08:16:41] <crudson> Gargoyle: yes, but ensure you use $elemMatch and dot notation for the attributes rather than a subdocument as in your example
[08:17:12] <Gargoyle> ahh. Thanks crudson
[08:23:29] <NodeX> Gargoyle : did you get your google geocode thing sorted?
[08:24:00] <Gargoyle> NodeX: Not had chance to look at it. But will be interested in your dataset though.
[08:24:22] <NodeX> the full UK one inc the postcodes?
[08:24:38] <Gargoyle> yup
[08:26:04] <NodeX> it has 41k placenames, 1.8m full postcodes, all of the part (first part) of the postcodes, I can probably add counties in there for you too
[08:31:46] <crudson> Gargoyle: my mistake, you don't need dot notation. this will work for you: db.garg.find({newsletters:{$elemMatch:{type:'B2B',enabled:true}}})
[08:32:03] <crudson> it's v late here
[08:32:25] <Gargoyle> crudson: he he, just got to that bit on the $elemMatch page! Thanks though.
[08:32:36] <Gargoyle> crudson: Goto bed and get some sleep! :)
[08:32:58] <crudson> Gargoyle: anytime. hehe eventually...
[08:43:43] <NodeX> you can use dot notation or $elemMatch in that situation
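For reference, $elemMatch selects a document when a single array element satisfies all of the given criteria at once. A minimal Python sketch of that semantics against Gargoyle's newsletters shape (the sample values are invented):

```python
def elem_match(array, criteria):
    """True if any one element of `array` satisfies every (key, value)
    pair in `criteria` together -- the condition $elemMatch tests."""
    return any(
        all(elem.get(k) == v for k, v in criteria.items())
        for elem in array
    )

doc = {"newsletters": [
    {"type": "B2B", "enabled": True,  "foo": 42},
    {"type": "B2C", "enabled": False, "foo": 7},
]}

# Matches on two of the three fields, without specifying "foo",
# as in Gargoyle's question.
found = elem_match(doc["newsletters"], {"type": "B2B", "enabled": True})
```

The key point is that both criteria must hold on the *same* element; the plain subdocument form in Gargoyle's pastie would instead require an exact match of the whole element.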
[08:58:38] <arussel> I'm trying to find how to do "select count(*) from foo". I've tried db.foo.size(), db.foo.length() and db.foo.count() without success. Is there a way?
[09:00:21] <NodeX> you want the total results in a collection?
[09:00:27] <arussel> yes
[09:00:28] <NodeX> db.foo.count()
[09:00:33] <NodeX> use your_db
[09:00:34] <NodeX> db.foo.count()
[09:01:14] <arussel> thanks, I've tried it but misspelled it :/
[09:01:27] <NodeX> ;)
[09:01:55] <Gargoyle> db.foo.cunt() !?? ;-)
[09:29:33] <NodeX> LOL
[10:18:35] <driek> anyone using mongodb from aptitude did an upgrade to 2.2 already? does the recommended way of upgrading work well in an aptitude scenario?
[10:20:11] <marcqualie> it worked fine for me, I have a replica set and upgraded secondaries first
[10:23:45] <driek> marcqualie: and did you shut them down, aptitude install, started them again?
[10:25:46] <marcqualie> the apt system automatically shuts them down and restarts them for you
[10:27:01] <driek> in the setup I have to upgrade, daemontools is used instead of sysinit. But I think the mongod's will keep running (the old version) if I only shut down mongos, do the upgrade, and restart mongos
[10:27:15] <driek> would like to hear that worked for someone else though ;)
[10:29:15] <marcqualie> mine worked fine, but I'm not using shards so mongos didn't play any part in the upgrade
[10:30:08] <driek> ok, well thanks for the info so far
[11:22:35] <skieter> Following EC2 Quickstart on mongodb.org: Can anyone tell me why you'd set up RAID10 rather than just RAID0 - EBS is supposed to already have redundancy?
[11:36:47] <balboah> skieter: I haven't looked at the guide but I agree with you
[11:37:31] <ron> skieter: http://serverfault.com/questions/295324/why-mongodb-docs-recommends-to-run-raid-10-on-ebs
[11:47:45] <falu_> i've got one update() call with pymongo that works on my local machine but not on my server. i can insert() a copy of a document (that isn't being updated on the server) on my local machine and there the same update() call will work. both run the same mongodb version. i have no idea how to find the cause.
[11:48:42] <skieter> Thanks ron.
[11:56:00] <skieter> balboah: yes, this article seems to refer to some benchmarks that actually show RAID10 to be much slower than RAID0, even though it does not make sense http://www.nevdull.com/2008/08/24/why-raid-10-doesnt-help-on-ebs/
[12:11:06] <Gargoyle> Mongo still can't count!
[12:11:09] <Gargoyle> 690402 of 453696
[12:15:37] <balboah> skieter: also perhaps the "ebs optimized" option on amazon came after those articles
[12:15:49] <balboah> haven't looked into that myself
[12:15:56] <balboah> not running mongo on ec2 yet ;)
[12:17:09] <skieter> yes, that article is a tad old
[12:18:22] <falu_> found the cause for my problem above. i had changed indexes in my pymongo code and the server database didn't care and is still using the old index settings.
[12:28:02] <moses> Hi all: Need help on extended JSON
[12:28:08] <moses> can I pass $or / $and
[12:28:11] <moses> in a query
[12:29:28] <algernon> moses: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24or
[12:57:30] <jmar777> matubaum: you shouldn't need to use the ordinal value - that's mostly used internally to ensure that all generated Timestamps are unique (within the same mongod process, anyway)
[13:03:13] <matubaum> jmar777: thanks , but how can I create a timestamp to include it inside a query to ask for documents with "created" less than this timestamp?
[13:03:14] <matubaum> I was doing it like this, but it doesn't work: db.cacheproduct.find( {"created": { "$lt": new Timestamp("1347538548001", 1) }} )
[13:03:14] <matubaum> ordinal attribute is mandatory
[13:05:24] <jmar777> matubaum: hm... i'm honestly not that familiar with it. the docs specify a little behavior around them, but it's considered an internal data type: http://www.mongodb.org/display/DOCS/Timestamp+data+type
[13:06:11] <jmar777> matubaum: i always just store timestamps numerically
[13:07:27] <matubaum> jmar777: yeap, but that doc doesn't say too much, it doesn't say anything about the Timestamp API
[13:07:50] <matubaum> I'm using Doctrine Mongo ODM to create these documents
[13:08:15] <matubaum> so it seems to use a Timestamp object instead of setting it numerically
[13:08:52] <jmar777> matubaum: seems like an issue with the ODM then - maybe you should define your type as a number rather than a Timestamp
[13:09:25] <jmar777> matubaum: i'm surprised it wouldn't use a Date object by default - that's the supported date/time API in mongo
[13:10:00] <jmar777> matubaum: i think it's risky for you to allow it to use a Timestamp, just because you'll be dealing with a lot of unspecified behavior
[13:11:12] <matubaum> jmar777: I'm going to use a number as you say
[13:11:23] <matubaum> jmar777: many thanks!
[13:11:28] <jmar777> matubaum: np
[13:29:42] <jcromartie> isn't preallocating files by default a massive premature optimization?
[13:30:04] <jcromartie> and a terrible assumption when it comes to things like SSDs, VMs, etc?
[13:30:11] <jcromartie> considering where the server world is going...
[13:30:35] <Gargoyle> jcromartie: It doesn't pre-allocate all of them!
[13:30:48] <jcromartie> not all of them, just as many chunks as it needs
[13:31:13] <jcromartie> i.e. floor(data / chunkSize) + 1 chunks
[13:31:38] <Gargoyle> jcromartie: What's wrong with that?
[13:32:11] <jcromartie> because it's an assumption about the underlying filesystem
[13:32:53] <jcromartie> like I said: it doesn't make any sense for virtual machines, SSDs, and possibly many other cases
[13:32:58] <jcromartie> all of my Mongo instances run on VMs
[13:33:10] <Gargoyle> I thought it was because it uses memory mapped files, and a certain amount of "working space" is required
[13:33:11] <jcromartie> and I have to turn of preallocation because I don't use that much data
[13:33:22] <jcromartie> it can run without preallocation just fine
[13:36:43] <jcromartie> I know preallocation is valuable for big write-heavy instances on magnetic disks
[13:36:59] <jcromartie> maybe that's how most people use Mongo :)
[13:37:31] <Gargoyle> and read!
[13:37:43] <Gargoyle> magnetic discs in general!
[13:38:23] <Gargoyle> And unless you are running some custom hardware, then your VM storage is probably on a SAN/iSCSI somewhere ?
[13:38:35] <jcromartie> hm, I wonder if new versions would preallocate
[13:44:08] <statusfailed> I started a MapReduce operation, and stupidly tried to kill it with ctrl+c
[13:44:14] <statusfailed> which seems to crash the client
[13:44:20] <statusfailed> now it's not showing up in db.currentOp()
[13:44:22] <statusfailed> how do I kill it?
[13:45:04] <statusfailed> (I'm pretty sure it's running, the box is at 120% CPU)
[13:46:33] <statusfailed> client & server are versions 2.0.5
[14:08:18] <jiffe98> hmm, mongod looks to have allocated 5.4TB of ram
[14:08:33] <cedrichurst> that seems like a lot of RAM
[14:53:07] <IAD1> jiffe98: mongo uses mmap to map files into RAM. If you have a lot of RAM, all your data will be in memory. http://www.mongodb.org/display/DOCS/Checking+Server+Memory+Usage#CheckingServerMemoryUsage-MemoryMappedFiles
[14:57:52] <falu_> what happens when my pymongo app which creates/ensures indexes in a collection is running and i drop the index of a collection from the shell and re-create it? will requests while the index is gone still be served?
[14:59:22] <balboah> it will block
[14:59:35] <balboah> I think
[14:59:41] <balboah> at least it won't be fast :)
[15:01:27] <falu_> balboah: hm, slowness would not be a problem. blocking would. i tried it on my dev machine and didn't notice any differences while the index was gone, but i'm not sure if i'm overlooking the problem here.
[15:02:51] <kali> falu_: creating an index without the background flag will block
[15:03:58] <falu_> kali: okay, my question was rather what happens while the index is not available? will my app stumble because it thinks there is an index?
[15:04:12] <kali> nope, it will scan
[15:04:14] <kali> and be slow
[15:04:18] <kali> possibly get stuck
[15:04:48] <kali> there is an option you can start mongo with that will strictly forbid a query from performing a table scan
[15:04:58] <falu_> kali: okay, that's fine and will work in my case. thanks for answering!
[15:05:01] <kali> but i gather it's not what you're looking for.
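The option kali refers to is mongod's notablescan setting, which makes any query that would require a full collection scan fail instead of silently scanning:

```shell
# Refuse queries that cannot use an index (kali's "strictly forbid a table scan"):
mongod --notablescan

# The same switch is available at runtime as a server parameter, from the shell:
#   db.adminCommand({ setParameter: 1, notablescan: 1 })
```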
[15:05:59] <falu_> well, actually i have another question... when i change an option for an index (like enabling sparse) with pymongo, the index won't be changed accordingly without manual intervention. is that correct?
[15:06:43] <kali> last time i checked, with the ruby driver and the javascript shell, you had to drop the index and create it again
[15:07:23] <falu_> kali: okay, that could be intended by the devs. thanks.
[15:07:32] <kali> it makes sense, actually. what you are trying to do will deeply alter the nature of the index, so...
[15:10:18] <falu_> kali: exactly. that's why i think it could be well reasoned that it must be done manually.
[15:20:51] <jiffe98> IAD1: gotcha
[15:55:40] <gustoneg1o> I've been having serious difficulty trying to get mongodb to update values in a document. I want it to create the value if it doesn't exist, including the entire path up to it.
[15:56:05] <gustoneg1o> I also want it to create array where appropriate instead of an object.
[15:56:25] <gustoneg1o> right now, mongodb only creates objects with dot notation inserts
[15:56:51] <gustoneg1o> is there ANY way to do this? this seems like a very big limitation of mongo
[16:02:06] <jjanovich> hey guys
[16:05:46] <jjanovich> have a question....hoping someone can help
[16:06:00] <jjanovich> does mongo support emoji in a string?
[16:21:34] <jjanovich> anyone here?
[16:21:52] <jjanovich> bueller
[16:38:36] <IAD1> jjanovich: khm, emoji is included in UTF-8?
[17:01:29] <telmich> good day
[17:01:37] <telmich> how can I remove the primary from a replicaset?
[17:01:47] <jY> step it down
[17:01:51] <jY> then remove it from the config
[17:02:28] <telmich> jY: ok - can you point me to a document showing me how to make it a secondary?
[17:03:15] <jY> rs.stepDown()
[17:03:19] <jY> that will make it a secondary
[17:03:48] <telmich> thank you, jY
[17:04:03] <jY> http://www.mongodb.org/display/DOCS/Reconfiguring+a+replica+set+when+members+are+down
[17:04:10] <jY> same type deal though
[17:12:35] <wwilkins> Is there any way to debug js inside the mongo console?
[17:43:52] <mandark> Hello world, one question : Is it possible in Python to insert a (key, value) in a SON at a specified position ?
[17:45:25] <Almindor> hey
[17:45:42] <_m> wwilkins: This may be helpful: http://bit.ly/OsLzSk
[17:45:44] <Almindor> how would you go about finding all documents which have 2 array fields of which length differs
[17:46:02] <Almindor> something like {a.length {$ne: b.length}}
[17:46:38] <kali> Almindor: you can't
[17:47:38] <kali> Almindor: or you can with $where, but it's using the javascript interpreter, so not for a production request
[17:48:08] <Almindor> kali: this is a datacheck one time
[17:48:27] <kali> Almindor: it will scan the collection, and be terribly slow
[17:48:45] <kali> Almindor: and if other bits of your app are using the javascript engine, they will get stuck
[17:48:52] <Almindor> no app
[17:48:58] <Almindor> just in shell, as I said it's a consistency check
[17:49:10] <kali> try $where then
[17:49:14] <Almindor> thanks
[17:52:37] <Almindor> worked fine, found some inconsistencies this way :)
[17:55:15] <mandark> So it's not possible, in Python, to insert a (key, value) pair in a SON object at a specified position ?
[17:56:36] <kali> Almindor: good
[17:57:55] <_m> mandark: Not sure how it would work in Python, but here are some examples in the console
[17:57:56] <_m> https://gist.github.com/3711089
[17:59:55] <mandark> _m: I'm ok with inserting, but as MongoDB's bson is order sensitive, I need a way to create "key ordered documents"; it's provided by bson.son.SON in pymongo.
[18:00:13] <mandark> _m: constructing a SON is easy with SON([(key1, value1), (key2, value2), and so on])
[18:00:41] <mandark> _m: but what about editing (adding a key value pair) in an existing SON object at a "not random" position ?
[18:07:58] <mandark> _m: I got an idea, we should build a son_manipulator that sorts document keys :-X no more problems about field ordering ...
[18:14:43] <_m> mandark: A reasonable solution. Best of luck!
[18:15:13] <mandark> _m: Yes but as the SON object is not documented I don't know any way to implement this kind of son_manipulator ... :X
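On mandark's question: bson.son.SON is an ordered mapping with no positional-insert API, so the usual workaround is to rebuild it with the new pair spliced in. A sketch using collections.OrderedDict as a stand-in for SON (the helper name is invented):

```python
from collections import OrderedDict  # standing in for bson.son.SON


def insert_at(mapping, index, key, value):
    """Return a new ordered mapping with (key, value) inserted at `index`.

    SON (like OrderedDict) preserves insertion order but offers no
    positional insert, so the practical approach is to rebuild it.
    """
    items = list(mapping.items())
    items.insert(index, (key, value))
    return OrderedDict(items)


doc = OrderedDict([("a", 1), ("c", 3)])
doc = insert_at(doc, 1, "b", 2)
# keys are now ordered: a, b, c
```

The same rebuild works with SON itself, since it accepts a list of (key, value) pairs in its constructor, as mandark notes above.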
[19:41:26] <fitzagard> Any specific reason why the mongo-php-driver on 1.2.12 would throw a deprecated notice for MongoCursor::group() - Implicitly passing condition as $options will be removed in the future?
[19:43:31] <rnickb> how would you write this query using the c++ api? db.records.find({_id : { $gt : {userid: 1, timestamp : 0}, $lte : {userid: 1, timestamp: 100}}})
[20:15:18] <gustonegro> Hi, is there a reason why dot notation doesn't consider numbers to be indexes into arrays? (when upserting or inserting)
[20:45:37] <aboudreault> When we have many documents to get (say 100-1000), and have to do complex operations (merging, stats), is it better to create *relatively big* mapreduce functions, or would it be better to fetch my documents using a script on the server, then process things?
[21:08:53] <_m> gustonegro: db.things.update({ _id : ObjectId("50512dd15e5110032d1fa00d") }, { $set : { 'content.foo' : [{}, {bar: 10}] } }, true)
[21:09:06] <_m> I can't manage to place the index as a number
[21:09:19] <_m> However manual shifts work properly
[21:13:09] <_m> Also see this ticket: https://jira.mongodb.org/browse/SERVER-2363
[21:25:17] <gustonegro> _m: I tried that too, but then it overwrites everything at "content.foo" ...which is not what I want if I have values in content.foo.0 and only want to update content.foo.1.bar
[21:43:32] <gustonegro> I'm guessing there was some logic as to why upserting/inserting with a dot notation content.foo.1.bar creates an Object at foo instead of an Array, no?
[21:44:22] <gustonegro> even better, I think, would be a more explicit dot notation with content.foo[1].bar
[21:57:28] <_m> Right. See the linked ticket. It's an open issue
[22:03:18] <gustonegro> _m: yeah, thanks. good to know.
[22:03:59] <gustonegro> also, is the current limitation of dot notation a feature or a bug?
[22:05:06] <_m> Good question. Would have to get one of the dev team to weigh in on that one.
[22:48:56] <aboudreault> When we have many documents to get (say 100-1000), and have to do complex operations (merging, stats), is it better to create *relatively big* mapreduce functions, or would it be better to fetch my documents using a script on the server, then process things?
[22:58:42] <jiffe98> will it work to have 4 sets of machines, 2 of which have more disk space than the other two and also take advantage of that extra disk space?
[23:07:09] <_m> jiffe98: Mongo itself shouldn't care. Really depends on your architecture. In practice, I've found it easier to have consistent master/slave machine sizes
[23:08:21] <jiffe98> well I have 2 machines in a mirror right now, and I'm looking to add 2 more machines as a second shard
[23:08:23] <_m> aboudreault: Depends on your resources and usage. I prefer to keep stats processing outside of Mongo, in StatsD, Hadoop, etc., so as to limit performance implications.
[23:08:37] <jiffe98> I'm going to have more disk in these new machines though
[23:09:02] <_m> That said, it shouldn't matter *too much* if the machine is only there to process stats.
[23:09:19] <_m> Are they part of your replica set?
[23:09:41] <aboudreault> _m, in fact, you might be able to recommend me some way to work. If you have 2-5 minutes?
[23:09:58] <jiffe98> so as far as mongodb is concerned both machines are equal so usage-wise it will max out when both sides reach the disk space on the smaller set
[23:10:11] <_m> I'll try and chime in. Go ahead and detail your general use-case.
[23:10:25] <_m> jiffe98: Exacto.
[23:11:43] <aboudreault> I want to build something where each user has a kind of bucket and they can upload a document, which has no relation with any other. Each document represents something with a date. That's why I'm interested in mongodb. The problem, and why I'm hesitating to use postgresql, is because I would like to build statistics with these data. So postgreSQL's queryability is awesome for that.
[23:12:30] <aboudreault> _m, but I thought that since there is no *performance guarantee* at all and that it is not a hard criterion .... I could only use complex mapreduce functions
[23:13:08] <aboudreault> most of the time, it should be easy and simple. but sometimes (ie, with multiple documents... or even some day with multiple documents of multiple users) it might become harder.
[23:13:42] <aboudreault> you are talking about StatsD and hadoop. Would those be something I should take a look at? Are they a replacement for what I want to push into mongodb, or a complement?
[23:14:50] <wzlwzl> i have 2 mongod +1 arb in a replica set.. i had data corruption… now both mongod are in STARTUP2 and I need to get out of this state to restore… how can I make one of them primary?
[23:25:47] <aboudreault> _m, still there?
[23:27:18] <_m> aboudreault: Yeah. Super busy though. Give me a few.
[23:27:40] <aboudreault> hey, no problem. I'm here for a while. no rush
[23:28:57] <_m> wzlwzl: Promoting one to master via the standard methods should work.
[23:29:04] <_m> *should*
[23:29:59] <_m> aboudreault: Depending on your language, an ORM would give you that functionality (joins, essentially)
[23:30:45] <aboudreault> but there is no orm for mongodb
[23:31:00] <_m> aboudreault: Hadoop is a great tool for deterministic stat processing. It's a map-reduce platform, stated basically.
[23:31:10] <_m> aboudreault: Which language?
[23:31:19] <aboudreault> I will use python, mostly sure.
[23:31:27] <aboudreault> and node.js maybe
[23:33:00] <_m> aboudreault: http://stackoverflow.com/questions/2781682/mongodb-orm-for-python
[23:33:28] <aboudreault> and this does map-reduce on the backend for sure?
[23:33:54] <_m> There's a discussion of some Python ORMs. I'm not familiar with python and mongo, so I haven't a recommendation
[23:34:29] <_m> aboudreault: http://hadoop.apache.org/#What+Is+Apache+Hadoop%3F for more information
[23:34:47] <aboudreault> _m, thanks
[23:34:50] <aboudreault> will read
[23:34:59] <_m> As for your document structure, that should be fairly simple
[23:35:41] <_m> A "users" collection with array field named "docs" with each document's information should work.
[23:36:11] <_m> We do something similar, with the actual files stored in S3 and references to those within an array in our user documents.
[23:36:14] <wzlwzl> _m… promoting via stepDown doesn't work since neither is a primary
[23:36:52] <aboudreault> _m, ok
[23:37:21] <_m> wzlwzl: Can you share your rs.conf()
[23:39:09] <wzlwzl> i tried to be smart and remove #2 and the arb, but now i can't re-add them
[23:39:11] <wzlwzl> http://pastebin.com/7hhUQMKE
[23:40:52] <_m> wzlwzl: Try using rs.reconfig(conf)
[23:41:07] <wzlwzl> "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
[23:41:24] <_m> http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary
[23:41:26] <wzlwzl> using force on that worked
[23:41:29] <wzlwzl> with reconfig
[23:41:36] <_m> Sweet
[23:41:36] <wzlwzl> the new config is there
[23:41:42] <_m> Awesome. Good luck!
[23:41:44] <wzlwzl> but neither is primary
[23:41:58] <wzlwzl> a, b, and arb are all STARTUP2
[23:43:49] <wzlwzl> i tried to make a have priority: 2
[23:43:52] <wzlwzl> and during reconfig:
[23:43:53] <wzlwzl> "assertion" : "initiation and reconfiguration of a replica set must be sent to a node that can become primary"
[23:44:37] <wzlwzl> not sure how to get the a, b, and arbiter out of STARTUP2
[23:44:45] <wzlwzl> they're like.. each waiting for the other
[23:44:56] <wzlwzl> there was nothing in the datadir on any of them
[23:45:00] <wzlwzl> when i started them up
[23:46:34] <_m> Have you removed all local.* stuff from the arbiter?
[23:46:48] <_m> Reference: db.adminCommand({replSetStepDown:1000000, force:1})
[23:46:50] <_m> Errr
[23:46:57] <_m> https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/NZwY7e54Yyw
[23:47:51] <_m> wzlwzl: I've never run into this before. Will send our sysops guy a message and see if he can shed some light.
[23:48:25] <_m> Also, you could attempt to shut down the "slave" and see if election happens, then bring the slave back up
[23:49:38] <R-66Y> is there a native mongoDB shell way to figure out how "long" an associative array is?
[23:49:46] <R-66Y> that is to say how many fields it has
[23:49:50] <_m> $size
[23:50:04] <_m> See the query and selector documentation
[23:50:31] <tomlikestorock> anyone know what this error means? [TTLMonitor] problem detected during query over admin.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
[23:50:45] <tomlikestorock> or this? [TTLMonitor] ERROR: error processing ttl for db: admin 10065 invalid parameter: expected an object ()
[23:51:39] <R-66Y> _m: i think that only works for non-associative arrays
[23:52:07] <tomlikestorock> I'm trying to add a new box to my replset, and I keep also seeing these in the log: [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:admin.system.indexes query:{ expireAfterSeconds: { $exists: true } }
[23:54:30] <wzlwzl> _m: tried that.. none of the boxes in the replica set are leaving STARTUP2