#mongodb logs for Monday the 19th of May, 2014

[00:00:04] <phrearch> hello
[00:00:18] <phrearch> im trying to figure out the best way to update a counter field on a collection
[00:00:34] <phrearch> like i have an order field, and after i remove 1 item, i need to rebuild the order index again
[00:00:51] <phrearch> is there a way to do this in one query?
[00:01:08] <phrearch> like get sorted results, and restart counting from 0 to ...
[00:01:15] <phrearch> $setting one field
[00:04:10] <phrearch> http://paste.kde.org/pbxdgrru6
[00:04:40] <phrearch> something like this, but i have to figure out how to wait async on all updates, or do all updates in one query(which i would prefer if possible)
[00:32:06] <phrearch> hm this kinda works
[00:32:07] <phrearch> http://paste.kde.org/pxdkedrbx
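A minimal shell sketch of the single-pass renumbering phrearch is after, assuming a collection named items with an order field (both names are illustrative); the bulk API needs a 2.6 shell, otherwise the same loop with individual update() calls works:
    var bulk = db.items.initializeOrderedBulkOp();
    var i = 0;
    db.items.find().sort({ order: 1 }).forEach(function (doc) {
        bulk.find({ _id: doc._id }).updateOne({ $set: { order: i } });  // reassign 0..n-1
        i++;
    });
    bulk.execute();   // one round trip for all of the $set operations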
[00:47:25] <__mark__> how do I issue a "slaveOk" command via runCommand or $cmd.findOne({})? I keep getting "no such cmd: slaveOk". The shell command is "rs.slaveOk()" but I'm a database-driver author, and I need to figure out how to do this via the wire spec.
[00:49:06] <__mark__> for example, db.$cmd.findOne({dropDatabase: 1}) is synonymous with db.dropDatabase()
[00:49:23] <__mark__> in the shell, but how do I do slaveOk?
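For the wire-level question: slaveOk is not a server command at all but a flag bit on the legacy OP_QUERY message (bit 2, value 4), so a driver sets it per query rather than sending anything via $cmd. A rough illustration of the constant a driver would use:
    var QUERY_FLAG_SLAVE_OK = 1 << 2;   // bit 2 of the OP_QUERY flags field = 4
    var flags = 0;
    flags |= QUERY_FLAG_SLAVE_OK;       // set on every query allowed to run on a secondary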
[03:53:01] <chovy> does anyone know of a way to run data migrations on a db? Like add some default attributes to existing documents?
[04:03:31] <ranman> anyone here an AWS expert, with EBS? is it better to ask for a few tiny volumes or one larger one?
[04:03:42] <ranman> chovy: I typically use map reduce for that stuff
[04:03:50] <ranman> chovy: you can also do massive $updates
[04:03:56] <ranman> *updates
[04:04:32] <ranman> update({}, {$set: {'default': 'default'}}, {'$multi': true})
[04:04:34] <ranman> something like that
[04:05:43] <chovy> ranman: is there any tools/npm modules for this? All I could find was db versioning stuff. I'm also using Mongoose too.
[04:06:23] <ranman> chovy: you mean a stored migration? I typically write that in a script and throw it in a folder like /migrations, you could do all of this with mongoose
[04:07:48] <chovy> i don't know what a stored migration is
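A minimal sketch of the mass update ranman describes, in shell syntax (note the option is multi, not $multi; collection and field names are illustrative):
    db.users.update(
        { defaultRole: { $exists: false } },   // only documents still missing the attribute
        { $set: { defaultRole: "member" } },   // add the default value
        { multi: true }                        // apply to every matching document
    );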
[06:11:31] <netQt> Hi all. I'm using nodejs to insert data into mongo (no external lib like mongoose). All my integers are being saved as double, how do I set type to int? Now I'm using mongo's Long, but it stores my numbers as NumberLong object.
[06:12:19] <chovy> Integer?
[06:12:39] <netQt> Yes, I want type to be int, not double
[06:13:03] <chovy> netQt: yeah, is there a data type Int(eger)?
[06:13:24] <netQt> now it stores like this: NumberLong(319901653)
[06:14:10] <chovy> Integer
[06:14:19] <chovy> http://www.tutorialspoint.com/mongodb/mongodb_datatype.htm
[06:27:18] <netQt> Yea, I see, I need to find way to specify type from node
[06:45:16] <ranman> netQt: http://bsonspec.org/spec.html
[06:45:42] <spychalski> so i have an ObjectId type in my schema, is there a quick way of accessing that type's attributes?
[06:45:50] <ranman> netQt: http://docs.mongodb.org/manual/core/shell-types/
[06:46:11] <ranman> netQt: "By default, the mongo shell treats all numbers as floating-point values. The mongo shell provides the NumberLong() wrapper to handle 64-bit integers."
[06:46:36] <ranman> NumberLong is probably what you want
[06:46:46] <ranman> unless you really care about space and are working with small integers
[06:46:53] <ranman> in which case I guess you could use NumberInt
[06:48:32] <ranman> spychalski: oid = db.test.findOne()._id; oid.getTimeStamp() ?
[06:49:09] <ranman> spychalski: I'm not sure I understand what you're asking
[06:49:54] <spychalski> i'll try again. my schema has a category with type ObjectId. category is another schema which has, say, a name. i want to be able to do 'mainschema.category.name'
[06:50:31] <spychalski> atm i need to do 2 queries.
[06:50:42] <netQt> I can use NumberLong from mongo shell, and it works perfectly, but I can't find that in the node's mongo lib. It has Long, but it doesn't store correctly.
[06:51:31] <spychalski> netQt: i THINK just long should do. i'm not sure though
[06:51:53] <netQt> mongo.Long(new Date().getTime()) this breaks the timestamp, it doesn't store correctly
[06:54:09] <spychalski> long should be able to store to 2147483647
[06:56:01] <netQt> that's why it cannot store my timestamp, I need to somehow specify NumberLong that works in mongos shell
[06:56:36] <spychalski> are you storing microseconds or something? sorry i wasn't here earlier
[06:56:52] <ranman> netQt: have you considered storing a timestamp instead?
[06:57:08] <netQt> here is my data 1400480303748
[06:57:20] <ranman> netQt: considering the bson spec supports timestamps...
[06:57:31] <ranman> you could just store that new Date()
[06:57:33] <ranman> and be done
[06:57:36] <spychalski> oh, right
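A hedged Node.js sketch of the two fixes discussed here, assuming the legacy mongodb driver: the Long constructor takes (low, high) 32-bit halves, which is the likely reason a raw millisecond value came out mangled, so build it with Long.fromNumber(); or simply store a Date and let the driver write a native BSON date.
    var mongodb = require('mongodb');

    // Long(low, high) expects two 32-bit halves, so convert from a number instead:
    var asLong = mongodb.Long.fromNumber(new Date().getTime());

    // ...or just store a Date and let the driver serialize it as a BSON date:
    var doc = { createdAt: new Date(), createdAtMs: asLong };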
[06:58:02] <ranman> spychalski: that's because mongodb is not a relational database, if you want to fetch two documents, you do two queries, there is a concept of a DBRef but I would avoid that
[06:58:30] <ranman> spychalski: have you considered embedding the information you do need in that document along with the objectid reference?
[06:58:39] <spychalski> ranman: should I instead set my type to the category schema itself then? i had issues with uniqueness doing that earlier
[06:59:33] <ranman> spychalski: you're being a little to data specific for me to give a recommendation sorry :(, could you throw it in a gist/pastebin and maybe explain what your end goal is as well? just don't want to give you bad advice
[06:59:40] <ranman> s/to/too
[07:00:02] <spychalski> yeah, sure
[07:00:04] <ranman> an example document of each type and then a query would work
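A rough sketch of the embedding ranman suggests, with made-up documents: keep the ObjectId reference but also copy the fields you usually read, so a single query is enough.
    db.items.insert({
        name: "HQ whiteboard",
        category: {
            _id: ObjectId("5379e6e0b8a1f3c4d5e6f789"),  // still references the categories collection
            name: "office"                               // denormalized copy for reads
        }
    });
    db.items.find({ "category.name": "office" });        // no second lookup needed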
[07:07:15] <ichetandhembre> hi
[07:07:34] <ichetandhembre> i want help for adding host in MMS
[07:07:54] <ichetandhembre> actually my mongodb server is hosted on ec2 instance
[07:08:18] <ichetandhembre> how can i add this instance as host on MMS ?
[07:08:28] <ichetandhembre> any one want to help ?
[07:09:27] <ranman> ichetandhembre: maybe read the docs? http://mms.mongodb.com/help/tutorial/install-monitoring-agent/
[07:09:32] <ranman> do you have a specific problem?
[07:09:38] <joannac> ranman: he opened a ticket
[07:11:01] <ichetandhembre> i installed mongo monitoring agent
[07:11:06] <ichetandhembre> it is working fine
[07:11:58] <ichetandhembre> but adding mongodb server as host is quite confusing for me
[07:12:30] <ichetandhembre> mongo monitoring agent has to connect to my mongodb server right
[07:12:56] <joannac> right
[07:13:09] <joannac> reactivate your hosts
[07:13:58] <ichetandhembre> so i can allow security group of agent to access my mongodb port on server
[07:14:07] <ichetandhembre> is that right way ?
[07:15:30] <ichetandhembre> my monitoring agent is hosted on another ec2 instance than my server
[07:19:39] <spychalski> ranman: apparently my code was wrong and thats why it wasn't working. sorry =(
[07:20:00] <ranman> spychalski: it's ok, that's like... almost always the case when I write code :D
[07:20:33] <ranman> spychalski: I purchased a rubber duck that I explain all of my problems to, sometimes that helps, sometimes your coworkers just mutter about the crazy guy talking to the rubber duck
[07:20:35] <spychalski> I was using mongoose, creating a schema (category) and later referencing it as a type in a 2nd model. it should work
[07:20:41] <spychalski> HAHA
[07:20:56] <spychalski> really makes you wonder who's the crazy guy
[07:21:01] <joannac> ranman: should've stolen one of shaun's
[07:21:15] <ranman> joannac: I think I gave him that one...
[07:38:33] <ichetandhembre> in MMS which graph i should keep eye on to make sure every thing is working fine .. what these graph means ?
[07:42:23] <ichetandhembre> joannac: in MMS which graph i should keep eye on to make sure every thing is working fine .. what these graph means ? is there any doc about it ?
[07:43:57] <ranman> ichetandhembre: I typically watch for page faults and lock %
[07:45:16] <ranman> in clusters I typically watch memory usage, opcounters, lock %, queues, background flush average, and replication stats
[07:45:41] <ranman> repl lag is a good thing to watch in clusters
[07:47:00] <ichetandhembre> ranman: thanks i would like to know more about these graph .. is there any doc explaining which and why graph is important ? srry for such naive question i am newbie in mongodb management stuff
[07:48:12] <ranman> ichetandhembre: this video might help http://www.mongodb.com/presentations/performance-tuning-and-monitoring-using-mms
[07:48:40] <ranman> ichetandhembre: mongolab has a good article: http://blog.mongolab.com/2013/12/tuning-mongodb-performance-with-mms/
[07:48:54] <ichetandhembre> ranman: thanks
[08:04:19] <spychalski> there you go, i'm getting the E11000 error again =(
[08:04:49] <spychalski> I don't understand unique in mongodb
[08:05:27] <spychalski> if i set unique in a collection, every other collection that references that first also become unique
[08:05:38] <spychalski> is there a way to make this not happen?
[08:06:30] <Derick> what you say is not true. There are no references in MongoDB
[08:06:44] <Derick> uniqueness in one collection does not influence another collection
[08:07:32] <spychalski> ok, help me out. i made a schema called cat { name: { type: String, unique: true } }
[08:08:02] <Derick> you can't make a schema
[08:08:15] <Derick> there are no such things. In a collection, each document can look different.
[08:08:59] <spychalski> perhaps i should take this to mongoose but nobody there answers
[08:09:14] <spychalski> because i think i can make this work in mongo but not using mongoose somehow
[08:09:20] <Derick> with mongoose you can make a schema, with mongodb you can't
[08:09:33] <Derick> but I know little about mongoose (ie: nothing)
[08:10:46] <spychalski> i could try to get info with the mongo console
[08:16:20] <spychalski> https://gist.github.com/spychalski/f23430cb011e6560b9b0
[08:17:07] <Derick> db.items.find( { 'HQ' } ) will return something then already
[08:17:10] <Derick> try it
[08:17:25] <spychalski> yes. but i want multiple items to have the same category
[08:17:40] <Derick> oh
[08:17:46] <Derick> what is "category" there?
[08:17:57] <spychalski> just a simple name field
[08:18:09] <Derick> but that's not what:
[08:18:12] <Derick> db.categories.ensureIndex({ name: 1 }, { background: true, unique: true })
[08:18:14] <Derick> creates an index on
[08:18:36] <Derick> as that creates an index on the categories collection, and not items
[08:18:37] <Zelest> what's the best way of making _id an int and have it auto-increment, can this be done automagically?
[08:18:37] <spychalski> see, thats what i assumed. like you can't create a category with the same name twice.
[08:18:54] <Derick> Zelest: well, depends on whether you want concurrency ;-)
[08:19:18] <Derick> spychalski: *that* is what that index enforces, but it has nothing to do with your insert to your items collection
[08:19:38] <Derick> Zelest: http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
[08:19:43] <Zelest> ah, thanks
[08:19:58] <spychalski> yes but since items references categories, it becomes unique too, for some reason.
[08:20:06] <Zelest> Derick, oh, also, do you have any words on the "no candidate server" issue I had? like, it stays like that until I restart php-fpm..
[08:20:08] <spychalski> maybe it's a mongoose problem.
[08:20:16] <Zelest> Derick, any way to "reset" php-fpm from php, without restarting it?
[08:21:43] <Derick> Zelest: reset how? kill all connections?
[08:21:49] <Derick> spychalski: no
[08:21:59] <Derick> spychalski: mongodb does not do that automatically
[08:22:06] <Zelest> Derick, whatever it is php-fpm does, yeah.. "re-scan which master to use"
[08:22:13] <Zelest> php-fpm restart does*
[08:22:48] <Derick> Zelest: one sec
[08:23:19] <Derick> http://docs.php.net/manual/en/mongoclient.close.php with "TRUE" as argument
[08:24:12] <spychalski> Derick: this is exactly my issue: http://stackoverflow.com/a/12512934
[08:24:27] <spychalski> it seems just storing the ObjectId of the category would be the best way of doing things.
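A hedged Mongoose sketch of the pattern spychalski lands on (model names are illustrative): unique stays on the categories collection, and items store only the ObjectId reference, so no unique index ever ends up on items.
    var mongoose = require('mongoose');

    var categorySchema = new mongoose.Schema({
        name: { type: String, unique: true }   // uniqueness enforced only in categories
    });

    var itemSchema = new mongoose.Schema({
        name: String,
        category: { type: mongoose.Schema.Types.ObjectId, ref: 'Category' }  // plain reference
    });

    var Category = mongoose.model('Category', categorySchema);
    var Item = mongoose.model('Item', itemSchema);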
[08:24:38] <Zelest> Derick, ah, awesome! thanks
[08:24:59] <Zelest> Derick, slowly migrating our entire company to mongodb now btw :D
[08:25:28] <Derick> Zelest: but the driver will find a new primary on its own, it just takes a while (and has to do it for each php-fpm process)
[08:25:40] <Zelest> ah
[08:25:52] <Zelest> that "while", is that configureable somewhere?
[08:26:10] <Derick> yes
[08:26:47] <Derick> http://docs.php.net/manual/en/mongo.configuration.php#ini.mongo.is-master-interval
[08:28:07] <Zelest> Aah, lovely
[08:33:44] <Zelest> silly question, I created a function in the mongoshell.. can I use this function everywhere now? as in, are functions like "stored proceedures" or are they just local inside my shell?
[08:42:48] <Derick> Zelest: just local inside your shell
[08:43:33] <Zelest> Derick, how can I use that function you linked from php? (getNextSequence) ?
[08:43:48] <Zelest> or should/can I translate that one to php?
[08:43:50] <Derick> hmm
[08:43:51] <Derick> no
[08:43:56] <Derick> it needs to run on the server
[08:44:08] <Zelest> but...
[08:44:10] <Zelest> really?
[08:44:19] <Zelest> (seeing it uses findAndModify)
[08:44:22] <Derick> there is a way, I have done this before
[08:44:38] <Zelest> if two nodes hit it exactly the same, findAndModify will lock and return two different values?
[08:44:45] <Derick> yes, that's the plan
[08:44:52] <Zelest> but won't that happen even inside php?
[08:44:54] <Derick> it mentions this on the doc page, with upsert somewhere
[08:45:04] <Derick> one sec
[08:45:06] <Zelest> kk
[08:45:31] <Derick> sorry, this looks like you can do in PHP itself easily
[08:46:23] <Zelest> mhm
[08:46:55] <Zelest> the wrong way to do it is "find the highest _id from collection, increase it with 1.. and use that"
[08:46:59] <Zelest> that won't be concurrent..
[08:47:05] <Derick> right
[08:47:11] <Zelest> using a separate counters collection and findAndModify solves it though
[08:47:11] <Derick> findAndModify is atomic though
[08:47:16] <Zelest> mhm
[08:47:16] <Derick> yes
[08:47:20] <Zelest> thanks :)
[08:49:01] <freeMind> hi everybody
[08:49:23] <freeMind> i want to execute this commande from my c++ application
[08:49:25] <freeMind> db.collection.ensureIndex({ "$**": "text" }, { name:"TextIndex" }
[08:49:25] <freeMind> )
[08:50:29] <freeMind> i tried c.ensureIndex("tutorial.testBD",mongo::fromjson("{\"$**\": \"text\", \"name\":\"textIndex\"}"));
[08:50:43] <freeMind> but it didn't work
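One hedged workaround for the C++ call: on 2.6 the createIndexes command takes the key spec and the index options as separate fields, and it is easy to mirror with the driver's runCommand. In shell syntax:
    db.runCommand({
        createIndexes: "testBD",                                  // collection name from the example
        indexes: [ { key: { "$**": "text" }, name: "TextIndex" } ]
    });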
[09:01:58] <Zelest> { "_id" : "clicks", "seq" : { "$inc" : NumberLong(1) } }
[09:02:00] <Zelest> wat?
[09:02:14] <Derick> you did something wrong :-)
[09:02:38] <Zelest> http://pastie.org/private/2eu8wydwhgjizxmgmlbgig
[09:02:42] <Zelest> what am I missing? :o
[09:02:52] <Derick> ['seq' => ['$inc' => 1]],
[09:02:53] <Derick> should be:
[09:03:00] <Derick> ['$inc' => ['seq' => 1]],
[09:03:04] <Zelest> oooh
[09:03:07] <Zelest> of course
[09:03:09] <Zelest> haha
[09:03:15] <Zelest> monday morning ftw
[09:04:07] <Zelest> there we go... thanks :D
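For reference, a minimal shell version of the counters pattern from the tutorial Derick linked (names are illustrative); the PHP version discussed above is a direct port of the same findAndModify call.
    function getNextSequence(name) {
        var ret = db.counters.findAndModify({
            query: { _id: name },
            update: { $inc: { seq: NumberLong(1) } },
            new: true,      // return the incremented document
            upsert: true    // create the counter the first time it is used
        });
        return ret.seq;
    }

    db.clicks.insert({ _id: getNextSequence("clicks"), when: new Date() });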
[09:30:51] <Azitrex> in NoSql databases we dont have "LIMIT" option for get items limited by quantity ?
[09:31:42] <jdj_dk> Azitrex: we have limit?
[09:31:47] <jdj_dk> {limit: limit}
[09:31:54] <jdj_dk> or am I misunderstanding your question?
[09:32:16] <jdj_dk> Docs.find({}, {limit: limit})
[09:38:18] <Azitrex> that's true, i found this and it worked fine, but today we have a new NoSql DB (Perst) with JSQL !! http://www.garret.ru/jsql/readme.htm i dont know how i can limit the result set !
[09:39:14] <rspijker> how is this an appropriate question for #mongodb?
[09:39:23] <jdj_dk> Azitrex well don't they have a channel for support =)
[09:39:44] <jdj_dk> not that we're not interested in helping.. But the change of us knowing the answer is very small
[09:39:48] <jdj_dk> try stack overflow?
[09:40:36] <Azitrex> yes i know , i think if this is a nosql familiar and stractures is same as mongo
[09:41:01] <Azitrex> i must be find in other places , thanks guys
[09:41:25] <rspijker> from a quick look at the grammar, it doesn’t look like there is a limit in there
[09:41:32] <jdj_dk> nope =/
[09:41:57] <rspijker> however..
[09:42:01] <rspijker> it does say something about limit
[09:42:05] <rspijker> at the bottom of the page
[09:42:37] <rspijker> which does exactly what you expect
[09:42:50] <rspijker> but it appears to be in-code, instead of part of the query language
[10:13:46] <kas84> hi
[10:14:16] <kas84> is there a way to set a different path for storage and journaling in mongo?
[10:42:30] <rspijker> kas84: not as such… You can mount a different volume on the journal dir though
[10:43:01] <kas84> just a ln -s to “journal” directory under the dbpath would do it, right?
[10:43:12] <rspijker> should work as well, yeh
[10:43:19] <kas84> cool
[10:43:21] <kas84> thanks rspijker
[10:43:23] <rspijker> np
[10:43:27] <rspijker> lunch! :)
[10:45:44] <kas84> bon apetit!
[10:48:34] <Azitrex> thanks
[14:16:48] <[EddyF]_> is anyone looking for work in London,UK? http://careers.stackoverflow.com/jobs/50317/php-developer-rocabee?a=ZmJB2vAI&searchTerm=PHP
[14:18:24] <ekristen> good morning
[14:18:31] <ekristen> sort seems to be killing my queries
[14:18:38] <ekristen> I’m using 2.4.8
[14:18:45] <rspijker> sorting’s hard yo…
[14:19:27] <rspijker> perhaps some more specifics about your collection, the documents inside and the query you are running
[14:19:35] <rspijker> as well as the indices you have on said collection
[14:19:36] <ekristen> sure
[14:20:04] <ekristen> rspijker: 6 fields
[14:20:19] <ekristen> two timestamp related fields, all in millsecond
[14:20:35] <ekristen> 1 field with an ObjectId value
[14:21:06] <ekristen> .find({ state : 'new', ts : { $lte : 1400501853000 } }).sort({ recordedAt : 1 }).limit(100)
[14:21:24] <ekristen> I’ve tried the following indexes
[14:21:49] <ekristen> recordedAt, state, ts — state, ts, recordedAt — state, ts
[14:22:03] <ekristen> I also providing hinting on the query each time
[14:22:13] <ekristen> i am*
[14:22:24] <rspijker> well.. order is fairly important in your indices
[14:22:35] <rspijker> try: state, ts, recordedAt
[14:22:44] <rspijker> ah, you had that already
[14:22:49] <rspijker> what did the explain say there?
[14:23:47] <ekristen> one sec
[14:24:22] <ekristen> hrm, that one is running fast now, wth —-
[14:24:31] <ekristen> oh wiat
[14:24:32] <ekristen> no sort
[14:24:34] <ekristen> one sec
[14:25:23] <ekristen> btw this is on a server with 8 cores and 30gb of memory too
[14:25:35] <ekristen> about 17 million records that have state == new
[14:27:05] <ekristen> rspijker: http://pastebin.com/G0hb32dL
[14:28:03] <rspijker> that seems different from what you told me
[14:28:10] <rspijker> can you also pastebin the exact query you ran?
[14:28:21] <rspijker> also… is recordedAt an array?
[14:28:36] <ekristen> no recordedAt is a timestamp in milliseconds
[14:28:42] <ekristen> .find({ state : 'new', ts : { $lte : 1400501853000 } }).sort({recordedAt:1}).hint({ state: 1, ts: 1, recordedAt: 1 }).limit(100).explain()
[14:28:52] <ekristen> that is the query I ran
[14:30:37] <rspijker> from what you told me about the index you are using, scanAndOrder should be false…
[14:32:51] <ekristen> all I did was ensureIndex({state: 1, ts: 1, recordedAt: 1})
[14:33:59] <rspijker> can you pastebin the output of getIndexes() ?
[14:34:06] <rspijker> on your collection
[14:38:45] <ekristen> sure
[14:38:45] <ekristen> one sec
[14:40:09] <ekristen> rspijker: http://pastebin.com/sEhEAApb
[14:42:29] <rspijker> what does .count({state: ‘new’, ts: {$lte:1400501853000}}) give you?
[14:46:49] <rspijker> also, it shouldn’t matter unless you have some weird definitions in your active shell, but try wrapping fields names in quotes
[14:47:33] <rspijker> so… .find({“state”:”new”, “ts”:{$lte : 1400501853000}}).sort({“recordedAt”:1}).limit(100)
[14:47:57] <ekristen> ok one sec
[14:48:44] <ekristen> 16.9 million for the count
[14:51:18] <rspijker> hmmm, so apparently it’s the sort that it can’t do with the index for some reason. Which brings the limit out of the sort as well and you’re sorting 17million records instead of 100
[14:54:02] <ekristen> right
[14:57:02] <ekristen> limitation of 2.4.8 maybe?
[14:59:34] <ekristen> sp rspijker just SOL?
[15:00:00] <rspijker> this should just work….
[15:00:07] <rspijker> I don’t see any reason why it shouldn't
[15:03:10] <ekristen> rspijker: so I just added ts and state to the sort
[15:03:12] <ekristen> and that works
[15:03:30] <rspijker> :/
[15:03:43] <rspijker> that’s fairly weird…
[15:05:41] <rspijker> ekristen: ugh, it’s because the query on ts is not an equality comparison but a range
[15:05:48] <rspijker> apparently, that doesn't work
[15:06:14] <ekristen> oh
[15:06:15] <rspijker> so… just adding ts to the sort should work as well
[15:07:00] <ekristen> yeah you are right
[15:07:05] <rspijker> from the docs:
[15:07:06] <ekristen> adding ts
[15:07:11] <rspijker> “The sort document can be a subset of a compound index that does not start from the beginning of the index. For instance, { c: 1 } is a subset of the index { a: 1, b: 1, c: 1, d: 1 } that omits the preceding index fields a and b. MongoDB can use the index efficiently if the query document includes all the preceding fields of the index, in this case a and b, in equality conditions.”
[15:07:13] <ekristen> nscanned 100, nscanned objects 100
[15:07:16] <ekristen> millis: 0
[15:13:04] <ekristen> rspijker: thanks!
[15:13:15] <rspijker> np
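A hedged recap of the fix, with an illustrative collection name: because the filter on ts is a range rather than an equality, the index can only serve a sort whose keys follow the index order, which is what the docs passage quoted above describes.
    db.events.ensureIndex({ state: 1, ts: 1, recordedAt: 1 });

    // range on ts + sort on recordedAt alone => in-memory scanAndOrder over millions of documents
    db.events.find({ state: "new", ts: { $lte: 1400501853000 } })
             .sort({ recordedAt: 1 }).limit(100);

    // sorting along the index order lets the index hand back the first 100 directly
    db.events.find({ state: "new", ts: { $lte: 1400501853000 } })
             .sort({ state: 1, ts: 1, recordedAt: 1 }).limit(100);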
[15:38:18] <jp-work> hi guys
[15:38:27] <jp-work> https://jira.mongodb.org/browse/SERVER-13752
[15:38:46] <jp-work> is there a solution for this or do I need to do some kind of work around it?
[15:38:59] <jp-work> seems mongo doesn’t like to sort an empty result set?
[15:39:50] <cheeser> it says "fixed" and marked for 2.6.2
[15:40:05] <jp-work> ok
[15:40:13] <jp-work> for the mean time, is there a work around?
[15:40:14] <fewt> Hi everyone
[15:40:20] <cheeser> dunno
[15:40:35] <rspijker> doesn’t look like the problem is with sorting an empty result set jp-work....
[15:40:41] <fewt> Quick question, is the 2.4.x OSS edition still being updated now that 2.6 is available?
[15:40:51] <jp-work> empty $in query?
[15:40:59] <fewt> can’t find any docs that say yes or no
[15:41:02] <dgarstang> Anyone awake? I'm trying to deploy 2.6. The docs say in 2.6 the config switched to yaml but it's backwards compatible. This seems to be untrue. Mongo complains if the config file is not in yaml format.
[15:41:05] <cheeser> fewt: the what?
[15:41:24] <rspijker> jp-work: yes. That’s something quite different though
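Pending the 2.6.2 fix, a hedged client-side workaround for the empty-$in case: skip the round trip entirely when there is nothing to match (collection and field names are illustrative).
    function findByIds(ids) {
        if (!ids || ids.length === 0) {
            return [];    // an empty $in can never match anything anyway
        }
        return db.docs.find({ _id: { $in: ids } }).sort({ createdAt: -1 }).toArray();
    }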
[15:41:26] <cheeser> oh, is 2.4.x still under development? 2.4.10 was just released...
[15:41:27] <fewt> mongodb 2.4.x, wondering if there will be a 2.4.11
[15:41:36] <cheeser> "maybe"
[15:41:40] <fewt> cheeser: ahh
[15:43:22] <fewt> cheeser: that’s more info than I’ve seen on mongodb.org lol
[15:43:51] <dgarstang> wonder why the docs say 2.6 is backwards compat with the config file when it's not.
[15:44:49] <rspijker> dgarstang: it’s not?
[15:45:21] <dgarstang> rspijker: Correct. If I deploy 2.6 with non yaml config file format, it complains. For example it complains about the 'nojournal' entry and then a few others
[15:49:40] <rspijker> dgarstang: I don’t know about all the options. The general format still works though. I’m not having any issues with a test deployment. Mind you, I’m not using the nojournal option.
[15:50:06] <rspijker> but if some of the options aren’t working, that does mean it’s not entirely backwards compatible
[16:08:28] <dgarstang> rspijker: smallfiles also causes an error
[16:08:41] <rspijker> dgarstang: that works fine for me...
[16:09:08] <dgarstang> rspijker: 'rest' too. *sigh*
[16:09:35] <rspijker> well, like I said, smallfiles works fine here… Are you sure you don’t have other errors?
[16:09:42] <rspijker> how are you specifying the cfg file?
[16:10:07] <dgarstang> rspijker: Lemme see. one sec
[16:16:09] <dgarstang> rspijker: seems I cant reproduce the error. grrr.
[16:16:18] <rspijker> FIXED!
[16:28:23] <dgarstang> rspijker: no, wait!
[17:09:00] <dgarstang> Why? .... "Starting mongos: Error parsing INI config file: unknown option nojournal"
[17:09:50] <dgarstang> And googling "Error parsing INI config file: unknown option nojournal" gets me nothing AT ALL
[17:17:18] <dgarstang> Something is very screwed up with mongos. I modified /etc/mongodb.conf to the yaml file format and now I'm getting "Unrecognized option: storage.journal.enabled".
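For reference, a hedged sketch of what those INI options look like in the 2.6 YAML format (mongod only; a mongos stores no data, so storage.* options in its config are rejected, which matches the error above):
    storage:
      smallFiles: true
      journal:
        enabled: false              # equivalent of nojournal = true
    net:
      http:
        RESTInterfaceEnabled: true  # equivalent of rest = true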
[17:22:01] <kongthap> hi, i'm trying to setup authentication, i can use db.auth(), i can display admin.system.users content, but why i cannot use "show collections" it returns error (more: http://pastie.org/9190172), windows 7, mongodb 2.6.1
[17:33:39] <dwstevens> blizzow: > db.runCommand({connectionStatus: 1})
[17:33:39] <dwstevens> { "authInfo" : { "authenticatedUsers" : [ ] }, "ok" : 1 }
[17:34:53] <dwstevens> if i want to update some statistics fields on a document after save, can I do so with a post save hook? Specifically I want to summarize one of the arrays in the model
[17:35:15] <dwstevens> would saving a doc again cause the post save hook to be recursively called?
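A hedged Mongoose sketch for the post-save question: calling save() again from inside the hook would fire the hook again, so writing the summary with update() (or guarding with a flag) is the usual way to avoid the recursion; the field names here are made up.
    var mongoose = require('mongoose');
    var schema = new mongoose.Schema({ scores: [Number], scoreTotal: Number });

    schema.post('save', function (doc) {
        var total = (doc.scores || []).reduce(function (a, b) { return a + b; }, 0);
        // update() bypasses document middleware, so the save hook does not re-fire
        doc.constructor.update({ _id: doc._id }, { $set: { scoreTotal: total } }, function (err) {
            if (err) console.error(err);
        });
    });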
[17:59:31] <dgarstang> quiet in here. :(
[17:59:44] <rafaelhbarros> mysql > mongo
[17:59:54] <rafaelhbarros> let's see if this wakes people up
[18:00:45] <rafaelhbarros> python -c "print 'mysql' > 'mongo'"
[18:00:48] <rafaelhbarros> does return true
[18:05:55] <dgarstang> Am I going to need separate config files for mongod and mongos in 2.6?
[18:09:05] <dgarstang> sigh
[18:21:57] <kongthap> i'm trying to setup authentication, i can use db.auth(), i can display admin.system.users content, but why i cannot use "show collections" it returns error (more: http://pastie.org/9190172), windows 7, mongodb 2.6.1
[19:56:37] <xofer_> hello, i am trying to add a 2.4.10 node to a replicaset with 2 x 2.4.9 nodes, and it's unable to sync (falls further and further behind) -- i've pretty much ruled out hardware and network -- any ideas?
[20:03:52] <xofer_> the other odd thing about the 2.4.10 is that its cpu usage is low, around 100% on a 16-core machine
[20:04:02] <xofer_> while replicating
[20:04:35] <xofer_> with 2.4.9, it was usually much higher
[20:04:48] <xofer_> not sure if cause or effect
[20:06:17] <federated_life> xofer_: you sure its not just your box having cleared out its FS cache when you upgraded?
[20:06:42] <xofer_> the box is a newly created ec2 node
[20:07:01] <xofer_> but i've done this several times in the past
[20:07:07] <xofer_> should be able to join, no?
[20:08:27] <xofer_> the inital sync took about 2hours
[20:08:36] <xofer_> but now it's falling more and more behind
[20:09:12] <xofer_> in contrast, we rebooted a dead secondary just last week which was down for 2h and it caught up
[20:09:27] <xofer_> federated_life: anything I should check?
[20:10:16] <federated_life> xofer_: make sure you dont have errors
[20:10:20] <federated_life> do rs.status()
[20:10:27] <federated_life> sometimes there are replication errors when upgrading
[20:10:40] <federated_life> but do rs.status on ALL members
[20:10:46] <federated_life> sometimes errors dont show on all
[20:11:14] <federated_life> otherwise, you can play with journalCommitInterval and syncDelay to reduce your IO wait and flushing
[20:12:01] <xofer_> federated_life: i don't see anything strange in status()
[20:12:11] <xofer_> the node in question is listed as STARTUP2
[20:12:22] <xofer_> it's using SSDs so very little i/o wait
[20:12:36] <federated_life> xofer_: yea, thats one of the 'newer' statuses when it's starting up…whats your network throughput like?
[20:12:48] <xofer_> should be quite fast
[20:12:59] <federated_life> also, something about 2.2+ is that it only clones from the primary…in 1.8, you could have it pull the initial sync from secondaries
[20:13:19] <xofer_> (there is some mystery as to wheter we're getting "advanced networking" in aws)
[20:13:20] <federated_life> xofer_: check charts for correlation
[20:13:25] <jiffe98> I have a replica set with 3 nodes and I need to take one of them offline but when I do that, the 3rd node which is primary turns into a secondary and there's no primary nodes
[20:13:31] <jiffe98> any way to keep that one running?
[20:13:44] <federated_life> jiffe98: pastebin your rs.conf()
[20:14:13] <xofer_> federated_life: the thing is the only difference here is the new version
[20:14:24] <xofer_> we have added nodes without any problem quite recently
[20:14:29] <MacWinner> is it possible to just called .update() on results from .find() eg… db.testdb.find().update(…)
[20:15:10] <jiffe98> federated_life: http://nsab.us/public/mongodb
[20:15:15] <federated_life> xofer_: its unlikely that such a minor revision did anything unless you see an error in rs.status or the logs. you can increase verbosity of the logs .. db.adminCommand({ set : paramter : 1 , logLevel : 3 })
[20:15:20] <federated_life> *parameter
[20:15:36] <federated_life> jiffe98: are you on 2.0 ?
[20:15:54] <federated_life> jiffe98: pastebin your rs.status() too
[20:15:54] <jiffe98> v2.4.8
[20:16:56] <dgarstang> okiday, what causes "about to fork child process, waiting until server is ready for connections" when starting mongod?
[20:17:37] <xofer_> federated_life: i see it's pushing a lot of ops: replSet initialSyncOplogApplication applied 34405073 operations
[20:17:40] <jiffe98> federated_life: http://nsab.us/public/mongodb2
[20:18:08] <federated_life> jiffe98: two of your nodes are down
[20:18:22] <federated_life> you need a majority…so, fix your other two nodes first
[20:18:22] <xofer_> federated_life: we are coming off our peak time, so perhaps it will improve
[20:18:40] <federated_life> xofer_: whats your sync delay set to on the primary?
[20:18:49] <jiffe98> federated_life: that's what I said I needed to do, I need to take two of them down for repair and leave the 3rd running
[20:18:51] <federated_life> db.adminCommand({ getParameter : 1 , syncdelay : 1 })
[20:19:40] <xofer_> federated_life: ty for the command :)
[20:19:41] <xofer_> { "syncdelay" : 60, "ok" : 1 }
[20:20:33] <jiffe98> reading back I didn't say that, I'm sorry, I meant I needed to take two of them offline, one was already offline so I needed to take one more offline
[20:20:47] <dgarstang> omg, this is so confusing. What config files do mongod and mongos use? mongodb.conf? mongod.conf? Which one uses which!!?!?
[20:21:40] <jiffe98> the reason I need to take two of them offline is because one of them is corrupt so I need to sync the other secondary to it
[20:22:03] <federated_life> xofer_: change your syncdelay to 5 seconds
[20:22:20] <xofer_> federated_life: even though "Do not set this value on production systems"
[20:22:21] <xofer_> ?
[20:22:28] <xofer_> :)
[20:22:33] <federated_life> xofer_: I totally change that all the time
[20:22:55] <xofer_> will you get fired if you lose the data? :)
[20:23:01] <federated_life> in older versions, like 1.8 and 2.0, you'd have to step it down from 60 to like 30 to 15 to 5
[20:23:02] <dgarstang> these docs http://docs.mongodb.org/manual/reference/configuration-options/ don't say which process uses which config file....
[20:23:07] <federated_life> because it would throw bus errors
[20:23:25] <xofer_> can i do that on the secondary only?
[20:23:33] <federated_life> xofer_: you need to do it on the primary
[20:23:50] <federated_life> xofer_: or…you can change which member you are receiving replication events form
[20:24:07] <xofer_> federated_life: that sounds like a good idea to try
[20:24:19] <xofer_> i am syncing to primary which gets a lot of writes
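For reference, hedged shell forms of the knobs mentioned in this thread (the logLevel command written out, the syncdelay parameter, and the sync-source override; the hostname is illustrative):
    db.adminCommand({ setParameter: 1, logLevel: 3 });                      // more verbose logging
    db.adminCommand({ getParameter: 1, syncdelay: 1 });                     // inspect the flush interval
    db.adminCommand({ setParameter: 1, syncdelay: 5 });                     // lower it, as suggested
    db.adminCommand({ replSetSyncFrom: "secondary2.example.com:27017" });   // pull from a quieter member
    // rs.syncFrom("host:port") is the shell helper for the last command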
[20:24:38] <jiffe98> I set the votes on those two to 0 and that seems to have worked
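For reference, a hedged shell sketch of the votes approach jiffe98 used (member indices are illustrative): stripping the votes from the members being taken down lets the remaining node keep a voting majority on its own.
    var cfg = rs.conf();
    cfg.members[0].votes = 0;           // the member already offline
    cfg.members[1].votes = 0;           // the member about to go down for repair
    rs.reconfig(cfg, { force: true });  // force is needed when there is no primary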
[20:24:49] <dgarstang> What config files do mongos and mongod use? 'man mongos' and 'man mongod' also don't specify.
[20:25:00] <emgram> hi, I have a question about how to make a certain query efficiently. is it appropriate to ask that here?
[20:25:01] <xofer_> dgarstang: what OS?
[20:25:10] <federated_life> xofer_: http://docs.mongodb.org/manual/tutorial/configure-replica-set-secondary-sync-target/
[20:25:11] <dgarstang> xofer_: CentOS
[20:25:32] <xofer_> dgarstang: ps ax |grep mongo
[20:25:36] <xofer_> might be in there
[20:25:47] <federated_life> jiffe98: you need to fix your replica set…you can also rs.reconfig( {},{force:true})
[20:26:04] <federated_life> emgram: do an explain on your query
[20:26:04] <dgarstang> xofer_: I'm installing this with a chef cookbook which is having issues, so I'd rather not rely on what's deployed. it might be wrong. it's written for 2.4 and stuff changed in 2.6
[20:26:12] <federated_life> and pastebin the results
[20:26:41] <dgarstang> now, if there was just some docs, that you know, said what config file what processes used, that would be a helpful starting point
[20:26:43] <xofer_> dgarstang: well on my systems (ubuntu) the config path is specified in the command
[20:26:48] <xofer_> which i can find w/ ps
[20:26:50] <federated_life> dgarstang: there is only ONE config file for mongo
[20:27:00] <dgarstang> xofer_: Mine too, but I don't trust it
[20:27:04] <xofer_> dgarstang: i think it depends
[20:27:19] <xofer_> is there something like /etc/defaults/mongodb ?
[20:27:25] <dgarstang> federated_life: really? because certain options like smallfiles, nojournal mongos complains about
[20:27:27] <federated_life> dgarstang: you set the config file location when you startup mongod
[20:27:41] <federated_life> dgarstang: those are parameters that you put in your config file
[20:27:43] <xofer_> by *you* he means whatever started it
[20:27:54] <dgarstang> federated_life: i've seen referneces to mongodb.conf, mongod.conf at least
[20:28:04] <federated_life> you can call the file whatever.conf
[20:28:09] <federated_life> doesnt matter what its called
[20:28:15] <federated_life> you set the path when mongod starts
[20:28:18] <dgarstang> federated_life: what about the options that mongos complains about?
[20:28:39] <federated_life> dgarstang: if you put wrong options then thats your fault…
[20:28:53] <emgram> federated_life: I have a set of chats with topics and want to only ever have the 20 newest topics stored. thus if there are 21 topics I want to find the least used topic and delete every chat with that topic.
[20:28:53] <dgarstang> federated_life: if there's options that mongos doesn't like, seems to make sense there'd be more than one config file
[20:28:54] <federated_life> but they all go in the one config file…unless you add them to the startup string
[20:29:02] <xofer_> federated_life: i have a lot of "repl writer worker 1" messages in the log
[20:29:03] <federated_life> dgarstang: there is not more than ONE
[20:29:07] <xofer_> should there be more workers?
[20:29:11] <federated_life> I am telling you but you do whatever
[20:29:18] <emgram> federated_life: I have it working with javascript but thought there might be a pure mongo way to run it
[20:29:24] <dgarstang> federated_life: ok, any idea why mongo doesn't like smallfiles or nojourmal then?
[20:29:32] <federated_life> smallfiles=true
[20:29:49] <emgram> 20 topics, which is likely <1000 chats isn't a huge number to process in JS
[20:29:55] <dgarstang> federated_life: setting smallfiles in the config file parsed by mongos gives me an error
[20:30:38] <federated_life> dgarstang: http://docs.mongodb.org/manual/administration/configuration/
[20:30:59] <federated_life> see how you need the '=' for some options
[20:31:25] <federated_life> xofer_: whats your log level?
[20:31:51] <dgarstang> federated_life: I had to remove smallfiles and nojournal or else mongos would not start
[20:32:05] <federated_life> dgarstang: pastebin your config file
[20:32:14] <dgarstang> there's also no mention of 'smallfiles' at http://docs.mongodb.org/manual/administration/configuration/
[20:33:17] <federated_life> dgarstang: dude, its open source, you can always grep the code
[20:33:29] <dgarstang> federated_life: Well, I removed those to get it to work. now I got new issues, but the whole config file thing.
[20:33:32] <federated_life> pastebin your conf and Ill let you know whats wrong
[20:33:50] <dgarstang> federated_life: ok. hang on
[20:34:02] <xofer_> federated_life: was 0, now 3
[20:34:41] <federated_life> xofer_: yea, so with that extra verbosity, you see stuff … like, you can see the journal files getting created and removed, etc etc the repl write workers are, I think, the batch replication applies
[20:34:54] <federated_life> you can always google it or read the comments in the code :)
[20:35:08] <xofer_> hmm
[20:36:06] <dgarstang> federated_life: http://pastebin.com/sunY6SiS
[20:37:48] <federated_life> dgarstang: capital F
[20:39:18] <dgarstang> federated_life: Hm. what about nojournal?
[20:40:04] <federated_life> nojournal = true
[20:41:40] <federated_life> dgarstang: Id recommend, from experience, not to disable the journal, but instead, to put it on a second raid volume [ ie ssd's ] …since Im guessing you want to reduce IO
[20:41:50] <federated_life> dgarstang: here's the funny part though…smallFiles are for journaling
[20:42:00] <federated_life> so…if you disable journaling, the small files option is bogus anyhow
[20:44:21] <dgarstang> federated_life: How about now? http://pastebin.com/gkSJKacg has 'nojournal = false' but it doesn't like that
[20:45:05] <federated_life> dgarstang: dude , its realyl simple
[20:45:09] <federated_life> you must be messing with me
[20:45:17] <federated_life> I wrote 'true' and you put 'false'
[20:45:34] <dgarstang> federated_life: why would true or false make it report nojournal is invalid?
[20:45:57] <dgarstang> federated_life: you also suggested later I don't disable journaling, hence nojournal = false
[20:46:07] <federated_life> dgarstang: Ive only ever used 'true' values, never false
[20:46:18] <federated_life> journaling is enabled by default in 2.0+
[20:46:40] <dgarstang> this would also suggest https://github.com/edelight/chef-mongodb/issues/289 that mongos doesn't like nojournal'
[20:46:48] <dgarstang> but I'll try
[20:47:04] <federated_life> mongoS does not have a journal
[20:47:25] <federated_life> conceptually, mongoS only routes traffic…you would only need a journal if you're worried about data consistency…not in the case of routing
[20:47:28] <dgarstang> federated_life: that's sorta what I've been asking...
[20:47:39] <dgarstang> so if there's one config file... how will mongos work?
[20:48:02] <federated_life> dgarstang: before , you were writing 'mongo' without the 'S"
[20:48:16] <dgarstang> pretty sure I mentioned mongos several times
[20:48:26] <federated_life> dgarstang: got stuff to do bro, sorry
[20:48:36] <dgarstang> federated_life: too hard basket?
[20:49:03] <dgarstang> i thought it was simple. :(
[20:49:12] <federated_life> try using tokumx 1.0 rc0 in a sharded cluster and then ask me
[20:49:32] <dgarstang> federated_life: i'd rather start with the simple stuff
[20:50:13] <federated_life> dgarstang: you need to learn whats the different processes…mongoD vs mongoS
[20:50:21] <federated_life> and basic architecture of a cluster
[20:50:42] <federated_life> dgarstang: probably, you should just use mongoctl ..its python scripts from mongoLabs…super easy and fast for mongo
[20:50:58] <federated_life> http://mongolab.org/mongoctl/
[20:51:22] <emgram> how do I get the result of an aggregate sorted?
[20:51:27] <emgram> here's my attempt http://pastebin.com/jw6KFh2V
[20:52:07] <emgram> I'd like to delete everything not within the top 10 "latest" numbers
[20:52:11] <dgarstang> oh come on. now mongod is reporting "Error parsing INI config file: unknown option configdb" when starting... pointing to that ONE config file that they allegedly all use
[20:52:30] <dgarstang> federated_life: need chef
[20:53:13] <federated_life> dgarstang: your trying to start multiple processes on one machine?
[20:53:18] <federated_life> for different functions?
[20:53:32] <federated_life> if so..each process needs its own config file…duh
[20:53:44] <kongthap> if the first added user has userAdminAnyDatabase, this user has full power to the database right?
[20:53:45] <federated_life> otherwise, youd need something like mysqlmulti
[20:53:56] <kongthap> if the first added user has userAdminAnyDatabase, this user has full power to the database right?
[20:54:00] <dgarstang> federated_life: trying to set up sharded cluster router. I thought it ran mongos and mongod.
[20:54:11] <kongthap> sorry i posted twice
[20:54:37] <federated_life> dgarstang: probably you'll benefit from going to an office hours / meetup
[20:55:44] <dgarstang> federated_life: just basing it on this cookbook, which seems to start mongod and mongos on the router.
[20:56:00] <dgarstang> lots of people have used/forked it, so it's gotta be pretty close you'd think
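A hedged sketch of what the two config files might look like in the pre-2.6 INI style, since a router and a data node take different options (paths and hostnames are illustrative):
    # /etc/mongod.conf -- data-bearing mongod
    dbpath = /var/lib/mongodb
    logpath = /var/log/mongodb/mongod.log
    fork = true
    smallfiles = true

    # /etc/mongos.conf -- shard router: no storage options, just the config servers
    configdb = cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019
    logpath = /var/log/mongodb/mongos.log
    fork = true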
[20:58:14] <dgarstang> certainly seems to want to connect to something local "WARN: Could not connect to database: 'localhost:27017', reason Failed to connect to a master node at localhost:27017"
[21:25:14] <abonilla> Hi, where is the documentation for mongodb.conf so that I can start a DB and its set already for replication?
[21:27:09] <emgram> how do I use these results to delete all chats that match any of these _ids? http://pastebin.com/LShsjN5J
[21:28:09] <emgram> it is an aggregate result that returns a list of topics that are candidate for deletion
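A hedged shell sketch for both aggregate questions (collection and field names are illustrative; on 2.6 aggregate() returns a cursor, on 2.4 read the result field of the returned document instead):
    var stale = db.chats.aggregate([
        { $group: { _id: "$topic", latest: { $max: "$ts" } } },
        { $sort: { latest: -1 } },    // sorting happens inside the pipeline
        { $skip: 10 }                 // keep only topics outside the 10 newest
    ]).toArray();

    var topics = stale.map(function (d) { return d._id; });
    db.chats.remove({ topic: { $in: topics } });   // delete every chat with those topics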
[21:31:11] <dgarstang> Confused. I have 3 config servers and a running mongos. how do I connect to mongos since mongod isnt running on that box?
[21:40:48] <dgarstang> ah more errors "mongos specified a different config database string"
[22:09:22] <yutong> anyone have a recommended guide on how to manually have mongo start on bootup?
[22:14:40] <dgarstang> Weeee. What does "could not verify config servers were active and reachable before write" mean?
[22:14:47] <ekristen> yutong: depends on your OS
[22:14:54] <yutong> well ubuntu
[22:14:59] <yutong> do i just add it to init.d or something
[22:15:08] <ekristen> echo “manual” > /etc/init/mongodb.override
[22:15:12] <ekristen> should do the trick
[22:19:06] <jimpop> any idea on why FTS for "red white -blue" would take 3 times longer than for "red white"? It seems that negating a few results should be fairly easy.
[22:19:18] <ekristen> yutong: ^^^
[22:19:30] <yutong> hm
[22:19:36] <yutong> well my mongodb isn't installed via ap-tget
[22:19:38] <yutong> apt*Get
[22:20:14] <dgarstang> How about ""couldn't find database [admin] in config db"" ?
[22:20:55] <ekristen> yutong: how’d you install?
[22:21:01] <yutong> scons build from source
[22:21:04] <yutong> since I needed SSL
[22:21:21] <ekristen> do you have a mongodb.conf in /etc/init
[22:21:28] <ekristen> or a mongodb in /etc/init.d
[22:21:33] <yutong> nope
[22:21:43] <ekristen> oh I misundersood
[22:21:48] <ekristen> misunderstood*
[22:21:53] <ekristen> you want to have it start on boot
[22:22:41] <ekristen> http://pastebin.com/yyQcGsgy
[22:22:43] <ekristen> yutong: something like that would work
[22:23:06] <yutong> thanks!
[22:23:31] <yutong> # start on runlevel [2345] <-- why is this commented out?
[22:25:11] <dgarstang> Ugh. "Error: nextSafe(): { $err: "error creating initial database config information :: caused by :: DBConfig save failed: { ok: 0, code: 25, errmsg: "could not verify config servers w...", code: 13396 } at src/mongo/shell/mongo.js:146"
[22:27:22] <ekristen> yutong: cause I start mine manually
[22:27:23] <ekristen> uncomment that out
[22:27:24] <ekristen> yutong:
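A hedged minimal upstart job for a source-built mongod, roughly what the pasted script would contain (binary and config paths are illustrative):
    # /etc/init/mongodb.conf
    description "MongoDB"
    start on runlevel [2345]     # start at boot; comment this out to keep it manual
    stop on runlevel [016]
    exec /usr/local/bin/mongod --config /etc/mongodb.conf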
[22:30:23] <dgarstang> man this stuff just ain't working for me
[22:30:37] <palominoz> hello there. any good ideas on how to convert an HTTP GET to a mongoose/mongo query with javascript? sorting, population, and quite advanced queries are required
[22:31:09] <dgarstang> mongos says it starts fine but then trying to connect gets me "could not verify config servers w..."
[22:31:11] <ranman> palominoz: can you have them write the query themselves?
[22:32:17] <yutong> clusterAuthMode keyFile Use a keyfile for authentication. Accept only keyfiles.
[22:32:17] <palominoz> @ranman in some way yes. but the main use cases i would like to support are like products?manufacturer=some-mango-id&sort=-price
[22:32:22] <yutong> what the heck is a keyFile?
[22:32:26] <yutong> is it just a plaintext key?
[22:32:33] <yutong> of whatever I Want?
[22:32:58] <ranman> yutong: you might not need that if it's not a production system
[22:33:44] <palominoz> @ranman for those not trivial searches it may be a good idea to support things like products?query={ some: { mongodb: 'thing' }}
[22:33:58] <yutong> ranman, it is a production system
[22:34:10] <yutong> i dont' understand how the security model for replicas work
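Since the keyFile question went unanswered: it is just a file containing a shared secret in base64 characters, readable only by the mongod user, and the same file is distributed to every member of the replica set or cluster. A common way to create one:
    openssl rand -base64 741 > /etc/mongodb-keyfile
    chmod 600 /etc/mongodb-keyfile
    # then point every mongod/mongos at it, e.g.  keyFile = /etc/mongodb-keyfile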
[22:34:21] <palominoz> @ranman but at the moment i have no idea about escaping and securing things on mongodb
[22:34:58] <ranman> palominoz: only thing you have to watch out for is $where, rest of the stuff is a non-issue
[22:35:50] <palominoz> ranman: is there any good node library you know that could help this sort of things?
[22:35:50] <ranman> palominoz: if you have the parameters I think it's pretty easy to create a method that will generate a query
[22:36:05] <ranman> palominoz: no sorry :(
[22:36:26] <ranman> palominoz: I'm pretty sure this is easy to write though. Let me take a stab at it.
[22:37:13] <palominoz> ranman: what do you intend by watching out for $where?
[22:37:29] <NaN> you guys, do you know a good plugin for Vim to execute mongo shell commands (in vim)?
[22:38:17] <ranman> palominoz: if you're passing in raw query strings you want to make sure you parse them into json and strip out $where first
[22:38:28] <ranman> palominoz: but it's unlikely you'll just pass in the wrong query
[22:38:40] <ranman> palominoz: $where executes arbitrary javascript
[22:38:40] <ranman> palominoz: and is therefore dangerous
[22:38:58] <ranman> palominoz: but that's seriously the only escaping concern, very different from SQL injection
[22:40:59] <palominoz> ranman: what about use cases that would make use of $where?
[22:41:30] <ranman> palominoz: don't allow them, too dangerous, who wants to execute arbitrary javascript anyway? if someone wants to do that a DBA or some such person should approve it.
[22:41:49] <ranman> palominoz: literally worked with 1000s of applications, infrequently does someone need where
[22:41:51] <dgarstang> What might this mean? "create ns_1_min_1 index on config db: could not verify config servers were active and reachable before write"
[22:42:14] <ranman> dgarstang: sharded cluster ... it seems like your config servers aren't reachable?
[22:43:04] <dgarstang> ranman: they are and the logs indicate they are. I just noticed that it logs "config servers and shards contacted successfully" AFTER above error
[22:43:44] <palominoz> ranman: oke. nice advice. thank you for the support man.
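A hedged Node sketch of the approach ranman describes, with made-up parameter names: whitelist what you accept from the query string and strip $where before the filter ever reaches Mongoose.
    function buildFilter(params) {
        var filter = {};
        if (params.manufacturer) filter.manufacturer = params.manufacturer;
        if (params.query) {
            var raw = JSON.parse(params.query);   // e.g. ?query={"price":{"$lt":10}}
            delete raw.$where;                    // never pass through arbitrary JavaScript
            Object.keys(raw).forEach(function (k) { filter[k] = raw[k]; });
        }
        return filter;
    }

    // usage, assuming an Express handler and a Product model:
    // Product.find(buildFilter(req.query)).sort(req.query.sort || '-price').exec(callback);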
[22:43:48] <ranman> dgarstang: mongodb, version, #config servers, which log is outputting that
[22:44:15] <dgarstang> ranman: version 2.6.1, thats logged to /var/log/mongodb/mongodb.log on mongos node
[22:44:31] <ranman> palominoz: NP, GL
[22:44:55] <dgarstang> it's logging the error at 076 and the connection is at 102... jeez... 76 (milliseconds) to contact all config server isn't fast enough!?
[22:45:32] <ranman> nah I don't think that's it
[22:45:49] <ranman> dgarstang: is the cluster behaving as it should?
[22:47:33] <dgarstang> ranman: here's the log... http://pastebin.com/RBF7gFFM this is a new install... so i've only gotten as far as the config servers and the routing server so far
[22:51:18] <dgarstang> Ugh. Searches for the error "could not verify config servers were active and reachable before write" are getting me C++ code. That's never a good sign
[22:51:32] <ranman> yeah I've never seen this before
[22:51:36] <ranman> but give me a sec
[22:51:38] <ranman> could be benign
[22:51:49] <ranman> trying to repro
[22:51:49] <dgarstang> ok... but I can't connect with the shell either
[22:52:16] <ranman> what connection string do you pass into the mongos?
[22:52:16] <dgarstang> thanks. running the shell gets me "2014-05-19T22:51:04.651+0000 Error: nextSafe(): { $err: "error creating initial database config information :: caused by :: DBConfig save failed: { ok: 0, code: 25, errmsg: "could not verify config servers w...", code: 13396 } at src/mongo/shell/mongo.js:146"
[22:52:44] <ranman> dgarstang: also sorry to do this but I have to run to a meeting, I'll check back again later tonight
[22:53:00] <dgarstang> ranman: kk... how many hours before I should pop back in? :)
[22:53:07] <dgarstang> roughly...
[22:53:16] <dgarstang> 4pm here
[22:55:14] <dgarstang> This is in Amazon too, but I doubt that's an issue. 76 milliseconds is pretty fast to give up
[23:05:12] <federated_life> dgarstang: you need to add a shard before you can write any data to the cluster
[23:05:25] <federated_life> sounds like you have your config servers and shard routers…but nowhere to put the data
[23:06:10] <dgarstang> federated_life: is that definitely required? was hoping, thinking, based on what I read, that I could set up the config servers first, mongos, verify that's working and then move on
[23:06:27] <dgarstang> if that's true, man those errors are misleading
[23:06:55] <federated_life> its pretty clear
[23:07:06] <federated_life> don't you understand binary? ;)
[23:09:49] <dgarstang> I'm following the oreilly book here. it walks through the sharding setup. If they haven't configured sharding yet, what's their mongo shell connecting to? one of the config servers?
[23:10:07] <federated_life> dgarstang: conceptually, you have your config servers that hold meta data, then you have your routers [ mongoS ] that route requests and keep the config data in memory…then you have your mongoD's that are replica sets and comprise a shard. so, each replica set is a shard
[23:10:23] <dgarstang> federated_life: i got that far.
[23:10:32] <federated_life> dgarstang: perhaps you did not specify the config server URI when starting the mongoS?
[23:10:45] <dgarstang> federated_life: I did
[23:11:10] <dgarstang> it sure looks like in the oreilly book like they are connecting to mongos with mongod before they've configured sharding
[23:11:35] <dgarstang> ie because they tell you to run the sh.addShard() command. gotta have a shell to run it on
[23:11:46] <federated_life> dgarstang: you have to understand, you cannot connect to mongoS with mongoD
[23:11:53] <federated_life> mongoD is a daemon database process
[23:11:58] <federated_life> mongoS is the shard router
[23:12:01] <federated_life> mongo is the client
[23:12:19] <federated_life> when you start up your mongoS process, you need to specify the config servers , configdb=host1:27019,host2:27019,host3:27019
[23:12:23] <dgarstang> sorry for the bad venacular. i used 'mongo'
[23:12:29] <dgarstang> federated_life: I am
[23:12:42] <federated_life> so what does your error log say when you start up mognoS?
[23:12:52] <dgarstang> ugh now I gotta find the pastebin again
[23:13:10] <federated_life> dgarstang: really dude, you should follow the steps on 10gen's website
[23:13:18] <federated_life> its really simple
[23:13:41] <dgarstang> federated_life: here's the mongos log output http://pastebin.com/gkSJKacg
[23:13:57] <dgarstang> no scrap that. that's old
[23:14:08] <dgarstang> crap
[23:15:43] <federated_life> take your time, Ill send the bill later ;)
[23:15:53] <dgarstang> not seeing any specific instructions on sharding at http://docs.mongodb.org/manual/core/sharding-introduction/
[23:18:06] <dgarstang> http://pastebin.com/9trKR7g4 ... so if the reason it's reporting "could not verify config servers were active and reachable before write" BEFORE it even tries to connect to the config servers is due to sharding not being configured, then ok.
[23:19:51] <dgarstang> The guide at http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/ CLEARLY says to set up the config servers first followed by mongos. Nothing about setting up sharding yet.
[23:21:00] <dgarstang> federated_life: ^
[23:23:09] <dgarstang> The guide at http://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/ CLEARLY says to set up the config servers first followed by mongos. In their example, once the config servers have been set up, you should be able to use 'mongo' to connect to the local 'mongos'.
[23:35:21] <federated_life> dgarstang: yea..make sure your bindip is set
[23:35:26] <federated_life> for your mongoS
[23:35:38] <federated_life> or try connecting on the IP address that the process is listening on
[23:37:51] <dgarstang> Don't think thats necessary. Starting mongos process returns "Starting mongos: warning: bind_ip of 0.0.0.0 is unnecessary; listens on all ips by default"
[23:38:45] <dgarstang> Same deal tho. http://pastebin.com/Cd9RufWn
[23:39:16] <federated_life> dgarstang: it says your config servers are messed up
[23:39:24] <dgarstang> ^ it's connecting. There's obviously a process listening.
[23:39:41] <dgarstang> federated_life: it seems to be logging that they are ok
[23:39:44] <federated_life> dgarstang: your mongoS is running…but your config servers are messed up
[23:39:56] <federated_life> pastebin your mongoS log
[23:40:03] <dgarstang> again, ok
[23:40:04] <dwstevens> dgarstang: can you confirm that you can connect to your config servers from that host?
[23:40:19] <dgarstang> dwstevens: one sec, log first
[23:41:17] <dgarstang> federated_life: mongos log http://pastebin.com/DSBZJKy4
[23:42:27] <dwstevens> it really seems like your configdb's are not working correctly
[23:42:28] <dgarstang> dwstevens: I can telnet to all three config servers on port 27017
[23:42:54] <dgarstang> dwstevens: seems so. I suppose I could blow away and redeploy
[23:43:28] <dwstevens> do you have the logs from the configdbs?
[23:43:38] <dgarstang> dwstevens: Sure. hang on
[23:45:03] <dgarstang> dwstevens: i restarted mongod on the first config server... got this... http://pastebin.com/Hg33ZUbv
[23:45:17] <dgarstang> 10.47.145.82 is the mongos router
[23:45:46] <dgarstang> not seeing any errors.
[23:47:08] <federated_life> dgarstang: run this , netstat -an | grep '\*'
[23:47:12] <federated_life> and pastebin the result
[23:48:37] <dgarstang> wtf? I stopped and restarted all three config servers, which I had done at least once already, and restarted mongos router and now I can connect with mongo. Ugh!
[23:48:56] <dgarstang> seems... fragile
[23:49:16] <dwstevens> nah, probably just making a lot of changes, and simple little mistakes get in the way
[23:49:52] <dgarstang> dwstevens: Thanks for your help. I'l probably be back in 30min. :) federated_life, ditto
[23:50:02] <dwstevens> if you look back in your logs on the configdbs before the restart you may have some answers to what was going on