PMXBOT Log file Viewer


#mongodb logs for Monday the 15th of July, 2013

[03:25:02] <jonjon> I'm really new to NoSQL. Can I get some quick opinion on this? http://pastebin.com/crGF5CV5
[05:39:08] <racarter> hello, my java mongodb client is using this connection string: mongodb://127.0.0.1:20000/mydb, I have a fresh mongod server running on that port but when I start my client I get this message: couldn't connect to [localhost/127.0.0.1:27017] bc:java.net.ConnectException: Connection refused
[05:39:13] <racarter> is my connection string malformed?
[06:50:30] <imsc> racarter: on which platform you have installed?
[06:51:46] <racarter> imsc: mac. I actually figured it out. the connection string had an extra slash. it had to be mongodb:/127.0.0.1:2000/mydb instead of mongodb://127.0.0.1:2000/mydb
[06:53:06] <imsc> racarter: good
[06:53:26] <racarter> imsc: thanks
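
(For reference: the documented connection string scheme uses two slashes after "mongodb:", so with the host, port, and database from the messages above the URI would normally be

    mongodb://127.0.0.1:20000/mydb

If a driver fails to parse the URI it may fall back to the default host and port, which would explain the attempt against 127.0.0.1:27017 in the original error, though that behaviour depends on the driver version.)
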
[07:34:27] <carl_> hi everyone
[07:34:41] <carl_> can I get some help, I'm having a problem with my dbpath in ubuntu (new to ubuntu)
[07:34:52] <carl_> exit
[07:37:59] <das3in> hi everyone, i was wondering if I could get some help with my dbpath on ubuntu
[07:38:32] <das3in> I've tried editing the /etc/mongodb.conf file, but when I run 'mongod' it still says that it can't find it
[07:40:28] <[AD]Turbo> hi there
[08:14:17] <tadasZ> Good morning everyone! strange thing happened today - I was doing some stuff with website I'm building, and from time to time I check my collections using RockMongo (or console), I inserted some records and tried to check them in RockMongo and they are not there, then tried using console and they are still not there, but my website is returning saved documents as if nothing happened
[08:14:47] <tadasZ> and saving new ones without any problems
[08:15:12] <tadasZ> but when i do db.mycollection.find(); it returns empty results
[08:16:19] <Nodex> did you select the DB first?
[08:16:28] <Nodex> "use <db_name>"
[08:17:08] <tadasZ> sure
[08:17:47] <Nodex> sure you did or sure you're going to try now ?
[08:18:29] <tadasZ> I tried to view my collections using RockMongo and console, both shows that they are empty, but my php find query returns data
[08:18:44] <tadasZ> 140% sure I did
[08:19:08] <ron> then there's still a chance you didn't.
[08:19:19] <Nodex> then your php is either selecting a different database or you are on the console
[08:20:03] <Nodex> check the "test" db on the console - that's where default NON selected db inserts end up iirc
[08:22:05] <tadasZ> :D I was inserting documents in demo server, and checking my collections on localhost
[08:22:15] <tadasZ> sorry
[08:22:37] <Nodex> lol
[08:22:49] <Nodex> happens to everyone from time to time!
[08:24:27] <tadasZ> oh boy 30 mins wasted because alt+tab'ed to wrong browser
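
A minimal shell check for this kind of mix-up, assuming a database named mydb and a collection named mycollection as in the messages above:

    db.getMongo()            // which server the shell is connected to
    db.getName()             // which database is currently selected
    use mydb                 // select the database before querying
    db.mycollection.find()
    // inserts made without selecting a database land in "test" by default
    use test
    show collections
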
[09:15:28] <broth> hello guys
[09:15:42] <broth> i have a simple maybe stupid question
[09:15:48] <Nodex> lol
[09:15:52] <broth> gh
[09:16:27] <broth> can i use the same arbiter for 2 different replica sets?
[09:16:45] <broth> same arbiter with only one instance
[09:20:20] <rspijker> pretty sure that you can't
[09:20:46] <rspijker> what's the problem with running two instances on the same machine?
[09:21:17] <broth> nothing problems :D
[09:21:44] <broth> but i asked to be sure
[09:22:14] <rspijker> a mongod instance can only be part of 1 replica set. An arbiter is a mongod instance, so...
[09:22:34] <broth> that the right way is to run 2 instances
[09:22:47] <statu> Hi all!
[09:23:06] <rspijker> broth: yes :)
[09:23:16] <broth> thanks :)
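
A rough sketch of running one arbiter process per replica set on the same machine; the set names, ports and dbpaths below are illustrative, not taken from broth's setup:

    # two separate mongod processes, one per replica set
    mongod --replSet rs0 --port 30000 --dbpath /data/arb-rs0
    mongod --replSet rs1 --port 30001 --dbpath /data/arb-rs1

    // then, from a shell connected to each set's primary:
    rs.addArb("arbiter-host:30000")    // run against rs0
    rs.addArb("arbiter-host:30001")    // run against rs1
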
[09:23:21] <statu> Anyone knows why I get a "VersionError: Document not found" when I save a document?
[09:24:29] <rspijker> "save a document"?
[09:26:27] <rspijker> http://stackoverflow.com/questions/17499089/versionerror-no-matching-document-found-error-on-node-js-mongoose
[09:26:39] <rspijker> is that your issue statu?
[09:27:02] <statu> rspijker, yes, I am writing a pastebin example :P
[09:29:34] <statu> rspijker, http://pastebin.com/kXQEeEq0
[09:30:59] <statu> rspijker, It is supposed that the error occurs when you modify an array by index, but I get it using the push method :S
[09:35:09] <rspijker> statu: this looks mongoose specific, not mongodb
[09:35:23] <rspijker> I think there is also #mongoose or #mongoosejs
[09:35:55] <rspijker> You'll probably have a better chance there
[09:38:29] <statu> rspijker, thanks :)
[09:40:24] <rspijker> np
[10:15:26] <[AD]Turbo> I've got this error when connecting to a mongodb 2.4.5 server
[10:15:26] <[AD]Turbo> Mon Jul 15 12:07:17.083 [conn183] authenticate db: mytestdb { authenticate: 1, user: "myadmin", nonce: "75a2390bdbe1c475", key: "61d19bc9d2ed17e2f628b6921721b0c4" }
[10:15:26] <[AD]Turbo> Mon Jul 15 12:07:17.083 [conn183] warning: No such role, "userAdminAnyDatabase", in database mytestdb. No privileges will be acquired from this role
[10:15:26] <[AD]Turbo> Mon Jul 15 12:07:17.085 [conn183] command denied: { listDatabases: 1 }
[10:15:26] <[AD]Turbo> Mon Jul 15 12:07:17.093 [conn183] assertion 16550 not authorized for query on mytestdb.system.namespaces ns:mytestdb.system.namespaces query:{}
[10:15:27] <[AD]Turbo> but in that database there is myadmin user with userAdminAnyDatabase privileges:
[10:15:27] <[AD]Turbo> db.system.users.find()
[10:15:28] <[AD]Turbo> { "_id" : ObjectId("51d6c57657623078d952997b"), "user" : "myadmin", "pwd" : "d6d397eb49b89a807f7a640e9665d2b0", "roles" : [ "userAdminAnyDatabase" ] }
[10:15:33] <[AD]Turbo> any idea?
[10:22:49] <rspijker> [AD]Turbo: don't you need to authenticate agains the admin database?
[10:24:56] <[AD]Turbo> no, I have my testdb
[10:25:13] <[AD]Turbo> I have inserted admin and users in that db
[10:25:40] <[AD]Turbo> the strange fact is that I can log into the database via the mongo shell
[10:25:53] <[AD]Turbo> but not from an external admin tool
[10:25:54] <rspijker> is that on localhost?
[10:26:45] <[AD]Turbo> mongo shell (that runs ok), is executed on the same machine where mongod runs, external admin tool in on another machine (inside the same LAN)
[10:27:28] <rspijker> well, that makes sense. That's the localhost exception
[10:27:42] <[AD]Turbo> yes
[10:28:12] <rspijker> I'm pretty sure you need to add an admin user to the admin database
[10:28:14] <[AD]Turbo> but I can't understand why I cannot login from another machine
[10:28:22] <[AD]Turbo> ah ok
[10:28:55] <rspijker> http://docs.mongodb.org/manual/tutorial/enable-authentication/
[10:29:37] <rspijker> just follow the steps there (add admin user, then use that to add db users)
[10:29:48] <[AD]Turbo> I'll have a look, many thanks
[10:29:52] <rspijker> np
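
The gist of that tutorial for a 2.4 server, with placeholder user names and passwords; roles like userAdminAnyDatabase are only honoured for users defined in the admin database, which appears to be why creating myadmin inside mytestdb produced the "No such role" warning:

    use admin
    db.addUser({ user: "siteAdmin", pwd: "secret", roles: ["userAdminAnyDatabase"] })
    db.auth("siteAdmin", "secret")

    // then create per-database users with that admin account
    use mytestdb
    db.addUser({ user: "appUser", pwd: "secret", roles: ["readWrite"] })
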
[10:44:08] <richthegeek> I'm having some trouble with data not being found in some test units... not sure what's going on
[10:45:01] <richthegeek> using the Node driver, with write concern set to 1, after inserting the data {_id: 1, name: "bob"}, a separate connection then queries for {_id: 1} and finds nothing. It then queries again (some condition) and finds the inserted row.
[10:45:33] <richthegeek> any idea what is going on?
[10:47:55] <Derick> with a replicaset?
[10:48:05] <richthegeek> nope, no replica or anything
[10:48:31] <richthegeek> this is with essentially the default configuration for mongo 2.4.5
[10:49:28] <Derick> mhmm, sounds odd, I don't know how your code works of course, but do remember that node does lots of things asynchronously
[10:49:52] <richthegeek> yeah, this is using the async library to perform a series of steps in the test
[10:49:58] <richthegeek> one of the steps being "insert data"
[10:50:05] <richthegeek> which doesnt move on until that's done
[10:50:22] <richthegeek> inside the "insert data" step, there's a serial async array - one callback for each row inserted
[10:51:58] <richthegeek> oh wait i've figured it out... the data was a "join" operation so it was inserting stuff from the "left" side before the "right" side, and then finding rows in the right before they were properly inserted
[10:52:13] <richthegeek> the async callback being diverted because the processor is a daemon op
[10:53:32] <richthegeek> damn race conditions...
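
A minimal sketch of the ordering point, using the Node driver API of that era (the database URL and collection name are made up; the document is the one from the messages above):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://127.0.0.1:27017/test', function (err, db) {
      if (err) throw err;
      var collection = db.collection('docs');
      // w:1 means the callback only fires once the server has acknowledged the write
      collection.insert({ _id: 1, name: "bob" }, { w: 1 }, function (err) {
        if (err) throw err;
        // safe: the insert is acknowledged before this query starts
        collection.findOne({ _id: 1 }, function (err, doc) {
          console.log(doc);
          db.close();
        });
      });
    });

    // starting the find in parallel with the insert (rather than from its
    // callback) can race, which is what the "join" steps above ran into
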
[10:54:10] <Nodex> yeh I agree about remonvv
[10:54:16] <Nodex> oops, didn't see him enter
[10:56:39] <Derick> richthegeek: :-)
[10:56:51] <remonvv> \o
[11:06:13] <remonvv> Derick, is there any way to reopen closed issues in JIRA?
[11:08:44] <Nodex> with a blmb
[11:08:46] <Nodex> bomb *
[11:09:13] <remonvv> Thing is you can't make new ones that refer to old ones it seems either.
[11:13:32] <Derick> remonvv: hmm, I can reopen anything ;-)
[11:18:22] <remonvv2> Derick, okay. https://jira.mongodb.org/browse/SERVER-6749 is recurring for us and caused a rather spectacular failure.
[11:18:56] <remonvv> oops, wrong client
[11:19:54] <remonvv> Derick, actually I think I'll open one in community private so I can add logs
[11:21:05] <remonvv> Are you aware of any issues being fixed with mongos metadata consistency in 2.2/2.4? This is 2.0.8
[11:22:30] <Derick> remonvv: no, sorry
[11:22:38] <Derick> (that doesn't mean it didn't happen)
[11:23:45] <remonvv> I understand ;) We basically invoked 9 addShard commands and out of nowhere it started complaining about the error mentioned in that ticket, again.
[11:24:23] <remonvv> Oh well. I'll collect all logs and such and open a new issue.
[11:27:41] <Derick> i can reopen it for you
[11:27:59] <Derick> done
[11:28:03] <Derick> but you're in 2.0 still?
[11:32:05] <remonvv> That cluster is.
[11:32:07] <remonvv> Or was :s
[11:32:33] <remonvv> We've upgraded it this weekend but we had a complete cluster failure due to this.
[11:33:07] <Derick> oops
[11:33:15] <remonvv> Indeed.
[11:33:36] <remonvv> And now I have to find a tool to combine 25 mongos logs into one time sorted one ;)
[11:34:41] <remonvv> The worrying thing is that I can't find any 2.2-2.4 issues in JIRA that would have addressed this error.
[11:35:10] <Derick> lots of stuff has been reworked though
[11:35:25] <remonvv> I live in hope ;)
[11:36:06] <remonvv> We scale up and down constantly so it's likely we run into this sort of thing more often if it is still an issue.
[11:36:22] <remonvv> Every TV show we run we go from 1-2 shards to upwards of 20 for some shows.
[12:47:54] <greenmang0> hello all, we just cleaned a lot of data from our production mongo replica set. we were expecting it would free up a huge amount of disk space, but it didn't. after looking up mongodocs we realized that we can't clean it up - http://docs.mongodb.org/manual/reference/method/db.repairDatabase/#db.repairDatabase , so my question is: is there any way we can free up the disk space?
[12:50:47] <Derick> you can run repair
[12:52:05] <Derick> there is also compact...
[12:52:16] <Derick> http://docs.mongodb.org/manual/reference/command/compact/
[12:52:48] <Derick> greenmang0: i think compact is what you want
[12:54:41] <Infin1ty> the best way is to resync, it seems faster than repair
[12:55:22] <Infin1ty> if you go the repairpath and you don't have enough freespace, you can mount an nfs mount or additional storage and use repairpath so it'll store temp files there
[12:55:24] <Derick> yeah, but he wants to compact, not repair ;-)
[12:55:44] <Infin1ty> Derick, is compact faster?
[12:55:52] <Derick> it does something else
[12:56:43] <Infin1ty> I am running this procedure now over 7 shards, got tons of freespace to claim :), i just resync, seems better
[12:57:59] <h0bbit> Derick, Infin1ty the compact docs say that it should be run on a collection. what if I've dropped a large collection and want to reclaim the space?
[12:58:14] <Infin1ty> "compact requires a small amount of additional disk space while running but unlike repairDatabase it does not free space on the file system."
[12:58:36] <Infin1ty> seems like either repair or resync, i prefer the resync path
[12:59:04] <h0bbit> Infin1ty, I think I agree with you on this. resync seems safer.
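
For reference, the two commands being weighed, with a placeholder collection name:

    // defragments a collection's extents but, as quoted above,
    // does not return space to the operating system
    db.runCommand({ compact: "mycollection" })

    // rewrites all data files and does free disk space, but needs roughly
    // as much free space again (plus some headroom) while it runs
    db.repairDatabase()

The resync approach (wipe a secondary's dbpath, let it do an initial sync, then rotate through the members) reclaims the same space without the extra free-space requirement, which is why Infin1ty and h0bbit prefer it above.
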
[13:02:36] <remonvv> Derick, do you know what the process is to report an issue that involves logs that cannot be public?
[13:02:47] <remonvv> "Community Private" does not seem to allow opening normal tickets
[13:02:53] <Derick> remonvv: hmm
[13:03:02] <Derick> remonvv: besides a support contract you mean? ;-)
[13:03:17] <Derick> h0bbit, Infin1ty : sorry, got this all wrong
[13:03:29] <Derick> remonvv: I dunno, let me ask
[13:03:48] <remonvv> Derick, this isn't support. And I've noticed someone referring to Community Private as a way to do so for public issues (e.g. report issue public, but keep logs private)
[13:03:53] <Foxandxss> Hi, I said no to me. I am doing 10gen course and I really need a hint, just 1 ;( :P
[13:04:03] <Derick> remonvv: yeah, let me ask
[13:04:05] <remonvv> And to be honest I think it's about time 10gen starts paying me rather than the other way around ;)
[13:04:12] <Derick> hehe
[13:04:24] <Derick> remonvv: apparently, it *is* a community private ticket
[13:04:55] <remonvv> Derick: meaning? I can only open a question, tracking, improvement or task
[13:05:05] <Derick> yes, but I can create it for you and set you as reporter
[13:05:05] <remonvv> I can just open a public ticket and ask there
[13:05:08] <Derick> apparently that's how that works
[13:05:18] <remonvv> Aha ;)
[13:05:26] <Derick> what do you want as title? :-)
[13:05:31] <remonvv> So I report public first without logs and you move it and the description to private?
[13:05:36] <Derick> no
[13:05:44] <Derick> well, I guess I could do that too
[13:05:51] <remonvv> Hm "OMGOMGOMG ADDHARD BROKENZ OMG"?
[13:05:58] <Derick> :P
[13:06:00] <remonvv> Well whatever you want. I'll have to type up the description ;)
[13:06:55] <Derick> see PM
[13:07:09] <Infin1ty> the slowest thing in the resync/repair is building the index after, seems that it sorts twice or something, it takes ages on large collections, it seems as if it's only single-threaded
[13:07:37] <Infin1ty> it takes me around 6 hours to replicate a large collection, then the index building process (just one index, _id) takes almost 2 days
[13:08:52] <Foxandxss> So, if I limit a query to 30 it returns 30 but scans 40, and with no limit it returns and scans 40. How do I create an index so that when I limit to 30, it only scans 30? Hints welcome! :P
[13:10:49] <Nodex> eh?
[13:26:34] <rspijker> Foxandxss: any proper index for the search should accomplish that...
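
A sketch of what "a proper index" means here, with made-up collection and field names since the actual query isn't shown: if the filter and sort are both covered by a compound index, mongod can walk the index in order and stop once the limit is reached:

    db.foo.ensureIndex({ status: 1, created: -1 })
    db.foo.find({ status: "active" }).sort({ created: -1 }).limit(30).explain()
    // with the index above, explain() should report nscanned close to 30
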
[13:27:22] <greenmang0> Derick: Infin1ty - thanks, I will go with the resync
[13:29:34] <greenmang0> Infin1ty: what do you think about this approach - add one more member to replset, it will sync and will occupy less disk space, then make it a primary, then remove secondary members one by one, delete all the data on them, and add them back to replset
[14:18:07] <qrada> yo
[14:20:38] <limpc> hi guys. im having trouble getting mongo to work with php. im using the apt repository (ubuntu) to install. everything installs fine, no errors, mongod is running, but when i try to connect to the db via php, it just hangs there, and there's no connection entry in the mongo log
[14:21:16] <BurtyB> limpc, firewalled?
[14:21:24] <limpc> no, it is localhost, BurtyB
[14:23:35] <Derick> limpc: show us the php script in a pastebin
[14:24:02] <Derick> limpc: and, which version of the php driver do you have installed (check the output of phpinfo())
[14:25:28] <limpc> ah i found the problem. was trying to connect to the standard mongo port
[14:25:37] <limpc> im using non-standard ports and forgot to change that in my framework config
[14:33:49] <kevinsimper> Should you always check if a key inside a document exists because you can't be sure? Or should you do a db.col.find().forEach and ensure all docs have at least a value?
[14:34:24] <kevinsimper> Like if I made a new function which counts pageviews
[14:34:54] <Nodex> you can select where $exists : false
[14:35:09] <Nodex> db.foo.find({a:{$exists:false}});
[14:35:56] <kevinsimper> That's cool
[14:36:19] <kevinsimper> What do you do when you make new functions? Do you then update the old documents?
[14:36:27] <Nodex> functions ?
[14:36:33] <kevinsimper> logic
[14:36:48] <Nodex> I dont follow sorry
[14:37:52] <kevinsimper> You have 100 old documents that you already created, they have a "title" and "content"
[14:38:39] <remonvv> $rename!
[14:38:59] <kevinsimper> Now you made a comment sections, so now when you create new documents they have "title", "content", "comments", "comments_count"
[14:39:01] <Nodex> and you want to add pageviews to the list of fields?
[14:39:29] <Nodex> well OLD documents wont have the comments nor count so why waste space adding them as keys?
[14:39:32] <kevinsimper> You now have 100 old documents, which does not have the "comments_count" field.
[14:39:56] <Nodex> well, Old *
[14:40:21] <kevinsimper> Do people A: if(comments_count != undefined OR B: db.content.find().forEach()
[14:40:22] <Nodex> mongo is not like *SQL - you don't have to define a field to be able to add it in the future
[14:40:59] <kevinsimper> No but then you have to add extra code to check if the field exist
[14:41:03] <Nodex> why ?
[14:41:22] <Nodex> mongo doesn't care if it exists when you select it or update it?
[14:41:24] <Nodex> -?
[14:41:55] <kevinsimper> If you try to use it and there is no value you will get an exception.
[14:42:06] <Nodex> only in your driver
[14:42:15] <kevinsimper> Because you are trying to get a value which does not exist
[14:42:23] <Nodex> in your driver / ORM perhaps
[14:42:31] <kevinsimper> YEAH OF COURSE
[14:42:34] <Nodex> that's not really the concern of mongodb though
[14:42:40] <Nodex> that's a concern of your APP
[14:43:01] <kevinsimper> wtf, your stupid
[14:43:02] <kevinsimper> Do people A: if(comments_count != undefined OR B: db.content.find().forEach()
[14:43:13] <Nodex> dude calm down or you will end up with no help
[14:43:37] <Nodex> People do neither because it's not needed, if YOUR DRIVER requires it then put your own checks in
[14:43:48] <rspijker> kevinsimper: why not just do db.content.find({"comments_count":{$exists : true}}).forEach() ?
[14:47:03] <kevinsimper> rspijker: Mongodb is as you know schemaless, but either you update the old documents or you make a check in your code.
[14:47:49] <Derick> yes
[14:47:49] <rspijker> that depends, I think
[14:48:00] <rspijker> in this case you have a count, so you can make it 0
[14:48:00] <Nodex> if you don't know the answer already, you don't deserve to have a job that relates to databases
[14:48:21] <Nodex> if you think that foreaching every time to check for something is efficient then god help your boss
[14:48:37] <rspijker> in some cases you might have some field that doesn't have a clear default value
[14:49:08] <rspijker> then adding it to "old" documents with a null value seems the most logical approach
[14:49:15] <rspijker> that would still require checks in code though...
[14:49:52] <Nodex> space vs efficiency = the choice is the developers... for me efficiency is #1 priority, not the same for everyone thougbh
[14:49:54] <Nodex> though*
[14:50:41] <kevinsimper> rspijker: How do you do?
[14:51:07] <rspijker> what do you mean kevinsimper ?
[14:51:10] <kevinsimper> rspijker: That is how i am doing it now, setting a default value on old documents,
[14:51:57] <rspijker> for this example, with the count, I would add count=0.
[14:52:35] <Nodex> db.counters.update({number_of_idiots_using_mongodb:{$inc:1}});
[14:52:38] <Nodex> oops, wrong window
[14:53:18] <rspijker> space considerations are also somewhat relative… what happens when things are moved/updated, you get fragmentation, etc. Especially when the amount of old documents is relatively low I would just update them...
[14:53:51] <kevinsimper> rspijker: Okay, that was just what i was looking for :-) In MySQL I would just set the default value to zero in the schema, but i know MongoDB is different.
[14:54:00] <rspijker> it will improve the readability of your code, make things easier to manage, with (imo) very little downside
[14:54:36] <kevinsimper> So you always consider it, you dont have a rule like?
[14:54:56] <rspijker> I don't think it is a question you can answer generally
[14:55:08] <kevinsimper> So if it was 1 million documents, you would not add an unnecessary zero
[14:55:31] <kevinsimper> Because you could assume it would be zero, because the value is undefined
[14:56:06] <rspijker> it depends heavily on what is important for you. If you need every byte and things aren't going to change much, do some checks in code. If space isn't as much of an issue for you and you prefer code readability, add the field...
[14:56:25] <Nodex> boooooom back to my original answer
[14:56:33] <kevinsimper> Or would you? I am just interested, because from MySQL where i am coming from it was not something i had to deal with, because it was just how it was with a schema
[14:57:38] <rspijker> for 1M docs I would gladly add a 0 in my current use cases
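
A sketch of the backfill being discussed, assuming the collection is called content as in the earlier messages:

    // give every old document a default of 0, touching only the docs
    // that don't have the field yet
    db.content.update(
      { comments_count: { $exists: false } },
      { $set: { comments_count: 0 } },
      { multi: true }
    )
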
[14:57:38] <limpc> i cant seem to get a db.insert to work. what am i doing wrong? http://pastebin.com/Y817j7er
[14:58:22] <Derick> auth is reserved I think, try:
[14:58:27] <Derick> db['auth'].insert( ...
[14:58:28] <kevinsimper> rspijker: Thanks for your input :-) Much appreciated hearing how you are doing it with MongoDB
[14:58:35] <rspijker> sure, np
[14:58:47] <Nodex> limpc : else add via add user
[14:58:58] <Derick> Nodex: that's not system.auth
[14:59:20] <Nodex> oops, didn't see the "use users"
[14:59:24] <Nodex> my bad
[15:00:25] <lxnay> hi, i have a compilation problem with mongodb-2.4.5 on i686
[15:00:53] <lxnay> src/mongo/client/sasl_client_session.cpp: In function 'mongo::Status mongo::{anonymous}::_mongoInitializerFunction_CyrusSaslAllocatorsAndMutexes(mongo::InitializerContext*)':
[15:00:56] <lxnay> src/mongo/client/sasl_client_session.cpp:77:28: error: invalid conversion from 'void* (*)(long unsigned int)' to 'void* (*)(size_t) {aka void* (*)(unsigned int)}' [-fpermissive]
[15:01:35] <lxnay> it looks like size_t is defined as unsigned int while somebody thought it's always long unsigned int...
[15:02:45] <limpc> so why isnt it working?
[15:03:28] <limpc> oh let me try that Derick
[15:03:47] <limpc> Derander: that did not work.
[15:03:54] <limpc> ill try changing the collection name
[15:05:31] <limpc> hmm mongo's docs reference only 'system' and '$' as restrictions
[15:05:54] <Derick> yes
[15:06:11] <limpc> oddly enough though, changing the collection name did solve the problem
[15:06:17] <limpc> so 'auth' is an undocumented restriction?
[15:22:42] <JoeC1> If I was to use a mongo collection to store session data, after issuing an update to a record, can I reasonably assume it will be available for immediate read?
[15:23:14] <Derick> yes, but it is not guaranteed
[15:23:37] <JoeC1> Is there any way to make sure it is?
[15:23:39] <rspijker> limpc: auth is not restricted
[15:23:48] <rspijker> limpc: but there is a db.auth command
[15:23:56] <Derick> JoeC1: from the same connection that is w=1, it should work
[15:24:13] <JoeC1> Derick: Thank you
[15:24:17] <rspijker> so when you go db.auth.insert(… it will try accessing the insert element of the auth function and not of the db
[15:24:22] <Nodex> rspijker : he is not after the system auth
[15:24:33] <Nodex> he is using a different collection
[15:24:34] <rspijker> db.getCollection("auth").insert(…) should work fine
[15:24:45] <rspijker> Nodex: who says anything about system auth?
[15:24:47] <Nodex> database*
[15:25:44] <Derick> db.auth isn't the system auth Nodex ;-)
[15:25:56] <Nodex> I know
[15:27:11] <Nodex> it is however a function for it
[15:28:06] <rspijker> exactly, it's a function. You can't do function.insert(…), which is why it fails
[15:28:24] <rspijker> has nothing to do with it wrapping system auth or not
[15:30:04] <rspijker> try doing db.eval.insert(…) or db.getName.insert(…). You'll get the exact same error ;)
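
In other words the collection name itself is fine, it just collides with the shell's db.auth() helper, so it has to be reached through getCollection (the sample document is made up):

    db.getCollection("auth").insert({ user: "bob", token: "abc123" })
    db.getCollection("auth").find()
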
[15:41:26] <dllama> hi guys, i'm fairly new to mongo, pretty sure that when i started it up, i only ran mongod. i didn't specify a local folder or anything.. I'd like to restart mongo and reload it properly, with a full path to the db folder and of course maintain all my documents, etc. whats the safest way to do that?
[15:44:28] <Striki> do you have any pointers why 1 mongos (out of 30-40) is the source of almost all setShardVersion requests?
[15:44:36] <rspijker> just turning off mongod, moving the files to your desired dbpath and starting mongod up again with the correct dbpath should be fine, dllama
[15:45:28] <Nodex> dllama : did you create a mongodb config / init script?
[15:45:43] <dllama> I dont believe that i did
[15:45:54] <Nodex> did you install it as a package?
[15:45:56] <dllama> if i'm not mistaken, i think i may have just done apt-get install and then just ran it
[15:46:09] <rspijker> dllama: if you want no downtime, you can start a second mongod (with correct dbpath), configure it as secondary, wait till it syncs, then step down the original master
[15:46:13] <Nodex> right, you will have a mongodb.conf... probably in /etc/
[15:46:42] <Nodex> edit that and it will show you the defaults it uses, change what you need, save it and do `/etc/init.d/mongodb restart`
[15:47:17] <rspijker> how will that migrate his current documents exactly?
[15:47:39] <dllama> I dumped them to a local folder,
[15:47:44] <dllama> with mongodump
[15:47:52] <Nodex> dumped what?
[15:47:55] <dllama> the db
[15:48:07] <Nodex> you can just copy the old ones if you need to
[15:49:28] <dllama> from the running daemon, I dumped them to a folder in my home directory. mongo powers a cms that i'm using and my initial upgrade of the CMS failed to migrate the DB properly, forcing me to essentially rewrite the entire site (mostly copy/paste but was still a tedious process). so i'm actually pretty nervous about this procedure
[15:49:37] <dllama> would absolutely hate to go through that again
[15:49:59] <Nodex> you wont have lost your data. Changing values in the config does not erase the db's
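
A sketch of what that looks like for a stock Ubuntu package install; the paths are the usual package defaults, not confirmed from dllama's machine:

    # /etc/mongodb.conf
    dbpath=/var/lib/mongodb
    logpath=/var/log/mongodb/mongodb.log

    # to relocate the data: stop the service, move the directory,
    # point dbpath at the new location, then restart
    sudo /etc/init.d/mongodb stop
    sudo mv /var/lib/mongodb /data/mongodb
    #   (edit dbpath=/data/mongodb in /etc/mongodb.conf)
    sudo /etc/init.d/mongodb start
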
[15:50:34] <Striki> do you know how I can find out what server owns a certain serverId? the serverId is an ObjectId
[15:50:51] <Striki> it's given in setShardVersion
[15:50:55] <Striki> the serverID
[15:52:19] <Nodex> http://docs.mongodb.org/manual/core/sharded-cluster-metadata/
[15:52:33] <Nodex> http://docs.mongodb.org/manual/reference/config-database/#config.shards
[15:53:43] <Striki> yeah, I was looking through it, but couldn't find a serverId as an ObjectId
[15:54:11] <Nodex> have a look in system.indexes
[15:56:11] <Nodex> very strange how it's been assigned an ObjectId
[15:58:17] <Striki> the request looks like this in the mongodb admin web interface
[15:58:21] <Striki> { setShardVersion: "", init: true, configdb: "config1:27019,config2:27019,config3:27019", serverID: ObjectId('51bec2ff9e5fe2d77437b56d'), authoritative: true, $auth: {} }
[15:58:31] <Striki> that serverID, I was wondering how I can dig up what it refers to
[15:58:46] <Striki> since that mongos, that is issuing this request, is doing it all too often, I suppose
[16:37:03] <dllama> Thanks for the help with my previous question guys! i have 1 more :)
[16:37:11] <dllama> what exactly is a "bson_dump" ?
[16:37:54] <Nodex> it's analogous to a mysql dump
[16:38:28] <Nodex> bson = binary json - 10gen's kinda db backend language I suppose
[16:38:37] <dllama> hmm, if my rails app is showing that as an error, would that be a configuration error with the db or something really specific to the app?
[16:38:40] <Nodex> http://bsonspec.org/
[16:38:52] <dllama> i'm forced to use a cms here and its been giving me those errors all morning and i can't really seem to narrow down why
[16:38:53] <Nodex> depends what the error is
[16:39:04] <dllama> undefined method `__bson_dump__' for Thu, 01 Jan 1970:Date
[16:39:17] <Nodex> what is it getting fed to come up with that ?
[16:39:27] <dllama> a submission form
[16:39:40] <dllama> can that be a failed validation?
[16:40:14] <Nodex> form validation?
[16:41:21] <dllama> I can gist the entire error/trace
[16:41:54] <dllama> i just dont even know where to turn with this, either #rubyonrails or here, since the developers irc room has 3 people in it which never seem to answer :/
[16:42:38] <Nodex> the only time you should even see those words (bson dump) is when backing up or restoring a mongodb
[16:43:19] <dllama> crap
[16:43:36] <dllama> i'm doing neither, just trying to submit a form :/
[16:44:16] <Nodex> perhaps the form is populated via the DB and it's putting the information there?
[16:46:51] <dllama> it shouldn't
[16:50:37] <dllama> it doesn't make sense that a form would be populated prior to inserting
[16:50:53] <dllama> well actually in the logs i see the insert statement
[16:51:02] <dllama> and right away the error
[16:51:28] <Nodex> can you pastebin it ?
[16:52:12] <dllama> sure, i'll gist full log including the insert
[16:54:46] <dllama> https://gist.github.com/mvoloz/1dfea624056abe8d7335
[16:54:56] <dllama> i split it into 2 files,
[16:58:11] <dllama> the insert_statements actually show up before the error in the log
[16:58:18] <dllama> gist just reordered them
[16:59:14] <limpc> hmm have another problem, when calling Session::instance(); i get an ' Error reading session data.' error from kohana
[16:59:38] <limpc> no errors in logfile
[17:02:20] <limpc> any ideas?
[17:03:26] <limpc> hmm when i print_r $e (from inside kohana's session class), it says the session was already started.
[17:03:30] <Nodex> dllama : I don't know ror enough to help you, if it was mongo or in the mongo log rather it would be better
[17:03:55] <dllama> i'm looking through the mongo log now to see if theres anythign helpful there
[17:03:58] <Nodex> limpc: that sounds like your framework's error
[17:04:09] <dllama> from what i've read, that has something to do with MOPED (not sure what that is even)
[17:04:26] <Nodex> sounds like some templating crap
[17:05:20] <limpc> Nodex: ah i found the issue. thanks
[17:05:51] <Nodex> limpc : a piece of advice, redis is much faster at session management than mongo
[17:06:23] <limpc> yeah so is memcache. but they're using mongo/mysql, so i gotta go with the flow
[17:06:40] <Nodex> on high inserts mongo sometimes chokes and locks things causing timewaits
[17:06:43] <dllama> Nodex, is there a way to enable more indepth logging? i'm not really getting much from mongodb.log
[17:06:52] <limpc> Nodex: not if you have enough shards.
[17:07:09] <Nodex> limpc : that doesn't matter, eventually the shards will lokc
[17:07:11] <Nodex> lock*
[17:07:26] <limpc> Nodex: do you have a whitepaper or doc somewhere outlining that issue?
[17:07:28] <Nodex> and you shouldn't need to scale horizontally to cover sessions
[17:07:41] <limpc> well an insert is a server-level lock.
[17:07:51] <Nodex> no it's collection level atm
[17:07:51] <limpc> so you HAVE to scale to achieve concurrency
[17:08:12] <limpc> ok well that doesnt exactly solve the issue ;)
[17:08:36] <Nodex> tbh your bottleneck won't be mongodb anyway so it probably doesn't apply to you
[17:08:37] <limpc> but collection is definitely an improvement
[17:10:16] <Nodex> dllama : I don't know what to suggest, perhaps ask the creator of the CMS?
[17:10:32] <limpc> Nodex: hmm, mongo docs say locks are currently database, not collection
[17:10:58] <dllama> Thanks Nodex, seems like thats my best bet @ this point, just wish they actually were available in the irc room they advertise… i'd be furious if this were a commercial product :/
[17:11:36] <drag> Hi there, if I query mongodb with a BSON like: {_id=,name=Joe}, will it ignore the empty _id value or will that cause an error?
[17:11:39] <dllama> went as far as adding every name/mention from their git pages on skype, and nobody has yet accepted my invite lol
[17:13:24] <Nodex> not my day today, I meant database LOL, time to give up helping for the day
[17:13:54] <Nodex> my point still stands, mongo will not be your bottleneck for sessions, your php framework will melt before that happens
[17:17:00] <drag> Hi there, if I query mongodb with a BSON that like: {_id=,name=Joe}, will mongo ignore the empty _id value or will that cause an error?
[17:17:23] <Nodex> on the shell or in your driver?
[17:17:29] <drag> Java driver
[17:18:05] <Nodex> pass, I would imagine it would throw a fit tbh
[17:18:35] <Nodex> the php driver throws an exception if the _id is supposed to be an ObjectId and isn't, can't imagine the java driver differing much
[17:19:12] <drag> ok, thanks
[17:37:59] <remonvv> \o
[17:45:35] <silasdavis> The following query returns a project of a document that I want:
[17:45:38] <silasdavis> db.licenses.find({ "distributions.key": "b41f7c82-756b-43c8-9435-15cff7498d15",}, {"distributions.name": 1})
[17:46:01] <remonvv> Good to know ;)
[17:46:11] <silasdavis> I would like the same projection but only including the first element of the distributions array, I've tried:
[17:46:19] <silasdavis> db.licenses.find({ "distributions.key": "b41f7c82-756b-43c8-9435-15cff7498d15",}, {"distributions.$.name": 1})
[17:46:27] <silasdavis> but it does not work - am I wrong to think that it ought to?
[17:47:25] <remonvv> Yes.
[17:47:40] <remonvv> In so much that the field filter does not work with the positional operator.
[17:48:10] <remonvv> If the distributions array can be large and your requirement is to only fetch one you may want to rethink your schema.
[17:48:17] <silasdavis> remonvv, is there a comparable projection that would work
[17:48:28] <remonvv> Meaning either do a document per distribution, or make the distribution key the field name (not recommended)
[17:48:36] <mrapple> in a map/reduce, i'm creating an array with a lot of values, with duplicates possible. should i remove duplicates in the reduce or finalize stage?
[17:49:08] <silasdavis> remonvv, ah it's not really a size issue, it's just for convenience at the command line and I wanted to project out the elements before doing a
[17:49:11] <silasdavis> an update
[17:49:13] <remonvv> Not really. Field filtering is just not meant to do a lot more than exclude fields. It is not meant to be selective about which array elements ought to be returned.
[17:49:30] <silasdavis> would the same projection work in an update context
[17:49:36] <remonvv> silasdavis, I see. Well if it's purely shell you can do it in JS.
[17:49:43] <silasdavis> if I wanted to update the first matching name?
[17:49:54] <remonvv> Ah, well that's different.
[17:50:01] <silasdavis> remonvv, yes I'll do that now I understand the scope of find, thanks
[17:50:23] <remonvv> You can use the positional operator exactly as you are doing now in an update.
[17:51:13] <remonvv> So, for example : db.licenses.update({ "distributions.key": "b41f7c82-756b-43c8-9435-15cff7498d15",}, {$set:{"distributions.$.name": "HI!"}}) works just fine
[17:51:52] <remonvv> Also have a look at $elemMatch for a somewhat more intuitive element matching if you're matching on more than one array element field.
[17:52:05] <silasdavis> well do .. thanks
[17:52:28] <silasdavis> it would be nice if find allowed the same semantics in projections, so you can do a sanity check with a find before an update
[17:52:42] <silasdavis> will *
[17:54:37] <remonvv> silasdavis, you can, but the "fields" part of find(criteria, fields) is not meant to do that
[17:54:50] <remonvv> you can check it with JS though
[17:55:40] <remonvv> To be honest you want to test updates with updates.
[17:55:44] <remonvv> Try it on test data.
[17:56:17] <remonvv> There are a lot of subtleties you'd miss if you don't
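
For the shell sanity check silasdavis wanted, projecting the whole matching array element (rather than a dotted path under $) does work on 2.2+; whether that is close enough to the intended projection is an assumption:

    // returns only the first distributions element that matched the query
    db.licenses.find(
      { "distributions.key": "b41f7c82-756b-43c8-9435-15cff7498d15" },
      { "distributions.$": 1 }
    )
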
[18:18:32] <mrapple> anyone know why i can't get any debug output for M/R finalize to show up?
[18:19:49] <mrapple> just using print() and printjson()
[18:19:56] <mrapple> it will show up in the map or reduce part, but not finalize
[19:51:20] <idlemichael> i have a mongo collection that's about ~120GB large. i'm trying to export it to another mongo instance on a different server -- every time i import data from the old server to the new server, mongo seems to re-do the entire index. is that standard? is there a way to background the _id index creation, is that safe?
[19:54:08] <awpti> Could use some advice, doing a little toy project to learn Mongo better. Is this way of storing this data going to bite me in the rear? http://pastie.org/private/lxrgkrroig8jzgfvsrbg
[20:00:04] <crudson> awpti: looks fine to me
[20:00:16] <remonvv> idlemichael, there is and it's safe but you'll have to start it prior to the import
[20:00:29] <remonvv> And obviously data access for that period will be significantly slower for that collection
[20:00:34] <idlemichael> remonvv: can i do it after the collection is created?
[20:00:48] <idlemichael> remonvv: er, after the index is created*
[20:01:08] <remonvv> Yes that's what I'm saying. Create the index, then do the import.
[20:01:21] <remonvv> You can create indexes in the background but I'm not sure if that works for _id indexes
[20:01:22] <idlemichael> alright -- thanks remonvv
[20:01:34] <remonvv> Np
[20:01:50] <remonvv> http://docs.mongodb.org/manual/tutorial/build-indexes-in-the-background/
[20:02:02] <idlemichael> ya -- i read that, but htey don't mention the id
[20:02:03] <idlemichael> _id*
[20:04:40] <raulsh> /j #ubuntu
[20:16:24] <Chammmmm> Hi, I had to drop a collection due to a bug in 2.4.2 for corrupted indexes.. After doing a db repair I have been able to drop it.. but now the collection is still showing up in the shard status.. with chunks distributed and everything..
[20:16:50] <Chammmmm> Will recreating the collection and reshard it fix it? or I actually need to hack the config database
[20:17:24] <Chammmmm> and I sure dont want to do the second option :)
[20:25:14] <aandy> hi guys, i'm in a bit of a jam. i have a fairly simple replica set of three nodes. running rs.status() shows ALL nodes as secondary (no primary). so obviously i can't reconf to change the priority, or step down. i've tried freezing two of them, but the remaining doesn't seem to attempt to become primary. do i need to set up an arbiter or can i solve it easier?
[20:27:04] <retran> http://docs.mongodb.org/manual/core/replica-set-elections/
[20:29:14] <aandy> i'm afraid i'll need another hint. it's been running like this for about a month, and with heartbeats (and low latency), so elections aren't run (successfully anyway)
[20:29:21] <retran> there's a few scenarios under which your nodes could refuse to elect a primary
[20:29:31] <retran> so need more hints from you
[20:31:46] <aandy> oh wait, i may have confused lastheartbeat with lastheartbeatrecv. here's my conf + status: http://pastie.org/private/ubn5lr1ytix3dmo8vji7ag
[20:34:32] <aandy> yeah that helped. turns out one of the nodes could not reach two of the others (short story: tunnel mixup). thanks, retran :)
[20:36:33] <retran> cool glad it worked. i guess your problem fell under network partition
[20:37:06] <aandy> typical assumption fail, hehe
[20:39:17] <awpti> Okay, I can't figure this out and the docs don't go into it.. how do I remove an item from a doc in a doc? http://pastie.org/8143662 (I want to get rid of created_time from login_data)
[20:44:02] <retran> awpti, serialize a new object based on existing object minus the field
[20:44:45] <retran> since you have the added complexity of recursive doc, you would possibly need to control for concurrency in a transaction
[20:45:26] <retran> let me make sure that's the best way, one min
[20:45:54] <retran> i'm wrong
[20:46:31] <retran> just use {$unset: login_data.created_time:1}
[20:47:12] <awpti> Ohh, that's simple. Wish that was in the docs. :O
[20:47:31] <retran> i dunno it should work if it's the same notation as other similar things
[20:47:36] <retran> i havnt done it yet :(
[20:47:45] <retran> perhaps i'm egregiously wrong
[20:47:55] <awpti> Hmm, yeah. unexpected token on the .
[20:48:00] <retran> one min
[20:48:14] <awpti> There's gotta be a way to do it without recreating the object. I'm guessing I'm just missing this on the docs.
[20:48:16] <retran> put in quotes
[20:48:30] <retran> the field reference
[20:49:17] <awpti> Ran without errors, but didn't drop the field.
[20:50:57] <retran> db.col.update({/*condition*/},{$unset: { 'login_data.$.created_time':1}});
[20:52:57] <awpti> Looks like there's an open issue on this in jira.
[21:05:52] <retran> i just noticed this
[21:05:56] <retran> login_data is array
[21:07:59] <retran> a singleton :|
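
A sketch of the two cases, using the field names from the pastie and a placeholder match condition; whether login_data is an embedded document or an array of subdocuments changes the path:

    // login_data as an embedded document:
    db.col.update({ /* condition */ }, { $unset: { "login_data.created_time": "" } })

    // login_data as an array of subdocuments: address the element by index
    db.col.update({ /* condition */ }, { $unset: { "login_data.0.created_time": "" } })
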
[21:26:10] <idlemichael> is there a way to make the _id index non-blocking?
[21:27:54] <retran> what do you mean by non-blocking
[21:28:37] <retran> do you mean process the index in the background?
[21:28:43] <idlemichael> yes
[21:28:47] <retran> i'm pretty certain the answer would be 'no'
[21:28:58] <idlemichael> you can do it for other indeces
[21:29:02] <retran> or else it would be able to ruin many things
[21:29:04] <idlemichael> but nobody mentions the _id index
[21:29:20] <retran> i dont see how it could work
[21:29:32] <idlemichael> i think it's time to try it out
[21:29:52] <retran> the _id is based on time, but if it finds there's another inserted at same time it does some other magic to it to make it different
[21:30:30] <retran> i guess that would make your inserts potentially perform table scans just to complete an insert
[21:30:40] <retran> which is kinda weird
[21:30:51] <retran> i dont think it makes any sense at all to make _id non blocking
[21:31:26] <jblack> I would think that part of _id is a hash of the data.
[21:31:28] <retran> if mongo lets you do it, i dont think it would accomplish faster inserts
[21:31:33] <retran> no it's not jblack
[21:31:51] <retran> if it was a hash of the data it would mean the user would always have to insert unique documents
[21:32:08] <retran> oh, a part... yeah i dont think it is.
[21:32:25] <jblack> Yeah, of id. for create. otherwise, you already have an id. ;)
[21:32:27] <retran> the mongodb devs would know, but i read it's most likely to be the time of insert
[21:32:42] <retran> and if collision, it does something to make it unique
[21:32:57] <retran> (in a nutshell thats how mongo _id works)
[21:33:06] <jblack> http://docs.mongodb.org/manual/reference/object-id/
[21:33:23] <jblack> time, machine, process and a counter
[21:33:35] <retran> there we go
[21:33:46] <retran> all very fast things to get
[21:33:48] <jblack> wow, that google thing is pretty awesome, no?
[21:34:03] <retran> nothing about calculating hashes
[21:34:16] <jblack> Yup. I was completely off on that. =)
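
A quick way to see that composition from the shell (no hashing involved):

    var id = ObjectId()
    id.getTimestamp()    // the 4-byte timestamp portion, as an ISODate
    id.str               // 24 hex chars: timestamp, machine, pid, counter
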
[21:34:50] <retran> anyway, that doesn't change that having _id be background indexed would be a weird idea
[21:35:09] <retran> since it could potentially cause table scan (negating the speed of insert gain from background indexing)
[21:35:21] <retran> it would either have to have 'caught up' or do a table scan
[21:35:31] <idlemichael> retran: the reason i'm interested in this is -- i have a ~120GB mongo collection that I'm trying to replicate to another box. everytime i do a mongoexport --host remote.host | mongoimport it re-indexes the _id, and blocks
[21:35:36] <idlemichael> and it takes hours and hours
[21:35:37] <jblack> I don't believe it's indexed in the "background". I believe there's an app wide write lock
[21:35:43] <idlemichael> which causes the new data set to become stale
[21:35:58] <retran> he's asking if it could be set to background index
[21:36:03] <retran> like other fields
[21:36:31] <retran> hmm in mysql they have an operation for this purpose, idlemichael
[21:36:34] <idlemichael> i'm firing up a vm to try it out though
[21:36:37] <retran> turn off autoinc
[21:36:45] <retran> but autoinc != mongodb _id
[21:36:54] <idlemichael> right, retran
[21:37:13] <jblack> wait, I did hear something about this in a vodcast.
[21:37:19] <retran> mongodb _id is more 'weird' i guess it's optimized for speed
[21:37:27] <retran> speed without collision
[21:38:22] <retran> idlemichael, are you certain that mongoexport is suitable tool for replicating such a large data set
[21:38:29] <jblack> I wish I could remember the vodcast, but if I remember right, there was a way to disable indexing
[21:38:42] <retran> disabling indexing on _id you mean?
[21:38:52] <retran> i guess he's gonna find out
[21:39:06] <retran> disabling indexing or making it run in background is pretty easy
[21:39:13] <retran> for any particular field
[21:40:55] <retran> i just tested it
[21:41:48] <idlemichael> retran: mongoexport seems fine, maybe it's not suitable -- i just assumed 10gen would provide suitable tools to work with their data
[21:42:15] <idlemichael> retran: mongoexport doesn't seem to trigger the reindex either, it seems to happen from within mongod
[21:43:01] <retran> have you replicated other kinds of databases when they're that large
[21:43:23] <retran> i've done so in mysql, and mysql's equivalent isn't suitable for such large data sets
[21:43:30] <idlemichael> retran: i haven't. first time :<
[21:43:34] <retran> well, it's problematic
[21:43:46] <retran> but i dont have experience in the 'higher end' DBs
[21:43:52] <retran> which may have magical tools
[21:43:57] <idlemichael> indeed retran; i tried doing the replicaset method and having a slave catch up to the master, but it never seems to catch up
[21:43:58] <Chammmmm> Question.. if I stop the balancer.. are there other ways that the config servers meta data can be modified? I want to know if I can restart the config server after a dump.. while the rest of mongo is being snapshotted
[21:44:18] <retran> yeah that sounds SO similar to issues in mysql replication on large data
[21:44:39] <retran> i just tarballed the data directory
[21:45:07] <retran> i haven't implemented a replica set yet in mongodb, but i have migrated a mongodb server's data via tarball of data dir
[21:45:25] <retran> that had very little drama
[21:46:29] <retran> if you have the luxury of stopping your primary during the process, even better
[21:46:46] <retran> well, you'd have to stop it actually for copying the data dir
[21:47:02] <idlemichael> retran: that's the problem. primary is production right now -- stopping it is almost out of the question, but it might come to that
[21:47:34] <retran> depends how much drama you can afford trying to use the migration tools
[21:47:54] <idlemichael> right now it's more affordable than stopping the production db ;)
[21:48:00] <retran> yeah
[21:48:30] <retran> in a month or two i get to implement replication ... for now just dev
[21:48:40] <retran> i can hardly wait
[21:49:15] <idlemichael> "If you want to simply copy a database or collection from one instance to another, consider using the copydb, clone, or cloneCollection commands, which may be more suited to this task. The mongo shell provides the db.copyDatabase() method."
[21:49:19] <idlemichael> there we go retran
[21:49:24] <idlemichael> i should be using something else
[21:49:29] <idlemichael> :
[21:49:30] <idlemichael> :<
[21:50:00] <retran> hmm you mean copy the database to another db on same mongodb server?
[21:50:15] <idlemichael> diff servers
[21:51:09] <retran> oh interesting mongo lets you connect to diff servers via another
[21:51:50] <retran> that looks like they mean for it to work well if it's part of the native functionality
[21:52:23] <idlemichael> what's weird, though, is mongoimport and mongoexport
[21:52:28] <idlemichael> are tools provided by 10gen
[21:52:32] <idlemichael> you'd think they'd work in a similar way
[21:53:02] <retran> let me know if the native clone commands work well for this
[21:53:10] <retran> i'll be doing same in a month :(
[21:53:15] <idlemichael> that'll probably be tomorrow
[21:53:24] <idlemichael> can't stop the indexing now it seems like
[21:53:35] <idlemichael> it's about 45 minutes into stage 2 of 3
[21:53:36] <retran> you can just kill the slave right
[21:53:39] <idlemichael> 6% done
[21:53:41] <retran> since it wont be important
[21:53:45] <idlemichael> i mean i CAN
[21:53:47] <retran> just hard reboot it and kill
[21:53:56] <idlemichael> i can just kill the mongod
[21:53:58] <idlemichael> service
[21:54:00] <retran> you'll be doing it another method anyway
[21:54:00] <idlemichael> no need to reboot
[21:54:05] <retran> sure
[21:54:08] <idlemichael> true
[21:54:12] <idlemichael> might as well start experimenting
[21:54:23] <retran> then wipe yer datadir
[21:54:33] <retran> before starting back
[22:34:42] <hahuang65> Can anyone answer this? https://github.com/mongoid/mongoid/issues/3167
[22:39:26] <jblack> I believe that's by intent
[22:39:57] <jblack> If you look up the examples at mongoid.org (http://mongoid.org/en/mongoid/docs/querying.html), they're clearly passing in a type String.
[22:40:35] <jblack> perhaps if you did a find_by( id: -objectidhere-) ?
[22:45:32] <crudson> hahuang65: IDs don't have to be ObjectID. For your two examples to be compared to each other you should be doing ObjectID.from_string('abc'), as that is what should check whether it's correct.
[22:46:20] <hahuang65> crudson: but in the old version of mongoid, it would automatically do that for me. Now I have to do it myself?
[22:47:25] <hahuang65> crudson: in Mongoid 2.x, if you do a find where it expects an ID, and that's not a valid format, it would throw InvalidObjectId… now it doesn't. You're saying that I have to check that myself before I pass it to .find or any other query where I use an ObjectId?
[22:47:54] <idlemichael> retran: db.cloneCollection seems to be doing the trick -- i can query and access the db
[22:47:58] <idlemichael> while it's running
[22:48:05] <retran> awesome
[22:48:26] <idlemichael> this also copies over the index from the current running server
[22:48:29] <idlemichael> if you want it to
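
A sketch of the command idlemichael ended up using, run from a shell connected to the destination server; the source host and collection name are placeholders:

    use mydb
    // pulls mycollection from the remote instance into the current database,
    // while the destination stays online
    db.cloneCollection("old-server.example.com:27017", "mycollection")
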
[22:50:38] <crudson> hahuang65: have you overridden _id's type in the model? If not, how about? field :_id, type: Moped::BSON::ObjectId
[22:51:08] <hahuang65> crudson: I haven't. Let me try that.
[22:53:19] <hahuang65> crudson: ah… looks like my _id is of type Mongoid::Fields::Standard
[22:53:24] <hahuang65> crudson: so your solution might work.
[22:53:40] <crudson> hahuang65: just remove that declaration
[22:54:07] <hahuang65> crudson: I don't think that's declared anywhere… Mongoid::Documents should handle it no?
[22:54:43] <crudson> hahuang65: I misread your last msg sorry
[22:55:56] <hahuang65> crudson: hmm yeah adding your line to the model didn't help.
[22:59:04] <hahuang65> crudson: looking at the code in mongoid, where it tries to call .mongoize(id) it's calling that on Mongoid::Fields::Standard, when I'm assuming it should be Moped::BSON::ObjectId
[23:01:18] <hahuang65> crudson: never mind, Mongoid::Fields::Standard delegates :mongoize to its type. Lemme see where that leads.
[23:01:30] <crudson> yeah you need to examine .type on that
[23:01:53] <crudson> (running out for a bit)
[23:05:37] <kevino> does anyone know what version replSetMaintenance was added?
[23:09:50] <hahuang65> crudson: thanks for your help. I've found the cause. It seems like an oversight on their part… ping me later if you want details.
[23:15:21] <polynomial2> do I have voice?
[23:15:22] <polynomial2> 2
[23:22:40] <poet> I am attempting to call rs.initiate() from the mongo shell and I get an error "no such cmd: replSetInitiate"
[23:48:06] <tyl0r> If you stop/shutdown mongodb properly will it flush to disk immediately? ie. Can I start taking a snapshot as soon as the shutdown returns? Or should I run db.fsyncLock() before shutting down? I'm having issues using fsyncLock/Unlock with authentication so I'd prefer not to use that
[23:49:13] <retran> dont know, i've always had good luck with my datadir snapshots, and i never wait any period of time or have a particular command to run before the shutdown
[23:49:50] <tyl0r> cool, thanks
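
For reference, the lock/unlock pair tyl0r mentions looks like this in the shell; a clean shutdown should also leave fully flushed, consistent files to snapshot, the lock is mainly for snapshotting a running mongod:

    db.fsyncLock()     // flush pending writes and block new ones
    // ... take the filesystem snapshot ...
    db.fsyncUnlock()
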