PMXBOT Log file Viewer


#mongodb logs for Wednesday the 18th of March, 2015

[01:20:06] <Freman> the devs think we're going to recover the 1.4tb of junk they log
[01:25:09] <daidoji> Freman: recover from what?
[01:28:07] <Freman> power outage, btrfs, take your pick
[02:05:04] <GothAlice> Right-o. My forced migration/benchmark suite (forced in that it purposefully recalculates _everything_ we can recalculate) reliably kills WiredTiger. http://cl.ly/image/1z2d3d1p1Q0O and http://cl.ly/image/1y0s2L3p1g1H
[02:05:13] <GothAlice> I'm down to one update per second.
[02:06:24] <GothAlice> All it's doing at the moment is: for doc in db.job.find({}, {_id: 1}): db.job.update({_id: doc['_id']}, {$set: {c: doc['_id'].generation_time}})
[02:07:02] <GothAlice> (Across 17.4K documents.)
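A hypothetical batched variant of the loop above, written for the mongo shell rather than Python and using the Bulk API available since 2.6; the collection (job) and field (c) come from the snippet, the batching itself is an assumption:

    // Same migration, flushed in batches of 1000 instead of one round trip per document.
    var bulk = db.job.initializeUnorderedBulkOp();
    var count = 0;
    db.job.find({}, {_id: 1}).forEach(function (doc) {
        bulk.find({_id: doc._id}).updateOne({$set: {c: doc._id.getTimestamp()}});
        if (++count % 1000 === 0) {
            bulk.execute();
            bulk = db.job.initializeUnorderedBulkOp();
        }
    });
    if (count % 1000 !== 0) { bulk.execute(); }   // flush the remainder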
[02:12:24] <GothAlice> Aaaaand mongod finally k-lined.
[02:12:27] <GothAlice> T_T
[09:45:56] <srimon> Hi all
[09:46:07] <srimon> I need a small help
[09:47:00] <srimon> In Mongo the date is stored as an ISODate object, I need to convert it to New York time and display it
[09:54:53] <spuz> srimon: your programming language should have a library that can do that for you
[09:59:46] <srimon> there is no option in mongodb
[10:00:13] <srimon> spuz: there is no option in mongodb
[10:43:18] <spuz> srimon: right, there should be no option in mongodb, it's a database after all, not an application
[10:43:30] <spuz> it's meant to store data for computers to read, not humans
[10:53:12] <pamp> hi, I'm now using mongo 3.0 with WT snappy. The results of my tests show that the queries are faster in version 2.6. Wasn't it supposed to be faster in version 3.0?
[11:06:31] <srimon> spuz: thank you
[12:51:17] <Naeblis> I have an array of items [a, b, c]. I'd like to query a collection for the existence of any of them, and then return only the items for which no document exists. E.g. if the collection already has a and b, return c.
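One common way to answer that (a sketch, assuming a hypothetical "things" collection with an "item" field): query with $in, project only the key, and compute the difference client-side:

    var wanted = ["a", "b", "c"];
    var present = db.things.find({item: {$in: wanted}}, {item: 1, _id: 0})
                           .toArray().map(function (d) { return d.item; });
    var missing = wanted.filter(function (x) { return present.indexOf(x) === -1; });
    // if the collection already holds "a" and "b", missing is ["c"]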
[13:03:14] <Diplomat> Hey guys, I was wondering how good an idea it is to use mongodb as a web cache (images, javascript etc)
[13:03:53] <Diplomat> I set up a test solution that serves about 500 requests per hour and so far it works pretty well
[13:04:04] <cheeser> i don't think it's encouraged to use mongo as a cache as it's not necessarily designed for that kind of usage but i think i've seen it done here and there.
[13:04:17] <cheeser> it'll work for sure.
[13:04:36] <Diplomat> It's really surprising that it's working as well as Redis (I was using redis before)
[13:04:38] <cheeser> in past gigs, we've always fronted our mongod with something like couchdb
[13:05:08] <cheeser> at that rate, almost anything would work. including no cache.
[13:05:31] <StephenLynx> Diplomat a cache for generated content?
[13:05:45] <Diplomat> StephenLynx: No, just static content
[13:05:55] <StephenLynx> why don't you use code 304 then?
[13:06:14] <StephenLynx> and stream the files in the first request?
[13:07:26] <Diplomat> That's because I'm proxying those requests from multiple servers.. So I decided to set this mongodb server up next to the proxy server so it wouldn't have to stress the main server (which is fairly far from those proxy servers)
[13:08:37] <Diplomat> Main server is serving more than 500 requests per hour
[13:09:13] <Diplomat> I know file system is the best for that, but I want to have more control over this stuff, that's why I decided to use a database
[13:09:22] <StephenLynx> hm
[13:09:46] <StephenLynx> your main file could still check and return 304 codes though.
[13:09:48] <StephenLynx> main server*
[13:10:58] <StephenLynx> and 500 requests per hour is not much. that is a little more than 8 requests per minute.
[13:10:59] <Diplomat> Well true, but if the main site starts lagging it would slow down the performance too
[13:11:10] <Diplomat> main server*
[13:11:23] <Diplomat> and as I said above it serves much more than 500 per hour
[13:13:04] <StephenLynx> if you were to take a whole second on each request, you could still serve 3600 requests per hour.
[13:13:54] <Diplomat> Main server is serving about 50k requests per hour
[13:14:06] <StephenLynx> ah
[13:14:30] <StephenLynx> yeah, that is much, much, much more than the 500 you said before.
[13:14:34] <Diplomat> It's much easier to cache those things somewhere else for easy access
[13:14:59] <Diplomat> LOL yes, but that proxy server that I'm testing gets about 500 requests per hour and I have many proxy servers
[13:15:20] <StephenLynx> you mentioned redis. that is what imgur uses. I don't think mongo can handle this task better.
[13:15:35] <StephenLynx> was there any reason for the change from redis to mongo?
[13:16:23] <StephenLynx> you could use gridfs, I think that's what it's called, so mongo has something to handle files.
[13:16:27] <StephenLynx> but it's not its main focus.
[13:16:28] <Diplomat> It's clustering.. I need to host more info and use fewer requests and I need it to be safe when something shuts down for no reason
[13:16:52] <StephenLynx> so you are not just storing files in the db?
[13:16:55] <Diplomat> Redis has clustering too, but it's fairly new and I'm a bit afraid
[13:17:05] <Diplomat> Yes, some info + files
[13:17:48] <StephenLynx> you could try gridfs.
[13:18:15] <StephenLynx> and it allows for streaming too, io.js is able to stream files from it.
[13:18:25] <StephenLynx> so it won't hog your RAM.
[13:18:25] <Diplomat> Basically I'm looking for a solution that would serve stuff from memory, but would be able to save it to SSD if required. Also it should be able to share info between nodes (clustering)
[13:19:02] <Diplomat> Also it would be nice if I could use more than k/v like redis has.. so I can for example query a node to delete only stuff that matches X
[13:19:37] <Diplomat> Redis has DEL *X* but .. I haven't figured it out yet how to get it working with my system
[13:19:52] <StephenLynx> if serving from memory is fine, and you have enough RAM for these files, you could just not use a DB for this cache and store on RAM using application code.
[13:20:23] <StephenLynx> and keep the misc information on mongo.
[13:21:01] <Diplomat> Yes, but I need it to be there if the cache server crashes or main server crashes or whatever crashes and only be removed when it's time to get removed (controlled by TTL)
[13:21:27] <StephenLynx> so you need a dedicated cache server?
[13:22:08] <Diplomat> Basically every proxy server would have a cache server that supports it
[13:22:39] <Diplomat> and I'm not able to store all this stuff in proxy servers
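For the TTL-controlled expiry and "delete only stuff that matches X" requirements mentioned above, a sketch using a TTL index and a regex remove; the cache collection and field names are hypothetical:

    db.cache.createIndex({createdAt: 1}, {expireAfterSeconds: 3600});  // purge ~1h after createdAt
    db.cache.insert({key: "asset-123", body: "<cached bytes>", createdAt: new Date()});
    db.cache.remove({key: /^asset-/});  // delete everything whose key matches a pattern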
[13:38:03] <Freman> so... outside of zfs and btrfs any other easy way to get fs snapshotting that anyone knows of?
[13:42:10] <Hilli> lvm
[13:42:55] <Hilli> Buy a NetApp, unless you are performance oriented :)
[13:58:29] <Hilli> Freman: Yeah, depends on the distribution regarding how easy that is to handle.
[13:58:50] <Freman> gentoo, I could probably use grub2... but meh
[14:00:10] <Hilli> In that case: Can't you skip snapshotting /? That's probably not where you have your data anyhow.
[14:04:01] <isale-eko> http://stackoverflow.com/questions/29124077/use-mongodbmongoid-aggregate-framework-to-query-user-defined-filters-or-rules
[14:38:25] <pjwal> I'm pulling my hair out, for some reason I cannot get a new secondary to completely sync, it leaves out a # of collections and documents completely
[14:38:43] <pjwal> Has anyone ever come across this?
[14:40:27] <pjwal> Another secondary with the exact same configuration (on a different OS and filesystem) did not have any trouble initially syncing all data
[14:40:59] <pjwal> The new secondary is on ubuntu ext4, thus far our replica set members have all been amazon linux
[14:41:58] <pjwal> v 3.0.1, using wiredTiger, same behavior with both snappy and none compression
[14:49:21] <latestbot> Is there any way I can update only one field in a given document?
[14:50:57] <pjwal> latestbot: $set?
[14:52:16] <pjwal> latestbot: db.foo.update({ _id: 42 }, { $set: { field_a: 'value_b' } })
[14:52:35] <latestbot> thanks, pjwal!
[15:11:30] <Diplomat> Guys, if I add a replication node.. how fast does it get replicated?
[15:12:28] <cheeser> depends on so many factors.
[15:12:45] <cheeser> network speed, disk speed, machine loads, the amount of data, ...
[15:13:13] <Diplomat> 10gbps network, SSD disk, fairly low load and amount of data.. maybe 1kb or less
[15:13:36] <cheeser> might already be done then. ;)
[15:13:54] <Diplomat> I'm testing here and for some reason it doesn't replicate :/
[15:14:32] <Diplomat> I guess I missed something, gotta check everything again
[15:17:16] <StephenLynx> I want to set up auth on a server, where is the configuration file?
[15:24:53] <StephenLynx> I set auth to true, but I can't log in using a user and password on localhost
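The config file is typically /etc/mongod.conf for packaged installs (security.authorization: enabled in the YAML form, auth=true in the old form). A likely cause of the localhost failure is that no user exists yet in the database being authenticated against; a sketch with hypothetical credentials, using the localhost exception to create the first user:

    use admin
    db.createUser({
        user: "admin",
        pwd: "secret",
        roles: [{role: "userAdminAnyDatabase", db: "admin"}]
    });
    db.auth("admin", "secret");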
[15:44:10] <eren> hello, I'm getting "too much data for sort() with no index" error. I checked the indexes system-wide and it looks like the index is in place
[15:44:15] <eren> how can I further investigate this problem?
[15:44:27] <eren> I have no previous mongodb experience, and I have root access to the machine. Any help is appreciated
[15:45:41] <ikb> try to limit the query and check if you get the same error
[15:48:02] <eren> ikb: I guess I'm not able to do it. It's an already deployed app where I cannot change the codebase
[15:49:22] <ikb> can't you try the same query but using the mongo shell?
[15:50:01] <eren> ikb: yeah, let me dig the code and find the query
[15:50:17] <eren> ikb: it's called "ceilometer" which is a part of openstack
[15:52:42] <eren> oh, it uses pymongo
[15:53:14] <eren> ikb: which part of the query are you asking for? It looks like there is no limit defined in the query and it tries to get all the things
[15:53:36] <eren> in ceilometer, there are resources (a lot of them), and it tries to get the whole
[15:55:03] <ikb> yes and i was suggesting to try to add a limit
[15:55:11] <ikb> to check if you get the same error
[15:58:38] <eren> ikb: there are sorting keys but query string is empty, strange
[16:01:42] <basichash> If I have a range of products that each have a category and each category has many subcategories, how would I model this?
[16:01:59] <GothAlice> basichash: Can your subcategories have subcategories?
[16:02:24] <girb1> Hi, is there a way I can authenticate once and then switch to any DB?
[16:02:48] <girb1> for a user … please help, new to mongo
[16:03:28] <GothAlice> girb1: Yes, authenticating against one database and having permissions on another isn't an uncommon way to set things up.
[16:03:40] <GothAlice> You're looking for the "authenticationDatabase" option on most commands.
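A sketch of what that looks like from within the shell (user and database names are hypothetical); on the command line the equivalent is the --authenticationDatabase flag GothAlice mentions:

    use admin                      // the database the credentials live in
    db.auth("girb", "secret")
    use test_data                  // now switch to whatever DBs the user's roles allow
    db.collection.find().limit(1)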
[16:04:25] <girb1> GothAlice: ok
[16:04:35] <basichash> GothAlice: yeah, that's a possibility. At most there'll be 3 tiers of categories
[16:05:03] <GothAlice> basichash: Then it becomes a fair bit more interesting to try to model. MongoDB has restrictions on deep nesting that make certain query and update operations difficult.
[16:06:25] <GothAlice> basichash: I got tired of repeatedly solving the "hierarchical structure" problem, so I wrote: https://gist.github.com/amcgregor/4361bbd8f16d80a44387 which may be of use (if your code is Python, or just as a basis for your own). It's an unlimited taxonomy system, with very efficient querying of the tree. (You wouldn't need to cache what I'm caching, i.e. acls and things, as mine was for a CMS.)
[16:06:44] <GothAlice> It also conforms to the jQuery DOM traversal and manipulation APIs. :)
[16:08:47] <pamp> hi, I'm now using mongo 3.0 with WT snappy. The results of my tests show that the queries are faster in version 2.6. Wasn't it supposed to be faster in version 3.0?
[16:08:55] <GothAlice> basichash: Using this, you could write: Category(name="top", title="Top Level").save().append(Category(name="second", title="Second Level").save())
[16:10:12] <basichash> GothAlice: it embeds subcategories within the parent document?
[16:10:16] <GothAlice> pamp: MongoDB 3.0 features some improvements in locking (i.e. switching to lock-free algorithms for some operations), so in general it should outperform 2.6. WiredTiger additionally adds compression, which should increase IO throughput, and adds multithreading for improved resource utilization.
[16:11:16] <eren> ikb: I'm doing printf debugging, here is what I've found: there is no query, it's empty and there are sort instructions. This is Python code and the sort_instructions variable looks like:
[16:11:28] <eren> [('user_id', -1), ('project_id', -1), ('last_sample_timestamp', -1)]
[16:11:48] <eren> and it calls: self.db.resource.find(query, sort=sort_instructions):
[16:11:50] <eren> where query is empty
[16:11:53] <GothAlice> basichash: No, each category is its own document using the Taxonomy mix-in. They simply store (and that code manages for you) a few bits of "relationship"-like data (parent ref, list of all parent refs, coalesced path, etc.) needed to query efficiently. (I.e. "children of X", "all descendants of X", "all ancestors of X", etc., etc.)
[16:12:43] <GothAlice> (And other queries, i.e. "all siblings of X", or "everything before/after X under that parent", …)
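A rough sketch of the document shape and queries GothAlice is describing (parent reference, list of ancestor references, coalesced path); the field names and the topId/someDoc variables are assumptions, not her actual schema:

    db.categories.insert({
        name: "second",
        parent: topId,          // direct parent reference
        parents: [topId],       // every ancestor reference, in order
        path: "/top/second"     // coalesced path, handy for prefix queries
    });
    db.categories.find({parent: topId});                 // children of X
    db.categories.find({parents: topId});                // all descendants of X
    db.categories.find({_id: {$in: someDoc.parents}});   // all ancestors of X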
[16:14:07] <GothAlice> eren: Do you have an index that looks like: [('user_id', -1), ('project_id', -1), ('last_sample_timestamp', -1)] (including direction of sort, which is descending here?)
[16:14:32] <basichash> GothAlice: ah i got ya. pretty neat actually. I'm using node.js, so I'll have to port it across to javascript
[16:14:53] <basichash> GothAlice: and this is the best method of handling tiered categories, you think?
[16:15:39] <GothAlice> basichash: As a note, technically there should be locking around lines 60-61 with some way of "reserving" the next sort order increment, to prevent a minor edge case / race condition, but duplicate sort order isn't actually a huge problem in the general case.
[16:16:49] <GothAlice> basichash: https://gist.github.com/amcgregor/4ca2933d7728b53a832c is an example tree I'm currently storing for a CMS. The average query is sub-millisecond, so it's plenty efficient for smaller datasets. I haven't yet tested throwing in city's site… ^_^
[16:17:14] <eren> GothAlice: let me check
[16:17:50] <eren> GothAlice: I have indexes but direction is reverse
[16:18:03] <eren> { "v" : 1, "key" : { "resource_id" : 1, "user_id" : 1, "counter_name" : 1, "timestamp" : 1, "source" : 1 }, "ns" : "ceilometer.meter", "background" : false, "name" : "meter_idx" }
[16:18:23] <GothAlice> Hmm. Run the find, but add .explain() to the end of it and gist/pastebin the result?
[16:18:33] <GothAlice> MongoDB might be choosing not to use it due to the direction change.
[16:19:21] <eren> GothAlice: sure, I'm a noob here btw :) How can I run an empty query on that "resource" collection with ascending order?
[16:19:22] <GothAlice> Wait, no, that index won't be used.
[16:19:24] <GothAlice> resource_id is first.
[16:19:34] <eren> GothAlice: I'm gonna pastebin all the indexes
[16:20:09] <eren> GothAlice: http://paste.ubuntu.com/10621753/
[16:20:22] <GothAlice> db.resource.find({}).sort([…]).explain()
[16:20:37] <GothAlice> (This is likely easiest done from a mongo shell.)
[16:20:45] <eren> GothAlice: I'm in mongoshell, yeah, trying
[16:22:02] <GothAlice> db.resource.find({}).sort({…}).explain() # rather (note the object instead of array being passed to sort(); my bad. ^_^)
[16:22:17] <eren> GothAlice: same error: http://paste.ubuntu.com/10621767/
[16:22:37] <eren> ok, we've isolated the query at least
[16:22:55] <GothAlice> Cool beans. Yeah, you don't have an index that can cover that sort at the moment.
[16:23:20] <eren> yeah, I don't know which fields I should add the index on though
[16:23:26] <eren> it seems that they are already present
[16:23:30] <eren> but for ascending
[16:23:55] <GothAlice> Notably, MongoDB requires an exact match in the order of the fields, or it can't use it. (Additionally, you can potentially save creating many small indexes, since MongoDB can only ever use one in a query, by re-using index "prefixes", e.g. the first and second field of a three-field index, or just the first, are usable by themselves.)
[16:24:39] <GothAlice> You have no index currently that covers, in any direction, user_id then project_id then last_sample_timestamp.
[16:24:55] <eren> you mean, an index with "user_id, project_id, last_sample_timestamp"
[16:24:59] <GothAlice> Yes.
[16:25:10] <eren> so, I should add an index with that order, I guess
[16:25:18] <GothAlice> MongoDB can only use a _single_ index per query, which means the index will need to "cover" as much of the query as possible. (Both filtering, and sorting.)
[16:25:49] <GothAlice> For further reading, see: http://docs.mongodb.org/manual/core/index-compound/
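A sketch of the index being discussed, matching the sort direction, and of the prefix reuse GothAlice describes (someUser is a placeholder value):

    db.resource.createIndex({user_id: -1, project_id: -1, last_sample_timestamp: -1});
    // Prefix reuse: the same index can also serve queries that only touch its leading fields.
    db.resource.find({user_id: someUser}).sort({project_id: -1});   // uses prefix user_id, project_id
    db.resource.find({user_id: someUser});                          // uses prefix user_id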
[16:25:59] <basichash> GothAlice: thanks for that. Just out of interest, how many documents were there during your tests?
[16:26:27] <GothAlice> basichash: That gist with the tree is them all, for the minimal test site I'm hacking on.
[16:26:49] <eren> GothAlice: thanks, are we sure that if I add an index for those values, it will at least use this index and hopefully not complain?
[16:27:51] <basichash> GothAlice: got it
[16:28:50] <GothAlice> eren: Nope. But it's worth a test. ;) It clearly can't continue without one at the moment…
[16:29:05] <GothAlice> Nothing is ever certain. It makes life fun. :)
[16:29:25] <GothAlice> (Luckily, if it doesn't, we can smack it with a dead trout and strongly hint that it should use that index.)
[16:30:51] <eren> GothAlice: ok, I'm reading the docs on creating an index
[16:31:15] <eren> GothAlice: I've run "db.resource.getIndexes()" and it looks like there is a compound index which may be helpful as a sample config
[16:31:37] <StephenLynx> anyone ever used mongo with apache and remote connections? I can't connect to the remote server using the shell but with php it will give me this:
[16:31:40] <Diplomat> I added like 1600 records to the database and it was like 10 min ago and secondary server still has 0 records. Any ideas what might be wrong? Those records are small and network is fast
[16:32:02] <Diplomat> I can see my nodes in rs.conf and rs.status a and everything appears to be good
[16:32:17] <StephenLynx> http://pastebin.com/4TiFynHt
[16:32:34] <eren> GothAlice: http://dpaste.com/0FNME1W
[16:32:48] <kippi> Hi
[16:33:04] <eren> GothAlice: do you think I should create the index with "-1" because the query sorts with -1?
[16:33:10] <kippi> I am using nodejs and mongo, however I keep on getting this error: No primary found in set
[16:33:26] <GothAlice> eren: Indeed, yes.
[16:33:40] <StephenLynx> in the shell I am able to connect using mongo 192.168.1.10/log -u log -p and then I enter the password "pass"
[16:33:59] <eren> GothAlice: trying, thanks a lot sir!
[16:34:02] <StephenLynx> i can connect with the shell, but not in php*
[16:34:12] <GothAlice> kippi: When configuring the connection in your app, don't specify a secondary as the server to connect to. When running in a replica set, specify as many of the servers as you can; then it can pick the right one to use as needed.
[16:36:32] <kippi> GothAlice: for example, I am doing this: var mongoClient = new MongoClient(new Server(['localhost', 27017],['ns-lv-ans-01', 27017],['thw-lv-ans-01', 27017]));
[16:37:43] <GothAlice> Hmm. I use the URL connection format.
[16:38:03] <kippi> do you have a example?
[16:38:28] <GothAlice> mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]] — yours would thus be: mongodb://localhost:27017,ns-lv-ans-01:27017,thw-lv-ans-01:27017/foodb?replicaSet=name
[16:38:54] <GothAlice> (Replacing name with the actual name of the replica set. This extra option is what triggers the client driver to scan for other members on startup, AFAIK.)
[16:39:03] <GothAlice> See: https://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html#the-url-connection-format
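A sketch of that URL form with the node driver's MongoClient.connect, using kippi's hosts; the replica set name "rs0", the database "foodb", and the collection "foo" are placeholders:

    var MongoClient = require('mongodb').MongoClient;
    var url = 'mongodb://localhost:27017,ns-lv-ans-01:27017,thw-lv-ans-01:27017/foodb?replicaSet=rs0';
    MongoClient.connect(url, function (err, db) {
        if (err) throw err;
        db.collection('foo').findOne({}, function (err, doc) {   // any collection works as a smoke test
            console.log(doc);
            db.close();
        });
    });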
[16:43:27] <parallel21> Is there a way, rather than explicitly listing all the fields we do not want to include in a find result, to grab only a single field without the whole document?
[16:44:27] <GothAlice> parallel21: http://showterm.io/d17f5453abd4a6df8dda6
[16:45:09] <GothAlice> When projecting, if you {field: 0}, you're excluding. If you {field: 1} you are choosing inclusion only. (Which still includes _id by default. To really get rid of that one, also include _id: 0 in your set, but otherwise you can't mix inclusions and exclusions.)
[16:45:54] <parallel21> GothAlice: This is awesome
[16:45:59] <GothAlice> :)
[16:47:02] <parallel21> Thanks, makes my code much more terse
[16:48:30] <eren> GothAlice: I've created an index, using background, how can I check when the index is built or the status of the index?
[16:48:32] <eren> db.resource.createIndex({user_id: -1, project_id: -1, last_sample_timestamp: -1}, {background: true})
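One way to watch a background index build (a sketch; progress for index builds is reported in currentOp's msg field):

    db.currentOp().inprog.filter(function (op) { return op.msg && /Index Build/.test(op.msg); });
    db.resource.getIndexes();   // once the build disappears from currentOp and shows up here, it's done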
[16:49:39] <eren> oh, it appears to be working I guess
[16:49:46] <eren> checking the application where I originally got error
[16:50:13] <eren> oh yes, it's working
[16:50:24] <GothAlice> \o/
[16:50:41] <eren> but it takes waaaaay too long
[16:50:49] <GothAlice> Yup, you're doing an unbounded query.
[16:51:08] <GothAlice> Likely pulling in a copy of that entire collection's dataset. (Not very efficient.)
[16:51:41] <eren> GothAlice: yeah, ceilometer (openstack) is giving me headaches
[16:51:46] <eren> it's FUBAR
[16:52:00] <GothAlice> Well, unbounded and unprojected queries are also kinda FUBAR. :P
[16:52:26] <GothAlice> Surely you don't actually need to download the entire collection from the database server to your application server each time, eh?
[16:53:26] <eren> GothAlice: the project is the only solution to metering and billing, etc. I guess I will dig into code
[16:53:33] <eren> at least I will report the index problem
[16:54:20] <eren> http://dpaste.com/34K7TC4
[16:54:30] <eren> heh, it's using an index at least
[16:55:46] <GothAlice> eren: As a note, http://cl.ly/image/142o1W3U2y0x < this dashboard executes an exceedingly large number of queries to perform analytics across ~17K records (for the presented timeframes). It's fully rendered client-side in under a second, and generates in a few dozen milliseconds. (Every single query there is bounded and projects only the fields required.)
[16:56:25] <GothAlice> (MongoDB is typically only slow when misused. ;)
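What a bounded, projected query looks like in practice (collection and field names are hypothetical):

    db.records.find(
        {created: {$gte: ISODate("2015-03-01"), $lt: ISODate("2015-03-18")}},  // bounded range
        {created: 1, total: 1, _id: 0}                                         // only the fields needed
    ).sort({created: -1}).limit(100);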
[16:57:15] <girb1> http://pastebin.com/b6ix3xTU added dsl_read and read role for test_data … but the dsl_read won't work (auth fails) on test_data
[16:57:39] <eren> GothAlice: yeah, same as any other database system :)
[16:57:57] <GothAlice> eren: Eh, some start out that way. :P
[16:59:02] <girb1> can a user with "role" : "readAnyDatabase" be used to read any DB?
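A sketch of the difference (user names and passwords are placeholders): a plain read role is scoped to one database, while readAnyDatabase, granted on admin, covers them all:

    use admin
    db.createUser({user: "dsl_read", pwd: "secret",
                   roles: [{role: "read", db: "test_data"}]});            // read on one DB only
    db.createUser({user: "reader", pwd: "secret",
                   roles: [{role: "readAnyDatabase", db: "admin"}]});     // read on every DB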
[17:23:16] <justgreg> hey all, I'm getting 404 errors when I run apt-get update trying to connect to the repos from Ubuntu 14.04 lts 64-bit using the instructions here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[17:33:27] <justgreg> could someone please update this page: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ to reflect the issue raised here: https://groups.google.com/forum/#!topic/mongodb-user/iCs6pfXgUX8 ?
[17:37:01] <cheeser> you should file a jira, justgreg
[17:37:17] <cheeser> bringing it up in irc is highly unlikely to make a difference
[17:37:36] <justgreg> ty cheeser
[17:45:15] <isale-eko> http://stackoverflow.com/questions/29124077/use-mongodb-aggregate-framework-to-query-user-defined-filters-or-rules
[17:47:53] <pamp> which is faster for reads in 3.0, WT or MMAPv1?
[17:53:39] <GothAlice> pamp: In the general case, WT.
[17:54:12] <GothAlice> pamp: See: http://www.mongodb.com/blog/post/performance-testing-mongodb-30-part-1-throughput-improvements-measured-ycsb
[17:56:08] <pamp> I don't understand the results I am getting. In 16 different queries the runtime is always lower in version 2.6
[17:56:54] <GothAlice> Could be MongoDB is picking different strategies than before, i.e. potentially changing which indexes it uses. It'd be worthwhile to get detailed explain()s of before and after for comparison.
[17:59:28] <pamp> That's what I'm doing. With the same indexes and data I'm checking the time with explain(), and the runtime is always lower in version 2.6
[18:00:01] <GothAlice> There's a bit more than just a duration in an explain. Are there any other notable differences? (Speed is just a symptom. We're looking for the cause.)
[18:01:07] <GothAlice> When in doubt, gist and get a second set of eyeballs. (And you can have multiple files in one gist, so it's even efficient. ;)
[18:02:54] <pamp> the indexes are the same, the numbers of analyzed and returned objects are the same. I see no difference except the runtime
[18:16:03] <epx998> can i run a secondary that's a different version of mongo than the primary, e.g. a 2.2/2.4 primary with a 2.6 secondary?
[18:18:18] <cheeser> yes
[18:18:38] <cheeser> in fact, that's how you upgrade a cluster with "no" downtime: rolling upgrades
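A rough outline of the rolling upgrade cheeser describes (versions and packaging details vary): upgrade the secondaries one at a time, then step the primary down and upgrade it last:

    // 1. For each secondary: stop mongod, install the newer version, restart, wait for it to catch up.
    // 2. Then, on the primary:
    rs.stepDown(120);   // ask it to step down for up to 120 seconds so a secondary takes over
    rs.status();        // confirm the new primary before upgrading the old one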
[18:25:02] <Astral303> If I am experiencing stalled/bottlenecked oplog inserts on a secondary using WT, what stats can I look at to help diagnose the issue?
[18:25:06] <Astral303> this is using 3.0.0
[18:25:30] <epx998> that's what i thought, my manager was giving me a hard time
[18:26:02] <epx998> i want to create a replica with the latest version, have it sync, force it to primary and boom. upgraded, but he's giving me a hard time
[18:44:51] <Astral303> filed https://jira.mongodb.org/browse/SERVER-17649
[19:12:26] <pqatsi> Is it possible to - in a mongod with auth=yes - create a database that does not need authentication?
[22:02:08] <epx998> how do you keep a primary host from reclaiming primary after a stepdown
[22:02:32] <epx998> setting it to priority 0 didn't work
[22:04:10] <epx998> ah maybe i had my priorities backwards
[22:04:18] <epx998> wife says i do that quite a bit
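Priorities are set through rs.reconfig on the current primary; a sketch (the member index 0 is a placeholder for whichever member should stop seeking election):

    cfg = rs.conf();
    cfg.members[0].priority = 0;   // a priority-0 member never stands for election
    rs.reconfig(cfg);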
[22:06:13] <epx998> what causes the DBClientCursor::init call() failed errors i see in the mongod cli?
[22:23:04] <Boomtime> epx998: was that error after you told the host to stepDown from being a primary?
[22:23:28] <Boomtime> if so, the server disconnected you, the shell reconnects automatically
[22:24:19] <Boomtime> when a replica-set member steps down from PRIMARY it disconnects ALL sockets, to ensure no transition states
[22:34:52] <epx998> ah ok
[22:35:05] <epx998> practicing a migration
[23:36:23] <acidjazz> hi, how is this wrong please, looks fine to me: > db.customer.update( { _id: ObjectId("54ef98170cc3767c288b456e") }, $rename: { name: "Demo Customer" });
[23:37:08] <acidjazz> hmm extra {} around $rename maybe
[23:38:16] <acidjazz> oh god it renames the field name
[23:39:01] <acidjazz> sigh
[23:39:08] <GothAlice> ...
[23:39:29] <GothAlice> I think there's an achievement you just unlocked, for doing that.
[23:39:30] <GothAlice> ^_^
[23:39:49] <GothAlice> (You want $set.)
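For reference, the corrected update GothAlice is pointing at, with $set instead of $rename:

    db.customer.update(
        {_id: ObjectId("54ef98170cc3767c288b456e")},
        {$set: {name: "Demo Customer"}}
    );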