#mongodb logs for Monday the 16th of March, 2015

[05:53:26] <bdiu> is there an easy way to combine arrays in an aggregation?
[05:55:57] <bdiu> docs like {someArray:[1,3,5],otherVal:1}, {someArray:[1,2,4],otherVal:2}, want my aggregate result to look like {someArray:[1,2,3,4,5],otherValSum:3}
[05:56:37] <bdiu> I can easily get to {someArray:[[1,3,5],[1,2,4]],otherValSum:3}
[05:56:42] <bdiu> not quite what I want
[06:20:37] <joannac> bdiu: unwind, then $addToSet
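A minimal sketch of joannac's hint, assuming a collection named `docs` holding documents shaped like bdiu's examples (both names are assumptions). The sum is taken before unwinding so repeated elements don't inflate it:

```javascript
// Hypothetical collection "docs" with documents like {someArray:[1,3,5], otherVal:1}.
db.docs.aggregate([
  { $group: {
      _id: null,
      nested: { $push: "$someArray" },          // [[1,3,5],[1,2,4]], the shape bdiu already had
      otherValSum: { $sum: "$otherVal" }        // sum first, while each doc counts once
  }},
  { $unwind: "$nested" },                       // one row per inner array
  { $unwind: "$nested" },                       // one row per element
  { $group: {
      _id: null,
      someArray: { $addToSet: "$nested" },      // de-duplicated union: [1,2,3,4,5]
      otherValSum: { $first: "$otherValSum" }   // carried through unchanged
  }}
])
```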
[08:41:54] <random_pr> I have an update like { $inc: { 'something.0.whatever'}}. it works. but I want the 0 to be a variable. If I have { $inc: { somestringvariable: 1 }} it assumes somestringvariable is the name itself. How can I fix this?
[08:43:32] <random_pr> sorry that should be { $inc: { 'something.0.whatever': 1}}
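In JavaScript the dotted path can be assembled with bracket notation; a sketch, where the collection name, filter, and index variable are all placeholders:

```javascript
// Build the $inc key dynamically; "coll", "someId", and "i" are hypothetical.
var i = 0;                                        // the array index you want to vary
var update = { $inc: {} };
update.$inc["something." + i + ".whatever"] = 1;  // key computed at runtime, not taken literally
db.coll.update({ _id: someId }, update);
```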
[10:08:17] <pamp> is there any way to get with elemMatch, more than one field in an array??
[10:08:57] <pamp> something like this: db.net_top_flat.find({MoName:"UtranCell"},{MoName:1,P:{$elemMatch: {k:"_MeContext", k:"userLabel"}}})
[10:09:09] <pamp> only returns the last element
[10:27:38] <joannac> pamp: nope. if you need more than one item in an array, your array elements should be their own documents
[10:51:12] <pamp> hmm bad feature
[11:29:07] <joannac> pamp: why not return the whole array?
[12:00:38] <pamp> joannac: because the size of data, its an array with hundreds of elements
[12:01:41] <pamp> return hundreds when only need two or three does not seem a good idea
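One server-side workaround, sketched with pamp's own collection and field names: unwind the array and keep only the wanted entries, so the client never receives the hundreds of other elements.

```javascript
// Returns only the matching entries of P instead of the whole array.
db.net_top_flat.aggregate([
  { $match: { MoName: "UtranCell" } },
  { $unwind: "$P" },                                           // one document per array element
  { $match: { "P.k": { $in: ["_MeContext", "userLabel"] } } }, // keep the two wanted entries
  { $group: { _id: "$_id", MoName: { $first: "$MoName" }, P: { $push: "$P" } } }
])
```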
[12:16:18] <Flapp> hey, I'm having some problems with initializing my replica set
[12:16:32] <Flapp> the nodes can reach eachother, and I can start a mongo shell from the one host to the other
[12:16:50] <Flapp> but when I try rs.add({host}) it tells me 'need most members up to reconfigure, not ok'
[12:19:38] <cheeser> pastebin the output of rs.status()
[12:27:50] <Flapp> cheeser: http://pastebin.com/9yefjdAE
[12:28:11] <Flapp> it took me longer than expected because my network card decided to lose its IP address ^^
[12:29:04] <cheeser> you have one member in your replSet?
[12:29:23] <Flapp> it's supposed to be three members
[12:29:41] <Flapp> test-telemetry1, test-telemetry2 and test-telemetry3
[12:29:48] <Flapp> I ran rs.initialize() on test-telemetry1
[12:29:52] <cheeser> what does rs.config() show?
[12:30:26] <Flapp> http://pastebin.com/whJvTDkb
[12:40:47] <Flapp> cheeser:
[12:54:31] <cheeser> Flapp: that looks "fine." you have the one member and it's primary so everything should work...
[12:54:46] <Flapp> but when adding the other hosts
[12:54:48] <cheeser> now, you might not be able to get away with adding just the one, though.
[12:54:49] <Flapp> in my set
[12:55:05] <Flapp> i want a replicationset
[12:55:36] <cheeser> the way i've seen it done is to save the output of rs.config() in a local var in your shell, modify it to include the 2 new hosts, then save it back with rs.config(doc)
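A sketch of the dance cheeser describes, using Flapp's host names; note the shell helper for writing the config back is rs.reconfig(), not rs.config(doc):

```javascript
// Run in a shell connected to the current primary (test-telemetry1).
var cfg = rs.config();                                   // save the current configuration
cfg.members.push({ _id: 1, host: "test-telemetry2:27017" });
cfg.members.push({ _id: 2, host: "test-telemetry3:27017" });
rs.reconfig(cfg);                                        // write the modified config back
```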
[12:55:49] <cheeser> or, use mms automation and let it handle this for you.
[12:55:56] <Flapp> mms automation?
[12:56:02] <Flapp> and if so, what does rs.add() do then?
[13:01:03] <Flapp> cheeser: I have tried with a rs.reconfigure()
[13:01:13] <Flapp> but it still tells me 'Need most members up to reconfigure
[13:01:45] <cheeser> yeah, i don't know what to tell you on that one. it all looks ok from here.
[13:01:50] <cheeser> maybe file a support ticket?
[13:02:05] <Flapp> when I connect with a mongo shell from the other hosts
[13:02:13] <Flapp> and try to run rs.status() it tells me "need to login"
[13:02:19] <Flapp> do you think this might be part of the problem?
[13:02:35] <cheeser> do you have auth enabled?
[13:02:40] <Flapp> I'm checking that right now
[13:03:38] <Flapp> let me try it with auth explicitly false
[15:13:07] <markand> hi guys
[15:13:32] <kippi> hey
[15:14:01] <kippi> I am trying to run this, http://pastebin.com/MNRpVrK9 however it keeps returning "Unsupported projection option: ref_host", any pointers?
[15:14:50] <markand> is there any plan to make mongodb available for arm?
[15:16:26] <markand> it's quite unpleasant to see that it's only for little-endian
[15:16:59] <cheeser> markand: https://jira.mongodb.org/browse/SERVER-1811
[15:18:14] <markand> oh yes I forgot about v8 !
[15:59:54] <markand> mongodb uses json as the network messages protocol ?
[16:00:27] <StephenLynx> json is not a protocol, but a format.
[16:00:38] <markand> thanks I know that
[16:00:53] <markand> let me ask again then
[16:01:07] <markand> mongodb uses json format as the mongodb network protocol ?
[16:01:19] <GothAlice> markand: No.
[16:01:23] <GothAlice> http://bsonspec.org/
[16:01:50] <GothAlice> It uses BSON, which can represent a number of extended types, and is binary packed. (Much more efficient. Using JSON itself would be a rather bad idea.)
[16:02:14] <markand> ah okay
[16:02:16] <GothAlice> Efficient enough that you can read data out of packed BSON directly using some C struct or pointer math. It's pretty boss.
[16:02:28] <markand> I was reading that : http://docs.mongodb.org/manual/reference/command/dropDatabase/
[16:02:43] <markand> and when I saw the { dropDatabase: 1 } I was guessing what it was
[16:02:53] <GothAlice> Many of the APIs use JSON-like notation.
[16:03:17] <cheeser> (note that json would require quotes around dropDatabase)
[16:03:22] <GothAlice> The interactive shell is JS-powered, but the syntax is pretty much the same in Python, too. (You just need to wrap string keys in quotes, to better match real JSON vs. bare JavaScript notation.)
[16:03:28] <GothAlice> cheeser: Heh.
[16:04:41] <GothAlice> markand: The end result is that all of that JSON or JS-like data structure is converted to BSON for transmission, and converted back when viewing the response, at least, in the mongo shell. (As mentioned, when writing C you can avoid deserialization of the responses.)
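The quoting difference cheeser notes, as a tiny sketch: the JS-powered shell accepts bare keys, strict JSON does not.

```javascript
// Valid in the mongo shell (JavaScript allows unquoted keys):
db.runCommand({ dropDatabase: 1 })
// The strict-JSON equivalent of that document requires quoted keys:
// { "dropDatabase": 1 }
```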
[16:04:43] <markand> GothAlice, why is it a bad idea to use JSON for networking?
[16:04:57] <markand> IMHO, since it's plaintext, you don't have to deal with endianness
[16:05:08] <markand> of course it takes more bandwidth though
[16:05:42] <cheeser> and the more expensive parsing on either end.
[16:06:04] <GothAlice> markand: {'hello': "world", "name": "bob"} — it's huge. It's impossible to stream load safely. It's missing length indicators (both null termination *and* pascal string-style length value) making it much, much more difficult to parse. (BSON includes both C and Pascal-style string formats in one.)
[16:06:31] <GothAlice> Hell, even XML provides CDATA to let you efficiently (and without needing any processing) stream load the text content of a tag.
[16:07:02] <markand> I should have a look at bson then
[16:07:16] <cheeser> i hate when formats do that. either length encode or \0 terminate. both seems redundant.
[16:07:28] <GothAlice> JSON is heavily restricted in type, requiring nesting structures with magic keys/values to preserve types. It defines numbers the way JavaScript does, which is a bit laughable. (I.e. you only have 53 bits of integer accuracy, since all numbers are floating point!)
[16:07:31] <markand> I use JSON for networking messages, but it's not very large application
[16:07:32] <cheeser> ajp does that. we hates it.
[16:07:40] <NoOutlet> Here's a video about converting JSON to BSON. https://www.youtube.com/watch?v=BfWnSxNQpYs&t=207
[16:08:26] <GothAlice> cheeser: Both is quite useful. Null termination for easy use with C string functions, and a size for efficient network buffering. I.e. "read 4 bytes" (get length), "read N bytes" from the length. No need for chunked buffering at all. (I.e. read 4K, is there a null? No? Read another 4K… then eventually split the buffer on the null. What insanity is this?!)
[16:08:54] <GothAlice> All of that results in a much lower likelihood of buffer overflows, since all sizes are explicit.
[16:09:01] <GothAlice> (Which is the _only_ way to go for a network protocol.)
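A sketch, in Node.js, of the "read 4 bytes, then read N bytes" framing GothAlice describes; real drivers handle this internally, so this is illustrative only.

```javascript
// Try to extract one length-prefixed BSON document from an accumulating buffer.
function tryReadDocument(buf) {
  if (buf.length < 4) return null;     // the length prefix hasn't fully arrived yet
  var len = buf.readInt32LE(0);        // BSON's total-document length is a little-endian int32
  if (buf.length < len) return null;   // keep buffering until the whole document is here
  return buf.slice(0, len);            // exactly one document, no scanning for terminators
}
```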
[16:09:03] <markand> the problem with direct read() is that you're in trouble if the compiler packs your structures differently
[16:09:19] <markand> and again, you're stuck with endianness too
[16:09:29] <cheeser> big endian everywhere!
[16:09:32] <GothAlice> Endianness is a non-issue in the modern eara.
[16:09:34] <GothAlice> *Era, even.
[16:09:41] <cheeser> at any rate, time for some Thai food.
[16:10:05] <GothAlice> markand: And no, the compiler will be packing no structures. BSON is a dynamic format. (Thus the pointer math bit.)
[16:10:58] <NoOutlet> I don't know about that, Alice. With all the little ARMs out there, isn't endianness becoming an issue again?
[16:11:31] <GothAlice> NoOutlet: Nope, since all these handy client API implementations will handle that nonsense for you, or should.
[16:12:07] <GothAlice> The last time I needed to actually worry about endianness on ARM was… on a 320x320 resolution color PalmPilot when I was writing assembly for it. ;)
[16:12:36] <NoOutlet> But endianness is the reason I can't install MongoDB on my Pi.
[16:12:53] <GothAlice> No, endianness is the reason you can't run a mongod server on your Pi.
[16:13:00] <GothAlice> The client code should operate fine.
[16:13:45] <GothAlice> There's also things like http://c-mobberley.com/wordpress/2013/10/14/raspberry-pi-mongodb-installation-the-working-guide/ — if you're willing to rip and tear a bit (i.e. dropping JS execution in the server completely), you can technically get it running.
[16:14:25] <NoOutlet> Yeah, but endianness is causing all that ripping and tearing to be necessary.
[16:14:57] <GothAlice> Well, no, Google is, technically. (V8…)
[16:15:28] <GothAlice> And all of their delicious JIT-ness.
[16:15:29] <NoOutlet> Hmm. Okay.
[16:16:33] <GothAlice> Also, because you want your database to be fast, MongoDB memory maps files and writes C structures directly to them. This means your on-disk datafiles on ARM won't be portable to x86. For the most part, this doesn't matter at all.
[16:16:41] <GothAlice> (If you're running a production database on a Pi, you're doing IT wrong.)
[16:17:19] <markand> lol
[16:17:46] <markand> that's why we use duktape instead of v8
[16:18:06] <NoOutlet> Absolutely, I agree with that. The Pi is just my always on server at home and I am running little experiments on it and development projects.
[16:19:14] <Derick> most of the drivers run just fine on ARM though
[16:19:18] <GothAlice> Yup.
[16:19:52] <GothAlice> And I already provided a "use at own risk" tutorial link on how to get the daemon running. (Can't over-stress what a bad idea this would be, but hey, it's worth an experiment. I love experiments! ;)
[16:22:31] <NoOutlet> Well, perhaps, I'll try it eventually. I sort of gave it up and just accepted that I could use SQLite for the specific thing I was doing.
[16:23:02] <GothAlice> Little known fact, SQLite is also schemaless.
[16:23:17] <GothAlice> Or at least, non-enforcing of it.
[16:24:24] <NoOutlet> ...Yeah! ..Uhh, that's why I chose it! Yep! That's why.
[16:25:09] <Derick> GothAlice: except for their auto increment ID, or something?
[16:25:50] <phutchins> Anyone have an idea why an app would lose its connection to a mongoc node and have to be restarted when you rs.remove("node:27017") a node from the replica set?
[16:26:08] <phutchins> I've tried to reproduce it on a separate cluster but it works seamlessly as I would expect.
[16:26:15] <markand> sqlite is schemaless but you still need to create tables before inserting values
[16:26:43] <phutchins> I'm guessing it is something to do with the mongo configuration in the production cluster (I do not have direct access to it). I've tested against 2.4 and 2.6.
[16:27:59] <GothAlice> Derick: http://showterm.io/b6fdcf707c60f582af593 < pardon the waffling on INSERT, I haven't SQL'd in a long time. ;)
[16:29:07] <Derick> oh yeah, I know that bit. But I think the primary key is the exception
[16:39:54] <pamp> hi
[16:40:11] <pamp> what algorithm should I use for wiredTiger compression, snappy or zlib
[16:40:17] <pamp> ?
[16:40:25] <GothAlice> pamp: Which is more important to you and your application? Storage space, or speed?
[16:41:00] <pamp> speed
[16:41:04] <GothAlice> Then snappy.
[16:41:11] <GothAlice> (Which is the default for this reason. ;)
[16:41:46] <pamp> ok thanks
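For reference, the compressor is a per-node default that can also be overridden per collection on a WiredTiger (3.0+) node; a sketch, with a hypothetical collection name:

```javascript
// "archive" is a hypothetical collection that trades speed for space with zlib.
db.createCollection("archive", {
  storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } }
})
```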
[16:44:43] <NoOutlet> Have you tested all of the different speeds and compressions with each one in your testing, GothAlice?
[16:45:49] <GothAlice> NoOutlet: Nope. I stuck with snappy in my tests. For a variety of reasons, I prefer defaults as a base for testing. Unfortunately, we're not going to be deploying WT at work, not until a few patch releases, I suspect.
[16:50:47] <phutchins> Anyone?
[16:52:26] <GothAlice> phutchins: No idea as to your particular situation, but in general, if a node in a cluster goes away the client driver should automatically attempt to reconnect. :/
[16:54:13] <phutchins> GothAlice: yeah. Thats my experience as well. And it does in my staging environment. I reproduced it once under a plain replica set but I think that may have been because I had a URI with multiple hosts and the driver handles that differently. I then set up sharding and updated to mongo 2.6 and didn't see the issue so I'd assumed that fixed the issue. I also increased timeouts for socket and connection. I
[16:54:19] <phutchins> thought that was the fix.
[16:54:37] <phutchins> Then I went back and ran all of the tests again (all against a sharded cluster where the app is connecting to a config node) and I can't replicate at all.
[16:55:33] <phutchins> Wondering if it was really the timeout and since the config node in our staging environment has very little load it easily reconnects in the 1 second that is default for the driver. Maybe the production mongo config node is under higher load and is taking over the 1 second :-\
[16:55:45] <phutchins> Thats the only thing I can think of tho. We're scared to go back and test again because it brings down our app :).
[16:56:20] <GothAlice> Unfortunately, without access to the logs in the production cluster, you're FUBAR when it comes to figuring out what's going on.
[16:56:54] <phutchins> I may have the timeout changes pushed out and then try it in production and make sure that we capture the mongo logs at the time.
[16:57:11] <phutchins> Also will add some more logging to our app for the connection errors...
[17:05:16] <phutchins> GothAlice: thanks for the input...
[17:36:33] <bbclover> wow guys, mongodb seems really nice
[17:47:28] <StephenLynx> bbclover :3
[17:47:41] <StephenLynx> it's pretty good at what it does best.
[17:47:52] <bbclover> is it like the new big hip thing
[17:47:53] <GothAlice> And pretty reasonable at everything else, too. ;)
[17:47:57] <bbclover> can I use it as a replacement for oracle
[17:48:01] <StephenLynx> no
[17:48:08] <GothAlice> Depends.
[17:48:09] <bbclover> why
[17:48:24] <StephenLynx> you can't replace a relational db for mongo if you need to use its relational capabilities.
[17:48:36] <StephenLynx> mongo can't join or create foreign keys.
[17:48:53] <bbclover> so you can't have like many to one relationships?
[17:48:55] <GothAlice> However, MongoDB provides alternative modelling strategies, by virtue of being a potentially nested document-based database.
[17:49:16] <StephenLynx> as she said, you can have many to one by having sub documents.
[17:49:22] <GothAlice> bbclover: You can fake them, i.e. storing "references" as ObjectIds, but you need to fake the joins at the application level.
[17:49:26] <bbclover> I see
[17:49:45] <StephenLynx> that too. and those fake joins may consume far too many resources depending on how you query it.
[17:50:20] <GothAlice> The example I typically use for this relational comparison is that of a forum. In my MongoDB-backed forum software, replies to threads are stored within the thread itself. Threads store a reference to the forum/sub-forum they are in.
[17:50:57] <StephenLynx> IMO, a better example of where mongo falls short is using a join to get the name of a poster based on its id on the post.
[17:51:20] <GothAlice> (This arrangement was chosen because when you're looking at a thread, you pretty much only want details of the thread, including the replies. Deleting a thread deletes the replies naturally. Querying for all threads in a forum is a "forum_id==blah" situation, also quite efficient. Even if it's a "faked" join.)
[17:51:42] <GothAlice> That can be resolved by pre-aggregating (effectively caching) the values you care about.
[17:51:42] <StephenLynx> you would have to query for the posts, then query for the user ids, then place the user names on the objects based on the ids, all in the application.
[17:51:55] <GothAlice> That would be the naive approach, yes.
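A sketch of that naive application-level join, in shell JavaScript; all collection and field names here are hypothetical:

```javascript
// Fetch the posts, then resolve their authors' names with a second query.
var posts = db.posts.find({ thread_id: threadId }).toArray();  // threadId is a placeholder
var userIds = posts.map(function (p) { return p.user_id; });
var names = {};
db.users.find({ _id: { $in: userIds } }).forEach(function (u) {
  names[u._id] = u.name;                       // index users by id
});
posts.forEach(function (p) {
  p.userName = names[p.user_id];               // stitch the name onto each post
});
```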
[17:51:58] <StephenLynx> yeah, but then you have lots of duplicated data
[17:52:07] <StephenLynx> and sometimes that is not feasible depending on the case.
[17:52:23] <StephenLynx> where you would use over 2 or 3 joins, for example
[17:52:41] <GothAlice> If that's the problem being run into, you're doing it wrong. ;)
[17:52:58] <StephenLynx> thats the kind of problem a relational db is good at.
[17:53:12] <StephenLynx> and sometimes you need to perform multiple joins.
[17:53:35] <GothAlice> Also, our definitions of "duplication" differ in this instance. Caches for presentation (i.e. including the user's name with their ObjectId reference) or pre-aggregation to implement the initial queries aren't duplication, they're optimization.
[17:54:07] <GothAlice> (And optimization 100% automatically handled by the ODM layer I use.)
[17:54:30] <StephenLynx> ODM code is just code.
[17:54:41] <GothAlice> Indeed. Helps to not duplicate effort, though.
[17:54:50] <StephenLynx> it is duplicating under the hood.
[17:55:17] <GothAlice> Not at all. It's performing just enough effort to support the queries and presentation I require.
[17:55:19] <StephenLynx> my definition of duplication would be having more than one field duplicated. the first being the unique id for reference.
[17:55:40] <GothAlice> MongoDB doesn't ask "what's the purist way to store your data", it asks "how are you going to actually _use_ this data?"
[17:55:57] <StephenLynx> the point is that your application is performing this effort that could be performed at db level.
[17:56:07] <GothAlice> "could be"
[17:56:17] <GothAlice> Could be doesn't make my clients happy. "Does" makes my clients happy.
[17:56:20] <StephenLynx> and that means your application thread is being held up.
[17:56:26] <StephenLynx> which impacts performance.
[17:56:29] <GothAlice> Except that it's not.
[17:57:32] <StephenLynx> because your ODM performs these operations in an async fashion?
[17:57:38] <seb2point0> Hey everyone. Simple questionon about importing raw JSON data into a Mongoose Schema.model. How should my JSON file reference the ObjectId of the external item I am linking to? Tried these and none worked.
[17:57:39] <seb2point0> "item" : [
[17:57:40] <seb2point0> new ObjectId("55060d030406fd9b4e8b2c79")
[17:57:42] <seb2point0> ]
[17:57:43] <seb2point0> "item" : [
[17:57:45] <seb2point0> "55060d030406fd9b4e8b2c79"
[17:57:46] <seb2point0> ]
[17:57:48] <seb2point0> "item" : [{
[17:57:49] <seb2point0> "_id": "55060d030406fd9b4e8b2c79"
[17:57:49] <seb2point0> }]
[17:57:51] <GothAlice> seb2point0: Paste code using a paste service, or gist. Don't paste into the channel!
[17:57:55] <StephenLynx> pastebin pls :c
[17:57:58] <seb2point0> (sorry!!)
[17:58:31] <seb2point0> Didn’t think that would happen.
[17:58:55] <StephenLynx> probably is in your client. hextchat vanilla does not does that.
[17:58:59] <StephenLynx> doesn't do*
[17:59:10] <GothAlice> StephenLynx: I actually think it's happening at the bouncer level, for me.
[17:59:41] <GothAlice> (Happens in PMs, too, which is even more annoying.)
[18:02:36] <GothAlice> StephenLynx: We actually use a variety of methods for "related data lookup". Caches, with or without preemptive updating, though the cache is typically used for HTML fragments like links to records and such. (Things that require more than one value out of the target record.) CachedReferenceField for cases where we need join-like queries. (This also registers signal handlers to keep the values up-to-date.) The majority of those updates
[18:02:36] <GothAlice> are async, and the freshness of the data doesn't overly matter.
[18:03:45] <seb2point0> Here we go again :) Simple questionon about importing raw JSON data into a Mongoose Schema.model. How should my JSON file reference the ObjectId of the external item I am linking to? Tried these and none worked. http://pastebin.com/CM0WN2LH
[18:04:07] <GothAlice> seb2point0: ^_^ http://docs.mongodb.org/manual/reference/mongodb-extended-json/
[18:04:50] <GothAlice> What you have there isn't JSON, since ObjectId isn't a thing in that serialization format (it's JavaScript, not JSON). ObjectIds are additionally not strings, so the last version would give you some very unintended results.
[18:05:15] <GothAlice> http://docs.mongodb.org/manual/reference/mongodb-extended-json/#data_oid is the encoding you're looking for for ObjectIds.
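For a concrete sketch, the Extended JSON form of seb2point0's first variant would look like this (the hex string is his example value):

```javascript
// MongoDB Extended JSON: ObjectIds are encoded as {"$oid": "<hex>"}.
{ "item": [ { "$oid": "55060d030406fd9b4e8b2c79" } ] }
```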
[18:05:59] <seb2point0> Actually it’s not really json but rather a JS object variable inside a script
[18:06:47] <seb2point0> So I should use { "$oid": “d883k8ddkd…” } then?
[18:07:15] <GothAlice> OTOH, Mongoose. *shakes a fist* You're not using JSON at all, really. The JSON encoding might work, or it might not.
[18:07:45] <GothAlice> (Googling "mongoose mongodb extended json" without quotes returns zero useful results for me.)
[18:07:49] <bbclover> so does mongodb take up a lot of space? I was reading this: http://blogs.enterprisedb.com/2014/09/24/postgres-outperforms-mongodb-and-ushers-in-new-developer-reality/
[18:08:04] <GothAlice> Well, that title totally isn't link bait.
[18:09:45] <GothAlice> bbclover: https://blog.serverdensity.com/does-everyone-hate-mongodb/ — there will always be negative comparisons, and most of them should be taken with a giant grain of salt, as the majority of negative postings utterly failed to read the documentation, or use MongoDB in anything remotely like a sane way.
[18:11:26] <GothAlice> "MongoDB consumed 33% more the disk space" "PostgreSQL community member Alvaro Tortosa found that the MongoDB console does not allow for INSERT of documents > 4K." — bbclover: The article you linked is too old to be useful. The record limit hasn't been "4K" ever, and hasn't been "4MB" in a long, long time.
[18:12:39] <bbclover> I see
[18:16:41] <epx998> got a question, im super new to mongodb and have been asked to document doing some migration - question is
[18:16:56] <epx998> do i use the mongodb cli for setting something like a replica set?
[18:17:08] <GothAlice> bbclover: "Let's dump this relational dataset on a non-relational database, and see how crappy its performance on a task it wasn't intended for will be! Yeah!" < Not a good premise for comparison. A comparison I *would* like to see is this replicated relationally: http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework — this includes all relevant sizes, query performance, etc.
[18:18:14] <StephenLynx> bbclover mongo used to pre-allocate space, but since 3.0 it doesn't do it anymore, so disk usage fell sharply, but this is just a guess, I am not sure.
[18:18:18] <GothAlice> epx998: If one is unfamiliar with the management processes (http://docs.mongodb.org has many, many tutorials) you could use https://mms.mongodb.com/ which is free for < 8 servers. It can manage MongoDB user credentials, replication, and even sharding setups for you.
[18:18:43] <GothAlice> StephenLynx: It still does it if MongoDB detects the underlying disks are slow, AFAIK.
[18:19:12] <bbclover> I see
[18:19:45] <GothAlice> epx998: But otherwise yes, management can be done via an application you write for the purpose (like MMS) or from the interactive mongo shell.
[18:36:21] <blottoface> Is it possible to query the oplog of replica set members via mongos?
[18:39:00] <GothAlice> blottoface: AFAIK you'll need to connect the application querying the oplog directly to one of the members, because the oplog is held in the 'local' database (which isn't replicated/sharded).
[18:40:08] <GothAlice> See the first paragraph of: http://docs.mongodb.org/manual/reference/local-database/
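A sketch of what that direct query looks like once connected straight to a member (not through mongos):

```javascript
// The replica set oplog lives in the non-replicated "local" database.
var local = db.getSiblingDB("local");
local.oplog.rs.find().sort({ $natural: -1 }).limit(5)   // the five most recent operations
```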
[18:41:21] <epx998> can a replica set be disabled without stopping mongodb? The docs im reading all say to shutdown the server then remove the replica set.
[18:42:33] <GothAlice> epx998: Yeah, demoting is an offline process due to the required configuration or command-line argument changes.
[18:43:18] <epx998> ouch.
[18:43:29] <blottoface> GothAlice: thanks... I was worried that was the case.
[18:44:12] <epx998> so even adding a new slave to the master requires a restart of the master, to ingest the new config?
[18:44:37] <cheeser> not a restart no. possibly a leader election, though.
[18:46:21] <epx998> i think what my boss wants is too wonky... lol
[18:57:29] <GothAlice> epx998: You can certainly add and remove members of a replica set while it is in operation.
[18:57:50] <GothAlice> epx998: I.e. http://docs.mongodb.org/manual/tutorial/expand-replica-set/
[18:58:03] <GothAlice> And: http://docs.mongodb.org/manual/tutorial/remove-replica-set-member/
[18:58:59] <GothAlice> Step 1 indicates clearly that the only node to shut down is the one you are removing. This won't impact the cluster if the node being removed is a secondary, and will call for immediate election if the node being removed was primary. (In-progress requests will be cancelled and the client drivers instructed to reconnect after election.)
[19:00:07] <GothAlice> Note that if your cluster ever shrinks to a single node, you're in for a bad time. (There can be no primary in a single-member replica set.)
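The shell helpers from those tutorials, sketched with hypothetical hostnames (run against the primary):

```javascript
rs.add("newmember.example.net:27017")      // grow the set while it stays online
rs.remove("oldmember.example.net:27017")   // after shutting that member down, per step 1 above
```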
[19:00:36] <blottoface> Is it safe to query the oplog from a secondary member of a replica set? I'm not too concerned with it being realtime up to date.
[19:00:48] <GothAlice> blottoface: Quite safe, yes.
[19:18:17] <saml> how do I do group by and limit only 3 docs per group?
[19:18:52] <saml> db.docs.find({tags:/foobar/})... wanna do group by each doc that is tagged with /foobar/ (contains foobar) .. but only want to see 3 docs per tag
[19:19:16] <saml> db.docs.aggregate({$match:{tags:/foobar/}}, {$group:
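saml's message was cut off there; for what it's worth, a sketch of one way to get "3 per group" on a 2.6/3.0 server, which has no per-group limit operator, is to group server-side and trim client-side:

```javascript
// Group matching docs per tag, then keep only 3 per tag in the client.
// Beware: $push-ing $$ROOT can hit the 16MB document limit on large groups.
db.docs.aggregate([
  { $match: { tags: /foobar/ } },
  { $unwind: "$tags" },
  { $match: { tags: /foobar/ } },            // keep only the tag values that matched
  { $group: { _id: "$tags", docs: { $push: "$$ROOT" } } }
]).forEach(function (g) {
  printjson({ tag: g._id, docs: g.docs.slice(0, 3) });
});
```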
[19:39:58] <mdavid613> hey all, the readWrite role should allow a user to create an index on a collection/db that they have access to, correct? I'm running into an issue with an x509 auth mongorestore and index creation on 2.6.8. I'm wondering if there is something I'm doing incorrectly?
[19:41:22] <mdavid613> I'm using the following cmdline to handle the restore and I'm getting a permissions error creating an index? Perhaps the x509 mongorestore call isn't authenticating correctly?
[19:41:22] <mdavid613> mongorestore -h mahouts --ssl --sslPEMKeyFile ~/certs/mycert.pem --drop -d dbName --authenticationMechanism MONGODB-X509 dump_dir
[19:41:38] <mdavid613> where mahouts = myhost (if I didn't typo)
[19:55:05] <mdavid613> bueler? :)
[20:04:00] <arussel> anyone knows how to run function in reactivemongo ?
[20:37:15] <arussel> eval is deprecated, what should you use instead ?
[20:38:12] <Boomtime> depends on what you're trying to do, what are you trying to do?
[20:38:29] <mdavid613> has anyone done a successful x509 mongorestore? I have a user with restore permissions and I'm still getting an authorization error for creating an index
[20:38:38] <arussel> Boomtime: run some javascript code that aggregate 3 collections
[20:39:40] <arussel> Boomtime: this would have been a stored procedure in a typical relational db
[20:42:24] <Boomtime> arussel: you are going to have to re-work your use case, you might be able to use 3 aggregations outputting to a collection which you then run another aggregation on
[20:43:50] <arussel> it is a kind of join on the 3 collections, having them all in a single collection won't help much joining them
[20:44:43] <arussel> I don't duplicate data because I need this to be done only once a week.
[20:46:19] <cheeser> aggregations don't support appending to/updating a collection yet.
[20:47:01] <arussel> I understand that most of the time you shouldn't use eval, but it is very useful sometimes
[20:47:43] <arussel> how are you supposed to run daily maintenance job without it ?
[20:48:07] <GothAlice> arussel: Write a .js file, run it using "mongo somefile.js"
[20:48:37] <arussel> GothAlice: exactly what I want to do, but from my driver so using eval
[20:49:29] <cheeser> time for meds!
[20:49:48] <GothAlice> 10 minutes… then I'm off work and can self-medicate this eval out of her brain. ;)
[20:49:54] <arussel> I'm 'eval'ing a js file, not input from a user
[20:50:40] <arussel> I don't really see how 'mongo somefile.js' is different from this
[20:50:58] <GothAlice> One isn't code injection over a wire protocol, the other is.
[20:51:38] <StephenLynx> eval is never fine.
[20:51:41] <GothAlice> In the case of 'mongo somefile.js' your mongod server doesn't even need JS enabled. The JS is evaluated in a sandbox environment within the "mongo" shell itself.
[20:51:47] <Boomtime> arussel: if you can "mongo somefile.js" then do that, it is totally supported
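A sketch of that pattern: the script runs in the mongo shell's JS sandbox on whichever host invokes it, not via server-side eval. The file name, database, and logic here are hypothetical.

```javascript
// weekly-maintenance.js
// Invoke from the app host:  mongo dbhost.example.net/mydb weekly-maintenance.js
db.collectionA.find().forEach(function (doc) {
  // ... combine data from the three collections here (placeholder logic) ...
  db.weeklyReport.save(doc);
});
```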
[20:55:56] <arussel> I'm clearly not enthusiastic about having to maintain code in my app and on the db server
[20:57:32] <GothAlice> arussel: Have you measured the performance difference between your previous eval(), running it in a mongo shell (which doesn't have to be on the db server!), or running it as a client application?
[20:57:53] <arussel> GothAlice: I still don't get it. If you don't have access (firewall), you can't do anything. If you have access, I'm fucked anyway
[20:58:09] <GothAlice> arussel: You have a client application that connects to the database, right?
[20:58:14] <arussel> yep
[20:58:20] <GothAlice> Then you can "connect" to mongod running there.
[20:58:23] <nickenchuggets> I'm trying to install mongodb using these instructions: http://docs.mongodb.org/master/tutorial/install-mongodb-on-debian/?_ga=1.251886735.765935461.1426537367 and when I get to the part where I execute sudo apt-get install -y mongodb-org it says: "E: Unable to locate package mongodb-org"
[20:58:31] <arussel> GothAlice: yes
[20:58:33] <GothAlice> Which means the 'mongo' shell can connect from your application host to the database host.
[20:58:46] <GothAlice> I.e. no need to store code on the DB server at all.
[20:59:52] <arussel> so instead of eval, install mongo on the app server and do a system call to mongo to run the js ?
[21:00:44] <mdavid6131> nickenchuggets: did you add the mongo repo according to the first 3 steps on http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu ?
[21:01:03] <nickenchuggets> mdavid6131: I did, I followed each previous step.
[21:01:28] <GothAlice> arussel: The mongo client, yes. (Might be separately installable, not sure.)
[21:01:39] <mdavid6131> and the sources on your system has the mongo repos listed in it? /etc/apt/sources.list.d/
[21:02:06] <nickenchuggets> mdavid6131: it does
[21:02:08] <GothAlice> arussel: However, you could also just perform the operations your JS does within your application. (It would be functionally identical to running 'mongo somescript.js' on the app server.)
[21:02:39] <GothAlice> (Just without the almost-as-evil-as-eval "system"/"popen" call. ;)
[21:03:04] <arussel> GothAlice: it would need far too many db calls as I'm doing a join on 3 collections
[21:03:21] <arussel> that's why I wanted it run in the db
[21:04:00] <GothAlice> I perform some pretty long-running database queries in my migrations. (Things that can run for 20 minutes in a single tight loop.) I have never encountered "too many db calls."
[21:04:01] <mdavid6131> nickenchuggets: have you tried running apt-get with -t mongodb-org-3.0 ?
[21:04:23] <nickenchuggets> mdavid6131: I have not, but I did an aptitude search mongodb and just the standard, outdated packages show up
[21:04:37] <nickenchuggets> no mongodb-org, no mongodb-org-3.0, etc.
[21:04:55] <arussel> GothAlice: I'm wondering if mongeez isn't using eval ...
[21:04:56] <mdavid6131> and after running apt-get update same thing?
[21:04:56] <NoOutlet> The apt-get update was run successfully?
[21:05:02] <nickenchuggets> NoOutlet: it was, yes
[21:05:20] <nickenchuggets> mdavid6131: I can see the mongodb sources in apt-get update
[21:05:47] <nickenchuggets> mdavid6131: just did another aptitude search after doing an apt-get update, and same thing, no mongodb-org, just mongodb (standard debian packages, which are old)
[21:06:08] <arussel> GothAlice: thanks for your input, I'll give it some thought
[21:06:15] <GothAlice> Oh. Mongeez.
[21:06:19] <mdavid6131> apt-cache search mongo ?
[21:06:43] <nickenchuggets> mdavid6131: no mongodb-org in there either
[21:07:24] <mdavid6131> well crap…I wish I was more of a ubuntu expert….sounds like there is an issue even getting the repo into your lists
[21:07:40] <nickenchuggets> I can see it in apt-get update
[21:07:45] <nickenchuggets> which is... strange
[21:09:04] <mdavid6131> I'm wondering if perhaps something is up with the repo that's not getting the package list correctly
[21:09:43] <nickenchuggets> oh, I know what's going on...
[21:09:47] <nickenchuggets> -_-
[21:09:49] <NoOutlet> Could you tell us the output of `cat /etc/apt/sources.list.d/mongodb-org-3.0.list`?
[21:09:55] <nickenchuggets> I chose 64bit, my OS is 32bit
[21:10:02] <NoOutlet> Ah.
[21:10:26] <mdavid6131> there we go! sweet!
[21:10:34] <nickenchuggets> I don't see any options for debian 7 32bit though :/
[21:11:35] <mdavid6131> you can always build from source
[21:11:37] <mdavid6131> takes a while
[21:11:48] <mdavid6131> in fact I did it last week
[21:12:00] <mdavid6131> for 2.6.8 to get SSL support since it's not on standard builds
[21:12:03] <NoOutlet> Or get a 64-bit machine...
[21:12:18] <nickenchuggets> is 32 bit being phased out or something?
[21:13:01] <NoOutlet> Along with sundials and print publications.
[21:13:09] <Derick> wired tiger doesn't support 32 bits IIRC
[21:13:20] <nickenchuggets> hmm, hahaha
[21:13:26] <nickenchuggets> okay... well, I guess I'll upgrade
[21:13:55] <parallel21> /join #freenas
[21:14:00] <parallel21> eep
[21:18:32] <GothAlice> NoOutlet: Can I quote you on that?
[21:18:34] <GothAlice> ;)
[21:18:46] <GothAlice> People have been predicting the demise of my humble sundial company for years!
[21:20:10] <NoOutlet> Hah, you may, but I warn you, in the age of the re-tweet, traditional "quotes" are a thing of the past as well.
[21:20:13] <Derick> i like sundials
[21:20:22] <Derick> especially interesting shaped ones
[21:20:32] <nickenchuggets> hmm, weird
[21:20:34] <nickenchuggets> virtualbox has no 64 bit options
[21:21:14] <GothAlice> http://cl.ly/image/283H0N3j2Z26 http://cl.ly/image/1n2P2X3X1G1z < really?
[21:22:19] <GothAlice> You may be running a 32-bit host OS, if it's not giving you the option for a 64-bit guest.
[21:22:24] <GothAlice> In which case: feel bad. ;)
[21:22:34] <nickenchuggets> my host OS is 64 bit
[21:26:25] <nickenchuggets> aaaand it doesn't recognize keyboard input
[21:26:46] <GothAlice> … sounds like you may have larger issues at play.
[21:28:12] <nickenchuggets> yep, I need to enable some virtualization option in my BIOS
[21:30:13] <GothAlice> Heh. Years ago I was provisioning some hardware to use as dom0 in a new cluster, and when I realized the machines (which we direct shipped to the datacenter) didn't have VTx enabled out of the factory I nearly flipped a desk. ^_^
[21:31:31] <mdavid6131> does anyone know why a user with the restore role would have no problem dropping collections, but is getting a permissions failure using createIndex?
[21:34:31] <GothAlice> mdavid6131: Are you attempting to restore any system DB collections that aren't included in the list described in the second paragraph here: http://docs.mongodb.org/manual/reference/built-in-roles/#restore
[21:34:43] <GothAlice> Pretend that was a question. ;)
[21:35:24] <mdavid6131> nope, I'm purely restoring a non-system db
[21:35:46] <mdavid6131> with the no create index option the same user is able to drop all of the appropriate collections
[21:36:13] <mdavid6131> but then when I run it again I'm getting "not authorized to create index"
[21:42:31] <mdavid613> GothAlice: I appreciate you asking. Is there a good avenue for support on this if I'm using this on a non-enterprise version?
[21:42:49] <mdavid6131> I'm almost through all of the x509 conversion and this is the last bit
[21:43:22] <GothAlice> A JIRA ticket, even for non-enterprise, may help. Be sure to include the command you used to generate the dump, the command you are using to restore, and the server log for the restore time period.
[21:43:48] <GothAlice> (Who knows, you may have found a real problem that affects others, too. ;)
[21:56:48] <mdavid6131> GothAlice: thanks much!
[22:03:39] <pasichnyk> I have a 2.6.8 cluster that i'm not quite ready to migrate to 3.0 yet, but I just fired up a few readonly/non-voting remote replicas for reporting, running 3.0. Auth is failing against these after they've fully synced up. Is this mixed version scenario not supported, or is there a step i'm missing here?
[22:04:04] <GothAlice> It's certainly not recommended.
[22:04:54] <GothAlice> See: http://docs.mongodb.org/v3.0/release-notes/3.0-compatibility/#security-changes
[22:05:02] <GothAlice> (And the rest of that page.)
[22:08:52] <pasichnyk> Ok, i was trying to take advantage of wiredTiger compression in the short term on these reporting replicas, but it sounds like i need to just bite the bullet and upgrade the rest of the boxes in the cluster to 3.0 as well? If i do this, can i mix mmapv1 and wiredtiger between different hosts, or does it all need to match?
[22:09:04] <GothAlice> mdavid6131: You may also wish to browse that page; I didn't catch which version of MongoDB you are having issues with. http://docs.mongodb.org/v3.0/release-notes/3.0-compatibility/#sslv3-ciphers-disabled (and previous sections) deal with some SSL-related changes.
[22:09:28] <GothAlice> Hmm.
[22:09:40] <GothAlice> I would keep them the same within a replica set, due to how oplogs need to get replayed.
[22:10:09] <GothAlice> But if you also have a sharding overlay on that, the different replica set shards could differ, so you could use sharding keys to direct records at certain back-ends that way.
[22:10:14] <GothAlice> Interesting idea, pasichnyk.
[22:10:22] <pasichnyk> but the oplog entries are simply commands though, correct? So in theory they should replay fine across the different storage engines?
[22:10:37] <pasichnyk> i do not have any sharding currently
[22:11:44] <pasichnyk> @GothAlice, the replica synced up, and is in SECONDARY status (priority:0, votes:0, as i set it up), so all the data appears to have applied fine, just the auth thats failing... :/
[22:12:21] <pasichnyk> on a positive note, i'm seeing 9:1 storage size diff for my dataset. :)
[22:13:09] <GothAlice> What's your data size?
[22:13:29] <mdavid613> thanks GothAlice! I've submitted https://jira.mongodb.org/browse/TOOLS-662 :)
[22:13:42] <Progz> Hello, I've got a trouble on my mongodb server. My load average is increasing a lot !
[22:14:09] <Progz> I am at 1430 of load average....
[22:14:35] <Progz> I have 1 database with 1200 collections.
[22:14:47] <GothAlice> pasichnyk: The link I provided indicates a removal of a legacy authentication model, and a migration to a new authentication model. You may be able to switch that secondary's authentication mechanism back to MONGODB-CR if needed. http://docs.mongodb.org/v3.0/release-notes/3.0-scram/
[22:15:14] <pasichnyk> @GothAlice this replicaset is 133gb on disk on 2.6, and 15gb on disk for 3.0
[22:15:42] <GothAlice> Progz: Is your MongoDB open to the internet? (I.e. do you use authentication? VLAN?)
[22:15:54] <GothAlice> pasichnyk: http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework may be of interest to you.
[22:16:08] <GothAlice> (Noting the data/index/disk size comparisons between approaches.)
[22:16:15] <pasichnyk> @GothAlice, thanks, i'll take a look at that blog.
[22:16:28] <Progz> GothAlice: yes auth is at true in the mongod.conf and I see connection only from my webserver.
[22:17:06] <GothAlice> Progz: Good to check that. Are you able to get in via SSH to execute commands?
[22:17:15] <pasichnyk> @GothAlice, on the SCRAM vs CR, i was under the impression that it would still work with the older user models in mongo. I had recently upgraded the primary cluster boxes from 2.4 to 2.6. Is it possible I'm running with some 2.4-specific users on 2.6 that aren't supported on 3.0?
[22:17:16] <Progz> And I can't connect without a login/password. I don't understand why my mongodb server is not consuming RAM... only 500 MB of my 15 GB
[22:17:21] <Progz> GothAlice: of course
[22:18:09] <GothAlice> pasichnyk: Correct, 2.4 functionalities were deprecated in 2.6, and removed in 3.0 where possible.
[22:18:25] <GothAlice> So, you *might* be able to mix 2.6 and 3.0, but you certainly can't mix 2.4 and 3.0.
[22:19:24] <pasichnyk> @GothAlice, yeah, i don't have any 2.4 anymore. I'm just wondering if there is some auth upgrade i need to run on my 2.6 box, to get it to migrate from anything 2.4 specific to 2.6, so that the 3.0 replica actually works. Does that make sense?
[22:19:59] <GothAlice> pasichnyk: http://docs.mongodb.org/v3.0/release-notes/2.6-upgrade-authorization/
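From the linked page, the schema upgrade itself is one admin command; a hedged sketch (run once against the primary, after a backup, as a sufficiently privileged user):

```javascript
// Upgrades 2.4-style user documents to the 2.6 authorization schema.
db.getSiblingDB("admin").runCommand({ authSchemaUpgrade: 1 })
```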
[22:20:37] <GothAlice> pasichnyk: Also, because your data size would seem to exceed possible RAM size, and you are wanting to use WT in a production environment, I must give you a few links. May I PM you so as to not spam the channel?
[22:21:03] <pasichnyk> @GothAlice sure.
[22:22:38] <Progz> GothAlice: I don't understand why mongodb is not using all the ram possible. My database got a size of 150 GB and the ram server is 15 GB.
[22:22:58] <GothAlice> Progz: What else is running on that machine?
[22:23:09] <Progz> nothing else.
[22:23:29] <GothAlice> Also, do you have any swap space configured? Swap is important for the Linux Kernel virtual memory manager in order for it to "confidently" hand out larger chunks of RAM.
[22:23:32] <Progz> basic stuff : ssh, fail2ban,rkhunter ... and mongodb
[22:24:07] <Progz> GothAlice: no swap on this machine.... It's an AWS VM... they give it to us with no swap...
[22:24:50] <Progz> but I can add it :)
[22:25:28] <GothAlice> That's where "dd" comes in handy. ;)
[22:25:38] <Progz> xD indeed
[22:26:32] <GothAlice> However, if your IO is going *completely* bonkers, and you can't figure out why, if your data is EBS-backed and the back-end controller on Amazon's side died for any reason, that'd explain it. :/
[22:28:59] <Progz> GothAlice: don't understand the word 'bonkers', I'm french.
[22:29:27] <GothAlice> Crazy, unusual, or outrageous.
[22:30:18] <GothAlice> But you'd need to get in and run something like iotop to see where your IO time is going. (How many processes in the waiting state?)
[22:30:33] <Progz> yeah ok
[22:31:29] <Progz> netstat gives 900 connections in CLOSE_WAIT and 150 in ESTABLISHED. I am going to look at the IO stats
[22:32:54] <GothAlice> … that's a fair number in CLOSE_WAIT. That's 900 closed connections in the last 240 seconds or 3.75 connections per second, averaged.
[22:33:28] <GothAlice> Inbound or outbound? On what port?
[22:34:15] <Progz> outbound, from :27017
[22:35:18] <Progz> not 240 for me => cat /proc/sys/net/ipv4/tcp_keepalive_time ==> 300
[22:35:39] <GothAlice> ^_^ Darn variables.
[22:35:43] <GothAlice> Could you paste that netstat output? (And IO stats?)
[22:35:49] <GothAlice> (Using pastebin/gist/etc.)
[22:36:46] <Progz> <@GothAlice> (Using pastebin/gist/etc.) ==> Of course. In 2015, are there people still pasting directly in a channel?
[22:36:58] <GothAlice> Yes. Yes they do.
[22:37:03] <GothAlice> -_-;
[22:37:40] <Progz> netstat -an => http://pastebin.com/7AQ3vCC1
[22:38:40] <Progz> GothAlice: OK, I definitely got an IO problem... you will see...
[22:39:07] <Progz> iotop => http://pastebin.com/8GGQVDDV
[22:39:22] <GothAlice> 1.128 is your app server? That's a fair amount of recvq backlog.
[22:39:44] <Progz> yes it 's the apache web server
[22:39:47] <GothAlice> Yeah, that _really_ looks like the backing volume failed you.
[22:39:56] <Progz> it is requesting my mongo server
[22:39:59] <GothAlice> Also, it really looks like you could invest in a connection cache. ;)
[22:40:10] <GothAlice> (/pool)
[22:40:16] <Progz> like memcached ?
[22:40:31] <Progz> brbr
[22:40:48] <GothAlice> No, something to let your app hold on to MongoDB connections instead of connecting, authenticating, then throwing it away on each request. :/
[22:42:56] <RingZer0> I am using ObjectRocket for a mongo host. I get this when I try show databases: "not authorized for command: listDatabases on database admin"
[22:43:31] <GothAlice> I know nothing of ObjectRocket, but it sounds like you haven't authenticated.
[22:43:37] <Progz> ok GothAlice, which one do you advise me ?
[22:43:46] <GothAlice> Progz: PHP?
[22:43:54] <RingZer0> GothAlice: this is my first time working with mongo. I get system.users when I do show collections;
[22:44:12] <Progz> Progz: indeed
[22:44:42] <Progz> GothAlice: indeed.
[22:45:59] <GothAlice> RingZer0: Ah, you're not authenticated as someone allowed to see other databases? Look for "listDatabases" on http://docs.mongodb.org/manual/reference/built-in-roles/ and make sure your user has one of the required roles. (Or a custom role that provides that permission.)
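A sketch of the fix GothAlice points at, assuming an administrator session; the username is hypothetical:

```javascript
// readAnyDatabase carries the listDatabases action; "ringzero" is a placeholder username.
db.getSiblingDB("admin").grantRolesToUser("ringzero",
  [{ role: "readAnyDatabase", db: "admin" }])
```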
[22:46:39] <GothAlice> Progz: Hmm, MongoDB should do that already, if you're running PHP inside Apache. :/
[22:47:20] <Progz> GothAlice: ok... maybe it's because I am hosting a lot of websites...
[22:47:35] <Derick> Progz: are you talking to a replicaset?
[22:47:45] <Progz> Derick: no
[22:47:49] <Derick> k
[22:47:59] <GothAlice> :|
[22:48:00] <Progz> There is no replication for the moment, only one instance of mongo
[22:48:03] <Derick> actually, even for standalone nodes it might make a difference.
[22:48:18] <Derick> Progz: make sure that you use the same hostname as what the mongod node identifies with
[22:48:50] <GothAlice> Oh. The current driver doesn't connection pool any more.
[22:48:52] <GothAlice> :|
[22:48:56] <Derick> GothAlice: it does
[22:49:00] <Derick> it's just a pool of one
[22:49:16] <GothAlice> http://php.net/manual/en/class.mongopool.php < someone might want to update some docs, then. ;)
[22:49:19] <Derick> it certainly reuses connections between PHP requests
[22:49:42] <Derick> GothAlice: it's correct, because it's not really connection *pooling*
[22:49:47] <Progz> xD
[22:49:49] <Derick> PHP land calls them "persistent connections"
[22:52:01] <GothAlice> Progz: Certainly looks like you've got some heavy queries going on there.
[22:52:20] <Progz> Like query not using indexex ?
[22:52:37] <Progz> s/indexex/indexes/
[22:53:25] <GothAlice> Progz: What's the result of mongotop -n 1 and mongostat -n 1?
[22:53:48] <GothAlice> That'll tell you which collections are being hammered, and what type of operations are doing the hammering.
[22:53:49] <Progz> Error parsing command line: unknown option -n :)
[22:54:11] <GothAlice> Hmm, well, run it for a bit without that, and ^C it after a set or two. ;)
[22:54:48] <GothAlice> Ah, that option's new.
[22:54:56] <Progz> I am in 2.6.8
[22:55:17] <GothAlice> Yeah, you'll have to run it for a bit and ^C to kill it.
[22:55:52] <Progz> error: { ok: 0.0, errmsg: "not authorized on admin to execute command { top: 1 }", code: 13 }
[22:56:03] <Progz> same with my username...
[22:56:12] <Progz> mongotop -u xXxXXX -p Xxxxx
[22:57:03] <GothAlice> Hmm.
[22:57:18] <GothAlice> Admin user?
[22:58:07] <Progz> GothAlice: I don't know the password... a coworker activated user auth last week and created a user for our websites
[22:58:22] <Progz> but I don't think he put a password on admin user
[22:58:24] <Progz> xD
[22:58:46] <GothAlice> Worth a try, I guess. However, if he didn't… and the internet can connect, that's the first thing anyone will try.
[22:59:10] <Progz> ok so passwd is not working ! so...
[22:59:39] <Progz> I can disable auth , it's not a production service for the moment
[22:59:39] <GothAlice> No password = "Thank you for donating your server to SKYNET. Your contribution will be remembered by your new robot overlords." ;^P
[23:00:06] <Progz> I tried no password ^^ it's not working :)
[23:00:09] <preaction> s/remembered/ignored/
[23:01:01] <GothAlice> Progz: Disabling auth is just as bad as no password, but to do that you'd need to reboot anyway. The MongoDB logs afterwards should include slow query information by default, I believe.
[23:01:27] <GothAlice> (s/reboot/restart the service/ — but, wau, dat load average)
[23:01:45] <Progz> GothAlice: this mongo server is only requestable from the webserver
[23:01:58] <Progz> and I'd do it just for your command ^^ not so mad after all
[23:02:37] <GothAlice> If you restart MongoDB, the connections will go away, and we'll lose the information we're wanting to examine.
[23:03:12] <Progz> ns                          total     read      write    2015-03-16T23:02:46
[23:03:12] <Progz> fortress.SeoDataUrlContent  211232ms  211232ms  0ms
[23:04:08] <Progz> is it the time for the request to be executed ?
[23:04:57] <GothAlice> … yeah. Your disk back-end is toast. You're going to want to reboot the VM, extra-double-super-check the volume, let MongoDB restore its journal, if it can, and have backups ready if it can't or the volume is otherwise damaged.
[23:07:11] <Progz> insert query update delete getmore command flushes mapped vsize res   faults locked db      idx miss % qr|qw ar|aw netIn netOut conn time
[23:07:11] <Progz> *0     63    *0     *0     0       184|0   0       144g   289g  10.6g 3586   fortress:0.0% 0          472|1 530|0 21k   66m    579  00:05:44
[23:07:28] <Progz> erf too big for a paste...
[23:07:36] <Progz> sorry.
[23:07:37] <GothAlice> Progz: Yeah, please don't do that. ;)
[23:08:05] <Progz> Ok I am going to change disk, make a snapshot and hop on a new ssd.
[23:08:46] <Progz> Thanks a lot GothAlice
[23:08:52] <Progz> Bye bye
[23:09:24] <GothAlice> Uhm.
[23:09:43] <GothAlice> Ciao? ^_^
[23:09:50] <Progz> GothAlice: xD
[23:10:13] <Progz> I was wondering what she was going to tell me
[23:10:45] <Progz> As I say : "Au revoir"
[23:10:56] <Progz> ^^
[23:15:50] <epx998> would an empty admin db cause this error - Fri May 9 13:58:01 [conn1983] auth: couldn't find use admin, admin.system.users ?
[23:16:58] <nickenchuggets> well, it looks like I will never get to use 64 bit guest OSes with this motherboard.
[23:17:33] <epx998> what board?
[23:17:45] <mdavid6131> new computer time! :)
[23:17:57] <nickenchuggets> If only I had the money.
[23:18:15] <GothAlice> epx998: Having a missing or empty admin database would be somewhat catastrophic if one were wanting to use authentication.
[23:18:38] <epx998> just checking - someone asked me to look into the error and i saw the db was empty.
[23:19:46] <nickenchuggets> I guess I'll just install 32bit mongodb
[23:19:54] <epx998> nickenchuggets: I use 2 msi am1i boards as my xen server hosts, work pretty well. whole host costs maybe $300
[23:19:56] <nickenchuggets> until I upgrade my motherboard
[23:20:18] <nickenchuggets> I'll be upgrading my motherboard in the nebulous future anyway
[23:20:22] <nickenchuggets> this is an ancient board
[23:21:52] <pasichnyk> I have 4 reporting replicas to initialize, using wiredtiger storage (primary is on mmapv1). Is it generally faster to sync one, and then rsync the files to the others to prime them, or just let them all sync simultaneously?