[05:53:26] <bdiu> is there an easy way to combine arrays in an aggregation?
[05:55:57] <bdiu> docs like {someArray:[1,3,5],otherVal:1}, {someArray:[1,2,4],otherVal:2}, want my aggregate result to look like {someArray:[1,2,3,4,5],otherValSum:3}
[05:56:37] <bdiu> I can easily get to {someArray:[[1,3,5],[1,2,4]],otherValSum:3}
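One way to get from there to the merged form, sketched against a hypothetical collection named coll ($size needs 2.6+): $unwind fans each document out to one row per array element, so otherVal is divided by the array length to keep the sum honest, and $addToSet deduplicates the union:

    db.coll.aggregate([
      { $project: { otherVal: 1, someArray: 1, n: { $size: "$someArray" } } },
      { $unwind: "$someArray" },
      { $group: {
          _id: null,
          someArray: { $addToSet: "$someArray" },                  // set union of all the arrays
          otherValSum: { $sum: { $divide: ["$otherVal", "$n"] } }  // otherVal counted once per doc
      } }
    ])

($addToSet makes no ordering guarantee, so sort the result client-side if order matters.)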
[08:41:54] <random_pr> I have an update like { $inc: { 'something.0.whatever'}}. it works. but I want the 0 to be a variable. If I have { $inc: { somestringvariable: 1 }} it assumes somestringvariable is the name itself. How can I fix this?
[08:43:32] <random_pr> sorry that should be { $inc: { 'something.0.whatever': 1}}
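Keys in a JS object literal aren't evaluated, so build the update document programmatically instead; a sketch with placeholder names:

    var i = 0;                                        // the index you want to vary
    var inc = {};
    inc["something." + i + ".whatever"] = 1;          // key is assembled at runtime
    db.coll.update({ _id: someId }, { $inc: inc });   // coll and someId are placeholders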
[10:08:17] <pamp> is there any way to get more than one field in an array with elemMatch?
[10:08:57] <pamp> something like this: db.net_top_flat.find({MoName:"UtranCell"},{MoName:1,P:{$elemMatch: {k:"_MeContext", k:"userLabel"}}})
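An $elemMatch projection can't do that: it returns only the first matching element, and a literal {k: "_MeContext", k: "userLabel"} collapses to a single key anyway. One workaround is to unwind and re-filter; a sketch against the same collection:

    db.net_top_flat.aggregate([
      { $match: { MoName: "UtranCell" } },
      { $unwind: "$P" },
      { $match: { "P.k": { $in: ["_MeContext", "userLabel"] } } },
      { $group: { _id: "$_id", MoName: { $first: "$MoName" }, P: { $push: "$P" } } }
    ])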
[12:55:36] <cheeser> the way i've seen it done is to save the output of rs.conf() in a local var in your shell, modify it to include the 2 new hosts, then save it back with rs.reconfig(doc)
[12:55:49] <cheeser> or, use mms automation and let it handle this for you.
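A sketch of that shell workflow (hostnames and _id values are placeholders; _ids must be unique within the set):

    var cfg = rs.conf();
    cfg.members.push({ _id: 5, host: "newhost1.example.com:27017" });
    cfg.members.push({ _id: 6, host: "newhost2.example.com:27017" });
    rs.reconfig(cfg)
    // for simple additions, rs.add("newhost1.example.com:27017") also works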
[15:14:01] <kippi> I am trying to run this, http://pastebin.com/MNRpVrK9 however it keeps returning "Unsupported projection option: ref_host", any pointers?
[15:14:50] <markand> is there any plan to make mongodb available for arm?
[15:16:26] <markand> it's quite unpleasant to see that it's only for little-endian
[16:01:50] <GothAlice> It uses BSON, which can represent a number of extended types, and is binary packed. (Much more efficient. Using JSON itself would be a rather bad idea.)
[16:02:16] <GothAlice> Efficient enough that you can read data out of packed BSON directly using some C struct or pointer math. It's pretty boss.
[16:02:28] <markand> I was reading that : http://docs.mongodb.org/manual/reference/command/dropDatabase/
[16:02:43] <markand> and when I saw the { dropDatabase: 1 } I was guessing what it was
[16:02:53] <GothAlice> Many of the APIs use JSON-like notation.
[16:03:17] <cheeser> (note that json would require quotes around dropDatabase)
[16:03:22] <GothAlice> The interactive shell is JS-powered, but the syntax is pretty much the same in Python, too. (You just need to wrap string keys in quotes, to better match real JSON vs. bare JavaScript notation.)
[16:04:41] <GothAlice> markand: The end result is that all of that JSON or JS-like data structure is converted to BSON for transmission, and converted back when viewing the response, at least, in the mongo shell. (As mentioned, when writing C you can avoid deserialization of the responses.)
[16:04:43] <markand> GothAlice, why is it a bad idea to use JSON for networking?
[16:04:57] <markand> IMHO, since it's plaintext, you don't have to deal with endianness
[16:05:08] <markand> of course it takes more bandwidth though
[16:05:42] <cheeser> and the more expensive parsing on either end.
[16:06:04] <GothAlice> markand: {'hello': "world", "name": "bob"} — it's huge. It's impossible to stream load safely. It's missing length indicators (both null termination *and* pascal string-style length value) making it much, much more difficult to parse. (BSON includes both C and Pascal-style string formats in one.)
[16:06:31] <GothAlice> Hell, even XML provides CDATA to let you efficiently (and without needing any processing) stream load the text content of a tag.
[16:07:02] <markand> I should have a look at bson then
[16:07:16] <cheeser> i hate when formats do that. either length encode or \0 terminate. both seems redundant.
[16:07:28] <GothAlice> JSON is heavily restricted in type, requiring nesting structures with magic keys/values to preserve types. It defines numbers the way JavaScript does, which is a bit laughable. (I.e. you only have 53 bits of integer accuracy, since all numbers are floating point!)
[16:07:31] <markand> I use JSON for networking messages, but it's not a very large application
[16:07:40] <NoOutlet> Here's a video about converting JSON to BSON. https://www.youtube.com/watch?v=BfWnSxNQpYs&t=207
[16:08:26] <GothAlice> cheeser: Both is quite useful. Null termination for easy use with C string functions, and a size for efficient network buffering. I.e. "read 4 bytes" (get length), "read N bytes" from the length. No need for chunked buffering at all. (I.e. read 4K, is there a null? No? Read another 4K… then eventually split the buffer on the null. What insanity is this?!)
[16:08:54] <GothAlice> All of that results in a much lower likelihood of buffer overflows, since all sizes are explicit.
[16:09:01] <GothAlice> (Which is the _only_ way to go for a network protocol.)
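That framing is easy to see in code; a minimal Node.js sketch, assuming buf is a Buffer of bytes already read off the wire (every BSON document begins with its own total size as a little-endian int32):

    var len = buf.readInt32LE(0);   // total document size, including these 4 bytes
    var doc = buf.slice(0, len);    // one complete BSON document, no scanning required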
[16:09:03] <markand> the problem with direct read() is that you're in trouble if the compiler packs your structures differently
[16:09:19] <markand> and again, you're stuck with endianness too
[16:09:41] <cheeser> at any rate, time for some Thai food.
[16:10:05] <GothAlice> markand: And no, the compiler will be packing no structures. BSON is a dynamic format. (Thus the pointer math bit.)
[16:10:58] <NoOutlet> I don't know about that, Alice. With all the little ARMs out there, isn't endianness becoming an issue again?
[16:11:31] <GothAlice> NoOutlet: Nope, since all these handy client API implementations will handle that nonsense for you, or should.
[16:12:07] <GothAlice> The last time I needed to actually worry about endianness on ARM was… on a 320x320 resolution color PalmPilot when I was writing assembly for it. ;)
[16:12:36] <NoOutlet> But endianness is the reason I can't install MongoDB on my Pi.
[16:12:53] <GothAlice> No, endianness is the reason you can't run a mongod server on your Pi.
[16:13:00] <GothAlice> The client code should operate fine.
[16:13:45] <GothAlice> There are also things like http://c-mobberley.com/wordpress/2013/10/14/raspberry-pi-mongodb-installation-the-working-guide/ — if you're willing to rip and tear a bit (i.e. dropping JS execution in the server completely), you can technically get it running.
[16:14:25] <NoOutlet> Yeah, but endianness is causing all that ripping and tearing to be necessary.
[16:14:57] <GothAlice> Well, no, Google is, technically. (V8…)
[16:15:28] <GothAlice> And all of their delicious JIT-ness.
[16:16:33] <GothAlice> Also, because you want your database to be fast, MongoDB memory maps files and writes C structures directly to them. This means your on-disk datafiles on ARM won't be portable to x86. For the most part, this doesn't matter at all.
[16:16:41] <GothAlice> (If you're running a production database on a Pi, you're doing IT wrong.)
[16:17:46] <markand> that's why we use duktape instead of v8
[16:18:06] <NoOutlet> Absolutely, I agree with that. The Pi is just my always on server at home and I am running little experiments on it and development projects.
[16:19:14] <Derick> most of the drivers run just fine on ARM though
[16:19:52] <GothAlice> And I already provided a "use at own risk" tutorial link on how to get the daemon running. (Can't over-stress what a bad idea this would be, but hey, it's worth an experiment. I love experiments! ;)
[16:22:31] <NoOutlet> Well, perhaps, I'll try it eventually. I sort of gave it up and just accepted that I could use SQLite for the specific thing I was doing.
[16:23:02] <GothAlice> Little known fact, SQLite is also schemaless.
[16:23:17] <GothAlice> Or at least, non-enforcing of it.
[16:25:09] <Derick> GothAlice: except for their auto increment ID, or something?
[16:25:09] <phutchins> Anyone have an idea why an app would lose its connection to a mongoc node and have to be restarted when you rs.remove("node:27017") a node from the replicaset?
[16:26:08] <phutchins> I've tested to try to replicate on a separate cluster but it works seamlessly as I would expect.
[16:26:15] <markand> sqlite is schemaless but you still need to create tables before inserting values
[16:26:43] <phutchins> I'm guessing it is something to do with the mongo configuration in the production cluster (I do not have direct access to it). I've tested against 2.4 and 2.6.
[16:27:59] <GothAlice> Derick: http://showterm.io/b6fdcf707c60f582af593 < pardon the waffling on INSERT, I haven't SQL'd in a long time. ;)
[16:29:07] <Derick> oh yeah, I know that bit. But I think the primary key is the exception
[16:44:43] <NoOutlet> Have you tested all of the different speeds and compressions with each one in your testing, GothAlice?
[16:45:49] <GothAlice> NoOutlet: Nope. I stuck with snappy in my tests. For a variety of reasons, I prefer defaults as a base for testing. Unfortunately, we're not going to be deploying WT at work, not until a few patch releases, I suspect.
[16:52:26] <GothAlice> phutchins: No idea as to your particular situation, but in general, if a node in a cluster goes away the client driver should automatically attempt to reconnect. :/
[16:54:13] <phutchins> GothAlice: yeah. That's my experience as well. And it does in my staging environment. I replicated once under a plain replica set but I think that may have been because I had a URI with multiple hosts and the driver handles that differently. I then set up sharding and updated to mongo 2.6 and didn't see the issue so I'd assumed that fixed the issue. I also increased timeouts for socket and connection.
[16:54:37] <phutchins> Then I went back and ran all of the tests again (all against a sharded cluster where the app is connecting to a config node) and I can't replicate at all.
[16:55:33] <phutchins> Wondering if it was really the timeout and since the config node in our staging environment has very little load it easily reconnects within the 1 second that is default for the driver. Maybe the production mongo config node is under higher load and is taking longer than the 1 second :-\
[16:55:45] <phutchins> That's the only thing I can think of though. We're scared to go back and test again because it brings down our app :).
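For reference, most drivers let you raise those timeouts through connection-string options, e.g. (hosts and database are placeholders):

    mongodb://host1.example.com:27017,host2.example.com:27017/mydb?connectTimeoutMS=30000&socketTimeoutMS=30000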
[16:56:20] <GothAlice> Unfortunately, without access to the logs in the production cluster, you're FUBAR when it comes to figuring out what's going on.
[16:56:54] <phutchins> I may have the timeout changes pushed out and then try it in production and make sure that we capture the mongo logs at the time.
[16:57:11] <phutchins> Also will add some more logging to our app for the connection errors...
[17:05:16] <phutchins> GothAlice: thanks for the input...
[17:49:45] <StephenLynx> that too. and those fake joins may consume way too many resources depending on how you query them.
[17:50:20] <GothAlice> The example I typically use for this relational comparison is that of a forum. In my MongoDB-backed forum software, replies to threads are stored within the thread itself. Threads store a reference to the forum/sub-forum they are in.
[17:50:57] <StephenLynx> IMO, a better example of where mongo falls short is using a join to get the name of a poster based on its id on the post.
[17:51:20] <GothAlice> (This arrangement was chosen because when you're looking at a thread, you pretty much only want details of the thread, including the replies. Deleting a thread deletes the replies naturally. Querying for all threads in a forum is a "forum_id==blah" situation, also quite efficient, even if it's a "faked" join.)
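A sketch of the document shape being described, with assumed field names:

    {
      _id: ObjectId("..."),
      forum_id: ObjectId("..."),   // reference to the containing (sub)forum
      title: "Some thread",
      replies: [                   // replies embedded in the thread itself
        { author: ObjectId("..."), text: "..." }
      ]
    }
    // all threads in a forum: db.threads.find({ forum_id: someForumId })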
[17:51:42] <GothAlice> That can be resolved by pre-aggregating (effectively caching) the values you care about.
[17:51:42] <StephenLynx> you would have to query for the posts, then query for the user ids, then place the user names on the objects based on the ids, all in the application.
[17:51:55] <GothAlice> That would be the naive approach, yes.
[17:51:58] <StephenLynx> yeah, but then you have lots of duplicated data
[17:52:07] <StephenLynx> and sometimes that is not feasible depending on the case.
[17:52:23] <StephenLynx> where you would use over 2 or 3 joins, for example
[17:52:41] <GothAlice> If that's the problem being run into, you're doing it wrong. ;)
[17:52:58] <StephenLynx> that's the kind of problem a relational db is good at.
[17:53:12] <StephenLynx> and sometimes you need to perform multiple joins.
[17:53:35] <GothAlice> Also, our definitions of "duplication" differ in this instance. Caches for presentation (i.e. including the user's name with their ObjectId reference) or pre-aggregation to implement the initial queries aren't duplication, they're optimization.
[17:54:07] <GothAlice> (And optimization 100% automatically handled by the ODM layer I use.)
[17:57:32] <StephenLynx> because your ODM performs these operations in an async fashion?
[17:57:38] <seb2point0> Hey everyone. Simple question about importing raw JSON data into a Mongoose Schema.model. How should my JSON file reference the ObjectId of the external item I am linking to? Tried these and none worked.
[17:59:10] <GothAlice> StephenLynx: I actually think it's happening at the bouncer level, for me.
[17:59:41] <GothAlice> (Happens in PMs, too, which is even more annoying.)
[18:02:36] <GothAlice> StephenLynx: We actually use a variety of methods for "related data lookup". Caches, with or without preemptive updating, though the cache is typically used for HTML fragments like links to records and such. (Things that require more than one value out of the target record.) CachedReferenceField for cases where we need join-like queries. (This also registers signal handlers to keep the values up-to-date.) The majority of those updates
[18:02:36] <GothAlice> are async, and the freshness of the data doesn't overly matter.
[18:03:45] <seb2point0> Here we go again :) Simple question about importing raw JSON data into a Mongoose Schema.model. How should my JSON file reference the ObjectId of the external item I am linking to? Tried these and none worked. http://pastebin.com/CM0WN2LH
[18:04:50] <GothAlice> What you have there isn't JSON, since ObjectId isn't a thing in that serialization format (it's JavaScript, not JSON). ObjectIds are additionally not strings, so the last version would give you some very unintended results.
[18:05:15] <GothAlice> http://docs.mongodb.org/manual/reference/mongodb-extended-json/#data_oid is the encoding you're looking for for ObjectIds.
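Per that page, the extended-JSON encoding looks like this (the hex value is a made-up example):

    { "item": { "$oid": "55153a8014829a865bbf700d" } }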
[18:05:59] <seb2point0> Actually it’s not really json but rather a JS object variable inside a script
[18:06:47] <seb2point0> So I should use { "$oid": “d883k8ddkd…” } then?
[18:07:15] <GothAlice> OTOH, Mongoose. *shakes a fist* You're not using JSON at all, really. The JSON encoding might work, or it might not.
[18:07:45] <GothAlice> (Googling "mongoose mongodb extended json" without quotes returns zero useful results for me.)
[18:07:49] <bbclover> so does mongodb take up a lot of space? I was reading this: http://blogs.enterprisedb.com/2014/09/24/postgres-outperforms-mongodb-and-ushers-in-new-developer-reality/
[18:08:04] <GothAlice> Well, that title totally isn't link bait.
[18:09:45] <GothAlice> bbclover: https://blog.serverdensity.com/does-everyone-hate-mongodb/ — there will always be negative comparisons, and most of them should be taken with a giant grain of salt, as the majority of negative postings utterly failed to read the documentation, or use MongoDB in anything remotely like a sane way.
[18:11:26] <GothAlice> "MongoDB consumed 33% more the disk space" "PostgreSQL community member Alvaro Tortosa found that the MongoDB console does not allow for INSERT of documents > 4K." — bbclover: The article you linked is too old to be useful. The record limit hasn't been "4K" ever, and hasn't been "4MB" in a long, long time.
[18:16:41] <epx998> got a question, I'm super new to mongodb and have been asked to document doing some migration - question is
[18:16:56] <epx998> do i use the mongodb cli for setting something like a replica set?
[18:17:08] <GothAlice> bbclover: "Let's dump this relational dataset on a non-relational database, and see how crappy its performance on a task it wasn't intended for will be! Yeah!" < Not a good premise for comparison. A comparison I *would* like to see is this replicated relationally: http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework — this includes all relevant sizes, query performance, etc.
[18:18:14] <StephenLynx> bbclover mongo used to pre-allocate space, but since patch 3.0 it doesn't do it anymore, so disk usage fell sharply, but this is just a guess, I am not sure.
[18:18:18] <GothAlice> epx998: If one is unfamiliar with the management processes (http://docs.mongodb.org has many, many tutorials) you could use https://mms.mongodb.com/ which is free for < 8 servers. It can manage MongoDB user credentials, replication, and even sharding setups for you.
[18:18:43] <GothAlice> StephenLynx: It still does it if MongoDB detects the underlying disks are slow, AFAIK.
[18:19:45] <GothAlice> epx998: But otherwise yes, management can be done via an application you write for the purpose (like MMS) or from the interactive mongo shell.
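For the shell route, a minimal replica-set bootstrap looks roughly like this (hostnames are placeholders, and each mongod must already be running with --replSet rs0):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "db1.example.com:27017" },
        { _id: 1, host: "db2.example.com:27017" },
        { _id: 2, host: "db3.example.com:27017" }
      ]
    })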
[18:36:21] <blottoface> Is it possible to query the oplog of replica set members via mongos?
[18:39:00] <GothAlice> blottoface: AFAIK you'll need to connect the application querying the oplog directly to one of the members, because the oplog is held in the 'local' database (which isn't replicated/sharded).
[18:40:08] <GothAlice> See the first paragraph of: http://docs.mongodb.org/manual/reference/local-database/
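A sketch of reading the oplog from the shell, connected straight to a member (hostname and namespace are placeholders):

    var conn = new Mongo("member1.example.com:27017");
    var oplog = conn.getDB("local")["oplog.rs"];
    oplog.find({ ns: "mydb.mycoll" }).sort({ $natural: -1 }).limit(5).forEach(printjson);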
[18:41:21] <epx998> can a replica set be disabled without stopping mongodb? The docs im reading all say to shutdown the server then remove the replica set.
[18:42:33] <GothAlice> epx998: Yeah, demoting is an offline process due to the required configuration or command-line argument changes.
[18:58:59] <GothAlice> Step 1 indicates clearly that the only node to shut down is the one you are removing. This won't impact the cluster if the node being removed is a secondary, and will call for immediate election if the node being removed was primary. (In-progress requests will be cancelled and the client drivers instructed to reconnect after election.)
[19:00:07] <GothAlice> Note that if your cluster ever shrinks to a single node, you're in for a bad time. (A member that can't see a majority of the set steps down, so the lone survivor of a larger replica set can't be primary.)
[19:00:36] <blottoface> Is it safe to query the oplog from a secondary member of a replica set? I'm not too concerned with it being realtime up to date.
[19:18:17] <saml> how do I do group by and limit only 3 docs per group?
[19:18:52] <saml> db.docs.find({tags:/foobar/})... wanna group the docs that are tagged with /foobar/ (contains foobar) .. but only want to see 3 docs per tag
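There's no per-group limit operator in the pipeline as of 3.0, so one workaround is to group per tag and trim client-side; a sketch with assumed collection and field names:

    db.docs.aggregate([
      { $match: { tags: /foobar/ } },
      { $unwind: "$tags" },
      { $match: { tags: /foobar/ } },   // keep only the matching tags after the fan-out
      { $group: { _id: "$tags", docs: { $push: "$$ROOT" } } }
    ]).forEach(function (g) {
      printjson({ tag: g._id, docs: g.docs.slice(0, 3) });
    });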
[19:39:58] <mdavid613> hey all, the readWrite role should allow a user to create an index on a collection/db that they have access to, correct? I'm running into an issue with an x509 auth mongorestore and index creation on 2.6.8. I'm wondering if there is something I'm doing incorrectly?
[19:41:22] <mdavid613> I'm using the following cmdline to handle the restore and I'm getting a permissions error creating an index. Perhaps the x509 mongorestore call isn't authenticating correctly?
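For reference, an x509 mongorestore invocation generally looks something like this (host, paths, and certificate subject are placeholders):

    mongorestore --host db.example.com --ssl \
      --sslPEMKeyFile /etc/ssl/client.pem --sslCAFile /etc/ssl/ca.pem \
      --authenticationMechanism MONGODB-X509 \
      --authenticationDatabase '$external' \
      -u "CN=restore,OU=ops,O=Example" \
      dump/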
[20:04:00] <arussel> anyone knows how to run function in reactivemongo ?
[20:37:15] <arussel> eval is deprecated, what should you use instead ?
[20:38:12] <Boomtime> depends on what you're trying to do, what are you trying to do?
[20:38:29] <mdavid613> has anyone done a successful x509 mongorestore? I have a user with restore permissions and I'm still getting an authorization error for creating an index
[20:38:38] <arussel> Boomtime: run some javascript code that aggregate 3 collections
[20:39:40] <arussel> Boomtime: this would have been a stored procedure in a typical relational db
[20:42:24] <Boomtime> arussel: you are going to have to re-work your use case, you might be able to use 3 aggregations outputting to a collection which you then run another aggregation on
[20:43:50] <arussel> it is a kind of join on the 3 collections, having them all in a single collection won't help much joining them
[20:44:43] <arussel> I don't duplicate data because I need this to be done only once a week.
[20:46:19] <cheeser> aggregations don't support appending to/updating a collection yet.
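Given that (and that $out replaces its target wholesale), Boomtime's staging idea can be sketched as a manual copy into a scratch collection; names and the join key are assumptions:

    ["collA", "collB", "collC"].forEach(function (name) {
      db[name].find().forEach(function (doc) {
        delete doc._id;          // let the scratch collection assign fresh ids
        doc._src = name;         // remember which collection each doc came from
        db.scratch.insert(doc);
      });
    });
    db.scratch.aggregate([
      { $group: { _id: "$key", docs: { $push: "$$ROOT" } } }  // "key" = assumed join field
    ])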
[20:47:01] <arussel> I understand that most of the time you shouldn't use eval, but it is very useful sometimes
[20:47:43] <arussel> how are you supposed to run daily maintenance job without it ?
[20:48:07] <GothAlice> arussel: Write a .js file, run it using "mongo somefile.js"
[20:48:37] <arussel> GothAlice: exactly what I want to do, but from my driver so using eval
[20:51:41] <GothAlice> In the case of 'mongo somefile.js' your mongod server doesn't even need JS enabled. The JS is evaluated in a sandbox environment within the "mongo" shell itself.
[20:51:47] <Boomtime> arussel: if you can "mongo somefile.js" then do that, it is totally supported
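A sketch of that pattern with placeholder names; the JS runs in the shell's own sandbox, not on the server:

    // weekly_maintenance.js
    var mydb = db.getSiblingDB("mydb");
    mydb.sessions.remove({ expires: { $lt: new Date() } });
    // invoke from cron on any host that can reach the db:
    //   mongo db.example.com:27017/mydb weekly_maintenance.js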
[20:55:56] <arussel> I'm clearly not enthusiastic about having to maintain code in my app and on the db server
[20:57:32] <GothAlice> arussel: Have you measured the performance difference between your previous eval(), running it in a mongo shell (which doesn't have to be on the db server!), or running it as a client application?
[20:57:53] <arussel> GothAlice: I still don't get it. If you don't have access (firewall), you can't do anything. If you have access, I'm fucked anyway
[20:58:09] <GothAlice> arussel: You have a client application that connects to the database, right?
[20:58:20] <GothAlice> Then you can "connect" to mongod running there.
[20:58:23] <nickenchuggets> I'm trying to install mongodb using these instructions: http://docs.mongodb.org/master/tutorial/install-mongodb-on-debian/?_ga=1.251886735.765935461.1426537367 and when I get to the part where I execute sudo apt-get install -y mongodb-org it says: "E: Unable to locate package mongodb-org"
[20:58:33] <GothAlice> Which means the 'mongo' shell can connect from your application host to the database host.
[20:58:46] <GothAlice> I.e. no need to store code on the DB server at all.
[20:59:52] <arussel> so instead of eval, install mongo on the app server and do a system call to mongo to run the js ?
[21:00:44] <mdavid6131> nickenchuggets: did you add the mongo repo according to the first 3 steps on http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu ?
[21:01:03] <nickenchuggets> mdavid6131: I did, I followed each previous step.
[21:01:28] <GothAlice> arussel: The mongo client, yes. (Might be separately installable, not sure.)
[21:01:39] <mdavid6131> and the sources on your system have the mongo repos listed? /etc/apt/sources.list.d/
[21:02:08] <GothAlice> arussel: However, you could also just perform the operations your JS does within your application. (It would be functionally identical to running 'mongo somescript.js' on the app server.)
[21:02:39] <GothAlice> (Just without the almost-as-evil-as-eval "system"/"popen" call. ;)
[21:03:04] <arussel> GothAlice: it would need far too many db calls as I'm doing a join on 3 collections
[21:03:21] <arussel> that's why I wanted it run in the db
[21:04:00] <GothAlice> I perform some pretty long-running database queries in my migrations. (Things that can run for 20 minutes in a single tight loop.) I have never encountered "too many db calls."
[21:04:01] <mdavid6131> nickenchuggets: have you tried running apt-get with -t mongodb-org-3.0 ?
[21:04:23] <nickenchuggets> mdavid6131: I have not, but I did an aptitude search mongodb and just the standard, outdated packages show up
[21:04:37] <nickenchuggets> no mongodb-org, no mongodb-org-3.0, etc.
[21:05:20] <nickenchuggets> mdavid6131: I can see the mongodb sources in apt-get update
[21:05:47] <nickenchuggets> mdavid6131: just did another aptitude search after doing an apt-get update, and same thing, no mongodb-org, just mongodb (standard debian packages, which are old)
[21:06:08] <arussel> GothAlice: thanks for your input, I'll give it some thought
[21:26:25] <nickenchuggets> aaaand it doesn't recognize keyboard input
[21:26:46] <GothAlice> … sounds like you may have larger issues at play.
[21:28:12] <nickenchuggets> yep, I need to enable some virtualization option in my BIOS
[21:30:13] <GothAlice> Heh. Years ago I was provisioning some hardware to use as dom0 in a new cluster, and when I realized the machines (which we direct shipped to the datacenter) didn't have VTx enabled out of the factory I nearly flipped a desk. ^_^
[21:31:31] <mdavid6131> does anyone know why a user with the restore role would have no problem dropping collections, but is getting a permissions failure using createIndex?
[21:34:31] <GothAlice> mdavid6131: Are you attempting to restore any system DB collections that aren't included in the list described in the second paragraph here: http://docs.mongodb.org/manual/reference/built-in-roles/#restore
[21:34:43] <GothAlice> Pretend that was a question. ;)
[21:35:24] <mdavid6131> nope, I'm purely restoring a non-system db
[21:35:46] <mdavid6131> with the no create index option the same user is able to drop all of the appropriate collections
[21:36:13] <mdavid6131> but then when I run it again I'm getting "not authorized to create index"
[21:42:31] <mdavid6131> GothAlice: I appreciate you're asking. Is there a good avenue for support on this if I'm using this on a non-enterprise version?
[21:42:49] <mdavid6131> I'm almost through all of the x509 conversion and this is the last bit
[21:43:22] <GothAlice> A JIRA ticket, even for non-enterprise, may help. Be sure to include the command you used to generate the dump, the command you are using to restore, and the server log for the restore time period.
[21:43:48] <GothAlice> (Who knows, you may have found a real problem that affects others, too. ;)
[22:03:39] <pasichnyk> I have a 2.6.8 cluster that i'm not quite ready to migrate to 3.0 yet, but I just fired up a few readonly/non-voting remote replicas for reporting, running 3.0. Auth is failing against these after they've fully synced up. Is this mixed version scenario not supported, or is there a step i'm missing here?
[22:04:04] <GothAlice> It's certainly not recommended.
[22:05:02] <GothAlice> (And the rest of that page.)
[22:08:52] <pasichnyk> Ok, i was trying to take advantage of wiredTiger compression in the short term on these reporting replicas, but it sounds like i need to just bite the bullet and upgrade the rest of the boxes in the cluster to 3.0 as well? If i do this, can i mix mmapv1 and wiredtiger between different hosts, or does it all need to match?
[22:09:04] <GothAlice> mdavid6131: You may also wish to browse that page; I didn't catch which version of MongoDB you are having issues with. http://docs.mongodb.org/v3.0/release-notes/3.0-compatibility/#sslv3-ciphers-disabled (and previous sections) deal with some SSL-related changes.
[22:09:40] <GothAlice> I would keep them the same within a replica set, due to how oplogs need to get replayed.
[22:10:09] <GothAlice> But if you also have a sharding overlay on that, the different replica set shards could differ, so you could use sharding keys to direct records at certain back-ends that way.
[22:10:22] <pasichnyk> but the oplog entries are simply commands though, correct? So in theory they should replay fine across the different storage engines?
[22:10:37] <pasichnyk> i do not have any sharding currently
[22:11:44] <pasichnyk> @GothAlice, the replica synced up, and is in SECONDARY status (priority:0, votes:0, as i set it up), so all the data appears to have applied fine, just the auth that's failing... :/
[22:12:21] <pasichnyk> on a positive note, i'm seeing 9:1 storage size diff for my dataset. :)
[22:13:42] <Progz> Hello, I've got a problem on my mongodb server. My load average is increasing a lot!
[22:14:09] <Progz> I am at 1430 of load average....
[22:14:35] <Progz> I have 1 database with 1200 collections.
[22:14:47] <GothAlice> pasichnyk: The link I provided indicates a removal of a legacy authentication model, and a migration to a new authentication model. You may be able to switch that secondary's authentication mechanism back to MONGODB-CR if needed. http://docs.mongodb.org/v3.0/release-notes/3.0-scram/
[22:15:14] <pasichnyk> @GothAlice this replicaset is 133gb on disk on 2.6, and 15gb on disk for 3.0
[22:15:42] <GothAlice> Progz: Is your MongoDB open to the internet? (I.e. do you use authentication? VLAN?)
[22:15:54] <GothAlice> pasichnyk: http://www.devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework may be of interest to you.
[22:16:08] <GothAlice> (Noting the data/index/disk size comparisons between approaches.)
[22:16:15] <pasichnyk> @GothAlice, thanks, i'll take a look at that blog.
[22:16:28] <Progz> GothAlice: yes auth is at true in the mongod.conf and I see connection only from my webserver.
[22:17:06] <GothAlice> Progz: Good to check that. Are you able to get in via SSH to execute commands?
[22:17:15] <pasichnyk> @GothAlice, on the SCRAM vs CR, i was under the impression that it would still work with the older user models in mongo. I had recently upgraded the primary cluster boxes from 2.4 to 2.6. Is it possible i'm running on some 2.4 specific users on 2.6, that isn't supported on 3.0?
[22:17:16] <Progz> And I can't connect without a login/password. I don't understand why my mongodb server is not consuming RAM... only 500 MB of my 15 GB
[22:18:09] <GothAlice> pasichnyk: Correct, 2.4 functionalities were deprecated in 2.6, and removed in 3.0 where possible.
[22:18:25] <GothAlice> So, you *might* be able to mix 2.6 and 3.0, but you certainly can't mix 2.4 and 3.0.
[22:19:24] <pasichnyk> @GothAlice, yeah, i don't have any 2.4 anymore. I'm just wondering if there is some auth upgrade i need to run on my 2.6 box, to get it to migrate from anything 2.4 specific to 2.6, so that the 3.0 replica actually works. Does that make sense?
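For reference, the auth-schema migration is a real admin command, run once against the primary after all members (and drivers) are ready for the new scheme:

    db.adminCommand({ authSchemaUpgrade: 1 })
    // the current schema version is recorded in admin.system.version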
[22:20:37] <GothAlice> pasichnyk: Also, because your data size would seem to exceed possible RAM size, and you are wanting to use WT in a production environment, I must give you a few links. May I PM you so as to not spam the channel?
[22:22:38] <Progz> GothAlice: I don't understand why mongodb is not using all the ram possible. My database has a size of 150 GB and the server has 15 GB of RAM.
[22:22:58] <GothAlice> Progz: What else is running on that machine?
[22:23:29] <GothAlice> Also, do you have any swap space configured? Swap is important for the Linux Kernel virtual memory manager in order for it to "confidently" hand out larger chunks of RAM.
[22:23:32] <Progz> basic stuff : ssh, fail2ban,rkhunter ... and mongodb
[22:24:07] <Progz> GothAlice: no swap on this machine.... It's an AWS VM... they give it to us with no swap...
[22:26:32] <GothAlice> However, if your IO is going *completely* bonkers, and you can't figure out why, if your data is EBS-backed and the back-end controller on Amazon's side died for any reason, that'd explain it. :/
[22:28:59] <Progz> GothAlice: don't understand the word 'bonkers', I'm french.
[22:29:27] <GothAlice> Crazy, unusual, or outrageous.
[22:30:18] <GothAlice> But you'd need to get in and run something like iotop to see where your IO time is going. (How many processes in the waiting state?)
[22:31:29] <Progz> netstat gives 900 connections in CLOSE_WAIT and 150 in ESTABLISHED. I am going to look at the IO stats
[22:32:54] <GothAlice> … that's a fair number in CLOSE_WAIT. That's 900 closed connections in the last 240 seconds or 3.75 connections per second, averaged.
[22:33:28] <GothAlice> Inbound or outbound? On what port?
[22:40:48] <GothAlice> No, something to let your app hold on to MongoDB connections instead of connecting, authenticating, then throwing it away on each request. :/
[22:42:56] <RingZer0> I am using ObjectRocket for a mongo host. I get this when I try show databases: "not authorized for command: listDatabases on database admin"
[22:43:31] <GothAlice> I know nothing of ObjectRocket, but it sounds like you haven't authenticated.
[22:43:37] <Progz> ok GothAlice, which one do you advise me ?
[22:45:59] <GothAlice> RingZer0: Ah, you're not authenticated as someone allowed to see other databases? Look for "listDatabases" on http://docs.mongodb.org/manual/reference/built-in-roles/ and make sure your user has one of the required roles. (Or a custom role that provides that permission.)
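For example, an admin user could grant a role carrying that privilege (username is a placeholder; on a hosted service like ObjectRocket, the provider's control panel may have to do this):

    db.getSiblingDB("admin").grantRolesToUser("ringzer0", [ { role: "readAnyDatabase", db: "admin" } ])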
[22:46:39] <GothAlice> Progz: Hmm, MongoDB should do that already, if you're running PHP inside Apache. :/
[22:47:20] <Progz> GothAlice: ok... maybe it's because I am hosting a lot of website...
[22:47:35] <Derick> Progz: are you talking to a replicaset?
[22:58:46] <GothAlice> Worth a try, I guess. However, if he didn't… and the internet can connect, that's the first thing anyone will try.
[22:59:10] <Progz> ok so passwd is not working ! so...
[22:59:39] <Progz> I can disable auth , it's not a production service for the moment
[22:59:39] <GothAlice> No password = "Thank you for donating your server to SKYNET. Your contribution will be remembered by your new robot overlords." ;^P
[23:00:06] <Progz> I try no password ^^ it's not working :)
[23:01:01] <GothAlice> Progz: Disabling auth is just as bad as no password, but to do that you'd need to reboot anyway. The MongoDB logs afterwords should include slow query information by default I believe.
[23:01:27] <GothAlice> (s/reboot/restart the service/ — but, wau, dat load average)
[23:01:45] <Progz> GothAlice: this mongo server is only requestable from the webserver
[23:01:58] <Progz> and I did it just for your command ^^ not so mad after all
[23:02:37] <GothAlice> If you restart MongoDB, the connections will go away, and we'll lose the information we're wanting to examine.
[23:04:08] <Progz> is it the time for the request to be executed ?
[23:04:57] <GothAlice> … yeah. Your disk back-end is toast. You're going to want to reboot the VM, extra-double-super-check the volume, let MongoDB restore its journal, if it can, and have backups ready if it can't or the volume is otherwise damaged.
[23:17:57] <nickenchuggets> If only I had the money.
[23:18:15] <GothAlice> epx998: Having a missing or empty admin database would be somewhat catastrophic if one were wanting to use authentication.
[23:18:38] <epx998> just checking - someone asked me to look into the error and i saw the db was empty.
[23:19:46] <nickenchuggets> I guess I'll just install 32bit mongodb
[23:19:54] <epx998> nickenchuggets: I use 2 msi am1i boards as my xen server hosts, work pretty well. whole host costs maybe $300
[23:19:56] <nickenchuggets> until I upgrade my motherboard
[23:20:18] <nickenchuggets> I'll be upgrading my motherboard in the nebulous future anyway
[23:20:22] <nickenchuggets> this is an ancient board
[23:21:52] <pasichnyk> I have 4 reporting replicas to initialize, using wiredtiger storage (primary is on mmapv1). Is it generally faster to sync one, and then rsync the files to the others to prime them, or just let them all sync simultaneously?