[01:20:09] <GothAlice> Hmm. Server load average is an indicator of a few things, and one place to start. If high, check the io stats and kernel log to see if your storage layer is failing.
[01:21:56] <GothAlice> (There's a higher-than-normal probability of storage issues on Amazon EC2 using EBS volumes, for one troublesome setup.)
[01:36:28] <jfhbrook> GothAlice: How high is high? How do you check io stats? I think I can google kernel log or at least get help with that
[01:37:44] <jfhbrook> what if the machine doesn't have iotop?
[01:38:47] <jfhbrook> yeah, I'm on the primary, no iotop
[01:39:22] <jfhbrook> It's not under the full load I'd expect right now, but worst case I'll at least know how to check for these things when I *do* put it under load, right?
[01:39:36] <jfhbrook> this might in fact be using EBS
[01:40:44] <GothAlice> jfhbrook: "uptime" gets you the current load average, amongst a few other stats.
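A quick sketch of those checks on a typical Linux host (iostat comes from the sysstat package; vmstat is the usual fallback when sysstat isn't installed):

    uptime              # load averages over the last 1, 5, and 15 minutes
    iostat -x 5         # per-device utilization and await times, sampled every 5 seconds
    vmstat 5            # no sysstat? watch the 'wa' (iowait) column here instead
    dmesg | tail -50    # recent kernel messages, e.g. disk/EBS errors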
[01:41:42] <jfhbrook> and it's a little late to be running experiments
[01:43:18] <jfhbrook> oh yeah, the db I'm loading right now (different one, seeing if I get repro on it) is like load average: 0.00, 0.01, 0.05
[01:43:27] <jfhbrook> so that doesn't sound like it
[01:43:34] <jfhbrook> what else is a good idea to check?
[01:43:48] <jfhbrook> I already checked to make sure I wasn't chewing through ports somehow, that looks fine
[01:44:38] <jfhbrook> I can be more specific if you want, I know more things but I'm trying to figure out how to think like somebody that has to fix mongo
[01:45:11] <jfhbrook> I'll follow up on that EBS idea tomorrow though
[01:46:50] <Boomtime> @jfhbrook: when you say "stops responding", do you mean it stops accepting writes, or just can't be logged in to at all (even with the mongo shell)?
[01:47:31] <jfhbrook> Boomtime: reads and writes from my api server either take a very long time or stop responding outright
[01:47:49] <jfhbrook> Boomtime: seeing if a parallel client also has problems around the same time is a good idea though
[01:47:54] <Boomtime> ok, which might just mean "got slow"
[01:48:03] <jfhbrook> order of minutes per query slow
[01:48:07] <Boomtime> have you checked the mongod.log file for around that time?
[01:49:08] <Boomtime> the interesting things probably occur before that
[01:49:09] <jfhbrook> then my API services disconnect and reconnect around the time my API service logs say "can't connect to stg-db01.mycompany.com"
[01:49:36] <jfhbrook> what kind of things are interesting?
[01:51:02] <jfhbrook> I saw what looked like mongodb queries and writes and not much else, but I also don't know what to look for
[01:52:42] <Boomtime> well, if your queries/writes just naturally take a little time, and your application just keeps sending more, and the queue backlog slowly grows, then eventually you'll get timeouts
[01:53:10] <jfhbrook> I'd see that in MMS somehow though, right?
[01:53:13] <Boomtime> thus, the "interesting" thing to look for will be similar looking queries/writes that are slowly taking longer each time they turn up
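A rough sketch of how to surface those slow operations: by default mongod logs any operation slower than 100ms, so grepping mongod.log for lines ending in several hundred ms is one route; the profiler below is another, assuming its overhead is acceptable on a staging box:

    // mongo shell, run against the busy database: capture ops slower than 200ms
    db.setProfilingLevel(1, 200)
    // later, inspect the slowest captured operations
    db.system.profile.find().sort({ millis: -1 }).limit(10).pretty()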
[10:26:10] <KekSi> hi there -- does anyone have a clue whether it's possible to have multiple ssl certificates in a single PEMKeyFile (.pem) for mongod? so that IP validation doesn't fail when i connect to my cluster on the local machine using 127.0.0.1 instead of the external IP?
[10:27:53] <KekSi> you generate the .pem file by concatenating the cert and key into it, so from that side it should be fine -- but does mongodb read beyond the first cert and key?
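Not an authoritative answer, but the usual way to make one certificate valid for both the hostname and 127.0.0.1 is a subjectAltName list rather than multiple certs in the PEM; a hedged sketch with made-up filenames:

    # openssl config excerpt used when generating/signing the cert
    [ v3_req ]
    subjectAltName = DNS:db01.example.com, IP:203.0.113.10, IP:127.0.0.1

    # the PEMKeyFile is still just the signed cert plus its key, concatenated
    cat mongod.crt mongod.key > mongod.pem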
[10:38:31] <Garito> seems everyone is sleepy today
[11:54:07] <einyx> the init script for mongodb is badly broken
[11:54:21] <einyx> do you know if there is a fix on the way or something?
[11:58:30] <Garito> @einyx if you are using ubuntu they have changed the boot process to systemd or something
[11:58:37] <Garito> perhaps you need to change too
[13:07:13] <jacksnipe> So I have users who have friends. What's the best way to store this? I'm concerned about putting the "friendship" in both user documents because I can't update both atomically...
[13:12:39] <StephenLynx> IMO, that asks for a relational db. how central is this feature?
[13:12:54] <jacksnipe> Yeah that's what I was thinking too
[13:13:15] <StephenLynx> if you really need what mongo does, you could use two databases in your system.
[13:13:26] <jacksnipe> yeah I've been considering that
[13:13:38] <jacksnipe> This particular feature isn't central, it's just a mirror of our users' facebook friends graph so we don't have to make a million api calls per session
[13:13:59] <StephenLynx> in that case, I would have an array of friends in the user document.
[13:14:20] <StephenLynx> since you can always fetch them from facebook, it's ok if they desync sometimes.
[13:14:21] <jacksnipe> yeah I guess inconsistency isn't a huge deal when we can check against fb to reconstruct the graph whenever
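A minimal sketch of that layout (collection and field names are assumptions): keep the Facebook friend ids as an array on the user document and use $addToSet so repeated syncs stay idempotent.

    // add one friend id; $addToSet avoids duplicates when the sync runs again
    db.users.update({ _id: userId }, { $addToSet: { fbFriendIds: friendFbId } })
    // or just overwrite the whole array after a fresh fetch from Facebook
    db.users.update({ _id: userId }, { $set: { fbFriendIds: idsFromFacebook } })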
[13:14:22] <deathanchor> StephenLynx, but jacksnipe doesn't want to update two docs
[13:14:42] <deathanchor> I would use another collection for managing friendships
[13:14:44] <jacksnipe> well the reason I don't want to update 2 docs is to avoid data inconsistency when my system inevitably gets fucked up in the middle of an operation
[13:14:57] <StephenLynx> which as we concluded, can be easily recovered.
[13:15:08] <jacksnipe> right so it should be pretty safe
[13:15:40] <jacksnipe> I considered the "linkage table", deathanchor, I just don't know how efficient it would be (read: that's what I'd be using pgsql for, not mongo)
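For comparison, a sketch of the separate-collection layout deathanchor is describing (names assumed): each friendship is one small document, so adding or removing an edge touches a single doc, and a compound index keeps "friends of X" lookups cheap.

    db.friendships.ensureIndex({ userId: 1, friendId: 1 }, { unique: true })
    db.friendships.insert({ userId: userIdA, friendId: userIdB })
    db.friendships.find({ userId: userIdA })    // all friendships for one user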
[13:19:15] <jacksnipe> I'm willing to play catch up on maintaining consistency if it means that my queries are much, much faster (and that consistency CAN be rebuilt in big failure cases)
[13:19:40] <deathanchor> basically it becomes a big headache for me when activity writes are blocking me from creating/changing identity data or from reading it.
[13:20:06] <StephenLynx> but yeah, you did put too much stuff in a single document.
[13:20:36] <deathanchor> I blame lack of crystal ball for seeing the future issues :D
[13:20:55] <StephenLynx> foresight is not magical.
[13:21:06] <StephenLynx> it comes with experience and dedication to software design.
[13:21:22] <StephenLynx> but when the guy doesn't give a hoot about the system, he will just #YOLO it.
[13:21:38] <deathanchor> hehe... well if I had foresight I would have contacted the original designers before I started working here to make my future job easier :D
[13:22:14] <deathanchor> nah if I could see the future I would have invested in some stocks or lottery tickets :D
[13:22:58] <deathanchor> jacksnipe: Just forewarning that you should account for how often the pieces of data in your docs are written to and how often you read from them.
[13:23:27] <deathanchor> does the latest mongo version still have reads blocked while doing writes and vice versa?
[14:02:42] <bjorn248> hello there, I understand a little bit about mongo storage (mainly the power of 2 allocation) but can someone explain this cyclical allocation and clearing of storage? https://www.dropbox.com/s/3rw13ck77vhs065/Screenshot%202015-06-18%2009.51.44.png?dl=0. You can see a 2GB allocation happen on 05/10 and 06/07, I'm just curious why it's cycling like that in between
[18:54:32] <saml> using j=true and fast writes and queries still fail
[18:54:49] <saml> Requiring journaled write concern in a replica set only requires a journal commit of the write operation to the primary of the set regardless of the level of replica acknowledged write concern.
[18:55:03] <saml> meh i guess there's no way to test integration
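For reference, a sketch of a write that waits on both the journal and a majority of the replica set (shell syntax; whether that helps depends on where the failing queries are routed):

    db.things.insert(
        { name: "integration-test-doc" },                              // hypothetical document
        { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
    )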
[19:47:40] <vruz> hello all, any idea when the announced 3.2 improvements are going to be available to the general public? I'm mostly interested in document integrity, and curious about "dynamic lookups", whatever those are.
[19:50:16] <brotatochip> hey guys, can anybody tell me if there is a built in role that grants access to the db.getReplicationInfo() method or, if not, help me to write a custom one for this?
[19:55:37] <brotatochip> This is what I've tried: db.createRole( { role: "testtRole", privileges: [ { resource:{ db: "admin", collection: "" }, actions: [ "getReplicationInfo" ] } ], roles: [] } )
[19:55:55] <brotatochip> It throws this error: Error: Unrecognized action privilege string: getReplicationInfo at src/mongo/shell/db.js:1347
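As far as I can tell, db.getReplicationInfo() is a shell helper that just queries the local database's oplog, so the privilege it needs is find on local rather than a dedicated action; a hedged sketch (role and user names are made up):

    use admin
    db.createRole({
        role: "oplogReader",
        privileges: [ { resource: { db: "local", collection: "" }, actions: [ "find" ] } ],
        roles: []
    })
    db.grantRolesToUser("monitoringUser", [ { role: "oplogReader", db: "admin" } ])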
[20:57:12] <greyTEO> cheeser, did they move it to a new async driver?
[20:57:24] <greyTEO> or are the docs just making it look like that.
[21:00:06] <tty1> Can someone give me some help with this, I posted it to the mailing list but no one responded: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/mongodb-user/Cy8o4iHG6CU/IMhLDAT-pE8J
[21:00:13] <tty1> By the way, i recently posted this project. It's for running a MongoDB instance inside maven as a plugin (requires no setup or even installation of MongoDB beforehand).. mostly to set up integration tests: https://github.com/Syncleus/maven-mongodb-plugin
[21:03:48] <cheeser> tty1: 1. 'sup? 2. i laughed when i saw your posting. here's my project: https://github.com/evanchooly/bottlerocket
[21:04:12] <cheeser> intended for use with junit/testng for spinning up clusters before test runs
[21:04:22] <cheeser> but suitable for any JVM app scenario, really.
[21:04:41] <tty1> cheeser: i wouldnt be against teaming up if you feel our projects might do better together?
[21:05:41] <tty1> cheeser: i mostly just wanted an easy way to get a mongodb instance running for testing.. but it isn't really oriented towards clusters.. it does support ReplSetInitiate and --replSet but that's about it
[21:06:42] <tty1> cheeser: Also, if you deal with graph databases at all you might want to check out this other project of mine, it can use mongodb as a backend too. It is basically an ORM/OGM oriented towards graph databases and other backends: https://github.com/Syncleus/Ferma
[21:24:12] <whaley> I, at one point when I was still working on a mongo project, had that hooked in to my test harness for integration tests. I went the maven route, but that mostly sucked for ad-hoc running of tests inside of IDEs.
[21:25:02] <tty1> whaley: that is indeed the dependency that makes my plugin work
[21:25:20] <tty1> whaley: pretty much just configures and runs flapdoodle from a maven plugin, and shuts it down when the lifecycle is done
[21:25:22] <whaley> that is to say, I started with it running in maven via pre/post-integration
[21:25:47] <tty1> whaley: much easier to use since you don't have to actually code against the library directly.. though it has its downsides too.. my plugin is probably more useful for integration tests than unit tests
[21:26:09] <tty1> whaley: yea my plugin by default keys in to pre/post-integration lifecycles
[21:27:01] <whaley> didn't foursquare have some mockdb that was mongo java driver compatible at some point?
[21:27:30] <tty1> whaley: there is a mock interface specific for mongodb actually
[21:27:36] <whaley> fongo. I can't remember, but there was a reason I didn't touch that - probably not a good one :)
[21:27:39] <tty1> whaley: haven't used it but if you need to mock up mongodb I'd look there
[21:28:46] <whaley> "jmockmongo is a terribly incomplete tool to help with unit testing Java-based MongoDB applications. It works (to the degree that it works)" <--- +1 for self-deprecating honesty, at least
[21:29:06] <tty1> whaley: hahaha yea that is golden :)
[21:40:53] <sjmikem> Trying to use the built-in HTTP REST interface. When I visit http://mysite:20817/mydb/mycollection/ I get an HTTP authentication dialog as expected. However, the username and password I normally use to access mycollection are not working
[23:03:57] <akoustik> i'm updating a subset of a collection, and i want to run a callback for each updated element. i'm not quite able to find the right magic google phrase for whatever reason.
[23:04:45] <akoustik> is there a "best" way to do this?
[23:09:10] <akoustik> i think i'm looking for a "findAndModify" that works on multiple documents.
[23:21:14] <akoustik> and i guess there have been a lot of people looking for something like that... non-existent, huh? oh well.
[23:21:51] <cheeser> there are no multiple document transactions, no.
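One workaround sketch (shell syntax, assumed names; note it is not atomic across documents): pin down the ids first, update them as a batch, then re-read them so a callback can run per updated doc.

    var ids = db.items.find({ status: "pending" }, { _id: 1 }).toArray().map(function (d) { return d._id; });
    db.items.update({ _id: { $in: ids } }, { $set: { status: "done" } }, { multi: true });
    db.items.find({ _id: { $in: ids } }).forEach(function (doc) { handleUpdated(doc); });  // hypothetical callback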
[23:34:21] <kaliya> hi, I have a --dbpath /var/lib/mongodb and this partition is full. I don't care of data, I can just trash them. Is it `stopping mongo; rm /var/lib/mongodb/db.0 db.1 db.2...etc ; start mongo` ok?
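Roughly, yes; a slightly safer sketch (assuming the data lives only under /var/lib/mongodb and the service name on this distro is mongod) is to clear the whole dbpath so the .ns files, journal, and lock file go too:

    sudo service mongod stop
    sudo rm -rf /var/lib/mongodb/*                    # db.*, *.ns, journal/, mongod.lock
    sudo chown -R mongodb:mongodb /var/lib/mongodb    # re-assert ownership just in case
    sudo service mongod start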