PMXBOT Log file Viewer


#mongodb logs for Tuesday the 17th of March, 2015

[02:36:51] <morenoh149> any way to ensure uniqueness when inserting docs?
[02:37:19] <cheeser> an index
[02:48:24] <morenoh149> what's wrong with the way I'm making my index? https://gist.github.com/morenoh149/cd2f44954ced3c0ea4f8#file-client-js-L58
[02:50:07] <cheeser> in the shell what does "db.rooms.getIndexes()" show?
[02:50:59] <joannac> morenoh149: you didn't specify ascending / descending
[02:51:21] <joannac> read the docs and look at the examples http://docs.mongodb.org/manual/reference/method/db.collection.createIndex/#examples
[02:51:51] <morenoh149> `collection.createIndex('a', {w:1}, function(err, indexName) {` is what I was following http://mongodb.github.io/node-mongodb-native/1.4/api-generated/collection.html?highlight=index
[02:52:02] <morenoh149> I tried {slug:1} and had the same error as now
[02:53:02] <morenoh149> cheeser: returns empty array `[ ]`
[02:53:16] <morenoh149> which makes sense since my call to ensureIndex is failing
[02:55:03] <morenoh149> ah scratch that, I only see the default index on the rooms collection
[02:55:55] <morenoh149> cheeser: scratch it further. I have two indexes there. Added to gist
[02:56:19] <morenoh149> wait no it's one index. the default one
[02:57:23] <morenoh149> joannac: `fieldOrSpec (object) – fieldOrSpec that defines the index.` for the first arg
[02:58:05] <joannac> I have no idea... i don't know what the actual error is
[02:58:30] <joannac> fix your code to show what the error from the server is
[02:58:45] <morenoh149> see `errorLog.sh`
[02:59:00] <morenoh149> L48
[02:59:06] <joannac> all that says is "failed to create index"
[02:59:12] <joannac> which is your code
[02:59:18] <joannac> where's the message from the server?
[03:00:01] <morenoh149> ... but that is the log from my server
[03:01:23] <joannac> update your code
[03:01:40] <joannac> fix the code, update the gist, update with the new error
[03:02:33] <morenoh149> ah I see. it's correct syntax but the db already had dups. Thus the error.
[03:02:52] <morenoh149> joannac: that's what I do all day baby 🌈
[03:02:53] <joannac> right. how come you're not logging that message?
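The resolution of this thread, sketched as mongo shell commands (the `rooms` collection and `slug` field come from the gist discussed above; this is only a sketch and assumes a running mongod):

```javascript
// A unique index enforces uniqueness at insert time. The build fails
// with a duplicate-key error if the collection already holds dups:
db.rooms.createIndex({ slug: 1 }, { unique: true })

// Find the existing duplicates first, so the index build can succeed:
db.rooms.aggregate([
  { $group: { _id: "$slug", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
])

// Verify the index was actually created:
db.rooms.getIndexes()
```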
[08:04:13] <pluffsy> Hi! Are mongodb via driver questions OT here? (i.e using casbah)
[08:30:35] <Folkol_> No, people often ask questions about their drivers. And I have not seen anyone raging about it.
[09:19:36] <pluffsy> Thanks Folkol
[09:44:53] <pamp> hi
[09:46:15] <pamp> I made a mongodump of some mongo 2.6 collections and then ran mongorestore. It worked without problems, but it doesn't show the collections that migrated
[09:47:11] <pamp> in db.stats() the two migrated collections appear, along with the space they occupy. But "show collections" doesn't list them
[09:47:31] <pamp> I am migrating to version 3.0
[09:47:50] <pamp> has anyone had this problem?
[09:56:21] <Folkol> Is it possible to perform an incremental map-reduce with mutable documents? In particular, is there some convenient way of removing the old contribution to the reduced value when I "re-reduce" a mutated document anew?
[10:07:20] <morenoh149> Folkol: ✈️ what?
[10:09:09] <Folkol> I have a collection with documents. I am doing a map/reduce over that collection, which takes quite a lot of time. I would like to do an incremental map/reduce instead, but I do not have a good idea of how to separate "already reduced documents" from "new ones" for each incremental step - since the documents can change.
[10:09:41] <Folkol> I can store some "latest modified" in the documents and use that, but then the same document will contribute more than one time to the reduced value.
[10:10:59] <Folkol> I could always store the old version of the documents in some special collection when I update them to handle this, but I do not know if this is a good strategy.
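The incremental pattern Folkol is describing can be sketched with the shell's mapReduce and a modification-time query (the `events`/`totals` collection names, `lastModified` field, and the map/reduce bodies are all placeholders; assumes a running mongod):

```javascript
// Incremental map-reduce: only process documents changed since the
// last run, folding new values into the existing output collection.
var lastRun = ISODate("2015-03-16T00:00:00Z")  // persisted between runs

db.events.mapReduce(
  function () { emit(this.userId, this.amount); },       // map
  function (key, values) { return Array.sum(values); },  // reduce
  {
    query: { lastModified: { $gt: lastRun } },  // the incremental step
    out: { reduce: "totals" }  // re-reduce into the "totals" collection
  }
)
```

Note that this alone has exactly the problem raised above: a document modified after it already contributed is counted twice, so removing the old contribution still requires keeping the prior version (or a reversible reduce) yourself.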
[10:34:13] <srimon> Hi
[10:34:22] <srimon> I need small help
[10:35:23] <srimon> In a mongo document, I need to check the email id; if it exists I need to update the record, else I need to create the record
[10:35:31] <srimon> how do I do it?
[10:35:42] <srimon> please can any one help me
[10:39:30] <crowther> Afternoon All,
[10:41:26] <crowther> Is there any way of searching through an entire document and bringing back all objects that match a specific word string?
[10:51:11] <pamp> hey´
[10:51:37] <pamp> anyone have problems connecting mongodb 3.0 instance to robomongo???
[10:54:16] <dimon222_> robomongo wasn't patched for mongodb 3.0
[10:54:25] <dimon222_> so as a result it's not supposed to work
[10:54:54] <dimon222_> use mongochef or something
[10:55:19] <benji__> Hey all, I'm really struggling to get the aggregation framework to pull out required records, would anyone have time to give me a hand?
[10:56:01] <pamp> ok thanks
[10:59:32] <benji__> For example, I"m trying to get a group by-like query to work
[11:09:56] <kinlo> what exactly happens if you supply a replicaset in your connection string? Imagine I have a 3 node replica cluster, why do I need to supply the replicaset parameter in the connection string?
[11:15:15] <crowther> Can i search for a text string throughout a large document?
[11:29:41] <phutchins> When you have an app connecting to mongodb via a mongo config server, is the app connecting to that config server or is the config server simply telling the app which replica to connect to?
[11:57:59] <pamp> hey
[11:58:15] <pamp> in mongochef is it not possible to use explain()?
[12:00:48] <Tinu> Hi there
[12:00:56] <Tinu> I have urgent problem
[12:01:02] <Tinu> I hope you can help me
[12:01:13] <Tinu> I have DB on three shards
[12:01:21] <Tinu> I'm running out of space
[12:01:24] <Tinu> I've added next shard
[12:01:38] <Tinu> but mongo did not start balancing to new shard
[12:02:09] <Tinu> I think I missed part with adding new shard to my db or even collection
[12:05:21] <Tinu> db.printShardingStatus() returens:
[12:05:28] <Tinu> shards: { "_id" : "shard0000", "host" : "server1:port" } { "_id" : "shard0001", "host" : "server2:port" } { "_id" : "shard0002", "host" : "server3:port" } { "_id" : "shard0003", "host" : "server4:port" }
[12:05:31] <Tinu> so it's ok
[12:06:04] <Tinu> but my DB:
[12:06:07] <Tinu> DB.collection shard key: { "_id" : 1 } chunks: shard0001 7324 shard0002 7324 shard0000 7326
[12:06:21] <Tinu> no chunks on new shard 0003
[12:06:24] <Tinu> balancer is on
[12:06:41] <Tinu> any ideas?
[12:11:02] <fl0w> pamp: Late response but I just now downloaded both mongodb 3.0 and robomongo and got it running w/o issues ..
[12:11:05] <fl0w> (on Mac OS X)
[12:12:24] <pamp> fl0w: I'll try again
[12:12:29] <pamp> thanks
[12:13:07] <Tinu> any chance of help with my unsharded collection?
[12:13:21] <KekSi> i just wasted about 2 hours trying to find out why i couldn't mount my local data directory into a dockerized mongo - turns out because i'm using osx it doesn't support the FS handed through boot2docker :F
[12:13:22] <Tinu> Or rather sharded on three shards instead of all four
[12:15:30] <KekSi> Tinu: sh.status() ?
[12:16:24] <Tinu> shards: { "_id" : "shard0000", "host" : "server1:port" } { "_id" : "shard0001", "host" : "server2:port" } { "_id" : "shard0002", "host" : "server3:port" } { "_id" : "shard0003", "host" : "server4:port" }
[12:16:31] <Tinu> so all four are available
[12:16:34] <Tinu> but my db:
[12:16:48] <Tinu> DB.collection shard key: { "_id" : 1 } chunks: shard0001 7324 shard0002 7324 shard0000 7326
[12:16:52] <Tinu> is only on three shards
[12:17:36] <KekSi> and you're sure the balancer is running?
[12:18:12] <Tinu> mongos> sh.getBalancerState() true
[12:18:42] <KekSi> how long has it been since you added the shard?
[12:19:04] <Tinu> hour or two
[12:19:53] <Tinu> on this new shard:
[12:19:56] <Tinu> > show dbs admin (empty) local 0.078GB
[12:20:12] <Tinu> so it didn't even start to get my sharded DB
[12:21:21] <Tinu> in logs on new shard there is:
[12:21:22] <Tinu> remote client 10.92.171.18:40848 initialized this host (server4:port) as shard shard0003
[12:21:51] <Tinu> i thought I only need to add shard and mongo will take care of rest
[12:22:06] <Tinu> but maybe I missed something
[12:22:39] <KekSi> hm.. have you tried turning the balancer off and on again? seems odd to me
[12:23:09] <Tinu> Can I somehow check sharding status but for database? not for entire system?
[12:23:23] <Tinu> KekSi no, but I will try
[12:23:42] <fl0w> Does anyone have a good tutorial style resource to get mongo up and running with replication and/or sharding? Something that’s a bit more introductory compared to the official docs preferably.
[12:24:58] <cheeser> does "use mms" count?
[12:27:16] <KekSi> i suspect not since some people are unwilling to pay/are just getting started and aren't sure it's what they want
[12:27:44] <cheeser> up to 8 machines is free.
[12:27:55] <cheeser> i think it's 8...
[12:28:25] <cheeser> http://docs.mongodb.org/manual/administration/sharded-clusters/
[12:29:08] <fl0w> cheeser: Oh, that’s was for me?
[12:29:44] <cheeser> yes
[12:31:50] <fl0w> Well, I’m willing to pay - that’s not the issue. I’m guessing I’ll learn more by “doing it myself”. Not that I think that my own setup would perform better, but I just want to set up a replicated/sharded environment to get comfortable within it.
[12:32:10] <cheeser> i'm all for learning, for sure. that link will walk you through the cluster.
[12:33:01] <fl0w> Spun up a few linodes and figured I might as well try to break a sharded/replicated mongo. Aye, I’m on it as I type! :)
[12:33:03] <Tinu> KekSi I've found something
[12:33:13] <Tinu> when I did this: sh.stopBalancer()
[12:34:29] <Tinu> Waiting for active host old_dev_server:port to recognize new settings... (ping : Tue May 01 2012 06:28:01 GMT-0700 (PDT))
[12:36:39] <pamp> fl0w: I've installed the last version of robomongo, I can connect to the instance, but I can't see my collections
[12:37:24] <fl0w> pamp: Sorry, I can’t help you. I’m a noob and just tried robo b/c I saw your question about it.
[12:38:01] <fl0w> What OS are you on? I’m running on OSX and had no issues viewing/adding/querying through robo.
[12:38:36] <pamp> Windows
[12:38:55] <pamp> with the last versions of mongo i had no issues too
[12:38:56] <benji__> How do you use $size $gt within a $match aggregate function?
[12:38:57] <Tinu> this is what I'm getting now:
[12:38:58] <Tinu> mongos> sh.startBalancer() assert.soon failed: function (){ var lock = db.getSisterDB( "config" ).locks.findOne({ _id : lockId }) if( state == false ) return ! lock || lock.state == 0 if( state == true ) return lock && lock.state == 2 if( state == undefined ) return (beginTS == undefined && lock) || (beginTS != undefined && ( !lock || lock.ts + "" != beginTS + "" )
[12:39:24] <benji__> I have { $match : { album_Count : { $size : 12 } } } but can't seem to get a $gt working
[12:39:33] <benji__> (album_count is an array)
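In 2.6/3.0, `$size` inside `$match` only supports exact counts, so benji__'s `$gt` needs the array length computed first. A sketch using the field name from the question (the `artists` collection name is a placeholder; assumes a running mongod):

```javascript
// Compute the array length in a $project, then range-match on it:
db.artists.aggregate([
  { $project: { album_count: 1, n: { $size: "$album_count" } } },
  { $match: { n: { $gt: 12 } } }
])
```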
[12:39:44] <srimon> Hi all
[12:40:43] <srimon> I need to check whether the email exists or not. If it exists I need to update, else insert
[12:40:53] <srimon> can any one help on this
[12:41:15] <cheeser> use an upsert
[12:44:46] <srimon> document structure is: {_id: 5, email:"sri@gmail.com", history:[{"id":5, email:"sri@gm.com"}, {"id":6,"email":"test@gmail.com"}], In history if id = 5 it needs to update email, if i send id = 7 need to insert
[12:45:08] <srimon> @cheeser: document structure is: {_id: 5, email:"sri@gmail.com", history:[{"id":5, email:"sri@gm.com"}, {"id":6,"email":"test@gmail.com"}], In history if id = 5 it needs to update email, if i send id = 7 need to insert
[12:50:43] <srimon> please any one can help me
[12:50:45] <srimon> document structure is: {_id: 5, email:"sri@gmail.com", history:[{"id":5, email:"sri@gm.com"}, {"id":6,"email":"test@gmail.com"}], In history if id = 5 it needs to update email, if i send id = 7 need to insert
[12:50:59] <srimon> document structure is: {_id: 35, email:"sri@gmail.com", history:[{"id":5, email:"sri@gm.com"}, {"id":6,"email":"test@gmail.com"}], In history if id = 5 it needs to update email, if i send id = 7 need to insert
[12:51:07] <KekSi> stop spamming please
[12:51:28] <srimon> Can you please help me
[13:07:06] <pluffsy> Is there any way to use json as input for extended data types (i.e. dates) with casbah? By default it does not seem to support dates in the strict format, i.e. “mydate”: {“$date”: “<date>”}. I get “bad key” on $date. As I get json as input I need to be able to parse strict dates somehow; right now I’m looking at making a regex, which feels hacky and non-optimal. Any ideas on this?
[13:07:42] <Tinu> KekSi do you have any ideas? Why is my balancer not sending any data to my new shard?
[13:07:54] <teds5443> hi all, I'm having an issue where mongodb is segfaulting everytime I submit a map/reduce query. This only happens on fedora though, on ubuntu it works fine (same server version). I'm not sure if I should file a bug or not.
[13:08:26] <Tinu> its right now: mongos> sh.isBalancerRunning() false mongos> sh.getBalancerState() true
[13:10:53] <benji__> Anyone know how to only $push distinct array elements? Or maybe count distinct array elements?
[13:13:20] <kinlo> I've enabled auth and keyfile, I created my first user on my mongodb, but still I'm able to connect to the database without any authentication information. How do I prevent connections to the server without a valid username/pass combination?
[13:17:36] <Tinu> KekSi I found what the problem was
[13:17:46] <Tinu> there was a balancer activeWindow set up
[13:17:57] <Tinu> so that's why it didn't start
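The check behind Tinu's fix, sketched as mongo shell commands against the config database (per the documented balancer settings; assumes a connection to a mongos):

```javascript
// Inspect the balancer document, including any activeWindow schedule:
var conf = db.getSiblingDB("config")
conf.settings.findOne({ _id: "balancer" })

// Remove the activeWindow so the balancer may run at any time:
conf.settings.update(
  { _id: "balancer" },
  { $unset: { activeWindow: true } }
)

sh.getBalancerState()  // should report true once re-enabled
```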
[13:17:59] <cheeser> kinlo: did you start the server with auth enabled?
[13:19:00] <kinlo> cheeser: auth and keyfile are present in the config file, so yes
[13:19:30] <cheeser> see if this applies: http://docs.mongodb.org/manual/core/authentication/#localhost-exception
[13:19:36] <cheeser> other than that, i dunno
[13:19:41] <kinlo> it does not, I connect on ip, not localhost
[13:19:58] <kinlo> note that authorisation works - I just want to prevent any connection at all
[13:20:47] <cheeser> so you can connect but you have to auth, yes?
[13:21:25] <kinlo> not really, I can still see information like the repset name and whether the database is primary or not - seems a bit too much information, I'd prefer not to have any information at all
[13:22:21] <cheeser> sounds like a roles thing maybe: http://docs.mongodb.org/manual/core/authorization/
[13:25:02] <kinlo> cheeser: if I read the documentation correctly, it seems you can only grant privileges, not revoke them, so I doubt it is a role thing. I cannot log in to the database with any user/password combination unless it is correct; that's good. But when I just do not supply a username or password, I can still connect, which seems weird, coz I enabled auth
[13:25:28] <cheeser> that's weird. file a ticket?
[13:25:36] <cheeser> a support ticket if nothing else.
[13:25:59] <kinlo> I'll look into that, thanks
[13:27:59] <endlesszero> hello guys
[13:31:42] <_shaps_> Hi, I was trying to use packager.py from the mongo repo, but i get 403 when it tries to download the package
[13:31:59] <_shaps_> known bug?
[13:36:36] <endlesszero> 403: Forbidden, probably you have to take it from somewhere else, but it could be a bug
[13:45:27] <_shaps_> endlesszero: yeah sounds like I have to download it from somewhere else
[13:46:11] <_shaps_> i've tried changing downloads to fastdl
[13:46:15] <_shaps_> but same result
[13:46:49] <StephenLynx> what OS are you using?
[13:47:52] <_shaps_> tried debian7/centos6
[13:49:18] <elux> what is the --storageEngine arg to pass to turn on wiredtiger ..?
[13:51:32] <elux> figured it out
[13:51:39] <elux> would be nice to see it as an option with running mongod --help
[13:51:45] <elux> it's "wiredTiger" for anyone else
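elux's answer as a concrete command line, for anyone following along (the data path is a placeholder; `--storageEngine` is a 3.0+ option):

```shell
# Start mongod (3.0+) using the WiredTiger storage engine:
mongod --storageEngine wiredTiger --dbpath /data/db
```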
[13:51:46] <_shaps_> tried ubuntu as well actually
[13:54:27] <StephenLynx> and you followed the instructions on mongo website?
[13:54:49] <StephenLynx> added the repository information and such?
[14:29:44] <tsyd> How do I import a document with double numbers?
[14:29:54] <tsyd> Failed: error getting extended BSON for document #2: conversion of JSON type '767.0' unsupported
[14:31:42] <cheeser> pastebin your doc and how you're trying to import it.
[14:33:53] <tsyd> http://pastebin.com/vyE9Psyd
[14:34:42] <tsyd> What's weird is that this other document gets inserted fine: http://pastebin.com/ZncY2mBm
[14:39:31] <cheeser> which doc errors out? how are you importing?
[14:51:43] <NoOutlet> It seems like he's getting the default case on line 100 of this: https://github.com/mongodb/mongo-tools/blob/master/common/bsonutil/converter.go
[14:52:25] <NoOutlet> Though I don't know why the 767.0 would not match float.
[15:11:53] <soosfarm> hey, I'm running mongodb 2.6.8 with auth=true, I have added a siteAdmin user with these roles in the admin database 'userAdminAnyDatabase','readWriteAnyDatabase','dbAdmin','root'
[15:12:03] <soosfarm> but I can't connect to any other database with that user
[15:12:16] <tsyd> cheeser: I'm importing using `mongoimport --db mydb --collection episodes --drop --jsonArray --file episodes.json`
[16:19:27] <benji__> HI all, can anyone comment on how I should be compare two string fields in a document? Currently working with { $match: { $eq: [ '$song_hotttnesss', '$artist_hotttnesss' ] } } which does not work
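A top-level `$eq` is not valid inside `$match`, which is why benji__'s pipeline fails; before `$expr` existed, one sketch is to compute the comparison first (field names from the question; the `songs` collection name is a placeholder, and this assumes a running mongod):

```javascript
// Compute the comparison in $project, then filter on the result:
db.songs.aggregate([
  { $project: {
      song_hotttnesss: 1,
      artist_hotttnesss: 1,
      same: { $eq: ["$song_hotttnesss", "$artist_hotttnesss"] }
  } },
  { $match: { same: true } }
])
```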
[16:35:34] <MatheusOl> Hi. When does fsyncLock actually block reads, and why?
[16:37:55] <pasichnyk> I added a couple priority:0,votes:0 nodes to my replicaset to speed up our regional reporting instances (one per region). I set my connection string to readPreference=nearest, but it doesn't look like any queries are making it to these boxes. Any step i'm missing here? Or can priority0 boxes without any votes not get read queries?
[16:40:39] <pasichnyk> fyi, i set them to votes:0, as there are more offsite reporting replicas (5) than my onsite copies (3), and i didn't want a network split to take down my onsite cluster.
[16:49:08] <blaubarschbube> hi. i want to run mongodump for backup purposes. is there a way to only dump newest data instead of a whole collection?
[16:50:10] <pasichnyk> blaubarschbube, if you keep a timestamp of when you dumped last, you can limit the dump with query parameter (-q "{your query here}")
[16:51:29] <blaubarschbube> pasichnyk, thanks, that should do the job. sorry for asking stupid questions, I am just responsible for our backup, not for mongodb
[16:54:23] <pasichnyk> np, good luck. If you are on LVM volumes, you should also look into doing LVM snapshot backups instead of/in addition to mongodump. Especially if your database is huge.
[16:55:46] <pasichnyk> blaubarschbube, http://docs.mongodb.org/manual/tutorial/backup-with-filesystem-snapshots/ also look into MMS Backup as an option.
[16:58:57] <blaubarschbube> great, thx
[17:20:02] <delinquentme> does nosql for some reason not support 'hierarchy' within data?
[17:20:13] <delinquentme> Im not wildly familiar with it, but to my understanding it handles it just fine...
[17:21:31] <cheeser> "nosql" is a meaningless marketing term used to refer to dozens of unrelated products and technologies whose only fleeting resemblance is that most aren't traditional relational databases.
[17:26:18] <pasichnyk> delinquentme - so... mongodb is "nosql" and you can build hierarchy in data with it if you want. That's what you're looking for?
[17:26:49] <pasichnyk> you would however likely build it at the document level, or as a hierarchy between documents that you'd have to resolve yourself (no joins)
[17:27:11] <cheeser> those would be relations and not hierarchy, though.
[17:27:17] <cheeser> (splitting hairs i know...)
[17:28:04] <pasichnyk> @cheeser sure. delinquentme - give me an example of what you're trying to do
[17:28:04] <delinquentme> Ohhh perhaps thats what this guy meant?
[17:28:20] <delinquentme> that joins need to be performed by the end user?
[17:29:07] <pasichnyk> delinquentme or you build your schema in such a way that you already have all the data you need in a single query, and you don't have to "join" - think denormalized table.
[17:29:25] <delinquentme> pasichnyk, im not doing anything right now =] headed into a design meeting tomorrow and one of our guys said
[17:29:55] <delinquentme> "NoSQL, it is a viable option, given that the model data does not need to be hierarchical... The rest of the business data does need to be hierarchical so we cannot do away with Postgres. "
[17:30:35] <pasichnyk> ok, if your guy thinks it's a good option based on the data model he has designed, then it probably is. :)
[17:31:05] <delinquentme> except that part about 'hierarchical'
[17:31:29] <delinquentme> it suggests that his understanding of nosql is that it does not support hierarchy within data
[17:31:34] <pasichnyk> well what does he, and what do you, define as "hierarchical" when it comes to data
[17:31:40] <delinquentme> which Im trying to understand where that might come from
[17:32:06] <delinquentme> and he's the boss ... so like IDK if I want to be like "what do you mean by hierarchy"
[17:32:12] <delinquentme> questionmarks.
[17:32:42] <pasichnyk> asking for clarification is not a bad thing. its all how you ask...
[17:34:03] <pasichnyk> there are ways to implement "hierarchy" in mongo document structures, i.e., http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/#add-intra-document-hierarchy Maybe he's talking about something totally different though. Hard to know without context.
[17:36:40] <pasichnyk> @cheeser this is more what i was talking about building a hierarchy by referencing other documents: http://docs.mongodb.org/ecosystem/use-cases/category-hierarchy/
[17:37:14] <cheeser> got it.
[17:37:24] <pasichnyk> self reference relation. :)
[17:37:32] <pasichnyk> its muddy water. haha
[17:39:40] <pasichnyk> @cheeser did you see my question earlier about non-voting replicaset members not seeming to take any read queries even though the readPreference is nearest, and there isn't another server for 1000 miles?
[17:40:35] <pasichnyk> curious if you have any thoughts; seems like these guys should be serving read requests, but it doesn't appear so (turned on profiling for all queries, and don't see anything besides my localhost monitoring queries)
[17:52:39] <pasichnyk> ah ha, i think there is another connection string that was hardcoded somewhere that didn't have the nearest flag. Ripping that out and hopefully it works. :)
[17:54:49] <cheeser> 4
[18:36:18] <culthero> Hello, would there be any reason in general as to why an aggregate pipeline query would run very very slow if the number of results was less than the limit? I have queries that return 102 results in a few milliseconds, and queries that have 97 results not return results for 30-40 seconds.
[18:36:57] <culthero> The query WAS working just fine, the server rebooted and now things aren't working, and I have no clue where to start looking.
[18:37:44] <GothAlice> culthero: There are a number of potential issues. Ref: http://docs.mongodb.org/manual/core/aggregation-pipeline-optimization/ for a list of some and http://pauldone.blogspot.ca/2014/03/mongoparallelaggregation.html for one approach to improve performance in the general case.
[18:38:26] <GothAlice> However, often a sudden shift in performance can be attributed to your query no longer using a, or the correct index. Not sure how to hint indexes in an aggregate, though.
[18:39:48] <GothAlice> Ah, on the last, ref: http://grokbase.com/t/gg/mongodb-user/131rg2zkbt/mongo-aggregate-query-doesnt-seem-to-be-using-the-index
[18:40:04] <culthero> I would imagine some kind of index went away; the collection I am aggregating has a TTL index, the script crashed, all the old records expired, and now only large results seem to return anything
[18:40:53] <GothAlice> Have you re-run your aggregate query's initial $match/$limit/$skip/$sort as a standard query with "explain"?
[18:41:10] <culthero> I am doing that now, I have to rewrite the query from mongoose debug to something CLI friendly. Standby
[18:41:11] <culthero> :)
[18:42:14] <NoOutlet> Indeed.
[18:42:42] <GothAlice> As meme-ish as "considered harmful" posts are these days, Mongoose deserves one. ¬_¬
[18:47:13] <culthero> yeah this is a project that has the benefit of being done like a year and a half ago, and just was abandoned
[18:48:06] <NoOutlet> Those are the best. I'm on something like that right now.
[18:50:55] <culthero> I don't even remember mongo query syntax.. db.collection.aggregate ( [ { $obj }, {$obj } ] ) should work? Or do I need to wrap that in braces?
[18:51:00] <culthero> oh let me just pastebin it
[18:51:58] <culthero> http://pastebin.com/tj9LMshP
[18:52:08] <culthero> that gives me a syntax error, I don't see it
[18:53:41] <culthero> and here are my indexes; http://pastebin.com/tbZ9v0q5
[18:58:11] <NoOutlet> Well, your timestamps will need to be converted to something MongoDB can understand.
[18:59:50] <NoOutlet> And the explain() shortcut is only available on regular queries. Here's how to specify 'explain' on aggregate queries: http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/#example-aggregate-method-explain-option
[19:01:13] <cheeser> yeah, it's an option
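The aggregate explain option NoOutlet links to, sketched with the field names from culthero's pipeline (the `tweets` collection name is a placeholder; assumes a running mongod):

```javascript
// explain() chains onto find(), but for aggregate() it's an option:
db.tweets.aggregate(
  [
    { $match: { phrases: "anxiety" } },
    { $sort: { inserted: -1 } }
  ],
  { explain: true }
)
```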
[19:01:56] <NoOutlet> Those are the syntax problems I see in your pipeline. There are also a couple things that are silly, but which might make sense when you are looking for more than one filter phrase or you are skipping some documents.
[19:03:03] <culthero> yeah I have removed the timestamps as that is a product of the script / mongoose and they should be irrelevant
[19:04:14] <NoOutlet> Basically, `phrases: { $in: ["anxiety"]}` is equivalent in returned results to `phrases: "anxiety"`.
[19:08:22] <culthero> this is mongo 2.6.8
[19:08:29] <culthero> ok, here is the explain..
[19:08:29] <culthero> http://pastebin.com/h7WK5wyv
[19:09:05] <culthero> that particular result has 69 results
[19:09:06] <culthero> hm
[19:09:26] <culthero> you know this is actually working fine on the server. I bet this is something else - like wrong in how it's being processed
[19:09:42] <culthero> Well, I mean working fine as in that particular query ran quickly
[19:12:27] <NoOutlet> With the time constraints, it would probably use the 'inserted_1' index.
[19:21:31] <cheeser> w00t
[19:21:52] <GothAlice> {content: [{id: ObjectId(…), content: [{content: 'foo'}, {content: 'bar'}]}]} — amazingly, maintainable. I need to $elemMatch the nested "id", then positionally update the more deeply nested content. >:D
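GothAlice's nested update can be sketched like this (field names taken from her example document; the `pages` collection and the id value are placeholders, and on 2.6/3.0 the positional `$` resolves only the first matched array level, so the inner index must be a literal):

```javascript
var targetId = ObjectId("551033d2e0dd2f1b7a1b2c3d")  // placeholder id

// Match the outer array element by its id, then update one element
// of the nested content array by literal index:
db.pages.update(
  { content: { $elemMatch: { id: targetId } } },
  { $set: { "content.$.content.0.content": "baz" } }
)
```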
[19:22:06] <GothAlice> cheeser: Mad science is never stopping to ask: what's the worst that could happen? ;)
[19:22:55] <cheeser> http://bit.ly/1Cq8s01
[19:25:35] <GothAlice> cheeser: http://cl.ly/image/3e3z2e1s3g080X3f262B sums up the rollercoaster of realizing the depth of the nesting. "Uh-oh." "Oh noes!" "I can make this work. >:3"
[19:26:27] <cheeser> i love that one so much. dude in in the middle is the Master of Schadenfreude.
[19:26:40] <GothAlice> He really is. XD
[19:28:00] <cheeser> for the interested, asya posted the first of a perf comparison with 2.6/3.0 http://www.mongodb.com/blog/post/performance-testing-mongodb-30-part-1-throughput-improvements-measured-ycsb
[19:29:41] <cheeser> "Based on these results, the optimal thread count for the application is somewhere between 16 and 64 threads, depending on whether we favor latency or throughput. At 64 threads, latency still looks quite good: the 99th percentile for reads is less than 1ms, and the 99th percentile for writes is less than 4ms. Meanwhile, throughput is over 130,000 ops/sec."
[19:29:44] <GothAlice> cheeser: mmapv1, mostly?
[19:29:51] <cheeser> wiredTiger
[19:30:07] <GothAlice> Hmm.
[19:30:43] <GothAlice> I brought it to just under 1,000 ops/sec and managed to kill the VM dead. :/
[19:30:48] <GothAlice> (Like, straight-up kernel panic.)
[19:31:13] <cheeser> you should ping asya. i'm sure she'd love to work through that. she's clearly not having that problem. :)
[19:31:22] <GothAlice> Very clearly. XD
[19:33:44] <GothAlice> Until it died, though, the workload I was loading it with was performing noticeably better. Then 1000x worse for a few seconds prior to the explosion. (On a 16 GiB RAM machine with a 300 MiB analytics dataset.)
[19:39:31] <GothAlice> cheeser: That post is also rather poignant in its omission of disk (which should benefit greatly from compression) and RAM utilization statistics at each of those threading levels. :/
[19:42:41] <cheeser> GothAlice: it *is* part 1 ;)
[19:43:34] <GothAlice> cheeser: http://cl.ly/Db6C ;)
[19:43:54] <cheeser> :D
[19:59:19] <fl0w> holy crap mms seems amazing!
[19:59:29] <fl0w> sorry ‘bout the profanity but it felt necessary.
[20:01:02] <cheeser> profanity? where?
[20:01:09] <cheeser> and yes, mms is awesome. :)
[20:01:17] <GothAlice> It certainly is. :D
[20:02:34] <nickenchuggets> so, I'm kinda wondering if I'll ever be able to install a 64 bit version of Debian as a guest OS in virtualbox.
[20:02:42] <nickenchuggets> if there's... a way around my issue.
[20:02:51] <nickenchuggets> ... if anyone remembers from yesterday
[20:04:37] <fl0w> So, is it a bad idea to deploy a sharded cluster for gridfs usage only?
[20:05:36] <cheeser> fl0w: that's an ... interesting use of sharding. :)
[20:07:45] <fl0w> interesting as in stupid, or … plainly … interesting?
[20:08:00] <fl0w> cheeser: ^?
[20:10:07] <cheeser> just interesting
[20:14:25] <dbclk> hey guys
[20:15:30] <dbclk> if I have a mongo cluster and I want to add a node that's far out
[20:15:39] <dbclk> not on the same network
[20:15:43] <dbclk> is that advised?
[20:16:04] <cheeser> that's what multi-datacenter clusters do
[20:21:04] <GothAlice> fl0w: I have 27 TiB of data in GridFS, in a 3x3 sharded replica set configuration. (Yes, my per-shard size exceeds RAM. It's a write-heavy dataset, so it's rarely a problem.)
[20:21:57] <cheeser> time to add a 4th and watch the balancer go ballistic.
[20:22:09] <GothAlice> dbclk: Yes, often it's a very good idea. What you're looking for is datacenter-aware replication: http://docs.mongodb.org/manual/data-center-awareness/
[20:22:54] <GothAlice> cheeser: I'm dreading the day I need to do that. I'd probably run a task to semi-manually balance at a substantial rate limit.
[20:23:20] <cheeser> where's the fun in that?
[20:23:42] <GothAlice> … not having my poor gigabit network get hosed? :P
[20:25:21] <GothAlice> Actually, a large migration event like that would probably cripple my current unbelievably reliable MTBF; considering my current drives, less the latest HDD in each array, are all > 3 years old, with 24/7 uptime, I'd likely lose one or more drives mid-balance.
[20:29:37] <fl0w> GothAlice: Correct me if I’m wrong here please but as I understand it - I can shard and effectively scale my HDD horizontally?
[20:30:55] <fl0w> Using GridFS as my storage*
[20:34:19] <GothAlice> fl0w: Yes. The choice of sharding key allows you to have control over which shard your GridFS metadata and file chunks go to. Depending on your needs, you may wish to keep chunks relating to a single file together on the same shard, or spread them evenly amongst the shards.
[20:35:05] <GothAlice> (One could even intentionally make MongoDB _imbalanced_ in its sharding, i.e. if your shards don't all have the same amount of allocated disk space.)
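The chunk-placement choice GothAlice describes, sketched as shard-key commands (the `files` database name is a placeholder; `files_id` and `n` are the actual GridFS chunk fields, and this assumes a connection to a mongos):

```javascript
sh.enableSharding("files")

// Shard the GridFS chunks collection; this key keeps the chunks of a
// single file in a contiguous range, so they usually stay together:
sh.shardCollection("files.fs.chunks", { files_id: 1, n: 1 })

// fs.files holds only small metadata and is often left unsharded.
```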
[20:35:18] <fl0w> huh.
[20:36:57] <dbclk> so currently guys what I have is 2 mongo instances with 1 arbiter
[20:37:13] <GothAlice> (Or if one of your shards is in a different datacenter. You can effectively set up geographically distributed data this way.))
[20:37:18] <fl0w> GothAlice: And minimum amount of shards is 3, correct? To get the redundancy that is?
[20:37:19] <dbclk> one of those instances is on a server and the other along with the arbiter is on another server
[20:38:02] <GothAlice> fl0w: Sharding does not give you redundancy. Replication gives you redundancy. This is why I described my setup as a "3x3" sharded replica set. (That's 3 shards, each containing three nodes in a replica set.)
[20:38:03] <dbclk> what I'm seeing is: if I shut down the server with the 1 mongo instance and the arbiter
[20:38:11] <dbclk> the other server elects itself as secondary
[20:38:12] <dbclk> why?
[20:38:16] <dbclk> i thought it would be primary
[20:38:42] <GothAlice> dbclk: Never have your arbiter on the same physical node as a real data-serving mongod. To do so makes having the arbiter pointless 50% of the time.
[20:39:16] <fl0w> GothAlice: Hm. But if I lose one shard, isn't there a data integrity issue? Like a RAID 2 setup?
[20:39:21] <GothAlice> (I.e. if the node containing both mongod and the arbiter goes down, the remaining replica stops accepting reservations and packs up shop. ;)
[20:39:33] <GothAlice> fl0w: That's replication.
[20:39:59] <fl0w> GothAlice: Oh. Well, back to the docs then.
[20:40:09] <GothAlice> fl0w: Think of sharding as "stripe" RAID, and replication as "mirroring" RAID. To achieve both performance _and_ reliability, use both. :)
[20:40:17] <GothAlice> (I.e. a RAID 10 setup.)
[20:40:47] <dbclk> but, if the other node elects itself as "secondary" would it cause any data serving issue?
[20:41:33] <GothAlice> dbclk: Elections do not happen to elect secondaries, only primaries.
[20:41:45] <dbclk> got it
[20:41:47] <dbclk> but
[20:42:06] <GothAlice> dbclk: And if the remaining nodes are unable to see >50% of the known nodes (i.e. if only one replica remains after the other replica and the arbiter go down), it goes read-only.
[20:42:06] <dbclk> but, a node specified as secondary would it prevent data from being provided from that node?
[20:42:24] <dbclk> I see
[20:42:25] <GothAlice> (Read-only. Writes will fail, but reads should still work.)
[20:43:11] <GothAlice> OTOH, with the arbiter on, say, the application node (not sharing space with a data node), the replica set can recover, the remaining secondary elects itself as primary with the help of the arbiter, and your application continues, blissfully unaware that hell just broke loose somewhere. ;)
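The >50% visibility rule GothAlice describes can be sketched as a tiny vote check (a simplification of the real election protocol; arbiters count as voters but hold no data):

```javascript
// Can the surviving members elect a primary? Only if they can see a strict
// majority of all voting members (data nodes + arbiters).
function canElectPrimary(visibleVoters, totalVoters) {
  return visibleVoters > totalVoters / 2;
}

// dbclk's layout: 2 data nodes + 1 arbiter = 3 voters.
// Losing the box that holds one mongod AND the arbiter leaves 1 of 3 visible:
console.log(canElectPrimary(1, 3)); // false -> remaining node goes read-only

// With the arbiter on a separate machine, losing one data node leaves 2 of 3:
console.log(canElectPrimary(2, 3)); // true -> secondary can step up to primary
```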
[20:43:40] <GothAlice> (High-availability is cool, like bow-ties.)
[20:45:35] <GothAlice> dbclk: If you learn well from code, https://gist.github.com/amcgregor/c33da0d76350f7018875#file-cluster-sh-L96-L114 is a little script of mine to start up a 2x3 sharded replica set with authentication. The highlighted code spawns the pool of processes, then lines 120-124 configure sharding. (This script is for testing and demonstration purposes, thus all daemons on one host.)
[20:46:08] <GothAlice> This script could be useful to let you easily, quickly, and without permanent effect test out different sharding strategies to see how things get balanced.
[20:46:39] <dbclk> thanks GothAlice
[20:47:17] <GothAlice> (Just adjust the variables at the top to taste. It's safe to nuke the contents of the data directory between runs, it'll get recreated, just remember to issue a stop command first. ;)
[21:07:23] <NoOutlet> I have one of those too, Alice.
[21:07:51] <NoOutlet> https://github.com/NoOutlet/mongoShards/blob/master/init_sharded_env.sh
[21:11:35] <GothAlice> NoOutlet: I appreciate how you went to the effort of using `seq …` on the for loops, vs. my poor-man's version. XP
[21:13:53] <GothAlice> NoOutlet: Sad about the GPL, though. Rats, t'was totally going to steal some of that armouring code. ;)
[21:14:46] <NoOutlet> The license? I can change it. What's a good one to use?
[21:15:11] <GothAlice> I use MIT to allow for commercial re-use while still requiring attribution. (I'm pretty liberal about that, though.)
[21:15:14] <NoOutlet> I never know what to use.
[21:15:27] <GothAlice> (Also, it's not a viral license.)
[21:16:04] <GothAlice> You reminded me to add a proper notice to my gist. Thanks! :D
[21:19:20] <NoOutlet> Okay, it's MIT now.
[21:20:22] <NoOutlet> When you say "armouring code", do you mean the port checking?
[21:21:05] <GothAlice> Aye. I have some similar code in my SSH tunneling code to check a single port on the remote side, but you're conveniently checking everything at once.
[21:21:39] <NoOutlet> :)
[21:22:55] <GothAlice> NoOutlet: https://gist.github.com/amcgregor/3983451 is my 'tunnel' script, which I think you might enjoy. (This one's a fair bit older than the sharding one. ;)
[21:24:40] <GothAlice> (Executed as: "tunnel <local-port>" or "tunnel <local-host> <local-port>")
[21:35:15] <NoOutlet> Nice Alice. I don't know much about tunneling but I like the practice of echoing the actual command that is executing.
[21:45:23] <epx998> in mongo 2.4, can i have a replica set and a slave? or is it one or the other? or one in the same?
[21:45:26] <GothAlice> NoOutlet: I mainly install this on my client's machines, so I can tell them to run support.script (shell script, effectively, but a launchable icon) to tunnel VNC/5900 to one of my servers for support, and they just need to read out the number presented.
[21:45:34] <GothAlice> NoOutlet: Or, likewise, I use this to tunnel VNC from my machine to a public server (w/ HTML5 viewer) for webinars instead of the various cloud offerings. (Free is good… not installing extra software is even better. ;)
[21:45:51] <GothAlice> epx998: One or the other, and "master/slave" configurations are now fully deprecated. As is 2.4. ;)
[21:46:00] <epx998> yeah
[21:46:06] <epx998> these are 2.... uhm
[21:46:40] <epx998> 2.2.2 - well this node is anyway
[21:47:09] <epx998> i set uhm a 2.4 master with a replica, then made a 2.2.2 slave to the 2.4 master to see what the deal was.
[21:47:28] <GothAlice> :|
[21:47:29] <GothAlice> wat
[21:47:45] <GothAlice> … and you did this… recently?
[21:48:21] <epx998> i was given a very not fun task, these tests are just on some VMs I threw up.
[21:48:43] <NoOutlet> I threw up too.
[21:48:43] <epx998> we bought some company that runs 2.2 or 2.4 master/replica and i need to migrate it to a different datacenter
[21:53:00] <epx998> guess what they want is for me to add a replica to the master at a new DC, have it sync, then shut down the service, reconfig it as a master and bring it up
[21:54:44] <GothAlice> epx998: https://jira.mongodb.org/browse/SERVER-17278?filter=17502 < these are the improvements, large-scale changes, bug and security fixes released within the 2.6 lifespan. You're 2054 issues behind… being behind. There are an additional 1675 resolved issues leading up to the current 3.0.0 release, which so far, I'm loving. (Ref: https://jira.mongodb.org/browse/SERVER-17444?filter=17503)
[21:55:10] <fl0w> I’m having a real hard time understanding what WiredTiger actually is, what problems it solves, and how it’s different from the standard engine - any recommendations on resources aimed towards rookies?
[21:55:47] <GothAlice> Oop, forgot to filter to resolved status. D'oh. *updates*
[21:56:57] <GothAlice> fl0w: It uses a combination of techniques, taking more responsibility for management away from the kernel so as to apply more intelligent handling to IO, for example, and optionally using a more modern B-Tree standard that is more efficient in order to give a wide variety of advantages.
[21:58:54] <fl0w> GothAlice: Ok. But is it supposed to be a replacement or a supplement? If the latter is the case, under what circumstance is WiredTiger preferred?
[21:59:31] <NoOutlet> It's an option for storage engine.
[22:00:02] <GothAlice> fl0w: Lock-free algorithms are awesome. (Conditional upserts in RAM.) It better supports SSD backing stores by supporting no-overwrite storage, etc. For details, see: http://source.wiredtiger.com/2.5.1/index.html (notably the Architecture section)
[22:01:29] <GothAlice> As a MongoDB back-end it might not be able to do some of these things… yet. (This is documentation for the underlying engine that was bought by 10gen… I know not of its relevance to the WiredTiger implementation in the MongoDB code.) See also http://docs.mongodb.org/v3.0/release-notes/3.0/#major-changes
[22:04:40] <NoOutlet> The idea is that there can be other storage engines developed by storage engine specialists and now those storage engines can be used with MongoDB. WiredTiger is the first official alternative to the mmapv1.
[22:05:30] <fl0w> Ah, wiredtiger.com had what I wanted :) Though a bit too technical at this stage for me. Either way, the one thing I'm really lacking is transactions - but I guess that defies mongodb conceptually?
[22:06:44] <GothAlice> fl0w: WiredTiger supports compression, both fast (like, ludicrous-speed fast) but good-enough compression, and zlib compression. It supports finer-grained locking, which greatly reduces read and write latencies, ref: http://www.mongodb.com/blog/post/performance-testing-mongodb-30-part-1-throughput-improvements-measured-ycsb (it's very quick) For more polished overview: http://www.mongodb.com/mongodb-3.0#overview
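For reference, opting into WiredTiger on MongoDB 3.0 is a startup option (a sketch; the dbpath is an assumption, and the data directory must be empty or freshly migrated, since the on-disk formats of mmapv1 and WiredTiger differ):

```shell
# Start a 3.0 mongod with the WiredTiger storage engine.
# Snappy block compression is WiredTiger's default for collections;
# zlib trades CPU for a better ratio.
mongod --storageEngine wiredTiger --dbpath /data/wt
```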
[22:07:18] <fl0w> yea I’m reading that blog post as we speak!
[22:08:11] <fl0w> the throughput really does go … well, through the roof.
[22:08:27] <GothAlice> It gets even better.
[22:08:28] <GothAlice> http://www.tokutek.com/2014/12/yes-virginia-going-tokumx-storage-engine-mongodb-2-8/
[22:08:38] <GothAlice> (Anywhere you see 2.8, read 3.0.)
[22:10:02] <GothAlice> Buffering… fractal tree… lazy queue… things… of pure magic. (You'll have to look through their blog for mmapv1—MongoDB 2.6—benchmarks to compare against the WiredTiger benchmark previously linked.)
[22:14:04] <fl0w> sweet ..
[22:14:27] <GothAlice> So, there's Relatively Soon Now™ going to be an ecosystem of pluggable engine back-ends. :3
[22:15:30] <fl0w> aye .. so, I guess mongo is a good choice for me to invest some “pet project”-time to learn then :)
[22:16:27] <NoOutlet> There are some online courses available that just started today.
[22:16:51] <NoOutlet> https://university.mongodb.com/
[22:16:56] <GothAlice> ^ Good stuff.
[22:17:24] <NoOutlet> I highly recommend the Python one.
[22:17:33] <fl0w> yea, great, now I'll never get to sleep - and I have an early meeting. *brings the coffee*
[22:18:01] <GothAlice> ^_^ 1mg/kg/hr Caffeine IV, STAT!
[22:18:46] <fl0w> also, amphetamine.
[22:19:03] <fl0w> … if I were cool enough that is (which I’m not)
[22:20:00] <Freman> http://pastebin.com/KhJLqHFw uh... so I gather the journal is corrupt?
[22:20:05] <Freman> any way around that?
[22:21:53] <GothAlice> Bus error? Sounds like a failing disk, not just a bad journal.
[22:22:31] <Freman> machine was rudely powered off
[22:23:24] <Freman> it is on btrfs
[22:23:31] <GothAlice> Ah.
[22:23:34] <GothAlice> Don't do that.
[22:23:54] <Freman> oh... that was only in my head :(
[22:24:16] <Freman> don't do what... btrfs or power off?
[22:24:35] <GothAlice> btrfs, at least, for your MongoDB data.
[22:26:14] <Freman> I didn't do it, as usual, I just have to fix it
[22:26:17] <GothAlice> Hmm, http://www.percona.com/blog/2012/05/22/btrfs-probably-not-ready-yet/ isn't the specific one I'm looking for.
[22:26:28] <Freman> I'd love the specific one :D
[22:26:39] <Freman> then I can print it out and use it to beat the guy responsible
[22:27:00] <GothAlice> http://qnalist.com/questions/5276891/mongod-locks-up-on-btrfs problem report involving lockups… Exocortex is still digging. (It's been a while since btrfs has come up…)
[22:29:53] <GothAlice> Freman: https://btrfs.wiki.kernel.org/index.php/Gotchas < print this.
[22:30:07] <GothAlice> It's not exactly the density of a phone-book, but it's the fourth major point.
[22:31:09] <GothAlice> Freman: However, your journal is currently borked. You may be able to rename the on-disk journal files out of the way and attempt to continue with the data as-is. This may require running mongod once with --repair to rebuild the collections.
[22:31:21] <Freman> I'm wondering if btrfs+mongo is contributing to other issues we have (especially speed)
[22:31:24] <GothAlice> (Re-naming it out after shutting down will recreate on startup.)
[22:31:40] <GothAlice> Freman: It certainly would. It'd also heavily contribute to disk wear, esp. on SSDs.
[22:32:45] <Freman> so, mv journal journal.borked, /usr/local/bin/mongod --config /var/db/mongodb.conf --repair
[22:32:58] <GothAlice> And attempt to go from there, aye.
[22:34:20] <GothAlice> To rebuild, you'll want 1.2x your dataSize in free disk space, BTW. (It literally rebuilds into a copy, validates, then moves the copy over the original, so much space is needed.)
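The recovery procedure discussed above can be sketched as follows (the paths are the ones from Freman's command, the dataSize is from his error output, and the 1.2x factor is per GothAlice; run this only with mongod stopped):

```shell
# Sketch: check free space before attempting a --repair, since the rebuild
# needs roughly 1.2x dataSize free (it rebuilds into a copy, then swaps).
dataSize=1405865689088                 # bytes, db.stats() for "logsearch"
required=$(( dataSize * 12 / 10 ))     # 1.2x, integer arithmetic
echo "need ${required} bytes free"

# Then, with enough space available:
#   mv /var/db/journal /var/db/journal.borked    # journal is recreated on start
#   /usr/local/bin/mongod --config /var/db/mongodb.conf --repair
```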
[22:34:47] <Freman> yeh that didn't go so well
[22:34:48] <Freman> 2015-03-18T08:34:18.162+1000 I - [initandlisten] Fatal assertion 18506 OutOfDiskSpace Cannot repair database logsearch having size: 1405865689088 (bytes) because free disk space is: 108800696320 (bytes)
[22:34:49] <Freman> lol
[22:34:56] <GothAlice> Heh.
[22:35:00] <GothAlice> *Bam* disk space.
[22:35:05] <GothAlice> Was this a member of a replica set?
[22:35:14] <cheeser> the good news is you can copy that over to somewhere with space and try again
[22:35:20] <Freman> naaaah, that'd be fun :D
[22:35:23] <fl0w> im out - started the university .. so i guess beer boobs and sleepless nights are in order, night ya’ll and thanks for all the help and suggestions!
[22:35:42] <GothAlice> fl0w: Have a great one. It never hurts to help.
[22:35:43] <cheeser> what's a beer boob?
[22:35:53] <fl0w> .. beer, boobs*
[22:36:03] <GothAlice> Punctuation, for the win.
[22:36:08] <Freman> personally, I'm not opposed to accidentally erasing the db (is just logs) but someone will get their knickers in a knot
[22:36:14] <cheeser> just ask my Uncle Jack
[22:37:33] <GothAlice> Freman: Heh, we only keep the last week's logs in MongoDB, and cherry-pick interesting records for archival elsewhere. Nuking logs is less of a problem for us.
[22:37:48] <GothAlice> (Also, capped collections are awesome.)
[22:38:34] <Freman> if your crap broke 501 executions ago then it's your fault for not looking sooner
[22:39:32] <GothAlice> Might work.
[22:40:51] <Freman> nope
[22:41:25] <GothAlice> Freman: ^_^ It's really interesting to see other strategies. In our nearly two-year-old DB at work our op counters read only 8 deletions not related to TTL indexes. We… keep interesting data forever, and let the uninteresting stuff die of natural causes. And every in-app delete button is secured with "is_alice" ACLs. ¬_¬
[22:42:17] <NoOutlet> Hah
[22:47:37] <GothAlice> Freman: Also, apologies for being the bearer of bad news on the btrfs thing. :/ It's a neat project, but I've seen it bite others in the past.
[22:53:36] <GothAlice> https://github.com/zfsonlinux/zfs/issues/326 is a bummer for the Linux implementation, though.
[22:54:44] <Freman> so this is the part of mongo I've never done before, adding a user
[22:54:58] <GothAlice> Freman: http://docs.mongodb.org/manual/tutorial/add-user-to-database/
[22:55:09] <GothAlice> Have you already enabled authentication?
[22:55:24] <Freman> yeh was part of the config
[22:55:33] <GothAlice> Added your _first_ user?
[22:55:55] <Freman> yeh that's all scripted
[22:56:03] <Freman> just need to work out the regular users
[22:56:20] <Freman> db.createUser({user:"logsearch", pwd:"028jgq9nv309v", roles:[{role: "readWrite", db: "logsearch"}});
[22:56:28] <Freman> sound right?
[22:57:35] <NoOutlet> Missing an end bracket on the 'roles' array.
[22:57:51] <Freman> yeh I added that in the actual command (debug via irc :P)
[22:57:53] <GothAlice> Also do you want your user to be able to manage indexes?
[22:58:01] <Freman> yes
[22:58:30] <GothAlice> Oh, readWrite does give that.
[22:58:31] <GothAlice> ^_^
[23:00:32] <GothAlice> collMod, compact, enableProfiler, indexStats, reIndex, repairDatabase, and validate, amongst others are dbAdmin/dbOwner, but dbOwner also gives user management roles on the target namespaces.
[23:00:36] <Freman> I have a couple of bugs in my zfs install (I allocated too much space to cache/log disk) and I need module upgrades...
[23:03:34] <Freman> I don't have an indexStats role
[23:03:57] <GothAlice> Those are "permissions" to commands given out by roles.
[23:04:30] <GothAlice> dbAdmin is one role that would grant that permission, as well as access to the "reIndex" command.
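Freman's `createUser` call, with the closing bracket NoOutlet spotted restored, would look like this (a sketch as a plain argument object so it stands alone; the password is the throwaway one from the log):

```javascript
// The argument document for db.createUser() — note the roles array is now
// properly closed. In the mongo shell: db.createUser(userSpec)
const userSpec = {
  user: "logsearch",
  pwd: "028jgq9nv309v",
  roles: [
    { role: "readWrite", db: "logsearch" }  // readWrite includes createIndex
  ]
};
console.log(userSpec.roles.length); // 1
```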
[23:16:19] <Freman> yay, moved old 1.4tb db over, created new one, added roles, made it go, re-created indexes (because my particular app is dumb and doesn't... will add that in next release)
[23:16:37] <GothAlice> Freman: Be careful having indexes be automatically created.
[23:16:42] <Freman> best bit... if no-one asks for this old log data in the next few days we get to throw it out and prove my point about general-purpose logging
[23:17:12] <GothAlice> If run in foreground, they'll block the collection. If run in the background, they could take a very long time to be active. (Scheduling index creation maintenance windows can be useful.)
[23:18:33] <Freman> meh, I was just going to make it go "oh, first record? add indexes"
[23:21:33] <GothAlice> There is overhead in an ensureIndex/createIndex call…
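The foreground/background trade-off GothAlice mentions maps to the options document passed to `createIndex` (a sketch with an assumed field name, again as plain objects so it is self-contained):

```javascript
// Key and options as you'd pass to db.collection.createIndex(keys, opts).
const keys = { slug: 1 };                // assumed field name; 1 = ascending
const foreground = { unique: true };     // blocks the collection while building
const background = { unique: true, background: true }; // non-blocking, but
                                                        // slower to finish

// Calling ensureIndex/createIndex on every "first record" still has
// per-call overhead — creating indexes once, at deploy time, is cheaper.
console.log(background.background); // true
```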
[23:26:04] <GothAlice> Oh, that explains a few things. mms-monitoring-agent can't find libsasl2.so.2 — I've got libsasl2.so.3.
[23:26:47] <GothAlice> Fixed and pinned. Silly upgrades.
[23:45:51] <Freman> I literally.... this morning I woke up, convinced it was Friday... that should be reason enough to stay home on a Wednesday
[23:46:56] <GothAlice> I haven't slept since the few hours I caught Sunday night.
[23:48:01] <GothAlice> I just wish the MMS interface didn't keep crashing the browser tab. :/
[23:48:08] <GothAlice> That dashboard chart view is intense.