#mongodb logs for Monday the 25th of April, 2016

[04:02:00] <Mrgoose> Hey quick question, I have a bunch of sensor data that's collected every 3 seconds. With mongodb could I create aggregate queries that would show the average every minute?
[04:07:21] <Ameo> Mrgoose: you can just select all data from the past minute, add up the results, and divide by the number of results returned :p
[04:07:46] <Mrgoose> all within a mongo query?
[05:40:18] <kurushiyama> Mrgoose: Aye, that is possible.
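A minimal sketch of what that per-minute average could look like as an aggregation (the collection name sensorData and the fields ts/value are invented for illustration): group on the timestamp truncated down to its minute and take $avg.

    var minuteMs = 60 * 1000;
    db.sensorData.aggregate([
      { $group: {
          // truncate each timestamp down to the start of its minute
          _id: { $subtract: [ "$ts",
                  { $mod: [ { $subtract: [ "$ts", ISODate("1970-01-01T00:00:00Z") ] }, minuteMs ] } ] },
          avgValue: { $avg: "$value" }    // average of all readings in that minute
      } },
      { $sort: { _id: 1 } }
    ])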
[05:40:33] <Mrgoose> ah nice
[05:40:39] <Mrgoose> im currently watching https://www.mongodb.com/presentations/mongodb-time-series-data
[05:42:33] <kurushiyama> Mrgoose: Well, good entry point. Do not get me wrong: MongoDB might well fit your use case, but if you have the time, I'd probably check InfluxDB, too.
[05:42:46] <Mrgoose> ah cool
[05:44:11] <kurushiyama> Mrgoose: It has some advantages and disadvantages compared to MongoDB, ofc.
[05:45:22] <Mrgoose> looks interesting
[05:47:23] <kurushiyama> Mrgoose: From a very high level perspective, it is less flexible and very much less general purpose. The whole TICK stack combined, however, makes it very easy to get results. So you really have to decide. You can not use InfluxDB for anything else, basically.
[05:48:05] <Mrgoose> well my application is strictly for time series data, analytics
[05:48:31] <Mrgoose> just from what i'm seeing at a quick glance, influxdb seems to be easier to get that done
[05:49:14] <kurushiyama> Mrgoose: I would not ditch MongoDB that easy.
[05:49:37] <kurushiyama> Mrgoose: It is _much_ better at dealing with _a lot_ of datapoints.
[05:54:11] <kurushiyama> Mrgoose: If you have the time, I'd simply check both
[05:54:23] <Mrgoose> yea, that's likely what i will do
[05:54:56] <Mrgoose> i will say the querying looks a lot easier in influxdb
[05:55:57] <kurushiyama> Mrgoose: The querying? Not so much, imho. Personally, I find the aggregations easier, but then, I have been using MongoDB for much longer.
[05:57:06] <Mrgoose> for example
[05:57:07] <Mrgoose> > SELECT MEAN(water_level) FROM h2o_feet WHERE time >= '2015-08-18T00:00:00Z' AND time < '2015-09-18T17:00:00Z' GROUP BY time(4d)
[05:58:03] <kurushiyama> That is an aggregation
[05:58:19] <Mrgoose> correct, sorry. that's what i was referring to
[05:58:23] <Mrgoose> when i said looks a lot easier
[05:58:34] <kurushiyama> Mrgoose: More familiar, perhaps.
[05:58:42] <Mrgoose> that's fair
[06:03:47] <kurushiyama> A major drawback I see is that you almost always need another database for well, everything else like authentication and whatnot. I code in Go, so there is always boltdb embeddable, but it is a pita. With MongoDB, you only have one persistence technology.
[06:04:21] <kurushiyama> (well, not that boltdb is a pita, but dealing with multiple data sources is).
[06:07:55] <kurushiyama> Mrgoose: ^
[09:11:59] <Ruphin> Anyone here? I have an issue with a node failing to rejoin a replica set. More specifically, the HostnameCanonicalizationWorker fails to resolve the hostname in the replica set config so it does not recognise itself as a member
[09:55:44] <dddh_> too many open files ;(
[12:42:29] <dddh_> hm
[12:42:48] <Onemorenickname> Simple question, but I don't find it on the doc
[12:43:08] <Onemorenickname> If I want to do something with the result of a .find in mongodb, I do db.collection.find(selector, cb)
[12:43:17] <Onemorenickname> But if I want to limit, where do I put the callback ?
[12:43:32] <Onemorenickname> like db.collection.find(selector).limit(nb).whereisthecallback ?
[12:44:37] <dddh_> cursor ?
[12:45:04] <dddh_> https://docs.mongodb.org/manual/reference/method/cursor.forEach/
[12:45:47] <Onemorenickname> dddh, i'd like to get an array
[12:46:39] <dddh_> https://docs.mongodb.org/manual/reference/method/cursor.toArray/
[12:46:53] <Onemorenickname> there is the syntax db.Post.find({}, {}, {limit:10}, cb), but I'd like db.Post.find({}).limit(nb)
[12:47:22] <Onemorenickname> dddh_, this is synchronous, isn't it?
[12:56:52] <dddh_> Onemorenickname: using some framework like Mongoose?
[12:57:16] <Onemorenickname> dddh_, shall I deduce that it cant be done ? D:
[13:13:05] <kurushiyama> Onemorenickname: It depends. First, you should probably state what stack you are using instead of making us guess.
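A minimal sketch, assuming the plain 2.x Node.js driver rather than Mongoose (the connection string and collection name are placeholders): limit() only configures the cursor, and toArray() takes the callback and hands back an array asynchronously.

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
      if (err) throw err;
      // limit() just narrows the cursor; the callback goes on toArray()
      db.collection('Post').find({}).limit(10).toArray(function (err, docs) {
        if (err) throw err;
        console.log(docs);   // docs is an array of at most 10 documents
        db.close();
      });
    });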
[13:15:00] <tembrae> Hello, I have a question. I have a mongo db with replication enabled and 2 full disks. I am wondering how to reduce the amount of storage used as a temporary solution until we can upgrade disk space
[13:15:50] <kurushiyama> tembrae: OS, Version, MongoDB version, storage engine?
[13:17:33] <tembrae> kurushiyama: ubuntu 12.04 mongodb 2.4.6 with mmapv1
[13:18:02] <kurushiyama> LVM by any chance?
[13:18:07] <kurushiyama> tembrae: ^
[13:18:52] <tembrae> kurushiyama: unfortunately not
[13:21:56] <kurushiyama> tembrae: Well, you _could_ use --repair.
[13:22:04] <kurushiyama> tembrae: Be warned, however.
[13:23:55] <kurushiyama> tembrae: the other option would be to wipe dbpath, install the new hard disk, set up LVM and do a resync. You should only do this, however, when you are sure your current primary holds.
[13:24:07] <tembrae> kurushiyama: yes I have read about the repair and compact options but they do not seem like viable strategies
[13:24:51] <kurushiyama> Repair sort of IS
[13:24:56] <tembrae> kurushiyama: our primary is full too so that's not an option either
[13:25:33] <kurushiyama> tembrae: Well, another option of course would be to build an LVM besides it, copy the data over and use it
[13:29:48] <tembrae> kurushiyama: If you add a second server (with double the disk space) and add that to the replica set, how will this affect the existing replication?
[13:35:28] <kurushiyama> Since most likely your nodes are down because of the full disks, most likely not at all.
[13:35:31] <kurushiyama> tembrae: ^
[13:36:19] <tembrae> kurushiyama: you are right, of course
[13:41:38] <kurushiyama> tembrae: Hm.
[13:44:27] <Ange7> Hey all
[13:44:45] <Ange7> i try to find how to insert document if not _id exists else ignore is it possible ?
[13:46:43] <kurushiyama> tembrae: Ok, give the new server the IP of one of the old ones, CREATE AN LVM ;), copy over the data and config, fire up MongoDB.
[13:50:48] <yopp> hey.
[13:51:08] <kurushiyama> yopp: Hoi!.
[13:52:16] <yopp> kurushiyama, re last time, I've played with value keys, and everything is just fine, but I'm trying to follow the array approach as you advised.
[13:52:35] <kurushiyama> yopp: You should ;)
[13:52:41] <yopp> But main roadblock is that $addToSet adds new element, because timestamps of events are not the same
[13:52:51] <yopp> So I'm getting duplicate items in array
[13:53:19] <kurushiyama> yopp: For TS data, if in doubt: flat documents, one doc per event.
[13:53:33] <yopp> not working
[13:53:47] <yopp> index size is way bigger than dataset
[13:53:59] <kurushiyama> yopp: Well, as you wish ;)
[13:55:30] <yopp> Huh. whish
[13:55:36] <yopp> *wish
[13:55:46] <yopp> I'm looking right now on the another project, where we did exactly that
[13:57:34] <kurushiyama> yopp: As said: you must do as you are pleased, but how an index of a fraction of the data should be bigger than the data itself eludes my comprehension, and I am not exactly a beginner.
[14:02:20] <yopp> kurushiyama, when you have small records, like around 64 bytes, even with primary index on _id you will have overhead of _id + reference to the page, which is already something like 24 bytes. Add timestamp index, and here you are : your data set is smaller than index size
[14:03:50] <kurushiyama> yopp: uhm... you are aware that ts are stored as int64?
[14:04:45] <kurushiyama> yopp: And objectIds, should you choose to use them, are internally stored as 12 bytes.
[14:07:39] <yopp> you are not just storing ids, you are storing the tree of references to the storage
[14:10:22] <kurushiyama> yopp: Well... what? Ok, you made up your mind, be it. But modelling data for constraints other than CRUD efficiency will impact said efficiency. And working against best practices (have a look at how dedicated TS databases handle it) should tell you something. But, be it, do as you please.
[14:15:55] <Ange7> i try to find how to insert document if not _id exists else ignore is it possible ?
[14:16:46] <cheeser> an update with only $setOnInsert ?
[14:17:10] <yopp> kurushiyama, ¯\_(ツ)_/¯
[14:17:39] <jokke> hi
[14:17:46] <kurushiyama> jokke: Hoi!
[14:18:15] <kurushiyama> jokke: How is it going?
[14:19:30] <jokke> i want to set up a sharded cluster with docker and i'd need to be able to call rs.initiate and sh.addShard in a way that they're run only once
[14:21:05] <Ange7> cheeser: is it possible to define update with only setOnInsert ?
[14:21:36] <kurushiyama> jokke: I would not do that. Use a docker container for MongoDB, then fire up the cluster via the shell. You can script that, of course, but things can too easily go south that way.
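For reference, the shell calls jokke wants to run exactly once look roughly like this; the replica set name and host names here are placeholders.

    // on one member of the shard's replica set (run once)
    rs.initiate({
      _id: "shard0",
      members: [
        { _id: 0, host: "shard0-a:27017" },
        { _id: 1, host: "shard0-b:27017" },
        { _id: 2, host: "shard0-c:27017" }
      ]
    })

    // on a mongos (run once per shard)
    sh.addShard("shard0/shard0-a:27017")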
[14:22:19] <jokke> mh ok
[14:22:33] <yopp> cheeser, say I have a document: {_id: oid, events: [{_id: 1, a: DateTime, b: DateTime} ] }. Is there any way to tell $addToSet to check for just "events._id" for uniqueness? Say I'm doing "$addToSet": { _id: 1, a: { DateTime + 1} } .
[14:22:41] <kurushiyama> Ange7: What is wrong with https://docs.mongodb.org/manual/reference/operator/update/setOnInsert/ ?
[14:27:33] <kurushiyama> jokke: And usually, MongoDB is not as volatile. It might be worth it if you compose services (which you rarely do, since more often than not, you do not want to have much more than MongoDB running on a production server)
[14:35:58] <cheeser> yopp: no
[14:36:16] <cheeser> it'll do a full document comparison on the value you're adding.
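A common workaround for this, sketched with yopp's field names (oid and the event values are placeholders): guard the update with a $ne on the embedded _id and use $push, so the element is only added when no event with that _id exists yet.

    db.coll.update(
      { _id: oid, "events._id": { $ne: 1 } },   // matches only if no event with _id 1 is present
      { $push: { events: { _id: 1, a: new Date(), b: new Date() } } }
    )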
[14:36:30] <cheeser> Ange7: sure. i don't see why not.
[14:42:56] <Ange7> Ok, that's cool thank you
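A minimal sketch of cheeser's suggestion (collection and field names are invented): an upsert whose only operator is $setOnInsert inserts the document when the _id is not there yet and leaves an existing document untouched.

    db.things.update(
      { _id: someId },                                        // someId is a placeholder
      { $setOnInsert: { createdAt: new Date(), payload: doc } },
      { upsert: true }                                        // insert if missing, otherwise no-op
    )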
[15:08:59] <varunkvv> I ran into a crazy issue over the weekend. One of our applications was performing a query that hit the caveat: https://docs.mongodb.org/manual/core/index-intersection/#index-intersection-and-sort - this was on a large resulting dataset and the application side kept retrying after a timeout. This actually caused the mongodb server to crash with an "out of memory" error. Kind of a first for me.
[15:12:28] <kurushiyama> varunkvv: Huh? Swap filled?
[15:31:48] <jokke> hey again
[15:32:14] <jokke> i'm getting the error "database db_evaluation not found" when trying to enable sharding for the db
[15:32:22] <jokke> any ideas where that might be coming from/
[15:32:24] <jokke> ?
[15:32:48] <jokke> here's how i set up the cluster: https://p.jreinert.com/m-Jh4H9/
[15:33:52] <jokke> not sure how i can make sure the db exists
[15:34:03] <jokke> since there's no create database :P
[15:35:01] <compeman> hi there
[15:35:55] <jokke> hm i really have to add a document? :/
[15:42:04] <kurushiyama> jokke: How about adding an index for your shard key, which you are going to need either way?
[15:42:20] <kurushiyama> jokke: ;)
[15:42:44] <jokke> kurushiyama: what do you mean?
[15:42:53] <jokke> or which
[15:43:11] <kurushiyama> use yourdb; db.collectionToShard.createIndex({...})
[15:43:32] <jokke> do i have to createIndex for _id??
[15:44:14] <kurushiyama> jokke: You _definitely_ do not want _id as a shard key, assuming you use ObjectId...
[15:44:26] <jokke> no?
[15:44:30] <jokke> i use a compound index
[15:44:38] <jokke> and i use it as a hashed key
[15:44:50] <jokke> don't see a problem there
[15:46:11] <StephenLynx> what makes a good shard field?
[15:46:11] <kurushiyama> So yes, then you need to run db.collToShard.createIndex({_id:"hashed"})
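In shell terms, that suggestion looks roughly like this; the collection name measurements is a placeholder, while db_evaluation is the database jokke mentioned. Creating the index also materialises the database, so sh.enableSharding() no longer complains that it does not exist.

    use db_evaluation
    db.measurements.createIndex({ _id: "hashed" })   // implicitly creates the collection and the database

    sh.enableSharding("db_evaluation")
    sh.shardCollection("db_evaluation.measurements", { _id: "hashed" })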
[15:46:22] <jokke> aj
[15:46:24] <jokke> o see
[15:46:27] <jokke> *ah
[15:46:29] <jokke> *i
[15:46:50] <jokke> StephenLynx: something that splits the docs evenly across shards
[15:47:00] <StephenLynx> doesn't id do that?
[15:47:02] <StephenLynx> _id*
[15:47:32] <jokke> StephenLynx: well... it splits them too well
[15:47:38] <StephenLynx> ::::vvvvvvv
[15:49:58] <jokke> kurushiyama: i updated my script: https://p.jreinert.com/ssA5/sh
[15:50:06] <jokke> kurushiyama: but it still fails with the same message
[15:50:18] <kurushiyama> jokke: Sorry, working on sth different atm.
[15:50:29] <jokke> ok
[15:51:14] <Derick> StephenLynx: no, the _id would often just mean you'll end up writing to one shard only
[15:51:33] <Derick> you want the shard key to evenly distribute over the shards
[15:51:36] <Derick> (in most cases)
[15:52:19] <jokke> you might also want to do regional sharding or sth like that
[15:52:46] <StephenLynx> and what makes it distribute over shards?
[15:56:01] <StephenLynx> ok, so from what I read https://docs.mongodb.org/manual/core/sharding-shard-key/ _id is good IF you use it hashed for sharding?
[15:56:19] <kurushiyama> StephenLynx: It depends
[15:56:31] <StephenLynx> v:
[15:56:52] <kurushiyama> StephenLynx: The problem is that you basically distribute equally across the shards and have little to no control.
[15:57:14] <StephenLynx> but aside from the control, its good?
[15:57:17] <jokke> dafuq? https://p.jreinert.com/HX0q/
[15:57:18] <kurushiyama> StephenLynx: Think of location based sharding, and you want all EMEA customers on the EMEA shard and so on.
[15:57:48] <kurushiyama> StephenLynx: I have found it to be slightly less performant because of the transparent hashing.
[15:58:09] <jokke> i guess that's docker...
[15:58:29] <StephenLynx> >docker
[15:58:45] <StephenLynx> http://www.boycottdocker.org/
[15:58:58] <jokke> it's harder than i thought to set up a sharded cluster
[15:59:02] <StephenLynx> i saw docker was trash before finding this site, though
[15:59:38] <kurushiyama> jokke: It is extremely easy.
[16:00:00] <kurushiyama> jokke: But you should try to walk before you run.
[16:00:38] <jokke> i'm not running :)
[16:00:42] <kurushiyama> jokke: And setting it up is the easy part.
[16:01:28] <jokke> meh
[16:02:36] <kurushiyama> jokke: It is. Performance optimization and finding performance bottlenecks is... ...tricky, to say the least.
[16:02:54] <jokke> yeah sure
[16:03:03] <jokke> that's why we have jobs
[16:03:08] <jokke> so i'm not complaining
[16:03:13] <kurushiyama> jokke: Aye ;)
[16:17:13] <kurushiyama> Derick: The wire proto has changed, iirc. Was that between 2.6 and 3.0 or 3.0 and 3.2?
[16:17:34] <Derick> kurushiyama: both I believe
[16:18:54] <kurushiyama> Derick: :/ Thanks!
[16:24:51] <dddh_> lxc vs docker?
[16:44:24] <varunkvv> kurushiyama, replying to this: http://irclogger.com/.mongodb/2016-04-25#1461596482, we're _not_ using swap on the servers, but is that a mistake for WiredTiger? I know it was not a requirement for mmapv1.
[16:45:00] <kurushiyama> varunkvv: It always was a requirement to prevent Linux OOM-Killer from kicking in
[16:50:58] <varunkvv> Yeah, it's something to consider and try out. This is the first time I've had OOM killer hit mongo though :). Thanks for the pointer.
[18:07:40] <sp0_0n> Hello
[18:07:57] <kurushiyama> sp0_0n: Hoi!
[18:11:26] <sp0_0n> kurushiyama: Hey!
[18:11:41] <sp0_0n> So I ran into an error with pymongo today
[18:13:58] <kurushiyama> Sorry, no PyMongo here ;)
[18:15:57] <sp0_0n> https://ghostbin.com/paste/k93mf
[18:16:35] <sp0_0n> That’s the error I get whenever I try to access the collection
[18:16:52] <Derick> pymongo automatically creating indexes? that's surprising
[18:16:56] <sp0_0n> It’s a mongodb error really
[18:17:01] <kurushiyama> sp0_0n: Nope
[18:17:12] <sp0_0n> I get the same error in the mongo shell
[18:17:19] <kurushiyama> sp0_0n: Whut?
[18:17:51] <kurushiyama> sp0_0n: When you want to create the index, I assume?
[18:18:11] <sp0_0n> No when I want to make a find
[18:18:25] <Derick> that's not a find, that's index creation
[18:19:09] <sp0_0n> My first find is usually an error. But when I re-execute it, it succeeds
[18:20:15] <kurushiyama> sp0_0n: Can you pastebin exactly what you execute?
[18:21:29] <kurushiyama> sp0_0n: On the shell, that is?
[18:27:21] <sp0_0n> kurushiyama: I just had a light bulb moment. I really appreciate your help.
[18:28:15] <kurushiyama> sp0_0n: Well, I am always happy to help just by asking questions ;)
[18:28:30] <ams_> Anyone used HOSTALIASES with mongodb?
[18:28:54] <sp0_0n> Cool
[18:29:28] <kurushiyama> ams_: Uhm, could you go into more detail?
[18:30:36] <ams_> kurushiyama: I'm trying to create a replicaset, but mongodb isn't happy (I think!) that the hostname I'm giving it doesn't resolve to 127.0.0.1
[18:31:20] <ams_> So I'm trying to work around that with HOSTALIASES
[18:31:21] <kurushiyama> ams_: And how should the other servers reach something that actually resolves to 127.0.0.1? ;)
[18:31:39] <ams_> The problem is that it *doesn't* resolve to 127.0.0.1
[18:32:00] <ams_> It resolves to a service that the host and all other hosts can access
[18:32:17] <kurushiyama> ams_: If all replica set members get the config that they should contact the other members on 127.0.0.1, what would happen?
[18:33:02] <saml> ams_, what's your replicaset config?
[18:33:06] <kurushiyama> ams_: To put it another way: do not use the loopback interface to create a replica set, unless it is local only.
[18:33:20] <saml> make sure hosts listed in the config can be resolved from each member.
[18:34:25] <ams_> kurushiyama: I think you're misunderstanding me. I understand what 127.0.0.1 is, I'm not trying to get different hosts to access different 127.0.0.1s
[18:34:37] <ams_> saml: http://pastebin.com/xQAq3rsM
[18:34:50] <ams_> is the config that mongo won't let me reconfig
[18:35:04] <saml> mongodba.infra.svc.cluster.local what does this resolve to on each member?
[18:35:13] <ams_> this is my error: http://pastebin.com/DDFwe4ww
[18:35:18] <ams_> sam1: 192.168.0.10
[18:35:20] <Derick> it needs to always resolve to the same IP address (and not 127.0.0.1)
[18:35:59] <saml> could you copy paste new config you want to use?
[18:36:07] <ams_> That is the config I want to use, sorry
[18:36:14] <ams_> at the mo it's "localhost:27017"
[18:36:33] <ams_> Derick: maybe I've misunderstood then. What causes that error ^ though? I don't have an interface on this server bound to 192.168.0.10. But traffic for that IP will be routed to that server.
[18:36:47] <ams_> (The reason for the contrived network set up is i'm trying to get this working in kubernetes/docker)
[18:37:00] <saml> what's output of command host mongodba.infra.svc.cluster.local
[18:37:29] <saml> did you start your mongod with rs config?
[18:37:46] <saml> replication: .. in yaml
[18:37:56] <saml> replication: replSetName: rs0 .. in yaml
[18:38:14] <ams_> https://www.irccloud.com/pastebin/oIytJ2aI/
[18:38:38] <ams_> sam1: not in config, but --replSet rs0 as a command line argument
[18:40:59] <ams_> I'm on 3.0.11 if that's relevant
[18:41:04] <ams_> So you guys would expect that to work then?
[18:41:25] <saml> so, you have a single machine, mongodba.infra.svc.cluster.local and want to set up a replica set with that one member?
[18:42:25] <saml> maybe try with "host": "mongodba.infra.svc.cluster.local:27017"
[18:42:41] <ams_> I'll have a bunch of machines, but for starters I want this one to work
[18:43:05] <kurushiyama> ams_: Bad choice
[18:43:11] <saml> mongo mongodba.infra.svc.cluster.local works?
[18:43:29] <ams_> saml: no, host resolves correctly. But it does not work.
[18:43:34] <ams_> https://www.irccloud.com/pastebin/rnV2tMKn/
[18:44:48] <saml> output of rs.status() and how to do you connect to mongod ?
[18:45:53] <ams_> https://www.irccloud.com/pastebin/e2ljNdKN/
[18:45:57] <Ryzzan> i want to $push to an array inside an array... let's say... {_id : Number, array : [{someProperty : String, anotherArray:[]}]}
[18:46:05] <ams_> sam1: ^ and I'm connecting `mongo 127.0.0.1`
[18:46:07] <Ryzzan> how to push to "anotherArray"?
[18:46:55] <saml> so you have a replica set already configured. maybe unconfigure the replica set (i don't know how to do this) and reconfigure with the hostname, not localhost
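Switching the member from localhost to the resolvable name would look roughly like this in the shell (the host name is taken from ams_'s paste; {force: true} may be needed if the set currently has no primary).

    cfg = rs.conf()
    cfg.members[0].host = "mongodba.infra.svc.cluster.local:27017"
    rs.reconfig(cfg)        // or rs.reconfig(cfg, {force: true}) if there is no primary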
[18:47:22] <ams_> sam1: yeah i've tried that, same error. I don't see why that should have any impact?
[18:47:58] <saml> ams_, output of mongo mongodba.infra.svc.cluster.local
[18:49:03] <ams_> sam1: oooh! i'm wrong, that IP isn't working from this container... interesting
[18:49:07] <ams_> Ok, thanks, that's clearly my issue
[18:49:27] <saml> Ryzzan, db.docs.update({_id:1}, {$push:{'array.arr': 2}})
[18:49:45] <saml> {$push:{'array.anotherArray': THE VALUE}}
[18:50:05] <saml> container lol
[18:50:46] <Ryzzan> saml: ty... think i did it already, but let me check if I mistyped something... ;)
[18:59:55] <Ryzzan> saml: the _id would be enough to find the place to push the value... cause "anotherArray" is directly connected to "array"... got it?
[19:00:15] <saml> got it
[19:00:46] <Ryzzan> I need to find by array... i have a unique id in array, how would i filter it to get to anotherArray?
[19:01:19] <saml> i don't understand
[19:01:37] <saml> looks like your document model might not be optimal for kinds of updates you want to perform
[19:01:40] <Ryzzan> something like {_id : Number, array : [{uniqueId : Number anotherArray:[]}]}
[19:02:39] <Ryzzan> so, i want to push to this "anotherArray" related to an array with uniqueId
[19:03:04] <saml> you can try
[19:03:11] <Ryzzan> something like {_id : Number, array : [{uniqueId : Number, anotherArray:[]}]}
[19:03:20] <saml> {$push:{'array.uniqueId':1}}
[19:04:00] <saml> it's helpful to use test database, insert simple documents that represents your problem and try different updates
[19:04:09] <Ryzzan> ok
[19:04:15] <Ryzzan> gonna work on it
[19:04:16] <Ryzzan> ty
[19:05:22] <Ryzzan> i thought about db.collection.find({_id : 1, "array.uniqueId" : 1}).update({$push:{'array.anotherArray': THE VALUE}})
[19:05:28] <Ryzzan> do u think it would work?
[19:08:59] <saml> i thought it's db.collection.update({_id : 1, "array.uniqueId" : 1}, {$push:{'array.anotherArray': THE VALUE}})
[19:09:09] <saml> maybe find().update() works as well
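Note that with an array of subdocuments the dotted path alone is usually not enough; the positional $ operator is needed so the $push lands in the anotherArray of the array element matched by the query. Sketched with Ryzzan's field names (THE_VALUE is a placeholder):

    db.collection.update(
      { _id: 1, "array.uniqueId": 1 },
      { $push: { "array.$.anotherArray": THE_VALUE } }   // $ targets the array element matched by the query
    )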
[19:13:33] <kurushiyama> saml: I guess we are talking of Mongoose.
[19:16:01] <saml> ah i see
[20:34:27] <WiuEmPe> hello
[20:35:20] <WiuEmPe> i need to restore mongodb from a backup. I don't know how this backup was made; in the tar file i have metadata.json and .bson files
[20:35:34] <WiuEmPe> this is my first time with mongodb
[20:35:46] <WiuEmPe> can anyone tell me what i should do with these files?
[20:49:57] <StephenLynx> mongorestore
[20:50:09] <StephenLynx> probably those were made using mongodump
[20:50:18] <StephenLynx> WiuEmPe,
[20:50:43] <WiuEmPe> StephenLynx, yes, i restore this with mongorestore, thanks ;)
[20:56:31] <kurushiyama> WiuEmPe: It does not sound like a good idea to do a restore as the first task with MongoDB. Feel free to ask if in _any_ doubt.
[20:59:57] <WiuEmPe> kurushiyama, ;) thanks for help
[21:00:39] <kurushiyama> WiuEmPe: Well, StephenLynx helped you, I am just offering to help in case you need further assistance.
[21:00:45] <WiuEmPe> developer asks about user and password. This should be in backup?
[21:01:40] <StephenLynx> no
[21:01:50] <kurushiyama> StephenLynx: ?
[21:02:07] <StephenLynx> !
[21:02:08] <StephenLynx> :v
[21:02:13] <kurushiyama> ;P
[21:02:44] <kurushiyama> Uhm, users are not backed up in a dump?
[21:03:10] <assert> Users are stored in the admin database
[21:03:34] <kurushiyama> assert: Wrong assertion. You can store users anywhere you want.
[21:04:50] <assert> Not since auth schema v3
[21:05:20] <assert> All the user credentials (no matter which database) are stored in the admin db, if you backup the admin db, you backup all users
[21:05:58] <kurushiyama> Whut?
[21:06:48] <kurushiyama> assert: Have you a link to the docs? That is a _major_ change, imho
[21:07:24] <assert> https://docs.mongodb.org/manual/release-notes/2.6-upgrade-authorization/
[21:13:23] <kurushiyama> assert: Learning something new. Always, on any topic, every day. Now I am _really_ puzzled what the authSource / authenticationDatabase params are there for...
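This is easy to check from the shell: since auth schema v3 the credentials live in admin.system.users, and each entry's db field still records the database the user is defined on, which is what authSource / --authenticationDatabase refer to.

    use admin
    db.system.users.find({}, { user: 1, db: 1 })   // lists every user and the database it is defined on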
[21:18:24] <WiuEmPe> I like this... When a noob asks about simple things, the advanced users start a dispute ;)
[21:21:00] <kurushiyama> WiuEmPe: I have to admit that I only use authentication, and most of the time I simply stored system wide users in "admin", whereas I stored per DB users in the respective DB. Since I do my backups with LVM snapshots, I never noticed or had to know the exact position. However, for people doing backups with mongodump, this might make a huge difference.
[21:22:52] <WiuEmPe> kurushiyama, the previous admin only backed up data, didn't think about the system, LVM (what is LVM?!) or VMs... And after both SSDs crashed we had to set up the system from nothing...
[21:23:42] <WiuEmPe> kurushiyama, I advise you to use xfs and xfsdump - it is amazing!
[21:23:55] <kurushiyama> WiuEmPe: LVM = Logical Volume Manager.
[21:24:09] <kurushiyama> WiuEmPe: And I use xfs since... well, a long time ;)
[21:24:34] <WiuEmPe> kurushiyama, yes, on the servers set up by the previous admin i didn't find any LVM, so i think he hadn't heard about LVM ;)
[21:24:58] <WiuEmPe> kurushiyama, so why not in mongo?
[21:26:04] <kurushiyama> WiuEmPe: LVM snapshots are suggested, and it does not make much difference for me. I guess this is because an LVM snapshot freezes the blockdev automagically.
[21:26:31] <WiuEmPe> kurushiyama, xfsdump does the same
[21:28:35] <kurushiyama> WiuEmPe: Actually, I am not too sure about this. As per the xfs mailing list, xfsdump is not atomic.
[21:29:12] <kurushiyama> WiuEmPe: At least if we can trust Ian Rayner from SGI ;)
[21:29:22] <WiuEmPe> now i'm not sure about this ;)
[22:12:40] <GothAlice> Veeeeeeery interesting. Just had a pymongo app which lost all pool connections due to socket timeout go "full Ballmer" with 140% CPU use doing nothing.
[22:14:57] <GothAlice> Go home, pymongo, you're drunk. XP