#mongodb logs for Thursday the 28th of April, 2016

[01:43:25] <xmad> I need to upgrade a 400GB database from 3.0 to 3.2, ideally with no downtime. I looked into doing it with https://github.com/compose/transporter, which supports oplog tailing, but it has some issues and I can't spare too much time fixing them. What are my options?
[02:18:40] <cheeser> xmad: are you using a replica set?
[08:30:24] <jokke> hi
[08:30:33] <jokke> i'm trying to aggregate some time series data
[08:30:46] <jokke> here's the code: https://p.jreinert.com/isT2m1/javascript
[08:31:10] <jokke> it fails with "unknown group operator 'r'"
[08:31:58] <jokke> any ideas what this means?
[08:32:22] <jokke> isn't it possible to calculate averages of nested values?
[10:06:08] <jokke> i made it work with this workaround: https://p.jreinert.com/ed2nA/javascript
[10:06:47] <kurushiyama> jokke: Sorry, just got up (long night). LGTM, wouldn't have done it any differently.
[10:07:48] <jokke> kurushiyama: so $group can only work with flat documents?
[10:08:01] <jokke> or rather only produce flat documents
[10:08:34] <kurushiyama> jokke: Off the top of my head, with only one coffee yet, I'd say yes.
[10:09:36] <jokke> i have a question about $out. I want to feed the result of the aggregation into a collection. problem: $out overwrites the collection if it exists. If i have different retentions in my collections that's a problem
[10:09:54] <kurushiyama> jokke: you could probably try "value.r" and "values.i" as keys in the group stage, thinking of it.
[10:10:04] <jokke> kurushiyama: yeah i tried
[10:10:09] <jokke> doesn't work
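A short aside on the workaround linked above: $group cannot emit dotted or nested names as output fields, so the usual pattern is to average into flat fields and re-nest afterwards. A minimal sketch, with hypothetical collection and field names:

```javascript
// $group output names must be flat; nested paths may still be *read*
// inside the accumulators, and a later $project re-nests the result.
db.samples.aggregate([
  { $group: {
      _id: "$data_source_id",
      avg_r: { $avg: "$values.r" },   // reading nested fields is fine
      avg_i: { $avg: "$values.i" }
  }},
  { $project: {
      values: { r: "$avg_r", i: "$avg_i" }   // re-nest for the output
  }}
])
```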
[10:10:28] <jokke> basically i want to create 15 min aggregates from the data by averaging the values
[10:10:28] <kurushiyama> Uh, wait.
[10:10:55] <kurushiyama> So, why would overwriting be a problem, then?
[10:11:21] <jokke> because i want to keep a larger amount of 15 min aggregates
[10:12:11] <jokke> sth like 3 months worth of raw data, 12 months worth of 15 min aggregates, infinite amount of 1h aggregates or so
[10:12:19] <kurushiyama> jokke: Ah, so you want to downsample the data, and delete the original documents after the averages were created?
[10:13:09] <jokke> no the deleting i would do separately. for the first three months i'd have data in all three collections
[10:14:00] <kurushiyama> jokke: Hm, tricky.
[10:14:04] <jokke> indeed
[10:14:53] <jokke> i'll probably have to resort to insert_many with the result of the aggregation
[10:15:01] <kurushiyama> Basically, we are talking of two things here: downsampling and merging.
[10:15:13] <jokke> yeah
[10:15:26] <jokke> but i'd make sure there'd be no duplicates
[10:16:52] <kurushiyama> I'd probably have a collection for each granularity level and use $out for each level within the maximum retention time of the original data.
[10:17:25] <jokke> that's a lot of overhead
[10:17:26] <kurushiyama> As for "archive" data, you will most likely be better off doing a double bulk
[10:18:11] <kurushiyama> jokke: You do not _have_ to, you can calc it on the fly. See the preaggregated data as a cache.
[10:18:31] <jokke> not sure i know what you mean...
[10:18:44] <kurushiyama> Ok, your max retention is 3 months
[10:19:07] <jokke> for raw data, yes
[10:19:45] <kurushiyama> Let's say you want a granularity of, say, 5 minutes for the last day, 1h for up to a week, 1d for up to a month and 1w until max retention time.
[10:20:02] <kurushiyama> (just sample values)
[10:20:08] <jokke> that's the wrong way around
[10:20:14] <kurushiyama> Huh?
[10:20:28] <jokke> the lower the resolution of the data, the longer i want to keep it around
[10:20:35] <kurushiyama> Wait
[10:20:45] <kurushiyama> Doesnt matter for now
[10:20:48] <jokke> ok
[10:20:58] <kurushiyama> It is the granularity to display
[10:21:03] <jokke> yeah
[10:21:13] <kurushiyama> Not the data retention policy
[10:21:21] <jokke> k, i'm listening :)
[10:22:24] <kurushiyama> Ok, with this granularity and your max retention of the original data, you _could_ display everything up to 3 months with an "on-the-fly" aggregation
[10:22:58] <jokke> sure
[10:23:04] <kurushiyama> Great
[10:23:07] <jokke> it'd be slow as hell though :D
[10:23:36] <kurushiyama> Not necessarily, with proper early matches, good indexing and a good shard key, if applicable.
[10:24:03] <jokke> mhm
[10:24:04] <kurushiyama> Anyway, modelling first, optimization later
[10:25:24] <kurushiyama> So, an option would be to preaggregate the data every <insertMinGranularityHere>
[10:25:39] <jokke> yes
[10:27:01] <kurushiyama> which – you are right with that – would cause quite some overhead (albeit not as much as I would guess, since we'd do an average on <MinGranularity> at least), but access to the refined data would be blazing fast, then
[10:27:30] <jokke> yeah
[10:28:59] <kurushiyama> Ok, we have dealt with the downsampling of the data for the maxRetentionTime
[10:29:15] <jokke> yeah
[10:31:47] <kurushiyama> So, the system ran for 3 months, everyone lived happily, and now we want to have the data surpassing the max retention time aggregated at the granularity chosen for +3 months, the result merged with the according collection, and the original data deleted.
[10:32:48] <kurushiyama> Put this way, it is pretty obvious what to do.
[10:32:48] <jokke> yes
[10:33:00] <jokke> is it?
[10:33:02] <jokke> :)
[10:37:24] <kurushiyama> Get all documents older than now minus 3 months and build the average, say, for a calendar month. For each of the documents returned from the aggregation, update the according one in the +3m collection if it exists, insert otherwise.
[10:37:26] <jokke> i can still only think of not using $out but bulk inserting the results of the aggregation
[10:37:48] <kurushiyama> You dont, since it is not for merging.
[10:38:02] <jokke> yeah
[10:38:11] <jokke> ok so i wasn't that far off
[10:38:13] <jokke> i guess
[10:38:36] <jokke> does ordered: false bring any performance benefits with bulk inserts?
[10:38:39] <kurushiyama> Just slightly, actually.
[10:38:45] <jokke> hm ok
[10:38:53] <kurushiyama> nah, just slightly off
[10:38:59] <jokke> ah ok
[10:39:04] <kurushiyama> unordered is way faster than ordered.
[10:39:09] <jokke> ok good
[10:40:14] <kurushiyama> Just make sure you execute the bulk insert into the +3m collection _before_ you execute the bulk delete on the original data ;)
[10:40:38] <jokke> hehe yeah
[10:42:01] <kurushiyama> Because here is the trick I used once: Simply add the processed document '_id's to a result field for the aggregation you use for outdating the data, and use this result set to add deletes to a bulk delete.
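A hedged sketch of that double-bulk pattern, with hypothetical collection names and 1h bucketing, assuming documents shaped like { _id: { data_source_id, timestamp }, value: <number> } where timestamp is a Date:

```javascript
// Aggregate expired raw data into 1h averages, upsert those into the
// archive collection, and only then delete the originals by the _ids
// collected along the way.
var cutoff = new Date(Date.now() - 90 * 24 * 3600 * 1000); // ~3 months
var inserts = db.agg_1h.initializeUnorderedBulkOp();
var deletes = db.raw.initializeUnorderedBulkOp();
var n = 0;

db.raw.aggregate([
  { $match: { "_id.timestamp": { $lt: cutoff } } },        // early match
  { $group: {
      _id: { source: "$_id.data_source_id",
             y: { $year: "$_id.timestamp" },
             d: { $dayOfYear: "$_id.timestamp" },
             h: { $hour: "$_id.timestamp" } },             // 1h buckets
      avg: { $avg: "$value" },
      ids: { $push: "$_id" }                               // remember originals
  }}
]).forEach(function (doc) {
  inserts.find({ _id: doc._id }).upsert()
         .updateOne({ $set: { avg: doc.avg } });           // merge, don't overwrite
  doc.ids.forEach(function (id) {
    deletes.find({ _id: id }).removeOne();
  });
  n++;
});

if (n > 0) {
  inserts.execute();  // archive first...
  deletes.execute();  // ...only then remove the raw documents
}
```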
[10:42:10] <moneybeard> Hmm wonder if I can speak now?
[10:42:20] <kurushiyama> Welcome, moneybeard!
[10:42:25] <jokke> moneybeard: hi
[10:42:32] <moneybeard> kurushiyama: Hey thanks!
[10:42:38] <moneybeard> jokke: Hey guys...
[10:42:54] <moneybeard> I have a very straight forward question...
[10:43:20] <moneybeard> Is it bad to take a vmware snapshot of a running primary mongodb node in a cluster....
[10:43:23] <moneybeard> ?
[10:45:57] <kurushiyama> moneybeard: Ok, I would never back up from a primary, for starters
[10:46:13] <kali> moneybeard: it will freeze the vm, so you will probably trigger a failover
[10:46:31] <moneybeard> Yeah that is exactly what happened but not sure if it failed over completely.
[10:46:40] <kurushiyama> moneybeard: kali is right, it will block, which is the reason why I'd do it from a secondary
[10:46:51] <moneybeard> Then node01 and 02 complained about needing a reload as disk.service was out of sync
[10:47:45] <moneybeard> kurushiyama: So do I need to do the reload to get things back in a good state or will they sync up on their own in time?
[10:48:20] <jokke> kurushiyama: how can i check whether my collection is sharded evenly
[10:48:58] <kurushiyama> jokke: sh.status(true) or db.collection.getShardDistribution(). As *cough* documented ;)
[10:49:05] <moneybeard> kurushiyama: The issue is I needed a snapshot of the OS because I was upgrading some packages, so it had to be the primary. In that case I was thinking my best bet is to shut the node 1 primary down, let a new primary take over, then do the snapshot on node 1
[10:49:24] <jokke> kurushiyama: *cough* i knew that *cough*
[10:49:27] <jokke> :)
[10:49:55] <kurushiyama> Why does it have to be primary? You should always update a secondary first.
[10:50:00] <kurushiyama> moneybeard: ^
[10:51:02] <moneybeard> ^ I was patching system packages unrelated to mongo
[10:54:41] <kurushiyama> moneybeard: Doesn't matter. Do not interrupt a service without need. Since you can do the update on secondaries first, fire them up again and have the primary step down prior to updating, there is no need.
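In shell terms the advice boils down to this (a minimal sketch):

```javascript
// Patch each secondary in turn; once all are back in SECONDARY state,
// demote the primary so it can be patched without interrupting service.
rs.stepDown(60)   // run on the primary; it yields for 60s and an election follows
```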
[10:55:36] <Industrial> Hi.
[10:55:42] <moneybeard> kurushiyama: True that. But tell me why this says that if journaling is enabled it can be done live?
[10:56:15] <Industrial> I'm trying to get a mongodb replicaset working with docker.
[10:56:18] <kurushiyama> moneybeard: Whut?
[10:56:23] <moneybeard> From: https://docs.mongodb.org/manual/administration/production-notes/ It is possible to clone a virtual machine running MongoDB. You might use this function to spin up a new virtual host to add as a member of a replica set. If you clone a VM with journaling enabled, the clone snapshot will be valid. If not using journaling, first stop mongod, then clone the VM, and finally, restart mongod.
[10:56:36] <Industrial> I now have a shell script that does `echo 'rs.initiate(CONFIGHERE)' | mongo`
[10:56:57] <Industrial> I can only run this once (and i did it wrong) and now every time it says the replicaset already exists
[10:57:01] <kurushiyama> moneybeard: clone != snapshot
[10:57:06] <Industrial> Even if I clear out the mongodb data directory
[10:57:13] <Industrial> How do I undo completely a rs.initiate() ?
[10:57:29] <moneybeard> kurushiyama: really according to vmware?
[10:57:38] <kurushiyama> Industrial: restart machines without replset option, remove "local" database
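In shell terms, after restarting each mongod without the --replSet option:

```javascript
// The replica set configuration lives in the "local" database; dropping
// it lets a later rs.initiate() start from a clean slate.
db.getSiblingDB("local").dropDatabase()
```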
[10:58:27] <kurushiyama> moneybeard: iirc, a clone is a _COW_ (copy-on-write) snapshot of the original data, with a dependency on the original image.
[10:58:29] <moneybeard> kurushiyama: NVM I found my answer: "Clone: An exact copy of a VM at a specific moment in time, although this is usually performed on a powered off VM (a clone of a running VM is called a snapshot)"
[10:58:43] <kurushiyama> moneybeard: Though I do not use VMs for MongoDB, anyway.
[10:59:21] <moneybeard> kurushiyama: Ok true that. Hey thanks for your advice and expertise.
[11:00:07] <kurushiyama> moneybeard: You are welcome!
[11:19:27] <jokke> do you guys know if i can get the total size of an index for a collection with the ruby mongodb driver?
[11:19:40] <jokke> *all indexes
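No answer came in the channel; for reference, the number comes from the collStats command, which any driver can issue (the Ruby driver exposes a generic command helper on the database object). In the shell:

```javascript
// totalIndexSize is the size in bytes across all indexes of the collection.
db.runCommand({ collStats: "mycoll" }).totalIndexSize
// equivalently, via the shell helper:
db.mycoll.stats().totalIndexSize
```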
[11:25:50] <FuzzySockets> What's the best way to seed some test data that has collections with references to other ObjectIds?
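One plain-driver way to seed such data (a hedged sketch with hypothetical collections): generate the parent _id up front and reuse it in the children, no DBRefs required:

```javascript
// Insert the parent with a pre-generated _id, then reference it directly.
var authorId = ObjectId();
db.authors.insert({ _id: authorId, name: "test author" });
db.posts.insert([
  { title: "post 1", author_id: authorId },
  { title: "post 2", author_id: authorId }
]);
```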
[11:30:04] <kurushiyama> FuzzySockets: you mean DBRefs?
[11:31:15] <FuzzySockets> kurushiyama: Not sure what mongoose is using under the hood
[11:31:43] <kurushiyama> FuzzySockets: I'm out. I do not use Mongoose, and I strongly discourage doing so.
[11:33:57] <FuzzySockets> kurushiyama: What do you recommend using instead?
[11:34:01] <FuzzySockets> I'm on nodejs
[11:34:07] <kurushiyama> FuzzySockets: node driver.
[11:34:27] <kurushiyama> FuzzySockets: + your brain, ofc. ;)
[11:34:44] <kurushiyama> FuzzySockets: Here is the problem _I_ have with Mongoose.
[11:35:42] <kurushiyama> FuzzySockets: It forces an SQL approach to data modelling: Define entities and their properties + the relations between the entities, model them, then bang your head against the wall to get the data you need to implement your use cases.
[11:36:23] <kurushiyama> FuzzySockets: Basically, it makes you treat MongoDB like an SQL-ish object store. Which could not be further from the truth, imho
[11:36:24] <FuzzySockets> kurushiyama: But using it doesn't tie you down to only using it
[11:36:53] <FuzzySockets> kurushiyama: I could see how that'd be a problem if you don't understand designing for a document store
[11:37:06] <kurushiyama> FuzzySockets: I fail to see any advantages. It is slower, it is bloated, and it abstracts away MongoDB's real strengths
[11:38:45] <kurushiyama> FuzzySockets: The slower and bloated part is not from my own experience, but StephenLynx, who is usually rather active on the channel, says so, and I take his word for fact.
[11:41:12] <kurushiyama> FuzzySockets: Imho, the proper approach to data modelling in MongoDB is to define your use cases, order them by how common they are, derive the questions you'd have to the data from them and create an optimized data model, with more common use cases having precedence over rarer ones when it comes to optimization.
[11:44:29] <kurushiyama> FuzzySockets: Not that this could not be achieved with Mongoose, but then you end up with an unnecessary abstraction layer.
[11:47:47] <FuzzySockets> kurushiyama: Speed of development is a higher concern for me right now. I already know mongoose syntax, and it'd be a fairly simple interface to replace if speed ever became an issue down the line.
[11:47:57] <FuzzySockets> But I'll definitely keep it in mind
[11:49:01] <kurushiyama> FuzzySockets: That is amassing technical debt. Just the other day there was some guy who used Mongoose, and now that his application is getting some real momentum, it fails to scale. I can understand your concern, just do not blame it on MongoDB ;)
[11:49:58] <kurushiyama> And the interface is not the problem: the data models produced by Mongoose are. And beware of populate(); avoid it at all costs.
[11:49:59] <FuzzySockets> kurushiyama: I'm blaming it on my lack of knowledge on the native driver as well as already having 3-4 models written in mongoose. Besides, OOP principle 101, code to an interface, not an implementation.
[11:51:32] <kurushiyama> FuzzySockets: Well, you have been warned ;) Just be prepared that you'll need an ETL specialist later down the road.
[11:51:55] <FuzzySockets> kurushiyama: my application is actually fairly flat, and not incredibly write intensive
[11:54:22] <kurushiyama> FuzzySockets: You should be able to judge that better than I do. And I make a good part of my living from data-remodelling and (query) optimization, so I probably should not argue too much ;)
[11:55:22] <FuzzySockets> kurushiyama: Can't imagine you'd have much of a stance anyways considering you don't know what my application does ;)
[11:56:12] <kurushiyama> FuzzySockets: Well, I do not exactly starve. Go figure ;P
[12:01:46] <jokke> any idea what this might be about? https://p.jreinert.com/AMzlw/
[12:10:48] <scruz> hi
[12:11:01] <kurushiyama> scruz: Hoi!
[12:11:07] <scruz> using the aggregation framework, how can i count null values?
[12:11:12] <scruz> hey kurushiyama
[12:11:16] <scruz> how goes it?
[12:11:39] <kurushiyama> very well, thank you!
[12:11:56] <scruz> $ifNull returns the original value if it’s not null, so it isn’t useful in this case.
[12:13:15] <kurushiyama> scruz: Huh? Can you pastebin your aggregation and one or two sample docs?
[12:14:49] <scruz> each document has a field called status, which could be null. i want to count how many statuses are null
[12:19:50] <scruz> so basically, i want to $group by null and $sum by 1 for all the documents
[12:20:31] <kurushiyama> scruz: http://hastebin.com/ojuyuyakug.coffee
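That paste has since expired; presumably it showed something along these lines. Note that matching status: null also counts documents where the field is missing entirely, which is the caveat raised below:

```javascript
// Count documents whose status is null -- in the query language, equality
// to null matches explicit nulls AND missing fields alike.
db.checknull.aggregate([
  { $match: { status: null } },
  { $group: { _id: null, count: { $sum: 1 } } }
])
```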
[12:22:32] <scruz> can’t argue with that.
[12:25:16] <kurushiyama> scruz: Alternatively, if you want the count for all possible values of status, including _explicit_ null: db.checknull.aggregate({$group:{_id:"$status",count:{$sum:1}}})
[12:27:14] <scruz> thanks! i think i’m going to use a composite ID then
[12:27:36] <kurushiyama> scruz: As you wish. There is a caveat, though. Gimme a sec
[12:28:48] <kurushiyama> scruz: http://hastebin.com/qoqatecoxa.coffee
[12:29:36] <scruz> thanks. what’s the caveat?
[12:29:55] <kurushiyama> have a close look at the result and the {foo:"bar"} doc
[12:31:04] <scruz> it doesn’t have the status field and was included in the group for status == ‘other’
[12:31:29] <scruz> sorry, status == null
[12:31:40] <scruz> i think that’s okay for my use case.
[12:31:49] <kurushiyama> scruz: Nope, it was included in the group null, since non-existent fields by definition evaluate to null
[12:32:17] <scruz> we treat null as missing anyway.
[12:32:40] <scruz> and i want to get a count specifically for missing data apart from the other explicit statuses.
[12:33:49] <kurushiyama> scruz: Then you are actually using the convention to your benefit.
[14:38:59] <jokke> kurushiyama: i see some weird stuff with some of my schemas when stress testing the cluster. it seems that some values are lost: db.flat.count() => 180000, db.panel_aggregate.aggregate([{$unwind: '$values'}, ..., { $group: { _id: '1', count: { $sum: 1 } } }]) => { _id: 1, count: 162340 }
[14:39:07] <jokke> any ideas what might be happening/
[14:39:08] <jokke> ?
[14:39:20] <kurushiyama> See SERVER3645
[14:39:36] <jokke> hm?
[14:40:06] <kurushiyama> jokke: https://jira.mongodb.org/browse/SERVER-3645
[14:40:44] <jokke> no 180000 is definitely the right count
[14:41:40] <kurushiyama> please show the complete aggregation, then.
[14:42:01] <kurushiyama> oh, and those are on two different collections.
[14:42:27] <jokke> yes
[14:42:48] <jokke> but i make sure to insert the same amount of "raw values"
[14:42:54] <jokke> the schemas are just different
[14:43:29] <jokke> flat is sth like { _id: { data_source_id: 'foo', timestamp: ... }, value: [1, 2] }
[14:44:29] <jokke> and panel_aggregate is sth like { _id: { panel_id: 'foo', timestamp: ... }, values: [{ timestamp: ..., data_source_id: 'foo', value: [1, 2] }, ...] }
[14:45:13] <jokke> here's the complete migration: db.panel_aggregate.aggregate([{$unwind: '$values'}, {$project: { _id: { panel_id: '$_id.panel_id', data_source_id: '$values.data_source_id', timestamp: '$values.timestamp' }, value: '$values.value' }}, {$group: { _id: '1', count: { $sum: 1 } } }])
[14:45:17] <kurushiyama> jokke: a) please make sure you have used the proper writeConcern. b) use pastebin if you paste code. c) In order to help you, I need what I request. I do not do this out of curiosity.
[14:45:25] <jokke> *aggregation
[14:46:22] <jokke> to a) i haven't thought about that. would you mind elaborating that a bit?
[14:48:24] <jokke> i didn't specify any write concern
[14:50:10] <kurushiyama> jokke: MongoDB does not just "lose arbitrary data". There can be _very_ special circumstances in which that happens, and _only_ if your writeConcern does not match your durability needs: https://docs.mongodb.org/manual/reference/write-concern/ Every single time somebody claims that MongoDB "loses data" it can be tracked down to a user's fault. As a matter of fact, the control over durability on MongoDB is much higher than
[14:50:10] <kurushiyama> with any other DBMS I have encountered, including Cassandra
[14:51:43] <kurushiyama> So, in order to debug what has happened, please do a .count() on both collections first
[14:53:17] <jokke> db.panel_aggregate.count() => 19
[14:53:33] <jokke> ahh
[14:53:40] <jokke> i think i know what the problem is
[14:53:48] <jokke> it's most certainly my code
[14:53:54] <kurushiyama> ;P
[14:53:59] <awal> what is `=>`?
[14:54:06] <jokke> nothing
[14:54:18] <jokke> it should say "results in"
[14:54:23] <kurushiyama> jokke: And _please_ read about writeConcern, readPreferrence and readConcern.
[14:54:36] <jokke> i will, promise!
[14:54:53] <jokke> but isn't w: 1 perfectly fine?
[14:54:54] <awal> `Promise.reject('sorry');` :)
[16:47:30] <kurushiyama> jokke: Sort of. If you have a replSet, you might want to make sure that the write made it to the majority of members. Reason: the member with the most recent data will be elected primary in a failover situation. With a wc:1, if a failover happens while the data has not made it to a secondary yet, a so-called rollback will happen when the member joins again.
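In shell terms, a minimal sketch of asking for majority acknowledgement (document shape borrowed from the flat schema above):

```javascript
// The insert only returns once a majority of replica set members have
// the write, so it cannot be rolled back by a subsequent failover.
db.flat.insert(
  { _id: { data_source_id: "foo", timestamp: new Date() }, value: [1, 2] },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```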
[17:24:57] <saml> kurushiyama, given unstructured json objects, how would you insert them to mongo using mgo?
[17:28:07] <saml> never mind. i found https://godoc.org/gopkg.in/mgo.v2/bson#Getter
[17:45:23] <brucebag> anyone using 'hint' in their queries?
[17:45:49] <brucebag> seems that mongo is picking the wrong index for many of our queries
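For reference, a minimal sketch of hint() with a hypothetical collection and index; explain() is the usual way to confirm what the planner chose:

```javascript
// Force the { status: 1, created: 1 } index; the index must exist,
// otherwise the query errors out.
db.orders.find({ status: "A", created: { $gt: ISODate("2016-04-01") } })
         .hint({ status: 1, created: 1 })
// Compare with db.orders.find({...}).explain() to see the default choice.
```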
[17:48:21] <ramnes> I have a really weird behavior
[17:48:38] <ramnes> where a same connection command works differently on two computers
[17:48:43] <ramnes> one can connect perfectly
[17:48:56] <ramnes> and it prompt a "Enter password:" on the other one
[17:49:04] <ramnes> for a same server
[17:49:46] <ramnes> mongo hostname/database --username 'user_name' --password 'WeIrDpAsSwOrD' --eval 'rs.slaveOk()' --shell
[17:49:53] <ramnes> we both have the same mongodb version
[17:50:04] <brucebag> ramnes: is your mongod.conf the same on all machines?
[17:50:18] <ramnes> we are both trying to connect to the exact same machine
[17:50:23] <brucebag> i see sorry.
[17:50:48] <ramnes> actually, the computer who can't connect just can't connect on any server
[17:51:30] <ramnes> we thought it could be that mongodb was compiled without SSL support on that machine
[17:51:33] <ramnes> but no, there is SSL
[17:56:08] <ramnes> okay, this is really weird
[17:56:16] <ramnes> I can connect to a local database
[17:56:44] <ramnes> but if I add an user on that database and try to connect with that user/password combo, it still asks me a password
[18:01:37] <StephenLynx> localhost exception
[18:02:20] <StephenLynx> mongo behaves differently when you connect from a remote machine
[18:02:52] <ramnes> StephenLynx: on another machine it works perfectly
[18:03:06] <ramnes> and the first test was on a remote server anyway
[18:03:18] <StephenLynx> I think I missed something
[18:03:25] <StephenLynx> nvm then
[18:13:37] <kurushiyama> saml: I would try to prevent that at all costs ;)
[18:17:24] <ramnes> WTF
[18:17:30] <ramnes> --password=asdf works
[18:17:36] <ramnes> --password 'asdf' doesn't
[18:17:52] <Calinou> what about '--password asdf'
[18:18:04] <Calinou> (enclosing in quotes, to make Bash and MySQL happy)
[18:18:13] <Calinou> may not apply to other software/OS though :P
[18:18:19] <ramnes> MySQL?
[18:18:24] <ramnes> aren't we on #mongodb?
[18:25:01] <kurushiyama> ramnes: Maybe more like "#ComeToMeAllYouWhoAreWearyAndBurdenedWithDatabaseRelatedQuestions"?
[18:25:35] <kurushiyama> ramnes: btw: giving passwords as a param is not necessarily the best idea, either ;)
[18:26:55] <ramnes> the command line is generated (we have a lot of databases) and executed in a subprocess, it doesn't usually go in bash/history
[18:28:46] <ramnes> damn, I checking all the mongodb deps like ncurses, boost, etc
[18:28:52] <ramnes> everything is the same on both computers
[18:28:54] <ramnes> even readline
[18:28:58] <ramnes> what the heck
[18:29:27] <ramnes> s/I/I'm/
[18:33:09] <kurushiyama> ramnes: You are trying a recompile?
[18:38:08] <ramnes> already tried multiple times kurushiyama
[18:39:05] <ramnes> tried 3.0.10, 3.2.4, and 3.2.5 IIRC
[18:39:09] <kurushiyama> to success or not? ;)
[18:39:12] <ramnes> nop
[18:39:16] <ramnes> same problem
[18:39:38] <kurushiyama> ramnes: What exactly? And why are you trying to compile, if I may ask.
[18:39:55] <ramnes> dev-db/mongodb on Gentoo
[18:40:17] <ramnes> which contains the database and the shell as far as I know
[18:40:58] <ramnes> (btw, that could be great to seperate both, so I don't have to recompile the whole DB each time :p)
[18:41:12] <ramnes> but I'm pretty sure it's a bad situation with something else
[18:41:27] <ramnes> because again, other computers with same versions don't have any problem
[18:41:49] <ramnes> but I don't know what library could be implicated in that problem
[18:42:12] <kurushiyama> Guess a build flag had unforeseen consequences on one of the deps.
[18:42:23] <ramnes> we have the same flags on mongodb
[18:43:00] <ramnes> on boost and readline too
[18:43:03] <kurushiyama> The idea was more that a buildflag caused a dependency to be built in a way not expected by MongoDB.
[18:43:14] <ramnes> mhhhh
[18:44:22] <kurushiyama> Just a theory matching the facts of your description, though.
[18:46:02] <ramnes> yeah yeah, looking that way
[18:46:04] <ramnes> that plausible
[18:46:08] <ramnes> that's*
[18:48:14] <kurushiyama> ramnes: Would be great if you found out the necessary per-package build flags and documented them ;)
[18:51:29] <ramnes> kurushiyama: https://gist.github.com/ramnes/c3522a69376bf692b2d82af552fd68ba
[18:51:46] <ramnes> does anything here look like it could cause this problem?
[18:53:25] <kurushiyama> My gentoo experience is _ooooooold_. Not sure about the -ipv6
[18:53:54] <ramnes> there's no -ipv6 useflag on mongodb anyway
[18:54:04] <ramnes> s/-//
[18:55:00] <kurushiyama> And of course not about the impact of those buildflags on the dependencies. And an undefined buildflag does not necessarily mean "no impact"... ...iirc... *blush*
[18:55:14] <ramnes> of course
[18:55:29] <ramnes> but then mongodb doesn't have a lot of dependencies
[18:55:54] <kurushiyama> I think you have no choice but to check them
[18:57:25] <ramnes> that's *really* weird
[18:58:17] <ramnes> you know what
[18:58:21] <ramnes> let's try to remove -ipv6
[18:58:26] <ramnes> and see if any mongodb dep rebuild
[18:58:57] <Derick> ramnes: what's the build errors you get?
[18:59:05] <ramnes> Derick: no build error
[18:59:20] <Derick> then in which other way does it fail?
[18:59:34] <ramnes> --password=asd works
[18:59:41] <ramnes> --password 'asd' doesn't
[18:59:50] <ramnes> and that's just on one computer
[19:00:06] <Derick> build it with debug flags and do some GDB work?
[19:00:07] <ramnes> another computer with the same mongodb version (and same boost, readline, etc)
[19:00:14] <ramnes> doesn't have this problem
[19:00:28] <Derick> well, there must be a difference
[19:00:32] <Derick> Do the binaries work?
[19:00:43] <ramnes> what do you mean?
[19:00:50] <Derick> do our binaries work
[19:00:53] <Derick> the precompiled ones
[19:01:11] <ramnes> I didn't know those existed
[19:01:49] <Derick> https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.2.6.tgz
[19:01:54] <ramnes> thanks
[19:02:57] <ramnes> yep Derick
[19:03:23] <ramnes> it works
[19:04:10] <kurushiyama> Derick: Those are statically compiled, right?
[19:04:42] <Derick> yes - except for openssl I think (which is not a problem for the link above, as it doesn't have ssl compiled in)
[19:05:03] <ramnes> what libs are linked?
[19:05:27] <Derick> i don't know, sorry
[19:05:36] <kurushiyama> ldd ;)
[19:06:20] <ramnes> wow, last time I used ldd was like ten years ago
[19:06:32] <ramnes> but yeah, good point
[19:12:17] <ramnes> that's a bit cumbersome since ldd only lists dynamically linked libraries
[19:12:39] <Yuri4_> Hi, guys! I need to practice transferring databases between servers. Anybody know where I can find a simple database that I can train on?
[19:13:02] <ramnes> Yuri4_: https://docs.mongodb.org/getting-started/shell/import-data/
[19:13:15] <ramnes> https://raw.githubusercontent.com/mongodb/docs-assets/primer-dataset/primer-dataset.json
[19:14:17] <Yuri4_> ramnes, yeah, tried that one already. This is just a single .json file; all you need to do is import it and you are all done. Looking for a bit more complex database now.
[19:15:15] <ramnes> okay, then use mongo-smasher
[19:15:29] <ramnes> Yuri4_: https://github.com/duckie/mongo_smasher
[19:16:55] <ramnes> going back home
[19:17:07] <kurushiyama> Yuri4_: What exactly are you trying to achieve?
[19:17:15] <kurushiyama> ramnes: Have a nice one!
[19:17:35] <ramnes> kurushiyama, Derick, if you guys have any idea, don't hesitate to send me a PM and I'll take a look tomorrow
[19:18:00] <ramnes> shameless plug before quitting: https://github.com/ramnes/awesome-mongodb
[19:18:02] <Derick> ramnes: you can use nm to find objects
[19:18:15] <ramnes> I'll try that tomorrow Derick, thanks
[19:18:19] <ramnes> see you guys
[19:18:22] <Yuri4_> I have a freelance order to transfer databases. Probably will need to do that with a remote connection to the client's machine. Just wanna make sure that all goes smooth, that I don't make stupid mistakes and don't waste the client's time.
[19:19:01] <kurushiyama> Yuri4_: Define "transfer" a bit more precisely, please.
[19:19:10] <Yuri4_> export-import
[19:19:16] <Yuri4_> from one server to another
[19:20:16] <Yuri4_> I haven't had any experience with MongoDB, so a bit nervous.
[19:20:32] <kurushiyama> Yuri4_: Is there preexisting data on the target server? Do you need to merge it? Is the target server a production server? Is any downtime planned/acceptable? Are we talking of a sharded environment, a replica set or standalones?
[19:21:41] <Yuri4_> kurushiyama, standalone server, downtime is not acceptable. New server is a fresh install.
[19:21:53] <Yuri4_> So, no merging is needed
[19:22:09] <kurushiyama> Yuri4_: As a word from one freelancer to the other: accepting a contract you do not have any clue about is... not exactly a quality mark. You _live_ off your reputation. If you sack that, it's going to be a major problem.
[19:23:42] <kurushiyama> Yuri4_: Ok, now lets see how we prevent that.
[19:24:23] <Yuri4_> kurushiyama, I appreciate your input!
[19:25:08] <kurushiyama> Yuri4_: First: why is downtime not acceptable if the target instance is empty? I assume the source is production data then?
[19:25:31] <Yuri4_> kurushiyama, sorry my mistake.
[19:25:39] <Yuri4_> Original server shouldn't be affected.
[19:25:49] <Yuri4_> The target server is already down
[19:26:05] <Yuri4_> As soon as target server is setup, original server will be shut down
[19:26:41] <kurushiyama> Wait. What? Are we talking of a database migration from one server to the other, then?
[19:26:45] <Yuri4_> yes
[19:27:29] <kurushiyama> Ok, here is the problem: As long as the original server runs, it will get data, correct?
[19:28:06] <Yuri4_> kurushiyama, not sure what you are asking
[19:28:19] <Yuri4_> Exporting the database from the original server should not affect uptime
[19:29:36] <kurushiyama> Yuri4_: The problem is that the data will change after you did an export.
[19:30:10] <kurushiyama> Yuri4_: So exporting/dumping/snapshotting is _not_ an option.
[19:31:17] <kurushiyama> Yuri4_: The only way to do a data migration without downtime (or, to be precise, with only minimal downtime) is to build a temporary replica set, consisting of the old server, the new server and an arbiter.
[19:32:22] <Yuri4_> kurushiyama, thanks for the input. Gonna take a break then set up a test lab. Cheers!
[19:35:33] <kurushiyama> Yuri4_: Wait
[19:35:54] <kurushiyama> Yuri4_: What MongoDB version runs on the original server?
[19:39:44] <Yuri4_> kurushiyama, it's 2.4.14
[19:40:15] <kurushiyama> Yuri4_: And on the target machine?
[19:40:31] <Yuri4_> 3.0.11 =)
[19:42:10] <kurushiyama> Yuri4_: You should at least bother to read the release notes. This is not going to work the way I suggested. So either you need downtime, to make sure the data does not get changed while or after you take the dump/export, or you need to downgrade the target machine to 2.6 first
[19:43:25] <Yuri4_> kurushiyama, thank you. I'm very ashamed. I deserved this. Going to get my head straight now. Gracias.
[19:43:50] <kurushiyama> Yuri4_: We can talk you through this.
[19:44:34] <Yuri4_> kurushiyama, I'd rather not waste anyone's time and be very thorough. You are too kind.
[19:44:56] <kurushiyama> Yuri4_: So take it as a lesson learned. ;)
[19:45:25] <Yuri4_> kurushiyama, yes I will. You are the best!
[19:46:13] <kurushiyama> Yuri4_: Ok, the idea is to sync to the new server, which runs 2.6. As soon as it is synced, you shut down the original server and update it to 3.0.x
[19:46:49] <Yuri4_> kurushiyama, got it.
[19:47:43] <kurushiyama> Yuri4_: ...removing the contents of dbpath before the update. Then you wait until the data is synced back to the original server, which is then on the target version.
[19:49:16] <Yuri4_> Oh, by the way, what tool to manage database would you recomend? I currently use mongochef
[19:49:29] <kurushiyama> Yuri4_: Then, you can simply update the new server to 3.0, wait until the data from the original server synced, have it step down. Then, you can restart the new server as a standalone and you are done. With minimal downtime, although this process is going to take a while.
[19:49:36] <kurushiyama> Yuri4_: Yes. It is called shell.
[19:49:46] <Yuri4_> should've figured
[19:50:20] <Yuri4_> kurushiyama, thanks man!
[19:54:19] <kurushiyama> Yuri4_: So here is what has to happen _before_ the downtime: you need to set up the arbiter and the new server in 2.6, start them with the replset option as documented, but do _not_ initialize the replica set yet.
[19:55:41] <Yuri4_> kurushiyama, that's cool. I've done similar jobs with MySQL before.
[19:57:09] <kurushiyama> Yuri4_: Then, downtime happens. The MongoDB connection URI of the application needs to be changed so that it connects to the replica set we are going to create during the downtime. In parallel, you restart the original server with the same replset option you started the new servers with and initiate the replica set. Then, you add the new server and the arbiter to the replica set. At this point, the application can be
[19:57:09] <kurushiyama> restarted. Proceed as described above.
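Condensed into shell commands (host names hypothetical; the set name must match the --replSet option all three mongods were started with):

```javascript
// Run against the original (2.4) server during the downtime window:
rs.initiate({
  _id: "migration",                                // must equal --replSet
  members: [{ _id: 0, host: "old-server:27017" }]
});
rs.add("new-server:27017");    // the fresh 2.6 server
rs.addArb("arbiter:27017");    // tie-breaker for elections
// Wait until "new-server" shows SECONDARY in rs.status(), then continue
// with the upgrade/step-down sequence described above.
```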
[19:58:26] <FuzzySockets> that's a lot of downtime ... sorry downtime !
[19:59:30] <Yuri4_> Alright, I got more than I need. Cherrs guys! Hope to see you again.
[19:59:36] <kurushiyama> FuzzySockets: is it? Doing an rs.initiate() and two rs.add()s takes seconds.
[20:00:53] <kurushiyama> FuzzySockets: if you have a better idea to minimize downtime for a migration from 2.4 to 3.0, I'd be happy to hear it!
[20:01:27] <FuzzySockets> kurushiyama: I was being facetious... you keep pinging some guy in here named downtime
[20:02:04] <downtime> grumble grumble
[20:02:08] <FuzzySockets> Bahaha
[20:06:14] <kurushiyama> downtime: I am so sorry. I did not know.... But you have to admit that this is not exactly an ideal nick in a channel like this... ;)
[20:08:06] <downtime> kurushiyama: I have no issue with pings :)
[20:10:30] <Yuri4_> https://xkcd.com/705/
[20:16:03] <kurushiyama> Sounds like me. On a good day ;)
[20:52:41] <tinylobsta> i'm using mongoose, and for some reason when i try to call methods such as findOne, findById, etc, on a model wrapper, nothing happens
[20:52:44] <tinylobsta> it doesn't even return error
[20:52:47] <tinylobsta> it just doesn't work
[20:52:59] <deathanchor> for 2.4 mongo, to change a member from arbiter to secondary, I have to remove then add the member again?
[20:53:15] <StephenLynx> i recommend you don't use mongoose, tinylobsta
[20:53:21] <tinylobsta> i have no choice right now :/
[20:53:27] <StephenLynx> v:
[20:53:30] <tinylobsta> it's baked into the system pretty thoroughly
[20:53:31] <tinylobsta> i agree though
[20:53:33] <tinylobsta> i'm never using it again
[20:53:37] <StephenLynx> RIP
[20:53:53] <StephenLynx> you could try implementing your stuff directly with the driver, though.
[20:54:21] <kurushiyama> deathanchor: a replica set reconfig should do the trick as well.
[20:54:45] <tinylobsta> i guess i could just wrap the methods i'm using from mongoose, huh
[20:55:57] <kurushiyama> StephenLynx: Do you have your results for Mongoose performance somewhere? Took over the "job" of discouraging the use of Mongoose over the day, thought it might be helpful...
[20:56:32] <deathanchor> kurushiyama: this made me doubt it: http://dba.stackexchange.com/questions/61023/can-arbiteronly-replica-in-mongodb-become-secondary-and-what-it-means
[20:56:36] <StephenLynx> nah
[20:56:37] <tinylobsta> okay so i shouldn't use mongoose, i agree
[20:56:42] <tinylobsta> but has anybody experienced this before?
[20:56:45] <StephenLynx> I just read about someone that tested it somewhere
[20:56:51] <StephenLynx> never had the thing installed
[20:57:12] <deathanchor> kurushiyama: gist of the page: it says you see this error with a reconfig: http://dba.stackexchange.com/questions/61023/can-arbiteronly-replica-in-mongodb-become-secondary-and-what-it-means
[20:57:15] <tinylobsta> i tried to move my models into a separate module and i'm requiring the module containing the models
[20:57:17] <tinylobsta> which is when everything broke
[20:57:21] <deathanchor> crap.. "errmsg" : "exception: arbiterOnly may not change for members"
[20:58:02] <StephenLynx> performance is just adding insult to injury, though
[20:58:36] <kurushiyama> deathanchor: As per the dba question: Neil, in his one-of-a-kind style, pretty much summed it up.
[21:01:17] <kurushiyama> deathanchor: You might need to force the reconfig. However, if you feel uncomfortable with it, simply remove the arbiter and re-add it. Should take seconds.
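A sketch of the remove-and-re-add route (host name hypothetical; the re-added member needs an empty dbpath and a mongod started without arbiter flags before it can hold data):

```javascript
rs.remove("member-host:27017");   // drop the arbiter from the config
rs.add("member-host:27017");      // re-add as a normal, data-bearing member
// rs.status() will show STARTUP2/RECOVERING while it runs initial sync.
```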
[22:04:26] <oky> anyone using a mongo instance for analytics? (and have timestamped events in it?)