PMXBOT Log file Viewer


#mongodb logs for Tuesday the 2nd of August, 2016

[03:07:25] <termina__> is there anyway to use mongodump to dump to another host? i.e. not locally?
[03:35:37] <Boomtime> @termina__: https://docs.mongodb.com/manual/reference/program/mongodump/#cmdoption--out
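[Editor's note] The linked `--out` option writes locally, but mongodump can read from a remote host, and `--archive` (available from 3.2) lets you stream the dump to another machine. A sketch; hostnames and paths below are placeholders:

```shell
# Dump a remote mongod to a local directory (host/port/path are examples):
mongodump --host db.example.com --port 27017 --out /backups/dump-2016-08-02

# Or stream a single archive over ssh to another machine:
mongodump --host db.example.com --archive | ssh backup@archive.example.com 'cat > /backups/dump.archive'
```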
[05:00:33] <trq> /4
[05:20:25] <ronak> HI
[05:22:37] <ronak> I am facing an issue maintaining mongodb. I have a collection with 6 million records and two indexes: "count" : 6506235, "avgObjSize" : 6848, "storageSize" : 11,
[05:23:01] <ronak> The above info is in GB. Can someone help me with this?
[05:27:59] <Boomtime> @ronak: what's the issue precisely? as in, what do you perceive the issue to be?
[05:28:53] <Boomtime> @ronak: also, please provide the full stats() in a gist or something as there are other important fields, notably dataSize but also the storage-engine details may be important
[07:27:42] <KekSi> good morning -- does anyone have an idea for how long i can disconnect a replica-set slave?
[07:28:02] <KekSi> and can i re-connect it later?
[07:28:39] <KekSi> i'm trying to do something like this: restart a production mongo with --replSet, add a priority 0 secondary, disconnect it when it's up2date
[07:29:04] <joannac> you can disconnect it for as long as you want
[07:29:27] <joannac> or are you asking, "how long can I disconnect it and have it catch up when I connect again"
[07:29:49] <joannac> in which case the answer depends on the size of your oplog and the bandwidth b/t the primary and secondary
[07:29:56] <KekSi> disconnect as in remove, then shut the secondary down, take a filesystem snapshot, back that up, run tests with it, a few weeks later play back the snapshot and re-connect it to the old primary (with a higher-than 0 priority)
[07:30:33] <KekSi> i'd like to save myself a bit of time with this and not have to completely sync it up from scratch
[07:31:14] <KekSi> since the production mongo (standalone as of right now) is about 500-800GB
[07:32:32] <KekSi> i'm not entirely sure about the bandwidth (it's an on-premise setup at a customer - could be 1gbit, could be in the low mbits.. you never really know)
[07:32:46] <joannac> what does "play back the snapshot" mean? restore from the snapshot you took?
[07:33:42] <KekSi> yes
[07:34:03] <joannac> again, depends on oplog size and bandwidth
[07:34:05] <KekSi> to flush all changes i made to it during testing
[07:34:20] <joannac> for a start, the oplog will need to cover however long the secondary is out of the replica set
[07:34:27] <joannac> which sounds like weeks?
[07:34:55] <joannac> so, if it's being written to a lot, it's probably not practical
[07:36:10] <KekSi> so to be safe and make sure nothing goes wrong with the production db, it's probably better to start from scratch after the testing period
[07:36:44] <KekSi> take the hit for a couple of hours of sync time (depending on bw)
[07:38:53] <KekSi> its potentially going to be about 3 weeks (if more things fail while we're moving from a standalone install to a distributed system it could potentially be longer)
[07:40:21] <KekSi> only other question i have is: does the production db still serve requests while the secondary is catching up? i have tested it some time ago but can't remember off the top of my head
[07:41:36] <joannac> yes, 1 primary + 1 node in recovering (catching up) -> primary can still take writes and reads
[07:41:43] <KekSi> or does it block until the secondary is actually SECONDARY (as opposed to STARTUP/STARTUP2)
[07:42:02] <KekSi> alright, thank you very much for your time
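[Editor's note] The catch-up window joannac describes is bounded by the primary's oplog. A mongo-shell session sketch (untested here; requires a running replica set) for checking it:

```javascript
// On the primary: how big is the oplog, and how much wall-clock time does it span?
var local = db.getSiblingDB("local");
printjson(local.oplog.rs.stats().maxSize);  // configured oplog size in bytes
rs.printReplicationInfo();                  // "log length start to end" = catch-up window
```

If the planned disconnection (weeks, here) exceeds that window, the member cannot catch up from the oplog and must resync from scratch.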
[08:27:49] <brunoaiss> Derick, did you read the example already?
[08:44:42] <lapy> Hi all
[08:44:56] <lapy> I would like to report a small issue on the French website of mongodb
[08:45:41] <lapy> who should I report it to?
[11:02:14] <DKFermi> Hi folks -- can one of you please explain to me what bind_ip actually means for the mongodb-configuration? I'm using a flask-mongoengine app to communicate with the mongodb instance (which runs on a different server)
[11:02:32] <DKFermi> do i have to bind the ip of the mongodb to my flask server, or just to the loop-back interface of the host running mongodb?
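[Editor's note] bind_ip controls which network interfaces *on the mongod host* the server listens on; it has nothing to do with the client's (Flask) address. To accept connections from another server, mongod must listen on a non-loopback interface. A mongod.conf fragment (the 10.0.0.5 address is a placeholder for the DB host's own private interface):

```yaml
net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.5   # interfaces on the mongod host itself, not the client
```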
[13:11:28] <RoyK> hi all. any idea if there's some easy way to set up a mongodb cluster? Mostly for HA
[13:13:29] <phutchins> If I run an aggregation without a limit or match, ( something like db.frames.aggregate([ { $group: { _id: null, total: { $sum: "$size" } } } ]) ), will it run against all documents?
[14:51:55] <docmur> I'm trying to store some AES Encrypted data into my MongoDB, but I'm getting: A document was corrupt or contained invalid characters . or $
[14:52:17] <docmur> Is there a way to get over this? Maybe force the data into a binary format
[14:52:20] <Derick> you're using them as a key then
[14:52:27] <Derick> not as value
[14:52:39] <docmur> ah okayt
[14:52:42] <docmur> *ah okay
[14:52:48] <Derick> don't use values as keys!
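[Editor's note] The error docmur hit is about field *names*, not values: MongoDB rejects keys containing "." or "$", and raw AES output decoded as text can easily produce both. Storing the ciphertext as a value, e.g. base64-encoded, avoids the problem entirely. A minimal sketch (the field name "payload" is made up):

```python
import base64

# Arbitrary AES-style ciphertext containing bytes that are illegal in key names.
ciphertext = b"\x93$\x00.raw\xffbytes"

# Keep the ciphertext as a *value*; the key itself stays a plain identifier.
doc = {"payload": base64.b64encode(ciphertext).decode("ascii")}

# Round-trips losslessly.
assert base64.b64decode(doc["payload"]) == ciphertext
```

(Storing it as a BSON binary value via the driver works just as well; the point is that encrypted bytes must never end up in the key position.)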
[15:28:48] <phutchins> Anyone have any idea on the aggregation thing?
[15:29:06] <Derick> phutchins: it's better to ask specific questions
[15:33:33] <yopp> mmm
[15:35:13] <yopp> In the "Update" command in the protocol, there's an "updates" field, but it's an array. Should I expect multiple records there?
[15:36:15] <yopp> inside the array there's a hash of regular `update` fields, like "q", "u",
[15:43:17] <yopp> uh.
[15:43:18] <yopp> yeah
[15:43:22] <yopp> https://github.com/mongodb/mongo/blob/0aff7dc5e6a82de2bf41756e6bb8586c9ea0eeb3/jstests/sharding/write_cmd_auto_split.js#L104
[15:43:23] <yopp> awww
[15:50:12] <phutchins> Derick: I did...
[15:50:27] <phutchins> If I run an aggregation without a limit or match, ( something like db.frames.aggregate([ { $group: { _id: null, total:{ $sum: "$size" } } } ]) ), will it run against all documents?
[15:50:43] <Derick> yes
[15:50:52] <phutchins> Trying to determine if I run an aggregation like the above, does it go across all docs or just a subset like .find does
[15:50:56] <phutchins> ok so all docs... cool :)
[15:51:05] <Derick> it has to go through all - you don't tell it not to
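[Editor's note] What Derick is saying: a pipeline that starts with `$group` has no filter stage, so every document feeds the accumulator. A toy re-implementation over plain dicts to illustrate the semantics (this is not the server's code):

```python
def group_sum(docs, field):
    """Mimic {$group: {_id: None, total: {$sum: "$<field>"}}} over a list of dicts."""
    # Like $sum, missing or non-numeric fields contribute 0.
    total = 0
    for d in docs:                      # every document is visited -- no filter
        v = d.get(field, 0)
        total += v if isinstance(v, (int, float)) else 0
    return {"_id": None, "total": total}

frames = [{"size": 10}, {"size": 32}, {"size": 100}, {"other": 1}]
print(group_sum(frames, "size"))  # {'_id': None, 'total': 142}
```

Adding a `$match` stage before the `$group` is how you restrict it to a subset, just as a filter document restricts `.find()`.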
[15:51:20] <despai> hello guys. Anyone know how I can group an aggregate by a specific number of results? Basically I have an aggregate: Mongo.aggregate([ { $match: { "items": { $exists: true, $eq: { $size: 0 } } }, $group: .... <-- what should I type there? I need to get them grouped by a specific number of results; I have the number in a variable
[15:51:36] <Derick> despai: you can't group in your match
[15:51:44] <despai> why not?
[15:52:06] <Derick> it needs to be a separate stage
[15:52:47] <despai> ah sorry
[15:52:51] <despai> I forgot some curly braces
[15:52:57] <despai> it's , { $group: ...
[15:53:02] <Derick> ah, ok
[15:53:04] <despai> but I don't know how to group by specific number of results
[15:53:34] <Derick> despai: show a set of sample documents, and what you want as output of the aggregation in a pastebin please
[15:53:45] <despai> ok
[15:53:46] <despai> 1 sec
[16:01:08] <despai> Derick http://pastebin.com/nMEgJn9B
[16:02:10] <despai> so basically I need to get groups of records that match with a specific query BUT only retrieve groups of minimum N records
[16:02:28] <despai> if the query finds only 1 record and I want to group by 2 it shouldn't return anything
[16:02:38] <despai> only groups of at least n number of records matching the query
[16:02:49] <tinco> hey guys, I was doing a resync of a corrupted replica, and after 4 hours it only had synched 30gb, and the cluster was so slow our message queue started backing up
[16:02:52] <Derick> gimme a few despai
[16:02:57] <despai> yes, thanks
[16:04:26] <tinco> is resyncing supposed to be so slow? it's about a TB of data, so it's not going to finish anytime soon this way. we could stop the db and do it manually using rsync, but the cluster would go down for a few hours
[16:05:08] <Derick> despai: I don't see what the group result is based on there
[16:05:27] <Derick> I got to go now though. Home time!
[16:09:54] <despai> hey
[16:10:04] <despai> it's grouped by number of results I told you
[16:10:08] <despai> I need them grouped by 2
[16:11:35] <despai> anyone else? How can I group an aggregate by a specific number of results? Mongo.aggregate([ { $match: { "items": { $exists: true, $eq: { $size: 0 } } }, $group: .... <-- what should I type there?
[16:12:00] <despai> I mean. Users.aggregate([ { $match: { "items": { $exists: true, $eq: { $size: 0 } } }, $group: .... <-- what should I type there?
[16:12:14] <despai> give me users with an empty items array, but only grouped in sets of 2 results
[16:12:23] <despai> I'm going crazy with this...
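[Editor's note] One common reading of despai's question -- keep only groups that contain at least N matching documents -- can be expressed as a `$group` followed by a second `$match` on the group size. A shell sketch (untested; the grouping key "$someKey" is a placeholder, since the log never establishes what the groups are based on). Note also that the correct empty-array test is `{$size: 0}` directly, not `{$eq: {$size: 0}}`:

```javascript
var N = 2;
db.users.aggregate([
  { $match: { items: { $exists: true, $size: 0 } } },
  { $group: { _id: "$someKey", docs: { $push: "$$ROOT" }, count: { $sum: 1 } } },
  { $match: { count: { $gte: N } } }
]);
```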
[16:29:37] <amritoit> hi
[16:29:59] <amritoit> I am stuck with morphia date conversion in Java.
[16:30:53] <amritoit> I am not getting the correct date stored in mongodb. Any pointer?
[17:19:14] <RoyK> hi all. any idea if there's some easy way to set up a mongodb cluster? Mostly for HA
[17:28:40] <yopp> um
[17:28:51] <yopp> re `bypassDocumentValidation` inside update
[17:29:17] <yopp> is it set per `updates`, or per command?
[17:31:06] <yopp> same for writeConcern. in the command it's at the command level, not in the update contents
[17:31:25] <yopp> but for example in the shell it's stated that it should be specified for each `updates` element
[17:42:14] <jkhl> Is it possible to sort the actual mongodb data (by some field or function)?
[17:58:11] <ExoUNX> greetings
[17:58:30] <ExoUNX> can anyone sell me on NoSQL/MongoDB?
[17:58:41] <ExoUNX> I'm currently a MariaDB user
[18:00:38] <StephenLynx> no.
[18:00:45] <StephenLynx> you should use what the situation asks.
[18:00:57] <StephenLynx> if your case doesn't ask for mongo, you shouldn't use mongo.
[18:01:08] <ExoUNX> well it could be both
[18:01:19] <StephenLynx> naturally.
[18:01:38] <ExoUNX> however I receive my data in json
[18:01:46] <ExoUNX> so it seems like NoSQL would be more fitting
[18:01:52] <StephenLynx> no.
[18:01:58] <StephenLynx> no relation at all.
[18:02:38] <StephenLynx> I wouldn't choose a database based on that.
[18:02:45] <StephenLynx> what you should ask yourself is:
[18:03:04] <StephenLynx> how much do you need transactions? is your data too relational? how are you going to query said data?
[18:03:27] <ExoUNX> How's php with MongoDB?
[18:03:29] <StephenLynx> querying dictates 90% of why you need a certain database (number pulled out of my ass)
[18:03:39] <StephenLynx> it has a well maintained driver for mongo.
[18:04:01] <StephenLynx> so I assume it works ok.
[18:04:07] <StephenLynx> as far as PHP works anyway :^)
[18:04:08] <ExoUNX> is "nosql injections" a thing?
[18:04:12] <StephenLynx> yes.
[18:04:23] <StephenLynx> less than with SQL though.
[18:04:26] <StephenLynx> but still possible.
[18:04:46] <ExoUNX> I guess it depends on the language and driver too righT?
[18:04:48] <ExoUNX> right*
[18:05:13] <StephenLynx> hm
[18:05:18] <StephenLynx> a little.
[18:05:23] <StephenLynx> but not entirely.
[18:05:50] <StephenLynx> if your language allows you to directly handle JSON then its easier to allow an injection
[18:06:15] <StephenLynx> but the language doesn't make it impossible.
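[Editor's note] A concrete shape of the injection StephenLynx describes: if user-supplied JSON is dropped straight into a query filter, an attacker can submit an operator object instead of a scalar and change the query's meaning. A minimal, framework-agnostic defensive check (sketch; function name is made up):

```python
import json

def scalar_or_raise(value):
    """Reject dicts/lists so operators like {"$ne": null} can't enter a filter."""
    if isinstance(value, (dict, list)):
        raise ValueError("refusing non-scalar query value (possible injection)")
    return value

honest = json.loads('"hunter2"')        # an ordinary password string
attack = json.loads('{"$ne": null}')    # would match any non-missing password

assert scalar_or_raise(honest) == "hunter2"
try:
    scalar_or_raise(attack)
    raise AssertionError("attack value was accepted")
except ValueError:
    pass  # rejected, as intended
```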
[18:08:54] <ExoUNX> well I'm just trying to keep everything clean and optimized and I like learning new things
[18:09:04] <StephenLynx> and that's good.
[18:09:12] <StephenLynx> but keep in mind mongo is not a silver bullet.
[18:09:16] <n1colas> Hello
[18:09:18] <ExoUNX> right
[18:09:19] <StephenLynx> its just a different db for different cases.
[18:09:32] <StephenLynx> it doesn't replace RDB's strengths.
[18:09:44] <ExoUNX> it's like thinking nginx is a silver bullet to apache in terms of performance
[18:10:14] <StephenLynx> meh
[18:10:17] <ExoUNX> :D
[18:10:24] <StephenLynx> I don't use webservers anyway.
[18:10:27] <StephenLynx> so I wouldn't know.
[18:10:40] <ExoUNX> yah I generally work on web apps
[18:10:40] <StephenLynx> I think they are all bloat.
[18:11:19] <ExoUNX> though I have written some software in C/python
[18:11:25] <StephenLynx> so do I, but I use its own runtime environment to handle HTTP.
[18:11:37] <StephenLynx> in my case, node.
[18:11:38] <ExoUNX> smaller footprint
[18:11:53] <StephenLynx> i used enough C to never write a web app in C.
[18:12:01] <ExoUNX> StephenLynx lol of course
[18:12:11] <Derick> I did. But it was required as part of a uni course
[18:12:20] <ExoUNX> node.js is one my next projects to tackle
[18:12:31] <StephenLynx> I strongly recommend node.
[18:12:32] <ExoUNX> though I want it to mature some, like php
[18:12:38] <StephenLynx> kek
[18:12:44] <StephenLynx> PHP was never mature.
[18:12:47] <StephenLynx> it just got old.
[18:12:51] <ExoUNX> StephenLynx lol
[18:12:58] <StephenLynx> and node has already matured.
[18:13:05] <StephenLynx> and already has a LTS.
[18:13:14] <ExoUNX> StephenLynx node.js is still using userland in crypto, still has more to learn ;)
[18:14:23] <ExoUNX> StephenLynx but I'll keep on-topic :P
[18:14:41] <ExoUNX> Are there any good resources on learning MongoDB?
[18:15:14] <ExoUNX> also does MongoDB support TLS (I'd assume yes, but better ask)
[18:15:24] <StephenLynx> yes, it does.
[18:15:31] <ExoUNX> StephenLynx easily?
[18:15:37] <StephenLynx> I think.
[18:15:50] <ExoUNX> StephenLynx like configure the tls settings and open the port?
[18:16:01] <StephenLynx> it used to ship by default only on paid versions, I think, but I think now it is by default on every release.
[18:16:05] <StephenLynx> I never used it so I don't know.
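[Editor's note] SSL/TLS is supported in the open-source server builds in the 3.x series (earlier it required a build flag or a platform-specific download for some releases). A 2016-era mongod.conf fragment (certificate path is a placeholder):

```yaml
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem   # server certificate + private key, concatenated
```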
[18:16:27] <ExoUNX> is base MongoDB fully open source?
[18:16:33] <StephenLynx> free.
[18:16:34] <StephenLynx> yes.
[18:16:38] <ExoUNX> FOSS specifically?
[18:16:41] <StephenLynx> yes, free.
[18:16:46] <StephenLynx> not just open source.
[18:16:51] <ExoUNX> lame
[18:16:54] <StephenLynx> kek
[18:16:56] <StephenLynx> why?
[18:17:05] <StephenLynx> free is more open than open source.
[18:17:21] <StephenLynx> its AGPL
[18:17:58] <ExoUNX> StephenLynx can I view and edit the source code within reason?
[18:18:15] <StephenLynx> yes.
[18:18:17] <StephenLynx> you can even fork.
[18:18:21] <StephenLynx> and sell the fork.
[18:18:34] <StephenLynx> free means free.
[18:18:43] <StephenLynx> read on the 4 software freedoms.
[18:18:47] <ExoUNX> StephenLynx I'd consider that FOSS lol
[18:18:54] <StephenLynx> I just call it free.
[18:18:59] <StephenLynx> open source is meaningless to me.
[18:19:07] <StephenLynx> since open source might still be proprietary.
[18:19:26] <ExoUNX> StephenLynx and Google made MongoDB?
[18:19:44] <StephenLynx> no.
[18:19:50] <StephenLynx> 10gen.
[18:19:58] <StephenLynx> but they renamed to mongo or something.
[18:20:06] <Derick> We're MongoDB Inc. now
[18:20:10] <StephenLynx> that.
[18:20:21] <ExoUNX> yah just checked the wiki page
[18:20:24] <ExoUNX> cool
[18:20:52] <jkhl> not possible with the mongo api to sort the mongodb data on disk then?
[18:23:42] <ExoUNX> Derick oh wow, MongoDB Inc looks pretty big
[18:24:41] <ExoUNX> and would the MongoDB University be a good learning source?
[18:25:53] <ExoUNX> and one last thing
[18:26:13] <ExoUNX> is there a stable/well-maintained repo for CentOS 7?
[18:30:02] <StephenLynx> yes.
[18:30:10] <StephenLynx> look for mongo's own repository.
[18:30:19] <StephenLynx> never had any issues.
[18:30:33] <ExoUNX> StephenLynx you use CentOS?
[18:30:38] <StephenLynx> yes.
[18:30:44] <ExoUNX> ok
[18:30:54] <StephenLynx> just use their RHEL repository.
[18:33:39] <Derick> ExoUNX: MongoDB Uni is pretty good I've heard. And we're quite a few people now!
[19:17:23] <docmur> I have the following encrypted database information: http://pastie.org/10927371 I have the following C code, trying to read that information: http://pastie.org/10927373, The problem is I'm getting (null) from the printf of str. Does the mongo C driver have a function to get the information from the database?
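[Editor's note] A `(null)` from printf usually means the field was read with the wrong BSON accessor or the wrong field name. The C driver exposes document contents through BSON iteration. A sketch against libmongoc 1.x as it stood in 2016 (not compiled here; database, collection, and field names are made up, and `mongoc_collection_find` was later superseded by `mongoc_collection_find_with_opts`):

```c
#include <mongoc.h>
#include <stdio.h>

int main(void) {
    mongoc_init();
    mongoc_client_t *client = mongoc_client_new("mongodb://localhost:27017");
    mongoc_collection_t *coll =
        mongoc_client_get_collection(client, "testdb", "secrets");

    bson_t *query = bson_new();  /* empty filter: match every document */
    mongoc_cursor_t *cursor = mongoc_collection_find(
        coll, MONGOC_QUERY_NONE, 0, 0, 0, query, NULL, NULL);

    const bson_t *doc;
    bson_iter_t iter;
    while (mongoc_cursor_next(cursor, &doc)) {
        /* Look up the field and check its type before extracting it. */
        if (bson_iter_init_find(&iter, doc, "payload") &&
            BSON_ITER_HOLDS_UTF8(&iter)) {
            printf("%s\n", bson_iter_utf8(&iter, NULL));
        }
    }

    mongoc_cursor_destroy(cursor);
    bson_destroy(query);
    mongoc_collection_destroy(coll);
    mongoc_client_destroy(client);
    mongoc_cleanup();
    return 0;
}
```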
[19:28:25] <ExoUNX> thanks for the assistance
[19:57:08] <ralfbergs2> Hi there.
[19:58:24] <ralfbergs2> I know nothing about MongoDB, but I need to design a highly-available environment in Amazon Web Services (AWS) for MongoDB to run... My plan is to have 3 nodes, master, slave, arbiter. My problem is: My region in AWS only has two so called "availability zones." Theoretically one availability zone can fail, and then (in the worst case) all instances hosted in that zone are defunct.
[19:58:40] <ralfbergs2> So my plan is to host "master" in one availability zone, "slave" and "arbiter" in the other. Does that sound correct?
[19:59:16] <ralfbergs2> Should the zone with the master go down, slave and arbiter would make "slave" the new "master".
[19:59:50] <ralfbergs2> Should the zone with slave and arbiter go down, then what? Would Master become slave as it can't confirm it should remain master?
[20:00:12] <ralfbergs2> If that is true, how can I solve this dilemma in this scenario?
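[Editor's note] ralfbergs2's worry is exactly right, and the arithmetic behind it is simple: a member can be (or stay) primary only while it can reach a strict majority of all voting members. With three voters split 1/2 across two zones, the zone holding the single member cannot survive the loss of the other zone. A sketch of the election rule (general replica-set behavior, not AWS-specific):

```python
def can_have_primary(reachable_voters, total_voters):
    """A primary requires a strict majority of all voting members to be reachable."""
    return reachable_voters > total_voters // 2

# 3 voters: primary in zone A; secondary + arbiter in zone B.
assert can_have_primary(2, 3)      # zone A lost: B's two voters elect a new primary
assert not can_have_primary(1, 3)  # zone B lost: A's lone primary steps down; reads only
```

The standard fix is a third location for one voting member (even a tiny instance or arbiter in another region), so that the loss of any single zone still leaves a reachable majority.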
[20:18:02] <ralfbergs2> Is there anyone who can help me? HA installation of MongoDB in AWS with only two availability zones, master, slave, arbiter...
[21:06:57] <ralfbergs2> Is there anyone who can help me? HA installation of MongoDB in AWS with only two availability zones, master, slave, arbiter...