PMXBOT Log file Viewer


#mongodb logs for Wednesday the 30th of December, 2015

[00:00:36] <joannac> TheEmpath: well, clearly your query is wrong
[00:00:50] <joannac> but yes, if no documents match, then no documents are updated
[00:00:51] <TheEmpath> i'm just doing what the documentation says
[00:02:40] <joannac> show me the actual gig id, and the document that should match that
[00:22:50] <TheEmpath> one moment, everything is on fire and, as always, all at the same time
[00:58:55] <TheEmpath> joannac: needed the new ObjectID in the JS to make it work :D
[09:56:19] <lpghatguy> Hey! I'm running the Ubuntu 14.04 install guide verbatim on a totally fresh Ubuntu system and the init.d script is failing to install
[09:56:59] <lpghatguy> I was able to pull in the script from the repo's 3.2 branch and have everything work, but it seems like a workaround I'd avoid for provisioning future machines
[09:58:00] <lpghatguy> (of course, without /etc/init.d/mongod, `service mongod start` gives an unrecognized service error)
[09:58:10] <lpghatguy> What steps should I take to diagnose the issue?
[10:15:29] <lpghatguy> I reduced a Dockerfile down to this and was able to reproduce it: http://hastebin.com/eqafasuxuh.avrasm
[10:15:49] <lpghatguy> Don't seem to be any shady permissions or default config issues in the Docker repository Ubuntu 14.04 image
[10:52:19] <fish_> how can I verify that a replica node has caught up successfully?
[10:53:33] <fish_> specifically I'm looking for a reliable indicator that adding a new replica was successful
[10:55:56] <fish_> rs.status()["myState"] 1 or 2?
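On fish_'s question: in rs.status() output, member state 1 is PRIMARY and 2 is SECONDARY, and each member document carries an optimeDate. A minimal Python sketch of a "has this secondary caught up" check, run against a hand-made status document (the host names and lag threshold are made up for illustration):

```python
from datetime import datetime, timedelta

def member_caught_up(status, host, max_lag=timedelta(seconds=10)):
    """Return True if `host` is a SECONDARY whose optime is within
    `max_lag` of the primary's, given an rs.status()-style dict."""
    members = status["members"]
    primary = next(m for m in members if m["state"] == 1)   # 1 = PRIMARY
    member = next(m for m in members if m["name"] == host)
    if member["state"] != 2:                                # 2 = SECONDARY
        return False
    return primary["optimeDate"] - member["optimeDate"] <= max_lag

# Hand-made stand-in for an rs.status() document:
now = datetime(2015, 12, 30, 12, 0, 0)
status = {"members": [
    {"name": "a.example.com:27017", "state": 1, "optimeDate": now},
    {"name": "b.example.com:27017", "state": 2,
     "optimeDate": now - timedelta(seconds=2)},
]}
print(member_caught_up(status, "b.example.com:27017"))  # True
```

With pymongo the same dict comes back from db.admin.command("replSetGetStatus"), so this check translates directly.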
[12:20:36] <Petazz> cheeser: Which projects should I submit this to? BI tool feature requests?
[12:20:55] <Petazz> That is not exactly a driver..
[12:25:12] <Petazz> I put a simple ticket there. For security's sake I did not put any info on what the collections contained
[13:28:12] <cheeser> Petazz: great. thanks. we might ask for more detail if we can't recreate it but we'll do our best.
[13:29:07] <Petazz> Sure. In that case I can try to find time to create a testcase
[15:19:34] <Entrio> Hello
[15:55:21] <vagelis> Hello, i use mongodb 3.0. Let's say a document = {'a': 1, 'b': 2}, can i query like that: db.collection.find({'a': {'$ne': 'b'} })
[15:55:30] <vagelis> (I use pymongo also)
[15:56:07] <vagelis> If not, then is it possible to query between 2 fields of the same document? I mean compare those 2 fields?
[15:57:38] <kali> vagelis: you would need a $where clause, but in most cases, you don't want to use that
[15:59:43] <vagelis> I don't understand :S I mean i have to find documents where those 2 fields shouldn't have the same value. Is there a better way to do it?
[16:02:56] <kali> vagelis: this pattern can't be indexed, so finding a small number of documents where these fields are equal in a big collection is inefficient. all alternatives boil down to scanning the collection
[16:03:15] <kali> vagelis: one efficient option is to store the difference of a and b
[16:03:24] <kali> vagelis: and index that field
[16:03:31] <vagelis> Ah ok cool well i need it only for tests where i have like 10 documents or so
[16:03:54] <kali> well, in that case $where will work
[16:05:11] <vagelis> thanks!
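For reference, the $where form kali suggests on MongoDB 3.0 would be db.collection.find({"$where": "this.a != this.b"}). Since vagelis's query compares 'a' against the literal string 'b' rather than against field b, here is a pure-Python sketch of what the $where filter actually does, followed by kali's indexable alternative as a comment (field names are from vagelis's example):

```python
def where_a_ne_b(docs):
    """Client-side equivalent of {'$where': 'this.a != this.b'}:
    keep documents whose fields 'a' and 'b' hold different values."""
    return [d for d in docs if d.get("a") != d.get("b")]

docs = [{"a": 1, "b": 2}, {"a": 3, "b": 3}]
print(where_a_ne_b(docs))  # [{'a': 1, 'b': 2}]

# kali's efficient alternative: store diff = a - b at write time,
# index it, then query {'diff': {'$ne': 0}} instead of using $where.
```

The $where clause runs JavaScript per document and cannot use an index, which is why kali steers vagelis toward the precomputed field for anything bigger than a test collection.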
[16:43:26] <entrio> Hi all
[16:43:38] <entrio> I got a question that i hope someone can help me with
[16:45:27] <entrio> http://pastebin.com/tC0x3CZX
[16:45:37] <entrio> Sorry for the link, the question is too long to paste here
[16:45:59] <entrio> if someone can help me with that id be much obliged
[17:39:54] <livcd> i am getting /usr/bin/mongod: /usr/lib/libstdc++.so.6: no version information available (required by /usr/bin/mongod)
[17:41:34] <livcd> i am also getting Failed to obtain address information for hostname 7ba0b3bc2164: Name or service not known inside the container
[18:22:22] <Gasher> Hi everyone.
[18:22:47] <Gasher> I'm totally new to Mongo and have no idea what I'm doing, but it looks like it doesn't work to me: http://pastebin.com/Zb10X0AV
[18:32:41] <Lujeni> Gasher, read your log :)
[18:32:42] <Lujeni> 2015-12-30T19:14:21.587+0100 [initandlisten] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
[18:33:38] <Lujeni> Gasher, $ netstat -plantu | grep 27017 ; you must change your port, for example, or maybe another instance of mongodb is running
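Gasher's errno 98 ("Address already in use") means something is already bound to 27017, which is what Lujeni's netstat command reveals. A small Python sketch of the same check, for anyone without netstat handy (the port number is mongod's default):

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Rough 'Address already in use' probe: try to bind the port.
    Returns False when another process (e.g. a running mongod)
    already holds it."""
    with socket.socket() as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_free(27017))
```

If the port is taken, either stop the other mongod instance or start this one with a different --port (and point the client at it).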
[19:07:30] <livcd> how can i make mongodb recognize localhost (docker container ?
[19:32:43] <tsturzl> So I'm having trouble wrapping my head around a geographically redundant replica set
[19:34:10] <tsturzl> If I have my primary, 2 secondaries, and an arbiter in site A; and a single secondary in site B. How does the secondary in site B become primary when no members can elect it?
[19:35:21] <Derick> the one in site B can't become primary unless it can see at least 2 more nodes (arbiter or not) in site A - as you need more than 50% available nodes
[19:35:49] <tsturzl> @Derick, then why is it that MongoDB suggests such a setup?
[19:36:19] <Derick> I don't know, I haven't read these docs
[19:37:13] <tsturzl> And with that stated, how would it be possible to failover between 2 datacenters? Seems like the only reasonable way would be to have more than half of my secondaries in my backup location
[19:37:30] <tsturzl> which isn't very convenient or cost effective
[19:37:43] <Derick> you need at least an arbiter in a third data centre for that to work
[19:38:04] <tsturzl> I suppose that can be done
[19:38:21] <Derick> 3 nodes in A, 3 nodes in B, 1 node in C
[19:38:58] <Derick> with the one in C an arbiter... and possibly 1 of 3 in A and B too
[19:39:20] <tsturzl> Hmm, might have to reconsider this architecture
[19:40:33] <tsturzl> Strange that they recommend 3 nodes and 1 arb in site A, and 1 node in site B. What advantage is even gained from that? It really only protects against the possibility that site A would have lost its data
[19:40:45] <Derick> yeah - link?
[19:41:21] <tsturzl> There's maybe 2 or 3 links, but I think this is the most detailed
[19:41:21] <tsturzl> https://docs.mongodb.org/v3.0/tutorial/deploy-geographically-distributed-replica-set/
[19:42:31] <tsturzl> I just feel its missing details of what the benefit of each recommended setup is
[19:43:58] <Derick> tsturzl: yes, I agree. I'll file a DOC ticket
[19:44:24] <tsturzl> and how to geo-graphically distribute your infrastructure, which I understand might be a very broad and cumbersome topic but I've found very little information on it. And while it may have been silly for me to assume, I thought there might be some kind of specific strategy for this
[19:44:49] <tsturzl> So that site B could become a primary without the need for a third data center and all these extra boxes
[19:45:04] <Derick> that can't work, as you need a majority of nodes visible
[19:45:18] <Derick> so if you have 3 in A, and 3 in B, you need 4 available nodes
[19:45:27] <Derick> meaning that if either A or B go down, you don't have that
[19:45:33] <Derick> only way to fix that is an extra node in C
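Derick's majority rule can be sketched in a few lines: a partition can elect a primary only if it sees a strict majority of all voting members, arbiters included. The two layouts from the conversation, checked numerically:

```python
def can_elect_primary(visible_votes, total_votes):
    """A partition can elect a primary only when it sees a strict
    majority of all voting members (data nodes and arbiters alike)."""
    return visible_votes > total_votes // 2

# tsturzl's original layout: 4 members in site A, 1 in site B (5 total).
# If A goes down, B sees only itself:
print(can_elect_primary(1, 5))  # False: B alone can never elect

# Derick's layout: 3 in A, 3 in B, 1 arbiter in C (7 total).
# If A goes down, B plus the arbiter in C see 4 of 7:
print(can_elect_primary(4, 7))  # True
```

The same arithmetic shows why a symmetric 3-and-3 split without site C fails: either side alone sees only 3 of 6, and 3 is not more than half.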
[19:45:34] <tsturzl> Yeah, it was silly of me, I'd just assumed they have figured out some magical way to do it that I'm unaware of.
[19:45:46] <Derick> sorry - we don't do magic :D
[19:46:02] <tsturzl> yeah, no I'm the silly one
[19:46:39] <tsturzl> So site C has an arbiter, what if the other 2 nodes I put in site B were also arbiters?
[19:47:00] <tsturzl> It would work, but is there any reason that would be silly?
[19:48:09] <Derick> tsturzl: https://jira.mongodb.org/browse/DOCS-6889
[19:48:29] <tsturzl> So A: 3 Nodes 0 Arbs, B: 1 node, 2 arbs, C: 0 nodes, 1 arb?
[19:48:33] <Derick> tsturzl: then data centre B would not have a replica at all - which is a little scary
[19:49:01] <Derick> tsturzl: in that case, you probably want to bump the priority of the A nodes to 2 and leave the B node at 1?
[19:49:30] <tsturzl> We're not actually geo-distributing work
[19:49:46] <tsturzl> So site B is really only used in the case site A is unavailable
[19:50:04] <Derick> tsturzl: yeah, but if A dies by fire - you suddenly have no backup data node at all
[19:50:37] <tsturzl> A has rolling backups
[19:50:44] <tsturzl> Site A has rolling backups*
[19:50:50] <Derick> but A just died by a comet impact
[19:51:04] <tsturzl> Right, off site backups
[19:51:05] <Derick> and then a hard drive in the data node in B starts failing...
[19:51:50] <tsturzl> right
[19:52:25] <tsturzl> There would still be up to a 3 hour gap in our data
[19:52:35] <tsturzl> at the very worst
[19:52:56] <Derick> which can be easily fixed by having two data nodes in B
[19:53:02] <tsturzl> Which might be manageable, because it's still better than having nothing at all
[19:53:33] <tsturzl> I'm just concerned with cost, because I've been urged to keep the backup site minimal
[19:53:43] <tsturzl> Most of our services aren't even spun up until needed
[19:54:49] <tsturzl> You have a good point though. Just not sure if $60+ a month is worth it for comet insurance ;)
[19:54:59] <tsturzl> Sounds like a hoax
[19:55:05] <tsturzl> kidding
[19:55:21] <Derick> hehe
[19:55:53] <tsturzl> I'll talk it over with my team, thanks for the advice
[19:55:59] <Derick> you could also do: a: 2 nodes, b: 2 nodes, c: 1 arb
[19:56:10] <Derick> pointless to have other arbiters really
[19:56:21] <tsturzl> Yeah that's what I'm thinking
[19:56:37] <tsturzl> And good point I was thinking about having 1 arb in both A and B
[19:56:49] <tsturzl> but they can still vote across locations
[19:57:03] <tsturzl> I'll just up priority on site A nodes
[19:57:18] <Derick> yes, that'd make sense
[19:57:53] <tsturzl> Thanks a lot @Derick!
[19:58:05] <Derick> no prob!
[21:41:20] <sigkell> so I have a database with three collections: candidates, parties, constituencies. in the candidates collection, each document has references to another document in parties and constituencies (ObjectID). this is all fine, but I'll be exposing this as an HTTP API - what's the best way to resolve the referenced document and embed it in the result?
[21:41:39] <sigkell> I know how to do it in my application logic, but I'm thinking it might have performance issues, and I'm not using mongo properly
[21:42:25] <sigkell> so what I need to do is resolve ~400 ObjectIDs and replace them with the correct document, when someone queries the API
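The usual answer to sigkell's question on MongoDB 3.0 (which predates $lookup) is to batch the resolution in application logic: collect the distinct ObjectIDs, fetch each referenced collection once with an $in query, then embed. A pure-Python sketch of that strategy, where dicts keyed by _id stand in for the collections and the field names (party_id, constituency_id) are invented for illustration:

```python
def resolve_refs(candidates, parties, constituencies):
    """Batch-resolve references: gather distinct ids, look up each
    referenced collection once, then embed the documents. With pymongo,
    each dict lookup below becomes a single
    find({'_id': {'$in': list(ids)}}) instead of ~400 one-off queries."""
    party_ids = {c["party_id"] for c in candidates}
    const_ids = {c["constituency_id"] for c in candidates}
    party_docs = {pid: parties[pid] for pid in party_ids}          # one $in query
    const_docs = {cid: constituencies[cid] for cid in const_ids}   # one $in query
    return [
        {**c,
         "party": party_docs[c["party_id"]],
         "constituency": const_docs[c["constituency_id"]]}
        for c in candidates
    ]

parties = {"p1": {"name": "Party One"}}
constituencies = {"c1": {"name": "North"}}
candidates = [{"name": "Alice", "party_id": "p1", "constituency_id": "c1"}]
print(resolve_refs(candidates, parties, constituencies))
```

Three queries total regardless of how many candidates are returned, which sidesteps the N+1 pattern sigkell is worried about; the alternative is to denormalize and embed the party/constituency documents at write time.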