#mongodb logs for Wednesday the 24th of June, 2015

[01:21:46] <goldstar> how do I login as a user to a db to test whether creds work?
[01:23:07] <AbuDhar> hehehehe goldstar :D
[01:23:20] <AbuDhar> are you using any framework?
[01:23:25] <goldstar> AbuDhar: no
[01:24:00] <AbuDhar> then why do you ask?
[01:25:04] <AbuDhar> just check if the username and password match the inputted ones :P
[01:26:50] <goldstar> AbuDhar: I can do this with other dbs, why doesn't mongo offer the same functionality? I'm not going to spend 20 min spinning up a box just to test whether a pass works
[01:27:23] <AbuDhar> I am not sure what you mean :P
[01:27:40] <AbuDhar> how do you usually test it?
[01:37:58] <goldstar> AbuDhar: found it; http://docs.mongodb.org/manual/reference/method/db.auth/
[01:38:10] <goldstar> thanks anyways
[01:38:19] <AbuDhar> hehe I am not so familiar with mongodb
[01:38:21] <AbuDhar> I am sorry :P
[01:38:29] <goldstar> np akhi ;)
[01:38:50] <AbuDhar> :) have a good fast bro
[01:39:03] <goldstar> you too
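
For reference, the db.auth() method goldstar linked is exactly the credential check being asked about; a minimal shell sketch, with illustrative database and user names:

    // switch to the database the user is defined on, then authenticate;
    // db.auth() returns 1 if the credentials work and 0 if they don't
    use mydb
    db.auth("myUser", "myPassword")
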
[08:38:55] <yauh> g'day - I'm inserting geojson data into MongoDB with a 2dsphere index. I noticed that even though I trimmed my coords to have only 4 decimal places the collection stores 17 places
[08:39:00] <yauh> is that by design?
[08:49:35] <yauh> I guess I am looking for a way to configure the bits for 2dsphere, I only found it for 2d index
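
The extra digits are a floating-point artifact rather than an index setting: coordinates are stored as IEEE 754 doubles, so a value trimmed to 4 decimal places is saved as the nearest representable double, which some tools render with up to 17 significant digits. A sketch, with an illustrative collection name:

    // 13.3777 has no exact binary representation; the stored value is the
    // nearest double, and the 2dsphere index is unaffected either way
    db.places.insert({ loc: { type: "Point", coordinates: [ 13.3777, 52.5162 ] } })
    db.places.findOne().loc.coordinates
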
[10:05:25] <mitereiter> hi all
[10:05:47] <mitereiter> is it possible to reactivate the localhost exception?
[10:36:01] <leporello> Hi. I have a collection with createdBy field. How can I get a list of all possible creators from it?
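
For a question like leporello's, distinct() is the usual answer; the collection name below is illustrative:

    // returns an array of every distinct createdBy value in the collection
    db.documents.distinct("createdBy")
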
[11:51:04] <circ-user-u0N69> hey guys, is it possible I can specify multiple listening ports on my mongo instance?
[11:52:54] <Zelest> Error parsing command line: Multiple occurrences of option "--port"
[11:52:58] <Zelest> doesn't look like it :(
[11:55:18] <leporello> iptables -t nat -A PREROUTING -p tcp --dport 10000 -j REDIRECT --to-port 20000
[11:55:22] <leporello> try this.
[11:57:39] <Zelest> hehe
[11:57:50] <circ-user-u0N69> it's not possible? :S
[11:58:04] <circ-user-u0N69> I can't set 2 listening ports for mongo in the config?
[12:11:24] <StephenLynx> don't think so. I don't think you can do that for any TCP based connection.
[12:21:58] <leporello> circ-user-u0N69, it's not possible, so you need to redirect all incoming connections on other ports to mongo.
[12:22:03] <leporello> Using firewall.
[12:22:24] <leporello> All things should do their job. Network is for network.
[12:22:53] <leporello> DB is for DB. So even I should not be here :)
[12:23:51] <deathanchor> leporello: rightly said! firewall or proxy to switch up the port or allow multiple ports
[12:24:44] <leporello> deathanchor, he's gone :)
[12:29:34] <deathanchor> still wanted to show you my love for your thoughts.
[13:55:39] <vanch> Hi. What's the difference between definitions of Cluster name and Replica set?
[13:56:13] <StephenLynx> a cluster distributes data. a replica duplicates it.
[13:58:55] <vanch> StephenLynx, where can I set cluster name and replica name?
[14:08:55] <deathanchor> replSet command line option, and clusters have no names.
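
A sketch of where the replica set name actually lives, assuming an illustrative set called rs0: it is passed as --replSet on the command line (or replication.replSetName in the config file) and must match the _id given to rs.initiate():

    rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "localhost:27017" } ] })
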
[14:29:42] <Cygn> {"$in":[{"regex":"Exfactory","flags":"i"} != {"$in":[/Exfactory/i]} but why?
[14:33:32] <d-snp> Hi, I'm having a write throughput problem, every 10-20 writes, an oplog query randomly takes 500-2000ms
[14:33:55] <d-snp> they look like this:
[14:34:01] <d-snp> QUERY [conn29] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1435098587000|18 } } cursorid:22303303740 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:293 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } 1341ms
[14:34:09] <d-snp> anyone know what could be the cause of that?
[14:52:48] <Cygn> {"$in":[{"regex":"Exfactory","flags":"i"} != {"$in":[/Exfactory/i]} but why?
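
The two forms differ because only /Exfactory/i is a BSON regular-expression value; { "regex": ..., "flags": ... } is an ordinary subdocument that $in compares for exact equality. A sketch with illustrative collection and field names:

    // matches documents whose name contains "Exfactory", case-insensitively
    db.items.find({ name: { $in: [ /Exfactory/i ] } })
    // only matches a name equal to this literal subdocument, never a string
    db.items.find({ name: { $in: [ { regex: "Exfactory", flags: "i" } ] } })
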
[14:59:39] <jmeister> does anyone know if I call collMod to change an existing TTL index, does it rebuild the entire index or is it safe to run on a production environment?
[15:10:06] <jmeister> Bump - does anyone know if I call collMod to change an existing TTL index, does it rebuild the entire index or is it safe to run on a production environment?
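
For what it's worth, collMod with an index document is the documented way to change a TTL index's expireAfterSeconds in place; collection and key names below are illustrative:

    // updates the TTL metadata on the existing index rather than rebuilding it
    db.runCommand({
      collMod: "sessions",
      index: { keyPattern: { createdAt: 1 }, expireAfterSeconds: 3600 }
    })
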
[16:09:06] <muhammadn> any ruby experts here?
[16:09:33] <muhammadn> I have trouble benchmarking ruby code (not ruby on rails) to see how fast it can insert data.
[16:09:39] <muhammadn> gist here: https://gist.github.com/apeiros/f78b5e30d3486b258957
[16:10:23] <muhammadn> oops sorry that is not the gist
[16:10:44] <muhammadn> correct gist is here for mongodb benchmark code in ruby: https://gist.github.com/muhammadn/b16a6b5d8396a3ee8283
[16:27:21] <android6011> does anyone have a schema they use for handling global address data?
[17:32:57] <deathanchor> anyone know if you can run profiler via a mongos?
[18:06:39] <deathanchor> ooo... is there an easy way to know which shard has a specific key without using sh.status({ verbose :1 })?
[18:07:23] <cheeser> out of curiosity, why would it matter?
[18:07:26] <deathanchor> ah, explain gives that info
[18:08:03] <deathanchor> just trying to find the right shard without scraping through sh.status() which can be big
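
As deathanchor found, explain() on a query that includes the shard key names the shard(s) that served it, which avoids digging through a large sh.status(); names below are illustrative:

    // the explain output identifies the winning shard(s) for the query
    db.users.find({ userId: "abc123" }).explain()
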
[18:08:08] <darius93> I have a question, what are the chances of there being two of the same uuid within a collection? I do slightly rely on the uuid but may just move away from that and into my own id system
[18:08:27] <deathanchor> uuid or ObjectId?
[18:08:39] <darius93> ObjectId
[18:09:08] <deathanchor> ObjectId is time-based
[18:09:27] <deathanchor> darius93: http://docs.mongodb.org/manual/reference/object-id/
[18:09:43] <deathanchor> if you use uuid, you're better off with that for uniqueness
[18:10:06] <cheeser> i disagree
[18:10:26] <cheeser> i don't know of collision issues with ObjectIds
[18:14:07] <darius93> cheeser, i was just wondering in case I should generate my own ID system
[18:14:13] <cheeser> ugh
[18:14:53] <cheeser> i'd need a hard business case for not using ObjectId
[18:14:57] <deathanchor> darius93: there are already several ways to generate unique ids, don't make your own
[18:15:11] <deathanchor> uuidgen for use with anything
[18:15:32] <deathanchor> but I would trust mongo's ObjectIds
[18:17:35] <darius93> lol deathanchor i meant for my internal use. I don't mean generating uuid itself though if i had to make my own
[18:21:56] <deathanchor> darius93: yeah we use a collection just to get a new objectId for our internal ids
[18:22:15] <deathanchor> we use uuid for crap that can't talk to the db.
[18:23:51] <cheeser> you do what now?
[18:24:51] <deathanchor> cheeser: we ensure it is unique by keeping the collection around :D
[18:25:04] <cheeser> ObjectIds are already guaranteed unique.
[18:25:19] <deathanchor> devs do dumb things...
[18:26:04] <cheeser> be the hero. fix it!
[19:24:53] <whaley> darius93: +1 on ObjectId also. It has some other nice properties besides uniqueness - for instance, it contains a timestamp as a property, which completely obviates the need for a "created" field on your document
[19:25:54] <whaley> granted, that's in seconds and not millis... and probably won't work after the year 2038, but it's still a nice little value add
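
A quick shell illustration of the property whaley describes:

    var id = ObjectId()
    id.getTimestamp()   // ISODate at second resolution, decoded from the id's leading 4 bytes
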
[19:29:52] <StephenLynx> :v
[19:43:37] <cheeser> we'll all be dead by 2038 anyway
[19:43:58] <darius93> cheeser, didn't know you could see into the future :P
[19:47:54] <cheeser> darius93: yeah ... that's it. i'm prescient. *nurses evil schemes*
[19:55:12] <deathanchor> cheeser cheated.. it's already all here: https://en.wikipedia.org/wiki/2038
[19:59:55] <cheeser> you know about the 2038 problem right?
[20:00:13] <deathanchor> It's already listed there...
[20:00:35] <cheeser> right. because it's been known about since the 70s :D
[20:16:59] <devSeb> Hey mongo-folks! =) I've been learning mongodb for a while, with Node.js. I created a function using MongoClient that can drop a DB using db.dropDatabase() (not from the shell, from a node.js app). I'm failing at creating a function that creates a new DB though. Any pointers on this? I can't find docs on using copyDatabase() except from the shell.
[20:20:35] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/Db.html
[20:21:02] <StephenLynx> ah
[20:21:04] <StephenLynx> you mean
[20:21:15] <StephenLynx> creating a database that doesn't exist yet?
[20:21:19] <StephenLynx> mongo does that by itself.
[20:21:25] <StephenLynx> just open a connection to the database
[20:27:24] <cheeser> well, you have to write to it first. or run "show collections" (the listCollections command at least)
[20:32:25] <devSeb> StephenLynx: Exactly that is what I mean. I know mongo works like that from the shell - will have a go at it!
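
A minimal sketch of that point with the 2.0 Node.js driver linked above (database and collection names are illustrative): the database only materializes once something is written to it.

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/newdb', function (err, db) {
      if (err) throw err;
      // the first write creates both the collection and the database
      db.collection('bootstrap').insertOne({ createdAt: new Date() }, function (err) {
        if (err) throw err;
        db.close();
      });
    });
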
[20:33:09] <StephenLynx> working with mongo using the node.js driver is pretty much the same as working with the terminal.
[20:33:21] <StephenLynx> except for callbacks and how you pass arguments to some functions.
[20:34:35] <devSeb> StephenLynx: I see... But there's no copyDatabase function I can access, then? Like in the shell with db.copyDatabase. I have to read and write manually?
[20:34:50] <StephenLynx> hm, let me see
[20:35:11] <StephenLynx> yeah, doesn't seem so.
[20:35:24] <devSeb> Doh!
[20:35:36] <StephenLynx> you could use eval though
[20:35:38] <StephenLynx> i guess
[20:35:41] <devSeb> Ok then, just wanted to check before i start doing it manually
[20:35:48] <devSeb> ... like how?
[20:35:55] <devSeb> aaa
[20:35:56] <StephenLynx> http://mongodb.github.io/node-mongodb-native/2.0/api/Db.html#eval
[20:35:58] <StephenLynx> never used it though
[20:36:15] <devSeb> ok anything is helpful, so thanks!
[20:36:18] <StephenLynx> never had any need to use commands on an application that weren't implemented.
[20:36:29] <StephenLynx> I assume most people wouldn't copy databases regularly.
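
One hedged alternative to eval: servers of this era support the copydb admin command, which the driver can issue through its admin interface; database names below are illustrative:

    // copies db2 into db1 server-side, no shell involved
    db.admin().command({ copydb: 1, fromdb: 'db2', todb: 'db1' }, function (err, result) {
      if (err) throw err;
      console.log(result);
    });
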
[20:38:08] <devSeb> No... it's just me trying to learn mongodb better. I'm trying to "reset" my dev db before unit tests... But maybe there's some smarter way of doing that?
[20:38:29] <StephenLynx> I wouldn't know, I don't even write tests.
[20:38:37] <StephenLynx> ¯\_(ツ)_/¯
[20:38:52] <devSeb> By reset i mean: drop db1, copy db2 -> db1
[20:39:00] <StephenLynx> yeah, I get it.
[20:39:09] <StephenLynx> I would write a bash script, if I were to do that.
[20:39:37] <devSeb> Ok, or really the tests don't matter, but I'm trying to do this to have a fresh db on every build...
[20:39:41] <StephenLynx> I know you can pass commands without running the terminal beforehand
[20:39:49] <StephenLynx> I just drop the database.
[20:39:53] <devSeb> using mongodb shell?
[20:39:53] <StephenLynx> personally
[20:39:56] <StephenLynx> yes
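
That shell approach can be a one-liner, since the mongo binary accepts an expression via --eval; the database name is illustrative:

    mongo db1 --eval "db.dropDatabase()"
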
[20:40:03] <devSeb> hummmm.... :)
[20:40:19] <StephenLynx> my applications check for indexes on boot
[20:40:22] <devSeb> good idea, at least for learning
[20:40:28] <devSeb> maybe not for prod
[20:40:46] <StephenLynx> and this one checks and generates stuff that's missing
[20:40:48] <devSeb> what indexes you mean?
[20:40:53] <StephenLynx> collection indexes.
[20:41:24] <devSeb> yeah, could just insert my fake data in the code, as objects too...
[20:41:42] <StephenLynx> that would only be done for tests.
[20:41:45] <devSeb> only for dev
[20:41:49] <StephenLynx> or that.
[20:42:01] <StephenLynx> what I am talking about is just ensuring the indexes your application uses
[20:42:11] <StephenLynx> https://gitlab.com/mrseth/LynxChan/blob/master/src/be/db.js
[20:42:36] <StephenLynx> check line 242
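
A minimal sketch of that boot-time pattern with the Node.js driver (collection and field names are illustrative):

    // idempotent: creates the index if it is missing, no-op if it already exists
    db.collection('posts').createIndex({ authorId: 1 }, function (err) {
      if (err) throw err;
    });
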
[20:42:41] <devSeb> StephenLynx: Ok gonna read the Index intro right now...
[20:48:10] <devSeb> StephenLynx: thanks man, really appreciate it! Looking through more of your code too... And gonna read up on indexes now. bbl =)
[20:48:27] <macwinner> will an index on an attribute like an md5 hash have a significantly different performance or memory usage characteristic vs an index on ObjectId?
[20:48:30] <devSeb> *trough
[20:49:08] <devSeb> nm haha damnit
[21:03:53] <greyTEO> I know this goes against mongo but what is the best way to compare 2 collections?
[21:04:33] <greyTEO> In sql I could reverse join and get the difference.
[21:39:31] <macwinner> are these redundant indexes: userid_1 and testid_1_userid_1
[21:40:11] <macwinner> ie, if I drop userid_1 will anything that depended on it be optimized with testid_1_userid_1 ? or does the ordering matter?
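
The usual rule for macwinner's question is that a compound index serves queries on its leftmost prefix: testid_1_userid_1 can stand in for an index on testid alone, but not for one on userid alone, so userid_1 is only redundant if nothing queries userid without testid. A sketch with illustrative names:

    // served by { testid: 1, userid: 1 } via its leftmost prefix:
    db.results.find({ testid: 5 })
    db.results.find({ testid: 5, userid: 7 })
    // NOT served by it; needs userid_1 or another index leading with userid:
    db.results.find({ userid: 7 })
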
[22:00:49] <kba> greyTEO: Do you want to get the intersection?
[22:27:46] <greyTEO> kba, I want to get the opposite
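
Without a join, one common shell approach is to walk one collection and probe the other; collection names below are illustrative:

    // prints the _ids present in collA but missing from collB
    db.collA.find().forEach(function (doc) {
      if (db.collB.count({ _id: doc._id }) === 0) {
        print(doc._id);
      }
    });
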
[22:41:40] <fxmulder> so I've added a new replica set member and I noticed it's created more data files for the database than on the replicas it's replicating from, why might that be?
[22:48:52] <greyTEO> GothAlice, is the processId mandatory for marrow task?
[23:14:14] <joannac> fxmulder: did you run the new member with smallfiles?
[23:16:30] <d-snp> ah I keep missing the cool people, or they don't think my question is cool enough :(
[23:16:48] <d-snp> anyone know why oplog queries like this could be slow? QUERY [conn29] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1435098587000|18 } } cursorid:22303303740 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:293 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } 1341ms
[23:23:27] <joannac> d-snp: other things happening at the same time?
[23:24:30] <d-snp> well could be, what sorts of things, you mean over the network?
[23:25:02] <d-snp> it's every ~10-20 writes an oplog query like that happens and takes 500-2000ms
[23:25:36] <joannac> is it from a secondary?
[23:26:25] <joannac> the connection, i mean
[23:41:58] <d-snp> oh sorry I'm on windows, my terminal sometimes freezes :P
[23:42:01] <d-snp> good question
[23:42:39] <d-snp> would an oplog query from a secondary block writes until the oplog query result is sent back?
[23:44:04] <joannac> it shouldn't
[23:44:21] <joannac> but there's no index on the oplog
[23:45:20] <joannac> and I don't think internally there are queries like that
[23:45:34] <joannac> which is why I'm suspicious that it's your app, or someone checking something
[23:47:05] <d-snp> oh
[23:47:31] <d-snp> well we only interface to it using the official ruby driver
[23:47:35] <d-snp> perhaps an outdated one
[23:47:51] <d-snp> hmm and maybe some monitoring
[23:48:33] <d-snp> it could be the oplog queries have nothing to do with our performance problems, they just look a bit suspicious, taking so long
[23:49:04] <fxmulder> joannac: filesize appears to be the same, 2GB files
[23:49:06] <d-snp> we collect a bunch of mongodb performance metrics, could they cause oplog queries?
[23:50:17] <d-snp> oh
[23:50:19] <d-snp> http://stackoverflow.com/questions/6118411/why-do-my-mongodb-logs-fill-up-with-getmore-local-oplog-rs
[23:50:22] <d-snp> just saw that
[23:50:24] <fxmulder> comparing the count from collection.stats() on the primary and the 'objects cloned so far from collection' count in the new replica's logs, this replica is going to be using a lot more disk space than the other replicas
[23:50:40] <d-snp> apparently it's normal for them to take long and they should happen as part of normal operation
[23:50:53] <fxmulder> I'm only about 77% done
[23:53:38] <d-snp> alright, so it was a red herring, it's got nothing to do with our performance :)
[23:54:34] <d-snp> fxmulder: that sounds like some configuration is different
[23:54:46] <d-snp> is it a different mongo version? perhaps some flags?
[23:54:53] <fxmulder> same version same config
[23:55:11] <d-snp> hm, and storage, filesystem?
[23:55:28] <fxmulder> same filesystem
[23:55:40] <fxmulder> it should be setup identically to the other replicas
[23:55:55] <d-snp> weird
[23:56:01] <d-snp> did you run a compaction on the originals?
[23:56:12] <fxmulder> I didn't
[23:57:12] <d-snp> are you using wiredtiger? with index compression and such?
[23:58:01] <d-snp> if it's really a problem for you, and you can afford some downtime, you can also just restore using rsync
[23:58:18] <fxmulder> rsync is what I've been trying to get away from
[23:58:25] <d-snp> ok
[23:59:09] <d-snp> I'm not 100% certain on what kind of live compaction mongodb does, but it could be that if you run a compact after the restoration is done they'll be closer together
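
A hedged sketch of that last suggestion: compact is issued per collection on the member in question (and with MMAPv1 it blocks operations on that database while it runs); the collection name is illustrative:

    db.runCommand({ compact: "mycollection" })
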