PMXBOT Log file Viewer

#mongodb logs for Friday the 23rd of September, 2016

[07:13:15] <j-robert> joannac: after like 2 days I finally figured out I was setting the wrong port to connect to MongoDB. :(
[08:58:04] <livingBEEF> Is it possible to optionally bind to IP? As the failure to bind wouldn't be critical (v2.6.8)
[09:08:06] <joannac> ...optionally?
[09:08:30] <joannac> you don't have to specify bindIP
[11:53:36] <livingBEEF> I'd like to bind to localhost plus one docker interface, when it exists - but I do not want to bind to eth and wlan
[11:56:27] <livingBEEF> I don't want to have to set up firewall because of this...
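[Editor's note: the setup livingBEEF describes would look roughly like the sketch below in a 2.6-era YAML config. The address 172.17.0.1 is the conventional docker0 bridge IP and is an assumption, not from the log. This also shows why the bind can't be "optional": mongod exits at startup if any listed address can't be bound.]

```yaml
# /etc/mongod.conf -- illustrative sketch, not a tested config.
# 172.17.0.1 is the usual docker0 bridge address; it may differ per host.
net:
  port: 27017
  bindIp: 127.0.0.1,172.17.0.1   # mongod fails to start if either address is absent
```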
[16:06:00] <DearVolt> Hey everyone :) I've got a list of entries I'm displaying in my web app's backend but it's starting to get large. I would like to implement a pagination system with both prev/next controls as well as page numbers. I have no unique fields (Besides _id of course). I have read about this a few places and see that limit/skip is not a viable option and that _id doesn't strictly represent insertion order. Any advice on how I could go abo
[16:07:09] <DearVolt> I was hoping to be able to implement various sorting options, but this doesn't look like an option with pagination anymore.
[16:12:49] <cheeser> _id_ *does* reflect creation order, though. at least, if you're using ObjectId
[16:13:02] <cheeser> of course if you sort by any field, you can throw that out the window.
[16:15:08] <DearVolt> Thanks @cheeser. The ObjectId docs say that only happens if two insertions happen concurrently, which won't be the case here. At the moment, to uniquely identify a single entry I need to use multiple fields. That makes it trickier to do the pagination, right?
[16:15:20] <DearVolt> I'll be back in a few minutes.
[16:18:46] <cheeser> it does
[16:32:42] <DearVolt> @cheeser: So should I just use _id and then limit/skip with only next/prev controls?
[16:33:55] <DearVolt> Otherwise I'll have to implement something like an A - Z pagination and have all entries starting with certain letters, but that would be pretty unbalanced. Though it would make navigation easier.
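[Editor's note: the range-based ("cursor") pagination cheeser is pointing DearVolt toward avoids skip entirely: remember the first and last `_id` of the current page and filter on `$gt`/`$lt`. A minimal sketch, simulated on an in-memory list so it runs standalone; with pymongo the same filter documents would be passed to `collection.find()`. It assumes `_id`s sort in insertion order, which holds for ObjectId without concurrent writers, as discussed above.]

```python
# Next/prev pagination keyed on _id, without skip.
# Integers stand in for ObjectIds; the comparisons are the same.

def next_page(docs, last_id, limit):
    """Docs after last_id -- like find({"_id": {"$gt": last_id}}).limit(limit)."""
    after = sorted((d for d in docs if d["_id"] > last_id), key=lambda d: d["_id"])
    return after[:limit]

def prev_page(docs, first_id, limit):
    """Docs just before first_id -- like find({"_id": {"$lt": first_id}})."""
    before = sorted((d for d in docs if d["_id"] < first_id), key=lambda d: d["_id"])
    return before[-limit:]

docs = [{"_id": i, "name": f"entry{i}"} for i in range(10)]
print([d["_id"] for d in next_page(docs, 3, 3)])  # [4, 5, 6]
print([d["_id"] for d in prev_page(docs, 4, 3)])  # [1, 2, 3]
```

The trade-off versus limit/skip: next/prev and "first page" are cheap and stable, but jumping straight to an arbitrary page number is not, which matches the advice DearVolt had read.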
[17:08:03] <chris613> Hiya - Mongo 2.6 (I know..) I have a user defined in admin.system.users who has the readWriteAnyDatabase role. I can connect to admin and then 'use' any DB just fine. But if I specify another DB on the command line I get an auth failure. Am I simply missing a role or other option on my user, or do I really need to create the user in every DB?
[17:35:50] <chris613> Ah shoot, actually it's mongo 2.4 (I know, I know....) ^^^
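[Editor's note: chris613's user lives in the admin database, so the shell has to be told to authenticate there even when connecting to another DB. The `--authenticationDatabase` flag exists as of 2.4; the database, user, and password below are placeholders.]

```
mongo somedb -u myuser -p mypass --authenticationDatabase admin
```

Without the flag, the shell tries to authenticate against `somedb` itself, which produces exactly the auth failure described, since the user is only defined in admin.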
[18:51:27] <FIFOd> I'm using the Node.js node-mongodb-native driver (via mongoose) and I see a connection timeout intermittently. http://pastebin.com/4pYyx7RR Any hints on how to debug this further? Based on http://mongodb.github.io/node-mongodb-native/2.1/api/Server.html I would expect the socket to not have a timeout set.
[18:52:24] <FIFOd> I'm using a pool, and it seems to always be a random connection.
[20:14:13] <AlmightyOatmeal> Would running multiple find().count() queries, against the same collection, cause any kind of lock contention or somehow impede each other's performance?
[20:16:50] <AlmightyOatmeal> the query plan involves scanning indexes so i would expect things to go relatively quickly but it seems to grind to a halt
[20:23:57] <cheeser> reads don't block reads so you should be fine
[20:24:57] <AlmightyOatmeal> cheeser: thanks.
[20:26:16] <AlmightyOatmeal> i'm trying to determine where the bottleneck is. mongodb doesn't seem to be doing much activity on my database and very little CPU/disk IO activity so i'm thinking the problem may be within my python code although it seems pretty straight forward.
[20:26:20] <AlmightyOatmeal> back to the drawing board.
[21:29:58] <AlmightyOatmeal> anubhaw: quit playing jumprope with your network cable :)
[21:54:28] <AlmightyOatmeal> Can one use an asterisk as a wildcard like % is used in a SQL LIKE statement like: find({"hostname":"cluster2*"}) or would it be more like find({"hostname":"cluster2/.*"})?
[22:31:43] <joannac> AlmightyOatmeal: regex matching?
[23:08:53] <AlmightyOatmeal> joannac: i don't need regex matching per-say, just wildcard matching with a partial value
[23:09:48] <cheeser> "se" :)
[23:10:06] <AlmightyOatmeal> in my defense, english is technically my second language o:)
[23:10:21] <cheeser> haha. fair enough.
[23:10:33] <AlmightyOatmeal> but thank you for the correction. it's been a long long week at the office :)
[23:17:36] <joannac> AlmightyOatmeal: well, regex matching is how you would do it
[23:18:57] <joannac> https://docs.mongodb.com/manual/reference/operator/query/regex/
[23:20:54] <AlmightyOatmeal> joannac: oooh, that's unfortunate. i was hoping i could do "value*" for more full-text searching with wildcards :(
[23:21:04] <AlmightyOatmeal> i heard mongodb regex queries aren't exactly blazing fast
[23:21:46] <AlmightyOatmeal> so then "<partial val>/.*" wouldn't work as it's invalid regex, silly me.
[23:23:36] <joannac> AlmightyOatmeal: did you read the page?
[23:24:08] <joannac> /value/ will match value.*
[23:26:11] <AlmightyOatmeal> joannac: i started to but work interrupted me :)
[23:26:18] <joannac> heh
[23:26:25] <joannac> alright then :p I can sympathise
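[Editor's note: the wildcard queries AlmightyOatmeal was after, written out as plain query documents; with pymongo these would be passed to `collection.find()`. The hostname values are made up. A left-anchored regex like `^cluster2` can use an index on the field, which is the case where MongoDB regex queries are fast; an unanchored one forces a scan, which is where the slow reputation comes from.]

```python
import re

# SQL: hostname LIKE 'cluster2%'  -> anchored prefix, index-friendly
prefix_query = {"hostname": {"$regex": "^cluster2"}}

# SQL: hostname LIKE '%cluster2%' -> unanchored, scans the index/collection
contains_query = {"hostname": {"$regex": "cluster2"}}

# The same matching semantics, checked locally with Python's re module:
pattern = re.compile("^cluster2")
print(bool(pattern.search("cluster2-node7")))  # True
print(bool(pattern.search("cluster1-node7")))  # False
```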
[23:41:44] <AlmightyOatmeal> working from home today because i can't remove the last bits of a leaky power steering high pressure line from the bloody steering box. so i'm half-playing and half-working today.
[23:42:03] <AlmightyOatmeal> the number of macbook pro's on my coffee table is quite a pretty sight ;)