PMXBOT Log file Viewer


#mongodb logs for Saturday the 5th of December, 2015

[00:05:27] <Doyle> Hey. Is it advisable to introduce mongodb 3.0 RS members to a 2.6.7 RS?
[00:16:17] <Boomtime> @Doyle: https://docs.mongodb.org/manual/release-notes/3.0-upgrade/#change-replica-set-storage-engine-to-wiredtiger
[00:16:31] <Boomtime> specifically the note: "Before enabling the new WiredTiger storage engine, ensure that all replica set/sharded cluster members are running at least MongoDB version 2.6.8, and preferably version 3.0.0 or newer."
[00:16:48] <Boomtime> that's the only caveat i know of
[00:20:27] <cheeser> Doyle: that's fine
[00:22:54] <Doyle> thanks Boomtime, cheeser
[00:23:37] <Doyle> You guys are here a lot. Are you mongodb employees?
[00:26:27] <Boomtime> can't we just be really helpful...?
[00:27:23] <Doyle> Was just wondering
[00:27:36] <Doyle> It's good you're here
[01:25:14] <shortdudey123> anyone used mplotqueries? trying to group by the index used for the query, and my regex isn't working :/
[01:34:56] <leptone> how can i db.collection.find({a: 3, b: 4}, function(err, doc){blah}) such that doc is item in collection with a = 3 and b = 4?
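(Callback plumbing aside, what leptone describes is an equality query on both fields: `find({a: 3, b: 4})` matches documents where a equals 3 AND b equals 4. A hypothetical in-memory sketch of that matching rule, not the real driver:)

```javascript
// Illustration only: a document matches when every queried field is equal.
function matches(doc, query) {
  return Object.keys(query).every((k) => doc[k] === query[k]);
}

const docs = [{ a: 3, b: 4, name: "hit" }, { a: 3, b: 5 }, { a: 1, b: 4 }];
const hits = docs.filter((d) => matches(d, { a: 3, b: 4 }));
// hits holds only documents with a = 3 and b = 4
```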
[01:57:14] <leptone> why isn't product.description being populated with data from the .findOne() like the rest of the product object? https://gist.github.com/leptone/3751359934e104c01737#file-productroutes-js-L18
[01:59:42] <StephenLynx> probably because of mongoose.
[10:37:23] <lost_and_unfound> Greetings, I am having some difficulty indexing a single key using a range selection.
[10:37:31] <lost_and_unfound> Any good reads you can suggest?
[11:15:51] <lost_and_unfound> I have attempted to add indexes based on various articles and help pages. So, I added the new index, and the queries are still extremely slow. Can anyone shed some light on whether this is related to indexes / query, or if this might be hardware related and the server has insufficient resources? http://pastie.org/private/kwcg1diozuq4vdkuhbrq
[11:24:26] <KUUHMU> hi anyone who know about pymongo full text search?
[11:52:17] <lost_and_unfound> doing some more reading, https://docs.mongodb.org/manual/tutorial/ensure-indexes-fit-ram/ - Is this free RAM required part of the default mongod running, or is that an additional X-GB ram it requires once loading the index to do the query?
[13:10:56] <lost_and_unfound> eh... ok, so we just changed the hardware. From 4GB to 20GB RAM, from a dual-core CPU to a quad-core CPU; running the same query, down from 171542ms to 108406ms.
[13:11:28] <StephenLynx> nice.
[13:11:43] <StephenLynx> seems the CPU was the bottle neck
[13:12:31] <lost_and_unfound> This is still way too slow; it's a DEV server, so it does not even have proper data.
[13:12:45] <StephenLynx> what is the query?
[13:13:49] <lost_and_unfound> StephenLynx: http://pastie.org/private/kwcg1diozuq4vdkuhbrq
[13:14:29] <lost_and_unfound> I have the query from before and after the indexes were created, with minimal change in time
[13:15:01] <StephenLynx> use explain on it to see if you can figure something out of it.
[13:16:04] <lost_and_unfound> StephenLynx: that is what I did in the pastes provided. For the first query the explain showed the index was not used; then I created the index, ran the query with explain() again, and it shows the index was used.
[13:16:54] <StephenLynx> i can't see .explain() anywhere
[13:17:08] <lost_and_unfound> it takes mongo 108406ms to pull 2.8mil (2864912) records
[13:18:09] <StephenLynx> I don't think that handling nearly 3 million documents is trivial. and you didn't run explain, it might not be using the index.
[13:18:19] <lost_and_unfound> http://pastie.org/private/khxbmun8461mjguq12shiw
[13:18:43] <StephenLynx> ah, its there :v
[13:18:43] <StephenLynx> nvm
[13:18:49] <lost_and_unfound> My bad on the first paste, the <pre> had the explain way at the end; I reformatted the text now
[13:19:09] <StephenLynx> i also didn't use ctrl+f v:
[13:19:33] <StephenLynx> >"nscannedObjects" : 2864912
[13:19:43] <StephenLynx> shouldn't it limit itself to only the objects that fit the index?
[13:20:20] <StephenLynx> try a findOne
[13:22:21] <lost_and_unfound> I attempted a findOne({...}).explain(); but I get an error message
[13:22:34] <lost_and_unfound> TypeError: Object [object Object] has no method 'explain'
[13:22:54] <StephenLynx> indeed. explain runs on the cursor.
[13:22:59] <StephenLynx> try without the explain then
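(That TypeError is consistent with the shell API: find() returns a cursor, which carries .explain(), while findOne() returns the document itself, so `db.coll.find(query).limit(1).explain()` is the usual way to profile a single-document lookup. A toy model, with made-up internals rather than the real driver, of why one call works and the other doesn't:)

```javascript
// Toy model: find() yields a cursor-like object exposing explain(),
// while findOne() hands back the plain document, which has no explain().
const collection = {
  docs: [{ _id: 1, template_id: 7 }],
  find() {
    return { explain: () => ({ cursor: "BtreeCursor template_id_1" }) }; // stand-in plan
  },
  findOne() {
    return this.docs[0]; // a plain object
  },
};

const cursorExplain = typeof collection.find().explain;  // a function
const docExplain = typeof collection.findOne().explain;  // undefined
```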
[13:23:18] <lost_and_unfound> without the explain I do get a result back
[13:23:27] <StephenLynx> but how fast is it?
[13:23:43] <lost_and_unfound> it was instant
[13:23:57] <StephenLynx> ok, so I have a hunch your find is not using the index.
[13:26:55] <lost_and_unfound> in the link, http://pastie.org/private/khxbmun8461mjguq12shiw you will see the "before" heading where I ran a find().explain() and it showed "BtreeCursor template_id_1". I then created the new index and ran the find().explain() again, and it shows I am using the index "BtreeCursor template_id_1_log_date_1"
[13:27:46] <lost_and_unfound> So the index was created and used. I have also added more hardware to the server and restarted it.
[13:40:36] <StephenLynx> hm
[13:40:51] <kexmex> mongodump is horribly slow on latest --- is that a known problem? is there a way to diagnose it?
[13:40:52] <StephenLynx> ok, but why is it scanning all documents?
[13:41:21] <kexmex> is it something to do with wiredTiger?
[13:42:57] <lost_and_unfound> I have done a vanilla install on FreeBSD and have not configured any settings
[13:54:39] <daedeloth> hi
[13:54:54] <daedeloth> I'm fairly new to document based databases, and I was wondering if I could run my use case here
[13:55:16] <StephenLynx> dont ask to ask, just ask.
[13:55:21] <StephenLynx> IRC unwritten rule.
[13:56:43] <daedeloth> Alright, so. I have the following situation. I have a list of games. Each game has a list of groups and a list of users. All users are assigned to one or more groups. Games themselves are atomic, so a user from game A can never be in a group of game B.
[13:56:52] <daedeloth> So, I'd think game is my document here
[13:57:19] <daedeloth> BUT. Groups have a property called "unique token", which should be unique across the whole thing
[13:57:26] <StephenLynx> _id
[13:57:34] <StephenLynx> by default all your documents have one.
[13:57:40] <StephenLynx> ah
[13:57:45] <StephenLynx> groups are a sub-document.
[13:57:47] <StephenLynx> nevermind
[13:57:48] <daedeloth> indeed
[13:58:29] <StephenLynx> groups belong to only a single game?
[13:58:34] <daedeloth> yes
[13:58:59] <daedeloth> but there can be multiple groups in a game
[13:59:00] <StephenLynx> so why do you need them to have a global unique id?
[13:59:22] <daedeloth> well, that token is the way to connect to the game
[13:59:24] <StephenLynx> can't they just have something unique among all game's groups?
[13:59:32] <StephenLynx> not if they are a sub-document.
[13:59:48] <StephenLynx> you could have an array of groups in your game.
[14:00:10] <daedeloth> and make groups documents?
[14:00:17] <StephenLynx> sub-documents.
[14:00:44] <daedeloth> hm, I don't understand what you mean with unique among all game groups
[14:00:56] <StephenLynx> a game would be more or less like this {gameName:'x',groups:[{groupId:1},{groupId:2},]}
[14:01:04] <daedeloth> yep
[14:01:07] <StephenLynx> then you could have another game with its own list of groups
[14:01:24] <StephenLynx> but you could have groupId:1 on this second game too
[14:01:31] <StephenLynx> because you know its a different game
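(The embedded shape being described, as a sketch: group ids only need to be unique within their enclosing game document, because a group is always resolved through its game.)

```javascript
// Two game documents; groupId 1 appears in both, which is fine because
// lookups always start from a specific game.
const gameA = { gameName: "x", groups: [{ groupId: 1 }, { groupId: 2 }] };
const gameB = { gameName: "y", groups: [{ groupId: 1 }] };

function findGroup(game, groupId) {
  return game.groups.find((g) => g.groupId === groupId);
}

const a1 = findGroup(gameA, 1); // group 1 of game A
const b1 = findGroup(gameB, 1); // a different group with the same local id
```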
[14:01:41] <daedeloth> yea but I can't work that way due to the whole token thing
[14:01:51] <StephenLynx> what token thing?
[14:02:00] <daedeloth> groups have a unique token
[14:02:10] <StephenLynx> why
[14:02:22] <daedeloth> well it's used to login, you login into a group and that group is assigned to a game
[14:02:33] <daedeloth> so you login in a group, not into a game
[14:02:36] <daedeloth> it's a bit complicated :)
[14:02:47] <StephenLynx> ok, but can't this token be gameId+groupId?
[14:02:58] <daedeloth> no, it can't be predictable
[14:03:03] <StephenLynx> wot
[14:03:17] <StephenLynx> security by obfuscation?
[14:03:21] <daedeloth> security
[14:03:32] <StephenLynx> that is real bad security.
[14:03:42] <StephenLynx> you should authenticate permissions on the back-end
[14:03:48] <StephenLynx> instead of trusting people won't know it.
[14:03:56] <daedeloth> it's not supposed to be very secure :) I just need to generate an access token that is used to login as a certain group
[14:04:11] <StephenLynx> not only is it not secure, but it leads to bad db design.
[14:04:22] <StephenLynx> however
[14:04:27] <StephenLynx> you could have groups on a separate collection
[14:04:36] <StephenLynx> and have on each group a field referencing a game.
[14:04:49] <daedeloth> ok
[14:04:51] <StephenLynx> that way you can use the group _id field
[14:04:59] <StephenLynx> and still have a 1-n relation.
[14:05:07] <daedeloth> and a reference is basically an id, and you load the game itself manually?
[14:05:14] <StephenLynx> yes.
[14:05:26] <daedeloth> and for users? to get the n-n relation? Same idea?
[14:05:45] <StephenLynx> have an array of users on groups.
[14:05:57] <daedeloth> yea but a user can be in multiple groups :)
[14:06:02] <StephenLynx> no problem
[14:06:13] <StephenLynx> you just store the user unique identifier
[14:06:18] <StephenLynx> and not the actual user document.
[14:06:29] <daedeloth> yep, so 3 document types then
[14:06:30] <daedeloth> got it
[14:06:34] <StephenLynx> 3 collections, yes.
[14:06:56] <StephenLynx> you could reduce it to 2 if it weren't for that unpredictable token.
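(Put together, the three-collection layout discussed above might look like this; field names are illustrative. Groups get their own collection so the group _id can serve as the unique token, each group references its game for the 1-n relation, and the n-n relation to users is an array of user ids.)

```javascript
// Hypothetical documents for the three collections: games, groups, users.
const game = { _id: "game1", name: "friday quiz" };

const groups = [
  // gameId gives the 1-n relation to games; userIds gives n-n to users.
  { _id: "tokenA", gameId: "game1", userIds: ["u1", "u2"] },
  { _id: "tokenB", gameId: "game1", userIds: ["u2"] },
];

const users = [{ _id: "u1" }, { _id: "u2" }];

// A user may belong to several groups:
const groupsForU2 = groups.filter((g) => g.userIds.includes("u2"));
```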
[14:07:32] <daedeloth> yea it's a bit confusing, I'm building a quiz game where users can connect their cellphones to have local multiplayer
[14:07:47] <daedeloth> so security is not that much of an issue
[14:07:47] <StephenLynx> it's not confusing, it's just badly designed.
[14:08:11] <StephenLynx> but is not thaaaaaaat bad.
[14:08:44] <daedeloth> :)
[14:09:20] <StephenLynx> what I would do is to have a list of users allowed to join a game/group
[14:09:31] <StephenLynx> so the group id could be predictable.
[14:10:07] <daedeloth> well the problem is that one of the ids represents the "game master", and we'd like to avoid users taking over the quiz master controller
[14:10:22] <StephenLynx> no problem.
[14:10:26] <StephenLynx> you authenticate that too.
[14:11:15] <daedeloth> ah but user doesn't represent an authenticated user; everyone who connects is automatically a user. Other than that token, there is no authentication done whatsoever
[14:11:37] <daedeloth> (well there is oauth2 but that's on a completely different level)
[14:11:49] <StephenLynx> hm.
[14:11:51] <StephenLynx> I see.
[14:12:06] <StephenLynx> welp
[14:12:24] <StephenLynx> indeed, I can see why you would like to not have an account system.
[14:12:31] <StephenLynx> and I can see the limitations that impose.
[14:13:40] <daedeloth> alright, so I think I'll figure that one out
[14:13:52] <StephenLynx> imo
[14:13:52] <daedeloth> now, any thoughts on how to keep these objects in sync across multiple nodes? :p
[14:14:17] <StephenLynx> if you really wish to keep your game free of an account system, 3 collections is the best compromise.
[14:14:31] <StephenLynx> and using the _id as group identifier.
[14:16:38] <StephenLynx> by node, what do you mean?
[14:20:24] <daedeloth> StephenLynx, a heroku instance, a server
[14:20:30] <StephenLynx> hm
[14:20:59] <StephenLynx> what I would do is have each game run on a single server.
[14:21:09] <StephenLynx> and split games across servers.
[14:21:55] <daedeloth> hm not sure how that would work load balancing wise
[14:22:21] <StephenLynx> the other options are:
[14:23:03] <StephenLynx> 1: have servers not keep track of that in the first place, which would increase reads on the database
[14:23:17] <StephenLynx> 2: have servers communicate these changes between each other
[14:26:59] <daedeloth> I'd prefer to go for the second option, sounds more bulletproof. Also, I'm talking to a client who wants to use this at super big scale (10k players), so I'd like to be able to spread the traffic across multiple servers
[14:27:41] <daedeloth> socket.io connections are scaled with redis; I think I could inject some commands in there to tell all servers to "update the documents with provided ids"
[14:27:54] <StephenLynx> socket.io has nothing to do with this.
[14:28:03] <daedeloth> no, I know
[14:28:12] <daedeloth> but I'm saying I already have redis in there as well
[14:28:17] <StephenLynx> and personally I wouldn't use it.
[14:28:20] <daedeloth> so I could use that to communicate between nodes
[14:28:30] <daedeloth> yea I've heard a lot of bad rep on socket.io lately
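(Option 2 above is essentially a publish/subscribe broadcast between servers; in a deployment like the one described, Redis pub/sub would typically play that role. An in-process toy of the idea, with made-up class and variable names:)

```javascript
// Toy message bus standing in for Redis pub/sub between game servers.
class Bus {
  constructor() { this.subs = []; }
  subscribe(fn) { this.subs.push(fn); }
  publish(msg) { this.subs.forEach((fn) => fn(msg)); }
}

const bus = new Bus();
const serverCaches = [new Map(), new Map()]; // one document cache per node

// Each server marks its cached documents stale when told which ids changed.
for (const cache of serverCaches) {
  bus.subscribe(({ ids }) => ids.forEach((id) => cache.set(id, "stale")));
}

// "update the documents with provided ids" broadcast:
bus.publish({ ids: ["game1"] });
```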
[14:36:59] <kexmex> what's the IDHACK plan?
[14:37:22] <StephenLynx> wot
[14:37:59] <kexmex> nevermind
[15:47:39] <kexmex> this is horrible
[15:47:42] <kexmex> 2015-12-05T15:42:05.574+0000 I QUERY [conn14] getmore api2.requests cursorid:27698932230 ntoreturn:0 keyUpdates:0 writeConflicts:0 numYields:114 nreturned:959 reslen:4206760 locks:{ Global: { acquireCount: { r: 230 } }, Database: { acquireCount: { r: 115 } }, Collection: { acquireCount: { r: 115 } } } 16045ms
[15:47:45] <kexmex> how do i diagnose this?
[15:48:01] <kexmex> is it the unzipping?
[15:49:46] <kexmex> i got 7GB ram on the machine-- database is like 4GB
[20:05:21] <kexmex> anyone from Mongo support awake?