[00:16:31] <Boomtime> specifically the note: "Before enabling the new WiredTiger storage engine, ensure that all replica set/sharded cluster members are running at least MongoDB version 2.6.8, and preferably version 3.0.0 or newer."
[00:16:48] <Boomtime> that's the only caveat i know of
[01:25:14] <shortdudey123> anyone used mplotqueries? trying to group by the index used for the query, and my regex isn't working :/
[01:34:56] <leptone> how can i db.collection.find({a: 3, b: 4}, function(err, doc){blah}) such that doc is item in collection with a = 3 and b = 4?
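(A minimal sketch of what leptone seems to be after, assuming the Node.js native driver and illustrative names; the query document {a: 3, b: 4} already requires both fields to match, the difference is only whether you want one document or all of them:)

```js
// Hypothetical Node.js native-driver usage; connection string and
// collection name are placeholders, not taken from the log.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
  if (err) throw err;
  var collection = db.collection('items');

  // One matching document (a = 3 AND b = 4):
  collection.findOne({ a: 3, b: 4 }, function (err, doc) {
    console.log(err, doc);
  });

  // All matching documents:
  collection.find({ a: 3, b: 4 }).toArray(function (err, docs) {
    console.log(err, docs);
    db.close();
  });
});
```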
[01:57:14] <leptone> why isn't product.description being populated with data from the .findOne() like the rest of the product object? https://gist.github.com/leptone/3751359934e104c01737#file-productroutes-js-L18
[01:59:42] <StephenLynx> probably because of mongoose.
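(One common cause, offered as an assumption since the gist's schema isn't shown: mongoose only materializes fields declared in the schema, so a description stored in the raw document but missing from the schema won't appear on the document returned by findOne(). A minimal sketch with a hypothetical schema:)

```js
var mongoose = require('mongoose');

// If "description" were omitted here, product.description from findOne()
// would come back undefined even though the stored document contains it.
var productSchema = new mongoose.Schema({
  name: String,
  price: Number,
  description: String   // must be declared for mongoose to expose it
});

var Product = mongoose.model('Product', productSchema);

Product.findOne({ name: 'widget' }, function (err, product) {
  console.log(err, product && product.description);
});
```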
[10:37:23] <lost_and_unfound> Greetings, I am having some difficulty with indexing a single key using a range selection.
[10:37:31] <lost_and_unfound> Any good reads you can suggest?
[11:15:51] <lost_and_unfound> I have attempted to add indexes based on various articles and help pages. So, I added the new index, and the queries are still extremely slow. Can anyone shed some light on whether this is related to the indexes / query, or if it might be hardware related (the server has insufficient resources)? http://pastie.org/private/kwcg1diozuq4vdkuhbrq
[11:24:26] <KUUHMU> hi, anyone here who knows about pymongo full text search?
[11:52:17] <lost_and_unfound> doing some more reading, https://docs.mongodb.org/manual/tutorial/ensure-indexes-fit-ram/ - Is this free RAM required part of the default mongod running, or is that an additional X-GB ram it requires once loading the index to do the query?
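(A quick way to see how big the indexes actually are relative to memory, sketched in the mongo shell; the collection name is a placeholder. This doesn't answer whether the RAM is "extra" on top of mongod's baseline, it just gives the numbers to compare:)

```js
// Total size of all indexes on the collection, in bytes.
db.mycoll.totalIndexSize()

// Per-index breakdown, also in bytes.
db.mycoll.stats().indexSizes

// Memory the server currently reports (resident/virtual, in MB).
db.serverStatus().mem
```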
[13:10:56] <lost_and_unfound> eh... ok, so we just changed the hardware. From 4GB to 20GB RAM, from a dual core CPU to a quad core CPU; running the same query, down from 171542ms to 108406ms.
[13:14:29] <lost_and_unfound> I have the query times from before and after the indexes were created, with minimal change.
[13:15:01] <StephenLynx> use explain on it to see if you can figure something out of it.
[13:16:04] <lost_and_unfound> StephenLynx: that is what I did in the pastes provided. For the first query the explain showed the index was not used; then I created the index, ran the query with explain() again, and it shows the index was used.
[13:16:54] <StephenLynx> i can't see .explain() anywhere
[13:17:08] <lost_and_unfound> it takes mongo 108406ms to pull 2.8mil (2864912) records
[13:18:09] <StephenLynx> I don't think that handling nearly 3 million documents is trivial. And you didn't run explain, so it might not be using the index.
[13:23:57] <StephenLynx> ok, so I have a hunch your find is not using the index.
[13:26:55] <lost_and_unfound> in the link, http://pastie.org/private/khxbmun8461mjguq12shiw you will see the "before" heading where I ran a find().explain() and it showed "BtreeCursor template_id_1". I then created the new index and ran the find().explain() again, and it shows I am using the index "BtreeCursor template_id_1_log_date_1"
[13:27:46] <lost_and_unfound> So the index was created and used. I have also added more hardware to the server and restarted it.
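(For reference, the shape of what the pastes describe, sketched in the mongo shell with placeholder field values; the syntax below is the MongoDB 3.0 explain form, older shells report nscanned/nscannedObjects from a plain .explain() instead. The execution statistics are the quickest way to tell whether the time goes into the index scan or simply into returning 2.8M documents:)

```js
// Compound index matching the BtreeCursor name in the paste.
db.mycoll.createIndex({ template_id: 1, log_date: 1 })

// Re-run the query with execution statistics.
db.mycoll.find({
  template_id: 42,                                              // placeholder
  log_date: { $gte: ISODate("2015-01-01"), $lt: ISODate("2015-06-01") }
}).explain("executionStats")

// Fields worth comparing in the output:
//   executionStats.nReturned          - documents returned
//   executionStats.totalKeysExamined  - index entries scanned
//   executionStats.totalDocsExamined  - documents scanned
// If nReturned is itself ~2.8 million, most of the time is spent fetching
// and shipping documents, which an index cannot make cheap.
```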
[13:56:43] <daedeloth> Alright, so. I have the following situation. I have a list of games. Each game has a list of groups and a list of users. All users are assigned to one or more groups. Games themselves are atomic, so a user from game A can never be in a group of game B.
[13:56:52] <daedeloth> So, I'd think game is my document here
[13:57:19] <daedeloth> BUT. Groups have a property called "unique token", which should be unique across the whole thing
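(One way to keep game as the document while still enforcing the cross-game uniqueness, offered as a sketch rather than advice from the channel: a small side collection whose only job is to own tokens, with a unique index, and a token is claimed there before the group is embedded in the game. Names and values are illustrative:)

```js
// Unique index so two groups can never claim the same token.
db.group_tokens.createIndex({ token: 1 }, { unique: true })

var gameId = ObjectId()   // stand-in for the real game _id

// Claiming a token: the insert fails with a duplicate-key error (E11000)
// if the token is already taken, so the caller retries with a new token.
db.group_tokens.insert({ token: "a1b2c3", gameId: gameId })

// Only after the claim succeeds is the group embedded in the game document.
db.games.update(
  { _id: gameId },
  { $push: { groups: { token: "a1b2c3", users: [] } } }
)
```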
[14:09:20] <StephenLynx> what I would do is to have a list of users allowed to join a game/group
[14:09:31] <StephenLynx> so the group id could be predictable.
[14:10:07] <daedeloth> well the problem is that one of the ids represents the "game master", and we'd like to avoid users taking over the quiz master controller
[14:10:26] <StephenLynx> you authenticate that too.
[14:11:15] <daedeloth> ah but user doesn't represent an authenticated user; everyone who connects is automatically a user. Other than that token, there is no authentication done whatsoever
[14:11:37] <daedeloth> (well there is oauth2 but that's on a completely different level)
[14:23:03] <StephenLynx> 1: have the servers not keep track of that in the first place, which would increase reads on the database
[14:23:17] <StephenLynx> 2: have the servers communicate with each other about these changes
[14:26:59] <daedeloth> I'd prefer to go for the second option, sounds more bulletproof. Also, I'm talking to a client who wants to use this at super big scale (10k players), so I'd like to be able to spread the traffic across multiple servers
[14:27:41] <daedeloth> socket.io connections are scaled with redis; I think I could inject some commands in there to tell all servers to "update the documents with provided ids"
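(A minimal sketch of that second option using plain Redis pub/sub from Node, independent of the socket.io adapter; the channel name and payload shape are made up for illustration:)

```js
var redis = require('redis');

var pub = redis.createClient();
var sub = redis.createClient();

// Every app server listens for invalidation messages.
sub.subscribe('game-doc-updates');
sub.on('message', function (channel, message) {
  var ids = JSON.parse(message);   // e.g. ["55f1...", "55f2..."]
  // re-read (or drop from any local cache) the game documents with these ids
});

// The server that changed the documents tells everyone else.
pub.publish('game-doc-updates', JSON.stringify(['55f1...', '55f2...']));
```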
[14:27:54] <StephenLynx> socket.io has nothing to do with this.