#mongodb logs for Wednesday the 27th of April, 2016

[03:20:59] <bakhtiyor> anyone there? can help with aggregation query?
[03:22:06] <Boomtime> @bakhtiyor: hi there, ask your question - if an agg is not working as you expect, put your current attempts in a pastebin or equivalent with explanation of what you expect
[03:22:31] <Boomtime> i can't guarantee anyone will know, but you'll need to do this step before anyone can try
[03:27:59] <bakhtiyor> Boomtime: hi
[03:27:59] <bakhtiyor> https://gist.github.com/hbakhtiyor/256842aa9d9cf66c48401c5f058cef54
[03:28:00] <bakhtiyor> need to cross-check by the "key" field on the same collection, i.e. the same "value" should exist under both the "a" and "b" keys
[03:47:13] <bakhtiyor> is it possible with aggregation without any script?
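One way to read the question without the gist (which is not in the log) is documents shaped like {key: "a", value: 1} and {key: "b", value: 1}; under that assumption, an aggregation with no scripting could group by value and keep only values seen under both keys:

    // mongo shell sketch; collection, key and value names are illustrative
    db.items.aggregate([
      { $match: { key: { $in: ["a", "b"] } } },
      { $group: { _id: "$value", keys: { $addToSet: "$key" } } },
      { $match: { keys: { $all: ["a", "b"] } } }
    ])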
[07:00:48] <fleetfox> Hello. I'm archiving with mongodump --quiet --archive > file.archive and getting Failed: corruption found in archive; ParserConsumer.BodyBSON() ( Document is corrupted )
[07:00:52] <fleetfox> on restore
[07:04:49] <fleetfox> nvm, pseudotty
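The "pseudotty" remark: the archive is binary, and pushing it through a pseudo-terminal on the way to the redirect mangles it. Letting mongodump write the archive itself avoids the problem; a sketch:

    # write the archive straight to a file instead of redirecting stdout
    mongodump --quiet --archive=file.archive
    mongorestore --archive=file.archive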
[08:16:31] <ams_> I see lots of "COMMAND:" in my mongodb logs. Is that normal? a (quite busy) MongoDB instance has produced 8 GB of logs in 2 days
[08:53:47] <ams_> 2016-04-22T12:44:31.032+0000 I COMMAND [conn20] query xxx_fed.data query: { _id: "xxxx" } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 writeConflicts:0 numYields:1 nreturned:1 reslen:3414 locks:{ Global: { acquireCount: { r: 4 } }, MMAPV1Journal: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 2 } },
[08:53:48] <ams_> Collection: { acquireCount: { R: 2 }, acquireWaitCount: { R: 1 }, timeAcquiringMicros: { R: 322651 } } } 526ms
[08:53:52] <ams_> For example ^
[08:58:37] <kurushiyama> ams_: These are "slow query logs" and you should analyze them
[08:59:39] <ams_> kurushiyama: Aaah! Ok that makes sense. Bit confusing that they're logged at info with no message.
[08:59:40] <ams_> OK cheers
[09:03:28] <Derick> the hint is in the "526ms"
[09:03:32] <Derick> that's way too much
[09:07:07] <ams_> Derick: Yeah that's actually a log line from when our server didn't have enough memory. But we've got a bunch more; these look like someone didn't add an index
[09:07:25] <Derick> yeah
[09:08:31] <kurushiyama> ams_: No news is good news. If a query makes it to the logs, that's bad news. My guess would have been disk IO, though, as per timeAcquiringMicros.
[09:16:38] <ams_> yeah it's a bit confusing because lines that were too long to log get upgraded to a W and get an explicit message; I did notice the "log slow queries" config item but assumed it would do the same. Not a big deal, though.
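What lands in the log as these COMMAND lines is governed by the slow-operation threshold (slowms, 100 ms by default), not by a separate log level. A mongo shell sketch for inspecting and raising it:

    // show the current profiler level and slowms threshold
    db.getProfilingStatus()
    // keep the profiler off (level 0) but only log operations slower than 500 ms
    db.setProfilingLevel(0, 500)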
[12:02:16] <mroman> I assume that things can go wrong if you allow a user to upload a JSON document?
[12:16:07] <lipiec> Is there a way to set up mongos to read only from hidden replica members?
[12:40:57] <mroman> The use-case is roughly that users can upload json documents which are then stored in the mongodb
[12:41:45] <kurushiyama> lipiec: Hidden members are not exposed and you can only connect to them directly.
[12:41:47] <mroman> i.e. a user uploads {"x":33,"y":38, "info" : { "r" : 99, "kind" : "CT" }}
[12:42:06] <kurushiyama> mroman: Unchecked input is the best way to get you into trouble.
[12:42:09] <mroman> which is converted to BSON and inserted.
[12:42:44] <kurushiyama> mroman: The problem is not what problems you can think of, but the problems you do not think of.
[12:42:53] <mroman> well yeah
[12:42:57] <kurushiyama> mroman: Which, by definition, is always possible.
[12:43:18] <mroman> how do you store documents in mongodb then :)
[12:43:51] <mroman> storing them as a string would be bad because then you can't really search anything anymore
[12:44:42] <mroman> to make queries later, e.g. "info.r" > 75
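For reference, mroman's example stays queryable with dot notation once it is inserted as a normal document, which is exactly what storing the upload as a string would lose; a mongo shell sketch (collection name is illustrative):

    db.uploads.insertOne({ x: 33, y: 38, info: { r: 99, kind: "CT" } })
    db.uploads.find({ "info.r": { $gt: 75 } })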
[14:27:57] <cpama> hi all
[14:28:20] <cpama> I have question about the new mongodb driver for php
[14:28:30] <cpama> trying to query data like this:
[14:28:38] <cpama> // Construct the MongoDB Manager
[14:28:38] <cpama> $manager = new MongoDB\Driver\Manager("mongodb://localhost:27017");
[14:28:45] <Derick> cpama: please use a pastebin
[14:28:50] <cpama> ah ok
[14:28:51] <cpama> right
[14:28:52] <cpama> thanks.
[14:31:50] <cpama> ok guys. so here's the code, error and the reference material i was using to try to build a query:
[14:31:51] <cpama> http://pastebin.com/M96kiCGZ
[14:33:50] <Derick> MongoDB\Client comes from the PHP library - have you installed that (through composer)? You also need to require its libraries, as is shown at: http://mongodb.github.io/mongo-php-library/getting-started/
[14:34:23] <cpama> yes Derick i believe i have. but i will double check. checking phpinfo...
[14:34:26] <cpama> and also php.ini
[14:34:33] <Derick> you can use *either* \MongoDB\Driver\Manager for low-level stuff, or \MongoDB\Client for a nicer API (recommended). The latter requires the MongoDB Library
[14:34:46] <Derick> cpama: phpinfo() will not tell you about PHP libraries installed through composer, only PHP extensions
[14:35:01] <cpama> oh... i see.
[14:35:28] <cpama> my sys admin set up this box for us, and I was just "told" it had PHP 7 and the latest MongoDB driver.
[14:36:05] <Derick> cpama: sure, and you should be able to use composer yourself to install the MongoDB Library for PHP, as is described at http://mongodb.github.io/mongo-php-library/getting-started/
[14:36:56] <cpama> hm. This server is running alpine linux. Will check it out
[14:36:57] <cpama> thanks.
[14:37:07] <Derick> as long as you have a shell it should work
[14:37:26] <Derick> you can get composer through https://getcomposer.org/download/
[14:37:43] <cpama> ok.
[14:37:47] <cpama> i'll give it a try
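A minimal sketch of the setup Derick describes, assuming a stock composer project (database, collection and document values are illustrative):

    # install the MongoDB Library into the project
    composer require mongodb/mongodb

    <?php
    // load composer's autoloader, then use the higher-level client
    require 'vendor/autoload.php';

    $client = new MongoDB\Client("mongodb://localhost:27017");
    $collection = $client->mydb->mycollection;
    $document = $collection->findOne(['_id' => 'xxxx']);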
[16:35:07] <spuz> hello, does mongodb have a recommended tool for scripting maintenance tasks, e.g. adding indexes?
[16:36:12] <diegoaguilar> not that I know spuz
[16:59:19] <dgaff> Hey all - quick question about indexes - if I query a collection of a billion documents using four separate indexes in the query (e.g. where field1,field2,field3,field4), is it possible to get a speedup by concatenating the fields and making a single index (e.g. where field5, and field5 is just field1,field2,field3,field4 put together)
[16:59:35] <Derick> no
[17:00:25] <dgaff> @Derick - are they basically equivalent then?
[17:00:30] <Derick> yeah
[17:00:36] <kurushiyama> spuz: What maintenance tasks?
[17:00:43] <dgaff> gotcha
[17:00:58] <Derick> a compound index would also allow the index to be used for any prefix ... a concatenated one wouldn't
[17:00:59] <kurushiyama> spuz: Adding indices is not exactly what I would call "maintenance".
[17:01:43] <kurushiyama> It has to be said that order matters for index prefixing.
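A sketch of the difference being described: a single compound index can serve any left-to-right prefix of its fields, whereas a single concatenated field only serves exact matches on the whole concatenation (collection and field names are illustrative):

    db.coll.createIndex({ field1: 1, field2: 1, field3: 1, field4: 1 })

    // can use the index (left-to-right prefixes):
    db.coll.find({ field1: "a" })
    db.coll.find({ field1: "a", field2: "b" })
    db.coll.find({ field1: "a", field2: "b", field3: "c", field4: "d" })

    // cannot use it (not a prefix):
    db.coll.find({ field2: "b", field3: "c" })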
[17:01:59] <dgaff> @Derick - mm. I'm doing an $or query on 1000 pairs of those fields, and the performance blows
[17:02:18] <Derick> pairs?
[17:02:36] <Derick> oh, sorry
[17:02:38] <dgaff> not pairs, sets of them
[17:02:44] <Derick> I thought you had a compound index of 5 fields
[17:02:52] <Derick> not 5 indexes
[17:02:57] <kurushiyama> dgaff: Like db.foo.find({bar:{$in:[1kvalues]}})?
[17:03:08] <dgaff> like, {field1: x, field2: y, field3: g, field4: z}
[17:03:12] <dgaff> 1000 of those
[17:03:14] <Derick> is that your index?
[17:03:27] <Derick> or do you have 4 indexes, one for field1, one for field2, etc?
[17:03:33] <dgaff> where field1,field2, field3, and field4 are indexed together
[17:03:43] <Derick> dgaff: what do you mean with "indexed together" ?
[17:04:02] <Derick> do you have *one* index with 4 fields, or *four* indexes with one field?
[17:04:16] <rbpd5015> hello
[17:04:19] <rbpd5015> question guys
[17:04:19] <dgaff> https://gist.github.com/DGaffney/2e8f136df9a33bc373ba4c29161e89b7
[17:04:32] <dgaff> @Derick ^
[17:04:38] <dgaff> its the latter
[17:04:43] <dgaff> errr the former****
[17:04:46] <Derick> ok :D
[17:05:07] <rbpd5015> Is there any way I can avoid using aggregation by doing it in the update?
[17:05:11] <dgaff> is an $or operator worse than an $in for that case?
[17:05:18] <rbpd5015> Aggregation is killing my CPU
[17:05:27] <kurushiyama> dgaff: Still I do not understand completely. You are searching for 1k combinations of field1-4 in a single query?
[17:05:33] <Derick> so, with that index, it will be used if your query compares parent_author alone, parent_author+reply_author, parent_author+reply_author+subreddit, or parent_author+reply_author+subreddit+time
[17:05:48] <Derick> not for, for example, just reply_author, or reply_author+subreddit+time
[17:05:56] <dgaff> yes yes @Derick
[17:06:08] <dgaff> kurushiyama: correct
[17:06:16] <kurushiyama> dgaff: Uhm...
[17:06:25] <Derick> that won't work well
[17:06:39] <Derick> $in works like $or there really
[17:06:43] <dgaff> @Derick yeah, the last few months have shown me this
[17:06:45] <kurushiyama> So basically you are doing a SELECT * FROM foo WHERE, with WHERE having 1k clauses?
[17:06:51] <dgaff> kurushiyama: yes
[17:07:03] <Derick> dgaff: why are you doing this? what's the result you want to get?
[17:07:34] <kurushiyama> rbpd5015: Could you explain with an example? Please pastebin it.
[17:09:27] <dgaff> @Derick it's a long sad story. Right now, I have 300k sets, each of 1,000 ids, each corresponding to a comment on reddit, stored in a separate collection. For each of the 300k, I have to look up the 1,000 comments, look up the up to 1,000 comments they were a reply to, and then I have 0-1,000 objects that look like {parent_author: "abc", reply_author: "xyz", subreddit: "blah", time: TimeObject}
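The query shape dgaff describes ends up roughly like the sketch below; each clause can use the compound index, but a thousand-clause $or is still a thousand index lookups per query (collection name and values are illustrative):

    db.comments.find({ $or: [
      { parent_author: "abc", reply_author: "xyz", subreddit: "blah", time: ISODate("2016-04-01T00:00:00Z") },
      { parent_author: "def", reply_author: "uvw", subreddit: "blah", time: ISODate("2016-04-02T00:00:00Z") }
      // ... up to ~1,000 such clauses
    ] })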
[17:09:41] <rbpd5015> kurushiyama
[17:09:41] <rbpd5015> http://s32.postimg.org/pq4twt6d1/fantasy.png
[17:09:47] <rbpd5015> can you look at this please
[17:10:15] <Derick> dgaff: is that for analysis?
[17:10:19] <dgaff> @Derick - yes
[17:10:20] <rbpd5015> I am getting updates to player stats every second via Kafka and the players are being updated just fine
[17:10:33] <dgaff> @Derick - the output is a graph of all interactions on reddit ever
[17:10:50] <rbpd5015> the problem I am having is that aggregating the sum of all players' fantasy points up to the root-level lineup fantasy points is killing our system
[17:10:50] <Derick> dgaff: sounds like you'd want a graph database instead? :-)
[17:11:10] <dgaff> I was down to 4k jobs, and then I had a brownout at my house, which fried my hard drive, and now I'm back to a backup of those tasks from two months ago
[17:11:16] <dgaff> :| :| :|
[17:11:30] <rbpd5015> kurushiyama - is there a way to update the root-level sum of all players' fantasy points without having to do aggregation
[17:11:32] <rbpd5015> ?
[17:11:51] <dgaff> @Derick - I looked into neo and cassandra and even sql/postgres for a few weeks before I went with mongo for this - none of them were really that great for this job
[17:12:21] <dgaff> I may move subsets of the graph into things like that once the full list of edges is finished though
[17:12:37] <rbpd5015> the document I showed a screenshot of is a fantasy sports lineup that consists of 9 players
[17:12:59] <rbpd5015> each player has fantasy points, which need to be totalled in the root of the lineup after every Kafka update
[17:12:59] <dgaff> but either way - how would you optimize the chain of queries that have to happen?
[17:13:20] <rbpd5015> I was wondering how we can do it in the mongodb update statement, which is faster than aggregation
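One way to avoid re-aggregating, sketched below, is to maintain the lineup total incrementally: apply the same point delta to the matched player and to a root-level total in a single update. Collection, field names and values are illustrative guesses at the document in the screenshot:

    db.lineups.updateOne(
      { _id: "lineup1", "players.playerId": "player9" },
      { $inc: {
          "players.$.fantasyPoints": 3.5,   // the matched player's points
          totalFantasyPoints: 3.5           // the root-level lineup total
      } }
    )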
[17:13:25] <Derick> dgaff: i don't think you can
[17:13:34] <rbpd5015> any help from anyone would be great
[17:13:40] <dgaff> oof
[17:17:11] <awal> Hello
[17:18:25] <kurushiyama> rbpd5015: That does not help me much. Sample docs, expected output and what you want to do...
[17:19:14] <rbpd5015> sorry, I am looking for a high-level answer; I am just the PO, my devs are stuck.
[17:19:55] <rbpd5015> They are just saying they can aggregate and update at the same time and the for loop is killing them after they update each player stat. I am just trying to come up with a solution for them to research more
[17:20:31] <awal> New to mongo here (but not programming). I can't find any strong opinions about mongoose on the web. It seems to be forcing the schema thing on me. I am not feeling comfortable with it. And all the "starter" apps for mongo and passport use mongoose only. I am a bit overwhelmed. Where should I go? use mongoose or not? if not, then where can I get some good examples of usage of the mongodb-native driver with other libs?
[17:20:54] <Derick> awal: always with a new technology, use the raw things first
[17:21:05] <Derick> only that teaches you how it actually works.
[17:21:18] <Derick> If you're comfortable with it, perhaps consider abstraction layers.
[17:21:33] <awal> Derick: right, that's what I want to do. Except I can't find good example code :( I learn from examples best.
[17:22:13] <Derick> awal: https://mongodb.github.io/node-mongodb-native/ are the docs
[17:22:18] <Derick> but I realise that's not an example
[17:23:03] <awal> yeah... docs tell me how to do one very specific thing for every specific thing that can be done. but they don't show me the high-level view of things.
[17:23:35] <Derick> i've asked the author, but it seems he's already back to life-mode for today
[17:24:00] <awal> I have used many other nosql and sql databases comfortably, but I am failing at starting with mongo :(
[17:24:11] <awal> Derick: thanks
[17:24:13] <Derick> what is the thing you're not getting?
[17:27:00] <awal> Derick: for instance, we need a database of dynamic collections (a new collection is generated for some data of every user). We also keep all users in a separate "users" collection which just contains minimal information. Mongoose doesn't let me do dynamic collections (or schemas, since it maps collections to schemas) (or it does, but there are no examples of how to go about structuring such a thing)
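With the native Node.js driver there is nothing special about a per-user collection: a collection handle can be made from any runtime string. A sketch against the 2.x driver API of the time (URL and naming scheme are illustrative):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/myapp', function (err, db) {
      if (err) throw err;
      var userId = 'someUserId';
      var users = db.collection('users');                    // minimal per-user records
      var userData = db.collection('user_data_' + userId);   // dynamically named collection
      userData.insertOne({ createdAt: new Date() }, function (err, result) {
        db.close();
      });
    });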
[17:29:18] <awal> and I'd also like to integrate passport with mongo, but literally every example for "passport mongo" is actually "passport mongoose". It is a bit irritating :/
[17:30:37] <Derick> i don't really know what passport is
[17:31:10] <awal> passport is the battle tested user authentication middleware for nodejs/express apps.
[17:31:29] <awal> "the" because there is no other alternative :/
[17:44:54] <hipy> I wanted to know if there's a way in MongoDB to implement a customized authentication mechanism
[17:57:19] <kurushiyama> hipy: SASL not good enough for you?
[18:10:54] <hipy> basically we already have a user authentication management system (webserver) where all users, their credentials, roles etc. are registered. If we use MongoDB's auth mechanism we'll have to maintain the same set of users in MongoDB as well.
[18:11:47] <hipy> ...with preferably same credentials as in our main user management system.
[18:12:25] <hipy> We want to avoid this duplication. So we want some sort of SSO/single-login type of arrangement.
[18:14:00] <hipy> Hence I was thinking that if they allow implementing a custom authentication mechanism, I can just forward/relay the user credentials to our main user management system and get back an authentication response.
[18:14:01] <kurushiyama> hipy: SASL?
[18:14:13] <kurushiyama> hipy: Still: SASL?
[18:14:58] <hipy> ok...reading about it....I don't have a very good security background
[18:16:06] <kurushiyama> Well, if you are not good with security, you are probably not the best choice to implement a custom auth mechanism?
[18:17:08] <hipy> kurushiyama: can you please share link to MongoDB documentation about what you are referring to.
[18:17:59] <kurushiyama> hipy: It is with LDAP, but you should get the picture: https://docs.mongodb.org/manual/tutorial/configure-ldap-sasl-openldap/
[18:35:12] <hipy> kurushiyama: Here's my understanding about "Authenticate Using SASL and LDAP with OpenLDAP": MongoDB stores the username (not the password). It'll proxy authentication requests to the LDAP server. Currently in our setup there's no LDAP.
[18:35:36] <kurushiyama> saslauthd can be connected to various sources.
[18:36:31] <kurushiyama> hipy: For example PAM, which in turn can auth against... whatever.
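Under that scheme the MongoDB user exists only as a name on the $external database and the password check is proxied through saslauthd. A sketch of the client side, following the pattern in the linked tutorial (which targets MongoDB Enterprise on Linux; user name and roles are illustrative):

    // create the proxied user (no password stored in MongoDB)
    db.getSiblingDB("$external").createUser({
      user: "appuser",
      roles: [{ role: "readWrite", db: "mydb" }]
    })

    // authenticate with the PLAIN mechanism; the credentials are checked externally
    db.getSiblingDB("$external").auth({
      mechanism: "PLAIN",
      user: "appuser",
      pwd: "secret",
      digestPassword: false
    })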
[18:37:40] <hipy> Forgot to mention that everything's on Windows
[18:39:17] <kurushiyama> hipy: Well, running MongoDB on Windows is not exactly what I would call a good choice. The Windows version does not provide SASL anyway, iirc. Given these constraints, the answer to your original question is "Not that I am aware of".
[18:49:24] <hipy> I see. Thanks for the pointers kurushiyama. At least I have a better understanding of the various keywords we discussed.
[18:50:22] <kurushiyama> hipy: http://www.principledtechnologies.com/Red%20Hat/RHEL6_IO_0613.pdf
[18:52:26] <kurushiyama> hipy: the findings in the above link are my first and foremost reason to think it is not the best idea to run MongoDB on Windows.
[18:53:29] <hipy> I see.
[18:58:20] <Zelest> If I have a replicaset running 2.6.x .. will I be able to add a secondary member running 3.2.x?
[18:58:33] <Zelest> (the idea is to migrate the old cluster to a new one, without downtime)
[18:58:59] <Derick> you need to go through 3.0 first
[18:59:26] <Derick> but yes, you can mix and match versions as long as they're one version apart
[18:59:34] <Zelest> ah
[19:00:16] <Zelest> but can I have them connected all at once? that is, [2.6.x]---[3.0]---[3.2] ?
[19:00:31] <Zelest> if I configure the 3.2 node to connect to 3.0
[19:00:57] <kurushiyama> Zelest: I wouldn't. Trying to be smart and placing a bet on your production data sounds like a bad idea.
[19:01:07] <Zelest> hmms..
[19:01:12] <Zelest> trying to figure out what other options I have :/
[19:01:31] <kurushiyama> Zelest: Simple. Update the replset to 3.0. If done, update to 3.2
[19:01:44] <Zelest> the nodes are running on FreeBSD
[19:02:02] <Zelest> 2.6 is the last version ported
[19:02:17] <kurushiyama> Zelest: OMG! Then, of course, everything is different! Except... it isn't ;)
[19:02:25] <Zelest> ok?
[19:03:03] <Zelest> oh, 3.2 is available :O
[19:03:06] <Zelest> but not 3.0 .. dafuq
[19:03:26] <kurushiyama> Thanks to the port manager ;)
[19:03:47] <Zelest> so...
[19:03:54] <Derick> kurushiyama: is that you?
[19:04:12] <Zelest> if I setup a new cluster running 3.0...
[19:04:15] <Zelest> migrate everything to that
[19:04:22] <Derick> Zelest: you can't mix 2.6, 3.0 and 3.2
[19:04:25] <Zelest> the upgrade the old nodes to 3.2... and migrate back :D
[19:04:27] <Derick> only 2.6/3.0 or 3.0/3.2.
[19:04:30] <Zelest> ah
[19:05:25] <kurushiyama> Derick: No. It would be a bit rude, knowing that people have to migrate to 3.0 first, to skip that version in the ports and jump directly to 3.2, no?
[19:05:54] <Derick> yes, rude!
[19:06:06] <kurushiyama> Derick: Oops. Is it you?
[19:06:13] <Derick> certainly not
[19:06:15] <Derick> :)
[19:07:26] <kurushiyama> tbh, kurushiyama is Japanese for "The distance between one dropped brick and the next".
[19:10:23] <Zelest> Ugh, this make it all a lot harder :(
[19:10:37] <Zelest> I can't just mongodump it and import it?
[19:10:55] <Zelest> or does that also require each step?
[19:11:46] <kurushiyama> I honestly don't know, since I rarely use mongodump (basically only on config servers)
[19:13:04] <kurushiyama> Zelest: But as written, I would not place a bet on my production data.
[19:13:14] <Zelest> yeah
[19:13:42] <Zelest> I think i've done a mongodump and mongorestore a while ago.. from 2.6 to 3.2.. without any noticable problems
[19:13:53] <Zelest> but that was from prod to dev/testing :)
[19:16:16] <Derick> dump and restore should work, but I would recommend the incremental upgrade through 3.0 instead.
[19:16:40] <Zelest> Why so?
[19:17:39] <kurushiyama> Just a few days ago, somebody tried the same stunt, and it worked. But it was a popcorn and beer session, tbh.
[19:18:22] <kurushiyama> I would not do it for one simple reason: Running production data on an operating system not officially supported by the DBMS vendor is not the best idea to begin with.
[19:19:27] <kurushiyama> The upgrade process as described in the respective docs is tested and basically guarantees a flawless procedure. Everything else... not so much.
[19:19:56] <kurushiyama> Plus: mongodump and especially restore take AGES when compared to a rolling replset upgrade.
[19:20:34] <kurushiyama> And it is without downtime.
[19:20:45] <kurushiyama> That pretty much sums my reasons up.
[19:22:44] <Zelest> Sounds legit :)
[19:23:25] <Zelest> I'm just curious how to do it.. Seeing I only have 2 clusters running now.. one on 2.7 and one on 3.2, in stand-by, ready to become the new one..
[19:23:45] <Zelest> Mayhaps I can downgrade it to 3.0 and go from there :)
[19:24:26] <kurushiyama> Zelest: Bad news: I am not sure whether downgrade is supported.
[19:24:43] <Zelest> it is, the new system has no data yet :)
[20:02:21] <Ryzzan> having a doc like this: {_id : 123, someArray : [{_uniqueId : 321, otherArray : ['value1', 'value2']}, {_uniqueId : 654, otherArray : ['value3', 'value4']}]}
[20:02:40] <Ryzzan> how to pull, let's say, 'value3' from otherArray
[20:02:42] <Ryzzan> ?
[20:04:47] <kurushiyama> Ryzzan: Now you see what I meant with overembedding.
[20:05:09] <Ryzzan> kkkk
[20:05:16] <Ryzzan> so what would be the best practice?
[20:06:59] <kurushiyama> Ryzzan: Well, the example is a bit abstract. A sample doc would be much more helpful. Please pastebin.
[20:11:55] <Ryzzan> kurushiyama: http://pastebin.com/0SKqpupE
[20:12:04] <Ryzzan> how to pull "Teste" in relacaoNome
[20:12:18] <kurushiyama> Witzbold!
[20:12:38] <Ryzzan> the properties are in portuguese... hope it won't be a problem...
[20:14:00] <kurushiyama> Sure, my Portuguese consists of "sardinhas assadas" ;)
[20:14:35] <Ryzzan> kurushiyama: that would be enough to survive in brazil
[20:14:37] <Ryzzan> lol
[20:15:20] <kurushiyama> "Agua" might be helpful, too. caipirinha, if I am daring.
[20:15:57] <kurushiyama> Ok the thing is that your data model incorporates some false, if not outright dangerous assumptions.
[20:16:54] <kurushiyama> And I can see why: you use Mongoose. :/
[20:17:28] <Ryzzan> oh... that...
[20:18:44] <kurushiyama> Sorry, my dinner needs some attention. Will be back later
[20:19:31] <Derick> om nom
[20:20:32] <Ryzzan> kurushiyama: ty... gonna work on it
[20:21:32] <Ryzzan> kurushiyama: translating properties would be helpful?
[20:31:02] <Zelest> silly question, if I have a replicaset configured, all databases are replicated, right?
[20:33:05] <kurushiyama> Ryzzan: More or less: it would not hurt. And properties should always be in English, as it is the IT world's lingua franca
[20:40:11] <kurushiyama> Zelest: Aye
[20:40:34] <kurushiyama> Zelest: except for - who would have guessed it - local.
[20:43:36] <yopp> cheeser, any mongoid guys over here?
[20:43:52] <yopp> or girls
[20:44:06] <Ryzzan> kurushiyama: http://pastebin.com/W8z44PNJ
[20:44:07] <Ryzzan> :)
[20:44:21] <Zelest> kurushiyama, hehe, figured, thanks :D
[20:45:06] <kurushiyama> Ryzzan: Ok, let me have my post dinner cigarette and espresso, then I'll be back.
[20:45:24] <Ryzzan> kurushiyama: ty
[20:56:17] <kurushiyama> Ok, back
[20:56:39] <kurushiyama> Ryzzan: The problem we have is that "Teste" is in multiple subdocuments.
[20:57:06] <kurushiyama> Ryzzan: The problem is that only the first match will be updated.
[20:57:37] <kurushiyama> Ryzzan: So again: "Overembedding is the root of all evil". You can quote me.
[20:57:40] <kurushiyama> Ryzzan: ;)
[20:58:33] <Ryzzan> kurushiyama: There's no way to pull an array element using its index!?
[20:59:00] <kurushiyama> Ryzzan: You can even pull by value. But only the first occurrence.
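Against the abstract document from earlier ({_id: 123, someArray: [...]}), a targeted pull looks like the sketch below; as noted, the positional operator only addresses the first someArray element the query matches (collection name is illustrative):

    db.coll.updateOne(
      { _id: 123, "someArray._uniqueId": 654 },
      { $pull: { "someArray.$.otherArray": "value3" } }
    )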
[20:59:57] <Ryzzan> kurushiyama: ok, ty... gonna keep it in mind
[20:59:58] <kurushiyama> Ryzzan: I told you before and I'll say it again: You need to remodel your data. You will run into problems again and again.
[21:00:51] <Ryzzan> kurushiyama: just studying the possibilities... kindda disappointing not being able to pull array data by any index... :)
[21:02:04] <kurushiyama> Ryzzan: Well, it is a matter of attitude. If you try to slice bread with a screwdriver, you might be disappointed by the results as well ;)
[21:05:18] <Ryzzan> kurushiyama: ty, sensei
[21:06:24] <kurushiyama> Ryzzan: Dōitashimashite, Ryzzan-sama!
[21:07:09] <Ryzzan> kurushiyama: let me know when u r coming to brazil so i can buy u some caipirinha and brazilian café
[21:07:11] <Ryzzan> :D
[21:07:33] <kurushiyama> Ryzzan: Brasil is not exactly small. Where are you located at?
[21:07:59] <Ryzzan> northeast litoral...
[21:08:11] <Ryzzan> city of Maceió - Alagoas
[21:08:49] <Ryzzan> northeast coast, i meant
[21:08:59] <kurushiyama> Ryzzan: Creepy. 1M citizens and I have never even heard of it...
[21:09:21] <Ryzzan> http://www.guiamaceio.com/imgs/geral/home/maceio_564x300.jpg
[21:09:40] <kurushiyama> Ryzzan: Just doing wikipedia research. Beautiful.
[21:11:07] <Ryzzan> kurushiyama: rio de janeiro is all about "propaganda"... beaches in the northeast are as beautiful as over there... and drinks and food are way cheaper and tastier
[21:11:08] <Ryzzan> :)
[21:11:11] <kurushiyama> Ryzzan: How far is Florianopolis?
[21:12:03] <Ryzzan> kurushiyama: in a far far away galaxy, master... i'd say 7200 km
[21:12:05] <Ryzzan> lol
[21:12:22] <kurushiyama> Ryzzan: Well, that is an argument. A friend of mine got robbed in Rio. Thank god she was prepared and the guys were kind of nice.
[21:13:34] <Ryzzan> kurushiyama: if u come consider visiting northeast, that's where "cariocas" spend their vacations
[21:13:35] <Ryzzan> :)
[21:15:04] <kurushiyama> Ryzzan: It is about 2.3k. That would be manageable.
[21:15:54] <kurushiyama> Might well be that I am in Florianopolis in November or December.
[21:16:22] <Ryzzan> rabbadesalman@gmail.com... let me know if u come...
[21:16:24] <Ryzzan> :)
[21:16:46] <kurushiyama> DO NOT POST EMAIL ADDRESSES IN A LOGGED CHANNEL...
[21:16:50] <Ryzzan> Florianópolis is great as well... people are really nice over there...
[21:16:57] <Ryzzan> lol
[21:16:59] <Ryzzan> my bad
[21:17:17] <kurushiyama> Aye. Happy spam-sort!
[21:17:18] <Ryzzan> but thats not my "main"
[21:17:42] <Ryzzan> its kinda the "spam one"
[21:18:21] <Ryzzan> kurushiyama: going home now... c ya... ty again
[21:18:39] <kurushiyama> Well, I am based in Germany. Which is small enough to say: If you come over, let me know ;)
[21:20:00] <Ryzzan> Ryzzan: arigato gozaimasu... sayonara
[21:20:03] <Ryzzan> :P
[21:50:36] <vicatcu> hey all, question - i want to get the results of a collection with a condition, and limit and skip to paginate the results, but i also want to know how many results there are in all
[21:52:12] <vicatcu> so i've got db.find({field: search_value}, {skip: 20, limit: 10}, function(err, results){})
[21:52:32] <vicatcu> is there any way to get results to include the total count as well as the skipped and limited results?
[21:55:08] <vicatcu> seems like i could do it pretty easily with a nested callback
[21:55:16] <vicatcu> that feels dirty though
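The nested-callback version vicatcu has in mind would look roughly like this with the Node.js driver of the time: run a count and the paginated find against the same filter (collection and field names are illustrative):

    var query = { field: search_value };  // same filter as in the question above

    collection.count(query, function (err, total) {
      collection.find(query).skip(20).limit(10).toArray(function (err, results) {
        // "results" is the current page, "total" is the overall match count
      });
    });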
[22:11:33] <Ryzzan> kurushiyama: help me think as a nosql developer... to make a relation between two users in sql, i would have a table with user data and another one with user ids and the relation description... then i would join these tables to deal with info about users and their relations...
[22:11:46] <Ryzzan> kurushiyama: how must i think as a nosql developer?
[22:12:15] <kurushiyama> Ryzzan: Think of the use case.
[22:12:32] <kurushiyama> Ryzzan: Let us say user A follows user B
[22:13:21] <kurushiyama> Ryzzan: You could either have {_id:"A", follows:["B"]} or {_id:"B", followed:["A"]}
[22:13:31] <kurushiyama> Ryzzan: What is the problem?
[22:15:48] <kurushiyama> Ryzzan: It is pretty obvious... ;)
[22:17:20] <Ryzzan> kurushiyama: what if user A has more than one kind of connection with B
[22:18:02] <Ryzzan> kurushiyama: let's say {_id:"A", follows:[{user:"B", connections:["lover", "killer"]}]}
[22:18:07] <kurushiyama> Ryzzan: Well, the more obvious problem is that there is a limitation on how many users A can follow or by how many users B can be followed, but your explanation is correct, too.
[22:18:45] <kurushiyama> Ryzzan: This is because of the 16MB size limit of BSON documents.
[22:19:08] <Ryzzan> hmmmmmmmmmmmmmmmmmmmmmmmmmmmmm
[22:19:21] <kurushiyama> So, in order to get rid of that limitation, what should you do?
[22:20:00] <Ryzzan> i think 16MB of followers is a hell of a lot of followers
[22:20:38] <kurushiyama> Well, only if you do not take additional fields into account. And maybe you do not only store an id, but more data.
[22:20:41] <Ryzzan> but, answering ur questions, and remembering ur quote... i should get away from embedding
[22:20:56] <kurushiyama> Ryzzan: ye
[22:20:59] <kurushiyama> Ryzzan: Aye
[22:21:31] <Ryzzan> kurushiyama: but what about dealing with multiple connections between two users?
[22:22:26] <kurushiyama> Ryzzan: Short: { user1:"A", relation:"loves", user2:"B"}
[22:22:26] <Ryzzan> kurushiyama: should i deal with these connections in another document? then join them? wouldn't that be the sql way?
[22:22:37] <kurushiyama> Ryzzan: Well, it depends
[22:22:57] <kurushiyama> Ryzzan: Say you want to show a given user all people that love him by name
[22:23:08] <kurushiyama> So, you'd adjust your model.
[22:24:52] <kurushiyama> {user1:{id:"A",name:"John Doe"}, relation:"loves",user2:"B"}
[22:25:29] <Ryzzan> kurushiyama: got it... not that easy for me not thinking the sql way... it was a long relationship... but ty for showing me the way... gonna work over it
[22:25:38] <kurushiyama> So, if you want to show user B who loves him, you simply query db.relations.find({user2:"B", relation:"loves"})
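Putting kurushiyama's relation documents together with an index that serves the "who loves B" lookup (a sketch; names taken from the examples above):

    db.relations.insertOne({ user1: { id: "A", name: "John Doe" }, relation: "loves", user2: "B" })
    db.relations.createIndex({ user2: 1, relation: 1 })
    db.relations.find({ user2: "B", relation: "loves" })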
[22:26:18] <Ryzzan> kurushiyama: understood...
[22:26:45] <Ryzzan> dinner time... c ya...
[22:39:15] <Zelest> omg omg omg! i can haz a 3.0 node in my cluster! *screams like a teenage girl*
[22:43:07] <kurushiyama> Zelest: congrats!
[22:44:22] <jc3`> would skip() performance still be an issue if you are only ever querying/paginating a subset of a few thousand docs at any given time within a large collection?
[22:45:15] <Zelest> kurushiyama, thanks :D