PMXBOT Log file Viewer


#mongodb logs for Tuesday the 4th of October, 2016

[02:45:29] <sector_0> how do I password protect my database?
[02:45:42] <sector_0> does the database need to be named 'admin'?
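A minimal mongo shell sketch of what sector_0 is asking about, assuming MongoDB's built-in authentication (the user name, password, and role shown are illustrative). The database itself can be named anything; `admin` is just the conventional database where cluster-wide administrative users are created:

```javascript
// mongo shell sketch -- user name, password, and role are illustrative.
use admin
db.createUser({
  user: "siteAdmin",
  pwd: "replace-with-a-real-secret",   // never commit real credentials
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})
// Authentication is only enforced once mongod is restarted with --auth
// (or with security.authorization: enabled in the config file).
```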
[02:56:25] <sector_0> I've been good
[02:56:33] <sector_0> a bit busy though
[02:57:03] <sector_0> oops wrong window
[02:57:04] <sector_0> lol
[03:33:17] <acidjazz> hey all, sorry for the noob question, what is the operator again to check if something is w/in a nested array
[03:35:51] <acidjazz> is it $elemMatch ?
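It is: `$elemMatch` matches documents where at least one array element satisfies all of the given conditions at once. A simplified, equality-only re-implementation of that semantics (illustrative only, not MongoDB's actual code):

```javascript
// Simplified sketch of how MongoDB evaluates
//   { results: { $elemMatch: { kind: "quiz", score: 80 } } }
// An array field satisfies $elemMatch when at least ONE element
// meets ALL of the given conditions simultaneously.
function elemMatch(arrayValue, conditions) {
  return Array.isArray(arrayValue) &&
    arrayValue.some(el =>
      Object.entries(conditions).every(([field, want]) => el[field] === want));
}

const doc = { results: [{ kind: "quiz", score: 80 }, { kind: "exam", score: 50 }] };
console.log(elemMatch(doc.results, { kind: "quiz", score: 80 })); // true: one element matches both
console.log(elemMatch(doc.results, { kind: "quiz", score: 50 })); // false: no single element has both
```

The key point is the "single element" rule: without `$elemMatch`, `{ "results.kind": "quiz", "results.score": 50 }` would match the document above because each condition is satisfied by *some* element, just not the same one.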
[09:51:26] <spleen> join #fail2ban
[13:42:31] <Andrew_perminov> Hi guys! Tell me please: can I use mongodb + wiredtiger + btrfs in production?
[13:50:02] <kurushiyama> Andrew_Perminov You may. But I really, really, really would not.
[13:50:23] <shayden> lol
[13:51:21] <kurushiyama> Oh, that's not what I meant ;) What I was trying to say is that one surely can run this setup, but I'd strongly advise against it.
[13:52:14] <Andrew_Perminov> :)
[13:52:29] <kurushiyama> Andrew_Perminov The underlying question to me is why you would prefer btrfs over XFS or ext4.
[13:53:00] <shayden> there is also this: https://www.phoronix.com/scan.php?page=news_item&px=Btrfs-RAID-56-Is-Bad
[13:53:11] <shayden> not sure if it's relevant for your use case, but just FYI
[13:55:57] <Andrew_Perminov> kurushiyama btrfs? it has snapshots) + it's optimized for SSD + I know it well
[13:58:39] <kurushiyama> Andrew_Perminov XFS has snapshots too (as does LVM, which I very much prefer to use). Optimized for SSD? Uhm, well, it still performs _much_ worse: http://www.phoronix.com/scan.php?page=article&item=linux-40-ssd&num=1
[13:59:05] <Andrew_Perminov> shayden thx) but I planned to use RAID 1 on two SSDs for mongodb)
[14:00:28] <kurushiyama> Andrew_Perminov Keep in mind that we are not talking of sequential reads for WT.
[14:01:33] <kurushiyama> Andrew_Perminov Furthermore, for production, I would prefer a battle tested FS over cutting edge. For more performance, shard.
[14:07:35] <Andrew_Perminov> thx, I've reconsidered the idea of mongodb on btrfs and will try lvm + xfs :)
[14:12:58] <Andrew_Perminov> kurushiyama, shayden mdadm + lvm + xfs =? does this idea have a right to life? The server has 2 SSDs
[14:15:17] <kurushiyama> Andrew_Perminov You are planning a _soft_ raid?
[14:16:04] <Andrew_Perminov> yes
[14:16:35] <kurushiyama> Andrew_Perminov Well, that is not the wisest decision...
[14:17:26] <Andrew_Perminov> unfortunately I'm limited in resources
[15:23:19] <StephenLynx> GothAlice, are you experienced with web?
[15:23:33] <StephenLynx> I hit kind of a wall and I think it's a dead end.
[15:24:14] <StephenLynx> I have this page redirect that must be both delayed and preserve the referer.
[15:24:33] <StephenLynx> if it didn't have to be delayed I would use a 302
[15:24:45] <GothAlice> No reasonable way to do that, at least, if the goal is to _actually_ preserve the Referer header.
[15:24:54] <GothAlice> The browser controls that, not you.
[15:25:13] <StephenLynx> yeah, it must be preserved for the CSRF check.
[15:25:21] <StephenLynx> I refuse to serve dynamic requests without it.
[15:25:49] <GothAlice> Well, that's CSRF, but not real protection. I.e. cURL can forge that all day.
[15:26:01] <StephenLynx> I know, the goal is not that.
[15:26:13] <StephenLynx> the goal is to keep people from just putting forms on their sites that could fool users.
[15:26:29] <StephenLynx> these forms could just send the auth cookies and perform operations.
[15:26:36] <StephenLynx> this is the scenario:
[15:26:51] <GothAlice> StephenLynx: I'm at work. I, unfortunately, do not have time to review an over-complicated scheme. ;P
[15:26:58] <StephenLynx> it's not complicated.
[15:27:26] <StephenLynx> I put a form on my malicious site to the attacked site to perform an operation that requires authentication.
[15:27:53] <StephenLynx> the user has to do nothing but click a button for the browser to send the request to the attacked site using its authentication cookies.
[15:28:30] <StephenLynx> this is why I require the referer to exist and be from the same origin as the requested site.
[15:29:29] <StephenLynx> but when you use refresh to make an automatic delayed redirect, not all browsers send the referer.
[15:30:46] <StephenLynx> so I have to either remove the automatic redirect and make it manual or stop displaying the intermediate page with the previous operation's result.
[15:31:03] <StephenLynx> I'm more inclined to the former.
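The check StephenLynx describes, rejecting any state-changing request whose Referer is absent or from a foreign origin, can be sketched as a small server-side helper (function and parameter names are illustrative):

```javascript
// Sketch of a same-origin Referer check for CSRF mitigation.
// Reject when the header is missing, malformed, or from another origin.
function refererAllowed(refererHeader, siteOrigin) {
  if (!refererHeader) return false;          // no Referer at all -> reject
  try {
    return new URL(refererHeader).origin === siteOrigin;
  } catch {
    return false;                            // malformed Referer -> reject
  }
}

console.log(refererAllowed("https://example.com/form", "https://example.com")); // true
console.log(refererAllowed("https://evil.example/form", "https://example.com")); // false
console.log(refererAllowed(undefined, "https://example.com"));                   // false
```

As GothAlice notes, this only stops browser-mediated forgery (forms on a malicious page riding the victim's cookies); a non-browser client like cURL can set any Referer it likes.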
[20:44:58] <sergio_101> if I had a bunch of records that had a 'tags' field (an array of tags), what is the best practice to get all records that have a specific tag? i.e. find all records that have the tag 'new'.
[20:45:56] <StephenLynx> try a regular match.
[20:46:05] <StephenLynx> I think it works for that case.
[20:46:42] <Derick> yes
[20:46:51] <Derick> db.collname.find( { tags: 'new' } );
[20:47:00] <sergio_101> thanks.. looks good.. https://docs.mongodb.com/manual/reference/operator/aggregation/match/
[20:47:11] <Derick> you don't need aggregation
[20:47:20] <sergio_101> actually, the tags would be an array
[20:47:26] <Derick> yes
[20:47:28] <Derick> db.collname.find( { tags: 'new' } );
[20:47:37] <sergio_101> perfect.. thanks
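The reason Derick's plain `find` works on an array field is MongoDB's implicit array-contains semantics: an equality condition on a field that holds an array matches if *any* element equals the queried value. A simplified re-implementation for illustration (sample records are invented):

```javascript
// Why db.collname.find({ tags: "new" }) matches documents whose
// "tags" field is an array: equality against an array field succeeds
// when ANY element equals the queried value.
function matchesEquality(storedValue, queried) {
  if (Array.isArray(storedValue)) return storedValue.includes(queried);
  return storedValue === queried;
}

const records = [
  { _id: 1, tags: ["new", "featured"] },
  { _id: 2, tags: ["archived"] },
];
const hits = records.filter(r => matchesEquality(r.tags, "new")).map(r => r._id);
console.log(hits); // [ 1 ]
```

So no aggregation pipeline is needed; an index on `tags` (a multikey index) would make this same query efficient at scale.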
[23:33:41] <Zodd> hello
[23:34:48] <Zodd> Is sharding used to establish multiple mongodb instances (shards) for each region (zone) so that you write or read documents from the nearest mongodb shard?
[23:35:43] <Zodd> isn't this supposed to be application specific? do you set a document attribute that specifies which shard to access?
[23:36:14] <Zodd> is this what is called range based sharding? where the range is the list of zones?
[23:36:22] <Zodd> please someone clarify
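What Zodd is describing is MongoDB's tag-aware (zone) sharding: ranges of the shard key are pinned to tagged shards, so documents for a region live on, and are served from, the shard nearest that region. A mongo shell sketch, with shard names, database, collection, and shard key all illustrative:

```javascript
// mongo shell sketch (names illustrative): pin shard-key ranges to
// region-tagged shards so each region's documents stay on its shard.
sh.addShardTag("shardEU", "EU")
sh.addShardTag("shardUS", "US")
sh.enableSharding("app")
sh.shardCollection("app.users", { region: 1, _id: 1 })
sh.addTagRange("app.users",
  { region: "EU", _id: MinKey }, { region: "EU", _id: MaxKey }, "EU")
sh.addTagRange("app.users",
  { region: "US", _id: MinKey }, { region: "US", _id: MaxKey }, "US")
```

This is range-based sharding in the sense Zodd asks about: the application sets a document attribute (`region` here) that the shard key and tag ranges then map to a zone; the balancer, not the application, routes chunks to the matching shard.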