PMXBOT Log file Viewer


#mongodb logs for Friday the 15th of April, 2016

[07:32:24] <mroman> Can I create users in a db to grant access to other dbs?
[07:33:06] <mroman> I.e. can I use a db called "auth" and then create my users in that database giving them read permissions to other databases?
[07:53:18] <Boomtime> @mroman: yes
[07:54:15] <Boomtime> by the way, the only place where users are actually stored is the admin database; the db named for the user is an association that is part of their login information, so there are 3 pieces to credentials: db, username, password
[07:59:27] <mroman> Boomtime: It seems to work from the mongoshell
[07:59:32] <mroman> using the mongo driver it fails horribly
[08:00:31] <mroman> Exception authenticating MongoCredential{mechanism=null, userName='user_rw', source='users', password=<hidden>, mechanismProperties={}}}
[08:00:51] <mroman> it works when I use the mongo shell and do "use users; db.auth('user_rw',<hidden>); "
[08:01:28] <mroman> (I've created the user by doing "use users; db.createUser(...)")
[08:01:55] <mroman> or is source always supposed to be admin?
[08:02:10] <mroman> (I'm using mongodb://user_rw:<hidden>@localhost/?authSource=users to connect)
[08:04:02] <mroman> (well using authSource=admin doesn't work)
[08:10:33] <mroman> the mongod.log says "user not found"
[08:10:42] <mroman> but yet I can auth in the shell o_O
[08:11:51] <mroman> that suggests a typo somewhere in the config :D
[09:14:38] <kurushiyama> mroman: "mechanism=null" is my bet here.
[09:16:32] <mroman> nah it was a type in the connect uri :D
[09:16:35] <mroman> *typo
[09:31:45] <kurushiyama> mroman: Code or it didn't happen ;)
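
A sketch of the setup mroman describes, with "someOtherDb" standing in as a placeholder for the target database (the actual names were never posted):

    // Create the user inside a db called "users", granting read on another db:
    use users
    db.createUser({
      user: "user_rw",
      pwd: "<hidden>",
      roles: [ { role: "read", db: "someOtherDb" } ]
    })

    // The driver must then name the same source database; a typo in authSource
    // produces exactly the "user not found" seen in mongod.log:
    // mongodb://user_rw:<hidden>@localhost/?authSource=users
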
[10:11:37] <diegoaguilar> hello, is 1gb of ram enough for a pretty small application
[10:11:53] <diegoaguilar> I mean, less than 100 qps to mongo
[10:12:02] <kurushiyama> diegoaguilar: That is _very_ small
[10:12:18] <diegoaguilar> so 1gb will make it
[10:12:47] <kurushiyama> diegoaguilar: and 100qps can be anything from no load at all to busting a machine with 32GB of RAM on SSDs, depending on the query.
[10:13:00] <kurushiyama> diegoaguilar: I would not run anything with 1GB
[10:13:31] <kurushiyama> diegoaguilar: 2GB at least, taking into account that WT wants at least 1GB of cache in a sane env.
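
For reference, the WiredTiger cache ceiling kurushiyama alludes to is configurable; a minimal mongod.conf snippet (the 1GB value is illustrative, not a recommendation):

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1
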
[10:16:11] <diegoaguilar> kurushiyama, do u know a better alternative to mongolab?
[10:19:09] <kurushiyama> diegoaguilar: Huh? Which plan @mongolab has 1GB of RAM?
[10:19:24] <diegoaguilar> no no
[10:19:29] <diegoaguilar> just an alternative to mongolab
[10:20:33] <Derick> kurushiyama: their "shared" plan
[10:20:45] <kurushiyama> diegoaguilar: They are actually pretty good. Mongosoup is halfway popular here (for other than technical reasons)
[10:21:26] <kurushiyama> Derick: Thought it was shared memory as well.
[10:21:48] <Derick> I dunno
[10:21:52] <Derick> I just read their pricing page
[10:21:57] <kurushiyama> Lemme check
[10:22:03] <diegoaguilar> https://www.mongosoup.de/en/features.html
[10:22:31] <kurushiyama> That's storage, not RAM. ;)
[10:23:18] <kurushiyama> diegoaguilar: Yes, that's them
[10:27:25] <kurushiyama> diegoaguilar: However, before you choose one, you really should do some data simulations to find your expected data growth and so on. I would not choose anything before that is done. For one of my customers, we found out that the cheapest approach was to host MongoDB himself on metal and employ a dedicated admin with me as a backup.
[10:27:48] <kurushiyama> diegoaguilar: for him, that was.
[10:27:51] <diegoaguilar> well, for now it's really a prototype app
[10:28:12] <diegoaguilar> btw i meant 100 qpm :P
[10:28:37] <kurushiyama> diegoaguilar: Gall's law ;)
[10:29:39] <diegoaguilar> so working simple system first :P
[10:30:02] <kurushiyama> diegoaguilar: Well, it can still bust you. What queries? What is the average doc size per collection? What is the data growth per use case? And so on. Yes, working simple systems first.
[10:31:54] <kurushiyama> diegoaguilar: But what I meant is the corollary of Gall's law. If you have a successful simple system, it is likely to evolve into a more complex system. You should plan ahead for that.
[10:35:17] <kurushiyama> diegoaguilar: What I'd do is to try MongoLabs. If you have performance problems, you have learned something.
[10:36:39] <diegoaguilar> well I find myself in a doubt ... because as I told u, this is like a prototype, budget is not the best "now", and it seems quite a bit cheaper to run our own ec2 with 4gb of ram than the first mongolab plan
[10:37:30] <diegoaguilar> they ran with the mongolab sandbox for a while (lol) and it worked, but I'm not sure about sandbox outages and maintenance windows
[10:37:57] <kurushiyama> diegoaguilar: Unless you take the maintenance work into account. Here is my reasoning: If it works with MongoLabs "free", you save money. redecide if not.
[10:38:11] <kurushiyama> diegoaguilar: Based on hard facts, this time, namely stats.
[10:38:53] <diegoaguilar> I just got used to a previous environment where they paid for an expensive mongohq :P
[10:39:10] <kurushiyama> diegoaguilar: And running MongoDB on EBS can be tricky.
[10:39:14] <diegoaguilar> I don't know if I can get notified of coming maintenance periods
[10:39:17] <diegoaguilar> really? :o
[10:39:42] <kurushiyama> Think of it: EBS is a network based storage in a shared env.
[10:40:09] <diegoaguilar> emm right ...
[10:40:58] <kurushiyama> It is by no means impossible to run MongoDB on EBS, but identifying the bottleneck can become quite tricky, especially on the cheaper instances. And provisioned IOPs aren't exactly cheap.
[10:42:08] <diegoaguilar> I wish Google Compute Engine was a bit cheaper
[10:44:01] <kurushiyama> I use Openshift for about everything.
[10:44:18] <diegoaguilar> do u have a mongo 3.2 cartridge?
[10:44:19] <diegoaguilar> :P
[10:44:21] <kurushiyama> Yes
[10:44:24] <diegoaguilar> really? :)
[10:44:54] <kurushiyama> Let me check
[10:45:01] <kurushiyama> May just be a 3.0
[10:45:11] <kurushiyama> Which is what I suggest anyway.
[10:45:49] <kurushiyama> Yep, a 3.2.3
[10:46:02] <kurushiyama> Darn, can one downgrade?
[10:46:14] <diegoaguilar> uh?
[10:46:41] <kurushiyama> diegoaguilar: I had bad experience with the bleeding edge.
[10:46:44] <kurushiyama> diegoaguilar: http://cartreflect-claytondev.rhcloud.com/github/icflorescu/openshift-cartridge-mongodb
[10:47:07] <diegoaguilar> why?
[10:47:08] <kurushiyama> diegoaguilar: Was updated, already.
[10:47:41] <diegoaguilar> well having mongo at openshift, means I must run my web apps at openshift too, right?
[10:47:45] <kurushiyama> diegoaguilar: Well, I'd "credit" it to a rushed integration of the pluggable storage engines (which is just a theory of mine).
[10:47:54] <kurushiyama> diegoaguilar: Correct.
[10:48:02] <kurushiyama> diegoaguilar: Which I see as an advantage, really.
[10:48:13] <diegoaguilar> mhm .. it is ...
[10:48:31] <diegoaguilar> what does openshift require to run a node app there?
[10:48:32] <kurushiyama> diegoaguilar: I have basically no admin work to do, just run my apps.
[10:48:36] <diegoaguilar> some env?
[10:48:50] <kurushiyama> diegoaguilar: Iirc, there is a predefined node env
[10:49:13] <diegoaguilar> hehehehe :D
[10:49:23] <diegoaguilar> why language do u prefer
[10:49:27] <diegoaguilar> which*
[10:50:30] <kurushiyama> Go
[10:50:49] <kurushiyama> And came from Java.
[10:51:16] <diegoaguilar> I want to start with Go
[10:51:23] <kurushiyama> play.golang.org
[10:51:29] <kurushiyama> tour.golang.org
[10:51:37] <kurushiyama> And #go-nuts
[10:52:12] <kurushiyama> diegoaguilar: Just checked, there is a predefined node cartridge.
[10:52:23] <kurushiyama> 0.10, though.
[10:52:34] <kurushiyama> But I bet there are community cartridges.
[10:52:48] <diegoaguilar> do u use docker?
[10:54:12] <kurushiyama> In some cases, yes.
[10:54:22] <diegoaguilar> I wonder if these services like tutum
[10:54:29] <diegoaguilar> really let u run production apps for FREE? :o
[10:55:32] <lipiec> I would like to disable warning about "WARNING: Access control is not enabled for the database." Is it possible in new mongod 3.2.5 version?
[10:55:47] <kurushiyama> Oh, I think that 0.10 is the cartridge version.
[10:56:16] <kurushiyama> lipiec: https://docs.mongodb.org/manual/tutorial/enable-authentication/
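
For reference, the linked tutorial boils down to turning on access control in mongod.conf, which also makes the warning go away (by enabling auth, not by suppressing the message):

    security:
      authorization: enabled
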
[10:56:42] <mroman> http://codepad.org/UfZu1B9D pfft....
[10:57:04] <kurushiyama> mroman: first line "You are using a 32-bit build"
[10:57:06] <mroman> Mongo definitely ain't it when you try running huge DBs on low ram.
[10:57:14] <mroman> yeah so?
[10:57:21] <kurushiyama> mroman: 2GB RAM limit
[10:57:24] <mroman> so?
[10:57:31] <kurushiyama> mroman: HIGHLY discouraged for production.
[10:58:01] <mroman> having huge DBs and low ram was kinda the "way of life".
[10:58:31] <mroman> I don't get why I'm running into memory issues constantly when using mongodb
[10:59:29] <kurushiyama> mroman: MongoDB is _not_ aimed at that. It is optimized for doing _huge_ data on reasonable hardware, with "reasonable" defined as the "_most_ bang for the buck", not "cheapo"
[10:59:43] <mroman> You restart the machine and suddenly it crashes on startup
[11:00:12] <mroman> yeah but this sucks for reliability :)
[11:00:16] <kurushiyama> mroman: Well, maybe you should either adapt your expectations or maybe MongoDB is simply not the right tool.
[11:00:21] <mroman> somebody can just crash the whole thing with an insert
[11:00:22] <kurushiyama> mroman: No.
[11:00:43] <kurushiyama> mroman: I never had any problem with regards to reliability as long as I followed best practices.
[11:01:15] <mroman> well...
[11:01:31] <mroman> If a person can crash your DB with a Query
[11:01:34] <mroman> that's not really stable
[11:01:58] <mroman> If a person can crash your DB with an Insert and then you can't even start the DB again, that's not really stable :)
[11:02:11] <kurushiyama> mroman: A _discouraged_ version
[11:02:27] <kurushiyama> mroman: Why can't you upgrade to a 64-bit build?
[11:02:56] <mroman> I can :)
[11:03:02] <lipiec> kurushiyama: I would like to disable this warning altogether, not enable auth...
[11:03:07] <mroman> I'm just testing it out on different hardware, different infrastructure
[11:03:16] <mroman> different distros
[11:03:17] <kurushiyama> diegoaguilar: I do not see the point in putting an abstraction onto docker.
[11:03:50] <kurushiyama> mroman: Well, you learned sth. ;) Do not use 32-bit builds at all, and low-mem envs are a good way to shoot yourself in the foot.
[11:05:01] <mroman> well
[11:05:15] <mroman> If I can't run a 740MB db on a machine with 4GB RAM
[11:05:20] <mroman> I mean
[11:05:25] <kurushiyama> mroman: If you want to limit resources, use something like docker
[11:05:28] <mroman> technically the whole DB would fit perfectly into an in-memory db
[11:05:54] <kurushiyama> mroman: Sorry, come again when you've tried that with a 64-bit build ;P
[11:06:05] <mroman> It's just not what I'd expect from a DBs :)
[11:06:27] <kurushiyama> mroman: File a ticket, then.
[11:07:33] <mroman> I don't think the versions distros ship with are still supported :p
[11:07:36] <kurushiyama> It is not that I am not concerned about this behavior, but I have too little information about OS, FS, RAM, swap and whatnot. Personally, I tend to assume it is my fault, first.
[11:08:22] <kurushiyama> And only once I have positively ruled that out do I blame the tool. More often than not, it was my fault, regardless of how hard I tried to rule it out.
[11:09:20] <mroman> well I have 8GB/10GB swap
[11:09:46] <mroman> but if it's really trying to mmap whole files I mean
[11:09:48] <mroman> :)
[11:10:07] <mroman> mmaping all the hashes.* files will take more than 2GB at least
[11:10:13] <kurushiyama> mroman: Uhm... mmap does just that: It _maps_ files. It does not load them into RAM
[11:10:32] <kurushiyama> mroman: Only on access.
[11:11:14] <kurushiyama> mroman: But still: try to run a 64bit.
[11:11:55] <kurushiyama> mroman: Wait, you are not trying to limit the RAM usage to run your app and the DB on the same machine, are you?
[11:14:59] <kurushiyama> lipiec: It is a warning. One with a reason. Ignore it at your own risk.
[11:15:56] <kurushiyama> lipiec: https://securityintelligence.com/news/mongodb-databases-may-be-exposed-by-security-misconfigurations/
[11:19:19] <spuz> hi, how do i query for a field that is a String that exists and is not empty and is not null?
[11:19:52] <spuz> I've tried: db.Collection.find({'field':{$exists:1, $ne:"", $ne:null}})
[11:20:01] <spuz> but it returns entries where the field is empty
[11:21:34] <kurushiyama> spuz: I _think_ $exists and $ne:null are redundant (and I have to double check for the null check), because non-existent fields eval to null, anyway.
[11:22:03] <mroman> kurushiyama: no I'm not trying to do that.
[11:22:20] <spuz> kurushiyama: if i remove the exists check, i get records where the field does not exist
[11:22:29] <mroman> but the app will run on the DB machine :D
[11:23:29] <mroman> It's a weird environment :(
[11:23:36] <mroman> I would do it otherwise if I had the resources.
[11:24:02] <mroman> also I'd implement a backup plan :)
[11:24:51] <mroman> but I'm not sure whether you can actually backup a running mongo db
[11:25:11] <mroman> because it lacks true transaction semantics.
[11:25:43] <kurushiyama> mroman: That amounts to a situation you do not want: on high loads, the app and the db will battle for IO and RAM.
[11:25:52] <kurushiyama> mroman: You cant
[11:26:06] <kurushiyama> mroman: well, not with copying data files
[11:26:53] <mroman> same issue: If you want to update multiple documents and the backup tool reads between updates you're fucked.
[11:26:57] <kurushiyama> mroman: Even DBs with true transaction semantics do not necessarily allow that.
[11:27:25] <mroman> what? Taking a snapshot from a running db?
[11:27:31] <kurushiyama> mroman: LVM snapshots on wT work. You _could_ even copy wT data files, due to its internal workings.
[11:27:33] <mroman> That's the equivalent of doing a SELECT *
[11:27:44] <mroman> and with true transactions that is guaranteed to return a consistent state
[11:28:03] <kurushiyama> mroman: you are talking of a dump.
[11:28:04] <kali> stop a replica and copy/snapshot the files...
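
A sketch of kali's recipe, assuming a secondary you can quiesce (the paths are placeholders; db.fsyncLock() flushes pending writes and blocks new ones, so the data files are safe to copy until db.fsyncUnlock()):

    # On a secondary: lock, copy (or LVM-snapshot) the data files, unlock.
    mongo --eval "db.fsyncLock()"
    cp -a /var/lib/mongodb /backup/mongodb-snapshot   # or take an LVM snapshot here
    mongo --eval "db.fsyncUnlock()"
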
[11:28:37] <mroman> well yeah :)
[11:28:44] <kurushiyama> mroman: But if you have multiple tables, you can be f...ed with a SELECT, too.
[11:28:45] <mroman> dump is the most primitive kind of backup
[11:28:52] <mroman> the less primitive would be incremental backups
[11:29:16] <mroman> kurushiyama: if the application writing to the DB doesn't use transcation then yes.
[11:29:21] <mroman> *transaction
[11:29:32] <spuz> ok, the answer is this apparently: db.Collection.find({field:{$exists:1, $nin:["", null]}})
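
For the record, the reason the first attempt failed: a JS object literal cannot hold two $ne keys, so the shell silently keeps only the last one and the empty-string check is lost. $nin expresses both exclusions in one operator:

    // Broken: duplicate $ne keys; only $ne:null survives parsing.
    db.Collection.find({ field: { $exists: 1, $ne: "", $ne: null } })

    // Working: exclude "" and null in a single operator.
    db.Collection.find({ field: { $exists: 1, $nin: ["", null] } })
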
[11:29:39] <kurushiyama> mroman: In general, if you want an RDBMS, use it. But do not expect a new technology to behave the same as what you learned from a different tech in the past.
[11:29:48] <mroman> you can't be sure unless apps writing to the DBs use transactions.
[11:30:26] <mroman> I'm only planning on using it due to the heterogeneous data and most of the data being JSON
[11:30:30] <kurushiyama> mroman: SELECT1 => dump && SELECT2 => dump == race condition between dump and SELECTs.
[11:31:05] <mroman> how so?
[11:31:36] <mroman> well of course, those selects have to run as a transaction as well :)
[11:32:06] <kurushiyama> mroman: wT writes consistent data in what is called a checkpoint: https://docs.mongodb.org/manual/core/wiredtiger/#snapshots-and-checkpoints
[11:32:32] <kurushiyama> mroman: As said before: it heavily depends on the data modelling.
[11:33:00] <mroman> Yeah :)
[11:33:09] <mroman> embedding documents gives stronger guarantees :)
[11:33:17] <kurushiyama> mroman: If you do make use of the fact that document level operations are atomic, you can have perfectly consistent states backed up.
[11:33:27] <kurushiyama> mroman: No.
[11:33:45] <kurushiyama> mroman: Embedding documents is the root of most evil, as far as MongoDB is concerned.
[11:33:51] <mroman> Yes.
[11:34:09] <mroman> but not embedding them is the root of evil if you have concurrency
[11:34:14] <cheeser> kurushiyama: um. what?
[11:34:23] <Derick> i was about to say that too cheeser
[11:34:24] <kurushiyama> mroman: Knowing your use case, I repeat: Do a point-in-time documentation.
[11:34:51] <kurushiyama> cheeser: People tend to overembed. And that is where most of the problems people have with MongoDB come from.
[11:34:57] <mroman> If you were to manage accounts, subtract $money there, put $money there.
[11:35:04] <kurushiyama> mroman: Wait a sec
[11:35:17] <mroman> and depending on what mongodump exactly does, it may very well read in-between.
[11:35:28] <kurushiyama> mroman: http://dba.stackexchange.com/questions/134841/how-can-we-ensure-security-and-integrity-of-data-stored-in-mongodb/134858#134858
[11:35:39] <kurushiyama> mroman: Read the example.
[11:35:47] <cheeser> i've not seen that be a problem
[11:35:59] <kurushiyama> cheeser: Overembedding?
[11:36:27] <cheeser> yeah. but i have to run. time to head to the office. :)
[11:36:44] <kurushiyama> cheeser: I'd love to discuss this later.
[11:38:01] <mroman> well sure if you don't store the balance then a transfer log works fine.
[11:38:20] <mroman> checking your balance will get horribly slow though over time :)
[11:38:38] <mroman> unless you consolidate transfers into a single transfer at some time
[11:39:19] <kurushiyama> mroman: Think again. Early match
[11:39:29] <mroman> but the usualy scenario is that you want to check that A has enough money, then subtract it, and add it to B.
[11:39:32] <mroman> *usual
[11:40:05] <mroman> unless you can check and subtract the money in a single step I don't see how this can possibly work.
[11:40:05] <kurushiyama> mroman: Sure. You can. Ever wondered why it sometimes takes time at the ATM? Because that is exactly what happens.
[11:40:36] <mroman> because you need to make sure that no subtraction is going on after you checked the balance
[11:41:37] <kurushiyama> mroman: What usually happens is that the account balance at a certain point in time is calculated and saved (once a day, iirc), and the differences are aggregated from a log, applied to the balance, and the result is taken.
[11:42:07] <kurushiyama> mroman: Sure. And this works. Because the log is _atomic_
[11:42:41] <mroman> Yes, that's what I said. "unless you consolidate the transfers at some time"
[11:42:53] <kurushiyama> mroman: On demand.
[11:43:07] <mroman> but can you consolidate atomically?
[11:43:21] <kurushiyama> Sure
[11:43:38] <mroman> because that requires delete operations too
[11:43:44] <kurushiyama> Say you want the balance for today 00:00:00
[11:43:53] <kurushiyama> mroman: No deletes.
[11:44:22] <mroman> I aggregate matching all transfers prior to today 00:00:00 adding the amount to the last balance "checkpoint"
[11:44:41] <kurushiyama> Or set it or whatever, right
[11:44:53] <kurushiyama> Now, customer wants to get his current balance.
[11:45:15] <mroman> you query the last checkpoint, aggregate the transfers and sum it up
[11:45:22] <kurushiyama> Correct
[11:45:40] <mroman> ok
[11:45:48] <mroman> and now you want to make sure he doesn't overdraw money ;)
[11:46:22] <kurushiyama> mroman: 2 possibilities here.
[11:47:37] <mroman> there's always the case that someone draws money from your account while you are aggregating the balance or just shortly after.
[11:47:53] <kurushiyama> Well, sort of.
[11:47:56] <mroman> doing a simple if and new insert into the transfer log won't work.
[11:48:11] <kurushiyama> Stop.
[11:48:18] <kurushiyama> We are mixing two use cases here.
[11:48:31] <mroman> a = gimmeBalance(); if(toDraw < a) { insertNewTransfer(); }
[11:48:34] <mroman> not atomic.
[11:48:39] <kurushiyama> mroman: Stop
[11:48:48] <kurushiyama> mroman: We are mixing use cases here.
[11:49:25] <mroman> My use case is a fully concurrent environment where everything can happen at every time.
[11:49:32] <kurushiyama> mroman: The example I gave so far was just for account balancing.
[11:49:44] <kurushiyama> mroman: Now, let's add the withdrawal use case.
[11:50:02] <kurushiyama> mroman: You have some options here.
[11:50:21] <mroman> yes, you can insert the transfer anyway and check the validity of transfers when getting the current balance :)
[11:50:24] <kurushiyama> mroman: Personally, I'd suggest a 2-phase commit.
[11:50:50] <kurushiyama> mroman: however, optimistic locking would be an option, too.
[11:53:09] <kurushiyama> mroman: But in general, what this example was supposed to illustrate is that you can get consistent states in your database with careful data modelling.
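
A rough sketch of the transfer-log pattern being discussed, with hypothetical collection and field names; each transfer is one document, and single-document inserts are atomic:

    // One atomic document per transfer.
    db.transfers.insert({ account: "A", amount: -50, ts: new Date() })

    // Current balance = last saved checkpoint + sum of later transfers.
    var cp = db.checkpoints.find({ account: "A" }).sort({ ts: -1 }).limit(1).next();
    var delta = db.transfers.aggregate([
      { $match: { account: "A", ts: { $gt: cp.ts } } },
      { $group: { _id: "$account", delta: { $sum: "$amount" } } }
    ]).toArray();
    // balance = cp.balance + (delta.length ? delta[0].delta : 0)
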
[11:58:04] <mroman> well first I need to get it not to crash so often :)
[12:02:34] <kurushiyama> mroman: Advice: use a 64-bit build, do not deploy MongoDB on the same server as your app in production, have some swap, and monitor so you can make _early_ decisions.
[15:42:57] <mylord> how can I do this? db.users.find("this.gain" != null && "this.gain" < "this.withdrawTotal")
[15:43:08] <mylord> I get this: E QUERY [thread1] Error: don't know how to massage : boolean :
[15:44:15] <mylord> or this: db.users.find({"this.gain":{$exists:true}, $where: "this.gain" < "this.withdrawTotal"})
[15:45:00] <StephenLynx> wait
[15:45:36] <StephenLynx> you want documents where the withdrawTotal is less than their own gain?
[15:46:22] <StephenLynx> you could try {gain:{$lt:'$withdrawTotal'}} but I am not sure it would work.
[15:46:29] <Derick> don't use $where - it's a performance killer
[15:46:36] <Derick> StephenLynx: nope, only through A/F (the aggregation framework)
[15:46:42] <StephenLynx> welp
[15:47:23] <StephenLynx> he will have to find all documents where gain exists and then filter in application code?
[15:51:00] <mylord> StephenLynx: yes
[15:51:42] <StephenLynx> {gain:{$exists:true}}
[15:51:57] <StephenLynx> hold on
[15:52:23] <StephenLynx> is there a possibility withdraw total doesn't exist on a document?
[15:54:11] <mylord> might not exist, i think
[15:54:32] <StephenLynx> {gain:{$exists:true}, withdrawTotal:{$exists:true}}
[15:54:41] <mylord> and the < part?
[15:54:49] <mylord> where to put that? can you paste the entire thing?
[15:54:53] <StephenLynx> that will have to be done on application code.
[15:58:27] <mylord> really.. querying is kind of hard in mongo?
[15:58:33] <StephenLynx> no.
[15:58:44] <StephenLynx> you just hit a limitation.
[15:58:47] <mylord> k
[16:23:22] <kurushiyama> StephenLynx: http://pastebin.com/tRcb95Mm
[16:23:50] <StephenLynx> wot
[16:24:06] <StephenLynx> ah
[16:24:10] <kurushiyama> StephenLynx: A field might exist and still be null, in which case $exists is triggered.
[16:24:34] <kurushiyama> So $exists is not a reliable null check
[16:24:44] <StephenLynx> ok, so it depends on his context.
[16:24:59] <StephenLynx> he might be checking for null but he expects the field to not exist at all.
[16:25:20] <StephenLynx> if the field being assigned to null is expected
[16:25:34] <kurushiyama> StephenLynx: Right. But we had something similar earlier this day, hence I played with that a bit ;)
[16:25:55] <StephenLynx> he would have to use a $ne and/or the $exists otherwise
[16:27:14] <kurushiyama> I am thinking. If the field does not exist, it evals to null anyway. So an $ne:null might be more reasonable.
[16:27:41] <StephenLynx> hm
[16:27:49] <StephenLynx> true
[16:32:49] <kurushiyama> One step further, if a: null and b = 1, b > a == true
[16:33:30] <StephenLynx> not really
[16:33:40] <kurushiyama> Well, in the shell it is
[16:33:43] <StephenLynx> in js that would yield a false.
[16:33:49] <StephenLynx> null > 1 = false
[16:33:53] <StephenLynx> 1 > null = false
[16:34:13] <StephenLynx> oh shit
[16:34:23] <StephenLynx> actually no :v
[16:34:27] <StephenLynx> 1 > null = true
[16:34:33] <kurushiyama> http://pastebin.com/9A3iGebV ;)
[16:34:35] <StephenLynx> oh my god
[16:34:40] <StephenLynx> I have stuff to check
[16:35:13] <kurushiyama> ;)
[16:42:38] <StephenLynx> ok, so here is how it works
[16:43:06] <StephenLynx> undefined will always make a comparison yield false
[16:44:04] <kurushiyama> I am no JS guy. undef == null? I'd guess not, but I honestly don't know.
[16:44:07] <StephenLynx> while null will yield true if you check whether it's smaller than any number
[16:44:09] <StephenLynx> is not.
[16:44:18] <StephenLynx> they share similarities
[16:44:22] <StephenLynx> but is not the same
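
The semantics they are converging on can be checked directly in the shell; these are plain JavaScript comparison rules:

    null == undefined   // true  (loosely equal)
    null === undefined  // false (not the same value)
    1 > null            // true  (null coerces to 0 in relational comparisons)
    1 > undefined       // false (undefined coerces to NaN; every comparison is false)
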
[16:44:26] <kurushiyama> Ok
[16:44:55] <kurushiyama> So a comparison against null in this case is a proper solution, as far as I can see it.
[16:45:00] <StephenLynx> yes
[16:45:17] <StephenLynx> i guess.
[16:45:32] <StephenLynx> don't know if mongo will handle the field as null if it doesn't exist
[16:45:54] <kurushiyama> So, we could do an aggregation with a cond, for example.
[16:46:03] <kurushiyama> It does
[16:46:13] <kurushiyama> non-existing fields eval to null
[16:49:47] <StephenLynx> at least my code is safe
[16:49:56] <StephenLynx> because I actually handle undefined
[16:50:37] <StephenLynx> otherwise there would be a couple of sites with a pretty big hole in their authentication :^)
[16:54:10] <kurushiyama> StephenLynx: http://pastebin.com/CXw3NdwC would be the solution, then, maybe with an early match on $ne:null for either field.
[16:58:05] <kurushiyama> Not sure whether $redact utilizes indices, though. It is being described as a combination of $match and $project, but I guess that is semantic, not technical.
[17:02:22] <kurushiyama> And I am actually not sure whether this performs better than a $where
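
Whatever the paste contained exactly, a $redact pipeline along the lines kurushiyama describes might look like this (a sketch, not the original paste, using the field names from mylord's question and the early $ne:null match suggested above):

    db.users.aggregate([
      { $match: { gain: { $ne: null }, withdrawTotal: { $ne: null } } },
      { $redact: {
          $cond: {
            if: { $lt: [ "$gain", "$withdrawTotal" ] },
            then: "$$KEEP",
            else: "$$PRUNE"
          }
      } }
    ])
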
[17:33:28] <StephenLynx> IMO, that's all too complicated. I would just use application code and call it a day
[17:45:41] <kurushiyama> StephenLynx: Well, I guess it depends on how many docs we are talking about. A few k? Might work. A few M? Well, not so much.
[17:46:05] <StephenLynx> i dunno, if there are that many, then your whole server probably got bogged down.
[17:47:08] <kurushiyama> I am just happy that this is none of my use cases. Way too many "if", "possibly", "maybe", "could be" for my taste.
[17:47:52] <StephenLynx> yeah
[19:01:37] <johnnyfive> Howdy. I'm having an issue with authentication. I enabled authentication, and I have a user with the "root" role. However when trying to login, I get "[conn9] SCRAM-SHA-1 authentication failed for admin on admin from client 127.0.0.1 ; TypeMismatch: Roles must be objects."
[19:03:19] <kurushiyama> johnnyfive: can you show us how you try to login?
[19:03:19] <johnnyfive> The entry for the user is the following: { "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, etc..} }, "roles" : [ "root" ] }
[19:04:05] <johnnyfive> kurushiyama: i'm using MongoChef, but I can try on the local shell too. Sec
[19:10:32] <johnnyfive> ugh, from the shell it just tells me authentication failed. This is a mess
[19:10:45] <johnnyfive> but other users work
[19:12:28] <kurushiyama> johnnyfive: Well, at least you are not totally busted.
[19:12:42] <johnnyfive> well I can disable/enable authentication to reset things
[19:12:58] <johnnyfive> but I can't for the life of me figure out how to make an admin user with all rights
[19:13:09] <kurushiyama> johnnyfive: I'd probably do that asap. And ditch mongochef, tbh.
[19:13:23] <johnnyfive> do what asap?
[19:13:32] <kurushiyama> Reconfig auth
[19:14:23] <johnnyfive> and use the cli?
[19:14:32] <johnnyfive> ditch mongochef = use cli?
[19:15:49] <kurushiyama> Aye
[19:16:28] <johnnyfive> well I was using robomongo exclusively, but I can't figure out this roles issue so was hoping a gui tool might be able to fix this mess, but so far that's also a no go.
[19:17:07] <StephenLynx> yeah, GUI tools usually are bad for mongo
[19:17:33] <kurushiyama> Haven't seen one I trust, tbh.
[19:17:43] <johnnyfive> so any of you have the line to create a user with all permissions on the box?
[19:17:51] <johnnyfive> or willing to help me make one?
[19:18:48] <kurushiyama> johnnyfive: https://docs.mongodb.org/manual/tutorial/enable-authentication/
[19:19:12] <kurushiyama> Read it. Read it again. Until you are positively sure to understand it. Then do it.
[19:19:38] <johnnyfive> do roles always have to apply to a db?
[19:19:52] <kurushiyama> johnnyfive: Nag us (or me), if you have to. But make sure you _understand_.
[19:20:05] <johnnyfive> well I think that's the only issue i'm having
[19:20:22] <StephenLynx> I have no experience with users and permissions, my deploys are small enough to host the db on the same server
[19:20:26] <johnnyfive> I was under the assumption roles like "userAdminAnyDatabase" applied to the system, and not a specific db
[19:20:35] <StephenLynx> so I just don't add authentication and allow only local connections
[19:20:55] <kurushiyama> johnnyfive: Yes and no
[19:21:15] <johnnyfive> ok...
[19:21:25] <kurushiyama> johnnyfive: You need to authenticate against the database where the user is defined. But with that role, you have said permissions.
[19:21:38] <johnnyfive> oh fml, that's what i'm missing
[19:23:37] <johnnyfive> man that's not intuitive. but got it to work, thanks
[19:24:15] <johnnyfive> In fact the examples in the docs are wrong: https://docs.mongodb.org/v2.6/reference/method/db.createUser/#db.createUser
[19:24:41] <johnnyfive> oh that's cause it's 2.6. my bad. Thanks guys
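
For anyone hitting the same "Roles must be objects" error, a sketch of the canonical form: each role spelled out as an object naming its database, and authentication then performed against the database where the user is defined:

    use admin
    db.createUser({
      user: "admin",
      pwd: "<password>",
      roles: [ { role: "root", db: "admin" } ]
    })

    // Then authenticate against admin, e.g.:
    //   mongo -u admin -p <password> --authenticationDatabase admin
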
[19:28:30] <kurushiyama> johnnyfive: Again: rather be safe and ask than simply do and be sorry ;)
[19:29:35] <StephenLynx> kek
[19:52:45] <cpama> hi all. just wondering if someone can help me out with this post: http://stackoverflow.com/questions/36651282/cant-start-mongodb-on-ubuntu?noredirect=1#comment60899882_36651282
[19:53:32] <cpama> if it is true that there is no support for ubuntu 15... maybe I have to set up a vm with an older version of ubuntu
[19:53:53] <cheeser> ubuntu 15 is not officially supported, no.
[19:54:07] <cpama> stink
[19:54:08] <cpama> okay
[20:07:30] <kurushiyama> cpama: Docker.
[20:07:50] <cpama> what is Docker? (googling...)
[20:08:23] <kurushiyama> You have made the first steps into a greater world, young padawan!
[20:09:13] <cpama> kurushiyama, ooh la la. sounds exciting
[20:09:15] <cpama> lol
[20:09:59] <kurushiyama> cpama: https://youtu.be/VeiUjkiqo9E
[20:10:55] <kurushiyama> I can not understand how we ever deployed complex applications without it.
[20:16:18] <kurushiyama> cheeser: Why were you blinking? Do you think it is a bad idea?
[20:16:57] <cheeser> oh, no. just that (s)he hadn't heard of docker yet.
[20:17:45] <kurushiyama> Well, some do it for fun only, some are very young. The youngest attendee at our LUG is like 11.
[20:17:53] <cheeser> nice
[20:18:40] <kurushiyama> Yep. Never even touched Windows. I envy him.
[20:19:06] <cheeser> was just looking at cheap windows laptops, actually.
[20:19:34] <kurushiyama> To what end?
[20:19:38] <cheeser> the kids
[20:21:20] <kurushiyama> Well, the first computer my little boy will get will be in parts. And he'll get the tools to plug it together. Plus a Linux boot CD. Alpine, if he was a bad boy. ;)
[20:22:08] <cheeser> as much as i love putting computers together, there's not much room in my brooklyn apartment for that kind of shenanigans. ;)
[20:23:10] <cagmz> I started using linux at around 12. I was on AOL chat and asked a local LUG member for a live cd and he mailed several to me (xubuntu, lubuntu, ubuntu desk/server). it was ubuntu 5 i think
[20:23:32] <kurushiyama> cheeser: I see. Well, we live in a very small flat by choice. Small Home Movement, smaller footprint and such.
[20:24:21] <kurushiyama> cagmz: Go and learn docker, asap! ;)
[20:24:55] <kurushiyama> cagmz: Or, even better: Learn Go and Docker, asap! ;P
[20:27:13] <cagmz> is docker useful for personal use? it seems very enterprise
[20:28:04] <StephenLynx> docker is cancer, IMO
[20:28:30] <cheeser> everything is, though. *sigh*
[20:29:08] <StephenLynx> linux isn't :v
[20:29:34] <kurushiyama> cagmz: StephenLynx is not fond of things that hide what's going on, in general, as far as I dare say.
[20:29:47] <StephenLynx> pretty much.
[20:30:10] <StephenLynx> unless they hide because they already offer me what I want in one whole package.
[20:30:15] <StephenLynx> like a VM, an engine
[20:30:28] <StephenLynx> a runtime environment, a database
[20:30:48] <StephenLynx> docker has that whole layer of things on top of emulating a computer
[20:32:00] <kurushiyama> StephenLynx: Not quite. Actually, it is paravirtualization and a good way of enforcing resource separation. Aside from the fact that it can be used to eliminate Dev's ultimate lie. ;)
[20:32:27] <StephenLynx> why would I bother about resource separation?
[20:32:44] <StephenLynx> and what's dev ultimate lie?
[20:32:48] <StephenLynx> "works on my machine"?
[20:32:53] <kurushiyama> StephenLynx: This!
[20:33:14] <StephenLynx> this is why you configure the VM the way the production environment should be.
[20:33:31] <StephenLynx> so you don't have any surprises on deploy.
[20:34:13] <cagmz> so with docker, an os is installed once, and docker containers share the same OS? compared to running N vm's, where each has its own OS
[20:34:40] <kurushiyama> StephenLynx: Ignoring configuration, integration and other things? I am not talking of smallish things. I talk of systems taking 3 days to install, if each part is done manually.
[20:34:57] <kurushiyama> cagmz: Not quite. It uses the same kernel.
[20:35:03] <StephenLynx> why would it take 3 days to install?
[20:36:00] <kurushiyama> StephenLynx: We had a distributed system of like 15 different node types, to be run on a customer's cloud which, to put it politely, was not exactly adhering to standards.
[20:36:15] <cpama> drats. i feel like a missed a good chunk of an interesting conversation here.
[20:36:27] <cpama> just enabled VT and started my box
[20:36:42] <cpama> now going to try to create vm using 64 bit version of 14.04
[20:36:42] <kurushiyama> StephenLynx: Moved to Docker, and we reduced those times to a few hours.
[20:36:59] <kurushiyama> cpama: With Docker? or a real VM?
[20:37:09] <cpama> well, i'm going to take baby steps
[20:37:19] <cpama> going to try with what i have ... which is virtualbox
[20:37:34] <cpama> and then i will try out docker on a day when i have more time to play around.
[20:37:39] <cpama> it's 4:30 on a friday...
[20:37:39] <cpama> :)
[20:37:47] <cpama> don't feel like exerting too much brain power
[20:38:07] <jr3> if I have an array of objects in a doc and I index that array, does that help with property searches on that array or is it still going to be a collection scan
[20:38:18] <kurushiyama> StephenLynx: Don't ask why we needed to have 15 node types. It was due to some obscure supposedly legal requirements of the customer.
[20:38:40] <kurushiyama> jr3: Try and find out?
[20:39:02] <kurushiyama> jr3: do your query and append an `.explain()`
[20:39:19] <jr3> ok it indexes the values
[20:40:32] <kurushiyama> jr3: https://docs.mongodb.org/manual/indexes/#multikey-index
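
For reference, a minimal illustration of the multikey index being discussed (collection and field names are made up):

    // Indexing a path inside an array of subdocuments creates a multikey index.
    db.items.createIndex({ "tags.name": 1 })

    // explain() shows whether a query uses it (IXSCAN) or scans the collection (COLLSCAN).
    db.items.find({ "tags.name": "urgent" }).explain()
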
[20:41:09] <AMDPhenomX4Q> Hello, I need to access a property of an object which is an array, then access a property at a specific index of an array. I've been stuck at the array itself for a few hours and haven't figured out how to access a specific index of it. Any suggestions?
[20:41:39] <kurushiyama> AMDPhenomX4Q: Uhm... remodel?
[20:42:15] <cpama> arg. ok so I downloaded ubuntu-14.04.4-desktop-amd64.iso and tried to set up the vm. still getting error "This kernel requires an x86-64 CPU, but only detected an i686 CPU"
[20:42:19] <cpama> which is what I got before.
[20:42:28] <cpama> now going to try again with 32 bit iso
[20:43:35] <AMDPhenomX4Q> I can't but it must be something easy. I have this code http://hastebin.com/hugakisozo.css and so I have the array. Basically I've been stuck at accessing an index of it.
[20:43:50] <kurushiyama> cpama: What machine do you have? Do not get me wrong, but it sounds a bit like archeology...
[20:43:59] <cpama> lol
[20:44:32] <kurushiyama> AMDPhenomX4Q: Maybe you want to show us a sample doc and the complete query you do?
[20:45:33] <AMDPhenomX4Q> http://hastebin.com/obicuwaweq.vhdl is the actual database structure, and the query I want to do is adding an ID to the likeCounter of a comment
[20:46:26] <kurushiyama> AMDPhenomX4Q: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/ Just in case. You overembedded, imho.
[20:46:28] <cpama> going home now. thanks everyone for your help
[20:46:45] <cpama> if i still feel inclined on monday to retry and am having troubles, i might be back.
[20:46:53] <cpama> hope that doesn't sound like a threat. ;)
[20:47:01] <kurushiyama> cpama: Good luck! If you happen to find some Roman coins: I collect them!
[20:47:22] <cpama> kurushiyama, wow that was pretty random!
[20:47:23] <cpama> :)
[20:48:52] <AMDPhenomX4Q> kurushiyama: Unfortunately I cannot change it sadly.
[20:49:19] <kurushiyama> AMDPhenomX4Q: Well, the thing is that afaik, the order of arrays is not guaranteed.
[20:50:18] <AMDPhenomX4Q> All I want to do is access a specific comment inside the comments array. At this point I don't care if it doesn't fully work.
[20:50:51] <kurushiyama> AMDPhenomX4Q: So, calm down and describe the use case fully. On what criteria do you want to access said comment?
[20:53:22] <kurushiyama> AMDPhenomX4Q: Wait a sec, this http://hastebin.com/obicuwaweq.vhdl is all a _single_ doc?
[20:53:40] <AMDPhenomX4Q> So I have a comment ID which is the index of the comment that someone has clicked on. I need to add the user who clicked it to the likeCounter array inside that comment.
[20:53:41] <AMDPhenomX4Q> Uhh
[20:55:31] <kurushiyama> AMDPhenomX4Q: How did you generate this output? Some GUI tool?
[20:55:32] <AMDPhenomX4Q> Well FeedItems is a collection I think
[20:55:40] <AMDPhenomX4Q> Which output?
[20:55:59] <AMDPhenomX4Q> Hold on
[20:56:25] <AMDPhenomX4Q> Okay so FeedItems is a collection
[20:56:31] <kurushiyama> Ok.
[20:57:46] <kurushiyama> So, you have a comment ID. Funny, since there is nothing similar to an ID in the comment subdocs. ;)
[20:58:36] <kurushiyama> I assume you are talking of the array index?
[20:59:25] <kurushiyama> AMDPhenomX4Q: ^
[21:00:19] <AMDPhenomX4Q> Correct kurushiyama
[21:00:39] <AMDPhenomX4Q> More or less the frontend is ReactJS and it adds an id as it maps over the collection
[21:00:54] <kurushiyama> AMDPhenomX4Q: Well, what do you use as a programming language for your backend?
[21:01:01] <AMDPhenomX4Q> Javascript backend
[21:01:05] <AMDPhenomX4Q> technically Nodejs
[21:02:25] <kurushiyama> AMDPhenomX4Q: And that is what I tried to tell you from the start: There is no guarantee in array order. It may well be that MongoDB returns the comments array in one order, and depending on what happens in node and/or react, it might have a totally different order then.
[21:03:07] <AMDPhenomX4Q> How would I return a comment without caring about the order?
[21:03:32] <kurushiyama> AMDPhenomX4Q: With a different data model, which does not rely on the rather implicit array index.
[21:05:12] <kurushiyama> AMDPhenomX4Q: you might even sort the array on output, say by date. In that case, you can halfway rely on the sort order in the backend (let's hope nobody made or deleted a comment meanwhile).
[21:05:55] <AMDPhenomX4Q> I could, but I still need to figure out how to get access to an index. I read things like comments.0 and such but those gave me errors.
[21:08:31] <AMDPhenomX4Q> So could you please see if there's a specific documentation on what I'm trying to do or what I'm doing wrong. Thanks.
[21:10:13] <kurushiyama> Well, I can not help you with node (I avoid JS as much as I possibly can). Digging through the docs because I literally never needed that.
[21:15:16] <AMDPhenomX4Q> Well, thanks anyways.
[21:16:58] <kurushiyama> AMDPhenomX4Q: I got that right that you want to add to an array in a doc inside another array, right?
[21:17:08] <AMDPhenomX4Q> yes
[21:19:57] <kurushiyama> AMDPhenomX4Q: And all you have is the array index of the subdoc? Or do you have a value of the maindoc by which you can query?
[21:22:25] <AMDPhenomX4Q> So I already have the actual feeditem
[21:22:37] <AMDPhenomX4Q> so I've failed to push the element to the mini array in various ways such as
[21:23:37] <AMDPhenomX4Q> http://hastebin.com/hexuwasevi.tex
[21:26:05] <kurushiyama> ok, the problem is that we need to identify the array item somehow, since the so-called positional operator "$" requires that. I think. Still trying to find a solution.
[21:26:39] <kurushiyama> Wait, that looks different.
[21:29:02] <AMDPhenomX4Q> bless you sir or mam <3
[21:34:52] <kurushiyama> AMDPhenomX4Q: Hope this helps: http://pastebin.com/3y7v8LTx
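
Whatever the paste contained exactly, an update in the same spirit, addressing the comment by its array index (the index, _id and user ID are placeholders):

    // Push a user ID onto the likeCounter of the comment at index 2. The dotted
    // "comments.2.likeCounter" path addresses the subdocument by position, which
    // only works while client and server agree on the array order.
    db.FeedItems.update(
      { _id: ObjectId("...") },
      { $push: { "comments.2.likeCounter": "someUserId" } }
    )
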
[21:38:45] <kurushiyama> AMDPhenomX4Q: Regardless, your data model has several problems, not the least of them being complicated CRUD, as proven.
[21:39:08] <kurushiyama> AMDPhenomX4Q: You should really think about that.
[21:39:10] <AMDPhenomX4Q> I'm still working on learning your example, but yea, I can't do much about the data model.
[21:39:22] <AMDPhenomX4Q> It's under someone else's control
[21:40:13] <kurushiyama> AMDPhenomX4Q: Then he or she should think about it. Really. Ok, have to go to bed. That was a query I should have gotten right in seconds.
[21:40:28] <AMDPhenomX4Q> Thanks for everything
[21:50:11] <kurushiyama> AMDPhenomX4Q: You are welcome. Good night!