[07:54:15] <Boomtime> by the way, the only place where users are actually stored is in the admin database, the db named for the user is an association as part of their login information, thus there are 3 pieces for credentials; db, username, password
[07:59:27] <mroman> Boomtime: It seems to work from the mongoshell
[07:59:32] <mroman> using the mongo driver it fails horribly
[10:12:47] <kurushiyama> diegoaguilar: and 100qps can be anything from no load at all to busting a machine with 32GB of RAM on SSDs, depending on the query.
[10:13:00] <kurushiyama> diegoaguilar: I would not run anything with 1GB
[10:13:31] <kurushiyama> diegoaguilar: 2GB at least, taking into account that WT wants at least 1GB of cache in a sane env.
[10:16:11] <diegoaguilar> kurushiyama, do u know a better alternative to mongolab?
[10:19:09] <kurushiyama> diegoaguilar: Huh? Which plan @mongolab has 1GB of RAM?
[10:22:31] <kurushiyama> Thats storage, not RAM. ;)
[10:23:18] <kurushiyama> diegoaguilar: Yes, that's them
[10:27:25] <kurushiyama> diegoaguilar: However, before you choose one, you really should do some data simulations to find your expected data growth and so on. I would not choose anything before that is done. For one of my customers, we found out that the cheapest approach was to host MongoDB himself on metal and employ a dedicated admin with me as a backup.
[10:27:48] <kurushiyama> diegoaguilar: for him, that was.
[10:27:51] <diegoaguilar> well, for now it is really a prototype app
[10:28:37] <kurushiyama> diegoaguilar: Gall's law ;)
[10:29:39] <diegoaguilar> so working simple system first :P
[10:30:02] <kurushiyama> diegoaguilar: Well, can still bust you. What queries? What is the average doc size / collection. What is the data growth per use case? and so on. Yes. working simple systems first.
[10:31:54] <kurushiyama> diegoaguilar: But what I meant is the corollary of Gall's law. If you have a successful simple system, it is likely to evolve into a more complex system. You should plan ahead for that.
[10:35:17] <kurushiyama> diegoaguilar: What I'd do is to try MongoLabs. If you have performance problems, you have learned something.
[10:36:39] <diegoaguilar> well I find myself in a doubt ... because as I told u, this is like a prototype, budget is not the best "now", and it seems quite a bit cheaper to run our own ec2 with 4gb of ram against the first mongolab plan
[10:37:30] <diegoaguilar> they ran with mongolab sandbox for a while (lol) and it worked, but I'm not sure about sandbox outages and maintenance windows
[10:37:57] <kurushiyama> diegoaguilar: Unless you take the maintenance work into account. Here is my reasoning: If it works with MongoLabs "free", you save money. Re-decide if not.
[10:38:11] <kurushiyama> diegoaguilar: Based on hard facts, this time, namely stats.
[10:38:53] <diegoaguilar> I just got used to a previous environment where they paid an expensive mongohq :P
[10:39:10] <kurushiyama> diegoaguilar: And running MongoDB on EBS can be tricky.
[10:39:14] <diegoaguilar> I don't know if I can get notified of coming maintenance periods
[10:40:58] <kurushiyama> It is by no means impossible to run MongoDB on EBS, but identifying the bottleneck can become quite tricky, especially on the cheaper instances. And provisioned IOPs aren't exactly cheap.
[10:42:08] <diegoaguilar> I wish Google Compute Engine was a bit cheaper
[10:44:01] <kurushiyama> I use Openshift for about everything.
[10:44:18] <diegoaguilar> do u have a mongo 3.2 cartridge?
[10:47:08] <kurushiyama> diegoaguilar: Was updated, already.
[10:47:41] <diegoaguilar> well having mongo at openshift, means I must run my web apps at openshift too, right?
[10:47:45] <kurushiyama> diegoaguilar: Well, I'd "credit" it to a rushed integration of the pluggable storage engines (which is just a theory of mine).
[10:54:22] <diegoaguilar> I wonder if these services like tutum
[10:54:29] <diegoaguilar> really let u run production apps for FREE? :o
[10:55:32] <lipiec> I would like to disable warning about "WARNING: Access control is not enabled for the database." Is it possible in new mongod 3.2.5 version?
[10:55:47] <kurushiyama> Oh, I think that 0.10 is the cartridge version.
[10:57:31] <kurushiyama> mroman: HIGHLY discouraged for production.
[10:58:01] <mroman> having huge DBs and low ram was kinda the "way of life".
[10:58:31] <mroman> I don't get why I'm running into memory issues constantly when using mongodb
[10:59:29] <kurushiyama> mroman: MongoDB is _not_ aimed at that. It is optimized for doing _huge_ data on reasonable hardware, with "reasonable" defined as the "_most_ bang for the buck", not "cheapo"
[10:59:43] <mroman> You restart the machine and suddenly it crashes on startup
[11:00:12] <mroman> yeah but this sucks for reliability :)
[11:00:16] <kurushiyama> mroman: Well, maybe you should either adapt your expectations or maybe MongoDB is simply not the right tool.
[11:00:21] <mroman> somebody can just crash the whole thing with an insert
[11:03:17] <kurushiyama> diegoaguilar: I do not see the point in putting an abstraction onto docker.
[11:03:50] <kurushiyama> mroman: Well, you learned sth. ;) Do not use 32-bit builds at all, and low mem envs are a good way to shoot yourself in the foot.
[11:05:25] <kurushiyama> mroman: If you want to limit resources, use something like docker
[11:05:28] <mroman> technically the whole DB would fit perfectly into an in-memory db
[11:05:54] <kurushiyama> mroman: Sorry, come again when you tried that with a 64-bit build ;P
[11:06:05] <mroman> It's just not what I'd expect from a DBs :)
[11:06:27] <kurushiyama> mroman: File a ticket, then.
[11:07:33] <mroman> I don't think the version distros ship with are still supported :p
[11:07:36] <kurushiyama> It is not that I am not concerned about this behavior, but I have too little information about OS, FS, RAM, swap and whatnot. Personally, I tend to assume it is my fault, first.
[11:08:22] <kurushiyama> And only once I have positively ruled that out do I blame the tool. More often than not, it was my fault, regardless of how hard I tried to rule it out.
[11:20:01] <spuz> but it returns entries where the field is empty
[11:21:34] <kurushiyama> spuz: I _think_ $exists and $ne:null are redundant (and I have to double check for the null check), because non-existent fields eval to null, anyway.
[11:22:03] <mroman> kurushiyama: no I'm not trying to do that.
[11:22:20] <spuz> kurushiyama: if i remove the exists check, i get records where the field does not exist
[11:22:29] <mroman> but the app will run on the DB machine :D
[11:29:32] <spuz> ok, the answer is this apparently: db.Collection.find({field:{$exists:1, $nin:["", null]}})
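[Editor's note] spuz's one-liner can be unpacked as a plain query document; `field` stands in for the real field name:

```javascript
// spuz's working filter, written out. $nin: ["", null] rejects both
// empty-string and null values; since a missing field compares equal
// to null in MongoDB, the null entry in $nin arguably already excludes
// absent fields, making $exists: 1 belt-and-braces (as kurushiyama
// suspected above).
const filter = { field: { $exists: 1, $nin: ["", null] } };
// In the shell: db.Collection.find(filter)
```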
[11:29:39] <kurushiyama> mroman: In general, if you want a RDBMS, use it. But do not expect to use a new technology to behave the same as you learned from a different tech in the past.
[11:29:48] <mroman> you can't be sure unless apps writing to the DBs use transactions.
[11:30:26] <mroman> I'm only planning on using it due to the heterogenous data and most of the data being JSON
[11:30:30] <kurushiyama> mroman: SELECT1 => dump && SELECT2 => dump == race condition between dump and SELECTs.
[11:31:36] <mroman> well of course, those selects have to run as a transaction as well :)
[11:32:06] <kurushiyama> mroman: WT writes consistent data in what is called a checkpoint: https://docs.mongodb.org/manual/core/wiredtiger/#snapshots-and-checkpoints
[11:32:32] <kurushiyama> mroman: As said before: it heavily depends on the data modelling.
[11:33:17] <kurushiyama> mroman: If you do make use of the fact that document level operations are atomic, you can have perfectly consistent states backed up.
[11:40:05] <mroman> unless you can check and subtract the money in a single step I don't see how this can possibly work.
[11:40:05] <kurushiyama> mroman: Sure. You can. Ever wondered why it sometimes takes time at the ATM? Because that is exactly what happens.
[11:40:36] <mroman> because you need to make sure that no subtraction is going on after you checked the balance
[11:41:37] <kurushiyama> mroman: What usually happens is that the account balance at a certain point in time is calculated and saved (once a day, iirc), and the differences are aggregated from a log, applied to the balance and the result is taken.
[11:42:07] <kurushiyama> mroman: Sure. And this works. Because the log is _atomic_
[11:42:41] <mroman> Yes, that's what I said. "unless you consolidate the transfers at some time"
[11:48:48] <kurushiyama> mroman: We are mixing use cases here.
[11:49:25] <mroman> My use case is a fully concurrent environment where everything can happen at every time.
[11:49:32] <kurushiyama> mroman: The example I gave so far was just for account balancing.
[11:49:44] <kurushiyama> mroman: Now, let's add the withdrawal use case.
[11:50:02] <kurushiyama> mroman: You have some options here.
[11:50:21] <mroman> yes, you can insert the transfer anyway and check the validity of transfers when getting the current balance :)
[11:50:24] <kurushiyama> mroman: Personally, I'd suggest a 2-phase commit.
[11:50:50] <kurushiyama> mroman: however, optimistic locking would be an option, too.
[11:53:09] <kurushiyama> mroman: But in general, what this example was supposed to illustrate was that you can get consistent states in your database with careful data modelling.
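[Editor's note] The "check and subtract in a single step" that mroman asked about earlier can be done as one guarded atomic update; the collection and field names ("accounts", "balance") are assumptions for illustration, not from the discussion:

```javascript
// Check-and-subtract in a single step: the filter only matches while
// the balance covers the amount, and $inc is applied atomically to the
// matched document, so no other writer can sneak in between the check
// and the debit. "accounts" and "balance" are hypothetical names.
const amount = 50;
const withdrawal = {
  filter: { _id: "acct-1", balance: { $gte: amount } },
  update: { $inc: { balance: -amount } }
};
// In the shell: db.accounts.updateOne(withdrawal.filter, withdrawal.update)
// A matchedCount of 0 means insufficient funds (or no such account).
```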
[11:58:04] <mroman> well first I need to get it not to crash so often :)
[12:02:34] <kurushiyama> mroman: Advice: use a 64-bit build, do not deploy MongoDB on the same server as your app in production, have some swap, and monitor so you can make _early_ decisions.
[15:42:57] <mylord> how can I do this? db.users.find("this.gain" != null && "this.gain" < "this.withdrawTotal")
[15:43:08] <mylord> I get this: E QUERY [thread1] Error: don't know how to massage : boolean :
[15:44:15] <mylord> or this: b.users.find({"this.gain":{$exists:true}, $where: "this.gain" < "this.withdrawTotal"})
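[Editor's note] mylord's attempts fail because `"this.gain" < "this.withdrawTotal"` is evaluated client-side as a plain string comparison before the query is sent, yielding a boolean (hence "don't know how to massage : boolean"). A sketch of the intended query, with the whole predicate inside one $where string:

```javascript
// The entire predicate must reach the server as a single $where string
// (or function). Writing "this.gain" < "this.withdrawTotal" outside a
// string just compares two literals locally and passes a boolean.
const filter = {
  gain: { $exists: true, $ne: null },
  $where: "this.gain < this.withdrawTotal"
};
// In the shell: db.users.find(filter)
```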
[16:49:56] <StephenLynx> because I actually handle undefined
[16:50:37] <StephenLynx> otherwise there would be a couple of sites with a pretty big hole in their authentication :^)
[16:54:10] <kurushiyama> StephenLynx: http://pastebin.com/CXw3NdwC would be the solution, then, maybe with an early match on $ne:null for either field.
[16:58:05] <kurushiyama> Not sure whether $redact utilizes indices, though. It is being described as a combination of $match and $project, but I guess that is semantic, not technical.
[17:02:22] <kurushiyama> And I am actually not sure whether this performs better than a $where
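[Editor's note] A reconstruction of the $redact approach (the pastebin contents are not preserved here), assuming the gain/withdrawTotal fields from mylord's question:

```javascript
// $redact evaluates $cond per document and keeps or prunes it via the
// $$KEEP / $$PRUNE system variables. The early $match keeps documents
// with null or missing fields out of the comparison, as suggested
// above. Field names follow mylord's query.
const pipeline = [
  { $match: { gain: { $ne: null }, withdrawTotal: { $ne: null } } },
  { $redact: {
      $cond: {
        if: { $lt: ["$gain", "$withdrawTotal"] },
        then: "$$KEEP",
        else: "$$PRUNE"
      }
    }
  }
];
// In the shell: db.users.aggregate(pipeline)
```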
[17:33:28] <StephenLynx> IMO, that's all too complicated. I would just use application code and call it a day
[17:45:41] <kurushiyama> StephenLynx: Well, I guess it depends on how many docs we are talking about. A few k? Might work. A few M? Well, not so much.
[17:46:05] <StephenLynx> i dunno, if there are that many, then your whole server probably got bogged down.
[17:47:08] <kurushiyama> I am just happy that this is none of my use cases. Way too many "if", "possibly", "maybe", "could be" to my taste.
[19:01:37] <johnnyfive> Howdy. I'm having an issue with authentication. I enabled authentication, and I have a user with the "root" role. However when trying to login, I get "[conn9] SCRAM-SHA-1 authentication failed for admin on admin from client 127.0.0.1 ; TypeMismatch: Roles must be objects."
[19:03:19] <kurushiyama> johnnyfive: can you show us how you try to login?
[19:03:19] <johnnyfive> The entry for the user is the following: { "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, etc..} }, "roles" : [ "root" ] }
[19:04:05] <johnnyfive> kurushiyama| i'm using MongoChef, but I can try on the local shell too. Sec
[19:10:32] <johnnyfive> ugh, from the shell it just tells me authentication failed. This is a mess
[19:16:28] <johnnyfive> well I was using robomongo exclusively, but I can't figure out this roles issue so was hoping a gui tool might be able to fix this mess, but so far that's also a no go.
[19:17:07] <StephenLynx> yeah, GUI tools usually are bad for mongo
[19:17:33] <kurushiyama> Haven't seen one I trust, tbh.
[19:17:43] <johnnyfive> so any of you have the line to create a user with all permissions on the box?
[19:17:51] <johnnyfive> or willing to help me make one?
[19:21:25] <kurushiyama> johnnyfive: You need to authenticate against the database where the user is defined. But with that role, you have said permissions.
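[Editor's note] A sketch of what johnnyfive needed, as a hypothetical spec for `db.createUser()` run while on the `admin` database. In the stored user document every roles entry is a `{ role, db }` object; a bare string like `"root"` in `system.users` (as in his pasted doc) is what triggers "TypeMismatch: Roles must be objects":

```javascript
// Hypothetical admin user spec; the password is a placeholder.
// Each roles entry is a { role, db } document in the stored form --
// createUser() accepts plain strings only as shorthand for roles on
// the current database and expands them itself.
const adminUserSpec = {
  user: "admin",
  pwd: "changeme",
  roles: [{ role: "root", db: "admin" }]
};
// In the shell:
//   use admin
//   db.createUser(adminUserSpec)
// ...then authenticate against the db the user was defined in:
//   mongo admin -u admin -p changeme
```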
[19:21:38] <johnnyfive> oh fml, that's what i'm missing
[19:23:37] <johnnyfive> man that's not intuitive. but got it to work, thanks
[19:24:15] <johnnyfive> In fact the examples in the docs are wrong: https://docs.mongodb.org/v2.6/reference/method/db.createUser/#db.createUser
[19:24:41] <johnnyfive> oh that's cause it's 2.6. my bad. Thanks guys
[19:28:30] <kurushiyama> johnnyfive: Again: rather be safe and ask than simply do and be sorry ;)
[19:52:45] <cpama> hi all. just wondering if someone can help me out with this post: http://stackoverflow.com/questions/36651282/cant-start-mongodb-on-ubuntu?noredirect=1#comment60899882_36651282
[19:53:32] <cpama> if it is true that there is no support for ubuntu 15... maybe I have to set up a vm with an older version of ubuntu
[19:53:53] <cheeser> ubuntu 15 is not officially supported, no.
[20:21:20] <kurushiyama> Well, the first computer my little boy will get will be in parts. And he'll get the tools to plug it together. Plus a Linux boot CD. Alpine, if he was a bad boy. ;)
[20:22:08] <cheeser> as much as i love putting computers together, there's not much room in my brooklyn apartment for that kind of shenanigans. ;)
[20:23:10] <cagmz> I started using linux at around 12. I was on AOL chat and asked a local LUG member for a live cd and he mailed several to me (xubuntu, lubuntu, ubuntu desk/server). it was ubuntu 5 i think
[20:23:32] <kurushiyama> cheeser: I see. Well, we live in a very small flat by choice. Small Home Movement, smaller footprint and such.
[20:24:21] <kurushiyama> cagmz: Go and learn docker, asap! ;)
[20:24:55] <kurushiyama> cagmz: Or, even better: Learn Go and Docker, asap! ;P
[20:27:13] <cagmz> is docker useful for personal use? it seems very enterprise
[20:30:28] <StephenLynx> a runtime environment, a database
[20:30:48] <StephenLynx> docker has that whole layer of things on top of emulating a computer
[20:32:00] <kurushiyama> StephenLynx: Not quite. Actually, it is paravirtualization and a good way of enforcing resource separation. Aside from the fact that it can be used to eliminate Dev's ultimate lie. ;)
[20:32:27] <StephenLynx> why would I bother about resource separation?
[20:32:44] <StephenLynx> and what's dev ultimate lie?
[20:33:14] <StephenLynx> this is why you configure the VM the way the production environment should be.
[20:33:31] <StephenLynx> so you don't have any surprises on deploy.
[20:34:13] <cagmz> so with docker, an os is installed once, and docker containers share the same OS? compared to running N vm's, where each has its own OS
[20:34:40] <kurushiyama> StephenLynx: Ignoring configuration, integration and other things? I am not talking of smallish things. I talk of systems taking 3 days for install, if each part is done manually.
[20:34:57] <kurushiyama> cagmz: Not quite. It uses the same kernel.
[20:35:03] <StephenLynx> why would it take 3 days to install?
[20:36:00] <kurushiyama> StephenLynx: We had a distributed system of like 15 different node types, to be run on a customer's cloud which, to put it politely, was not exactly adhering to standards.
[20:36:15] <cpama> drats. i feel like a missed a good chunk of an interesting conversation here.
[20:36:27] <cpama> just enabled VT and started my box
[20:36:42] <cpama> now going to try to create vm using 64 bit version of 14.04
[20:36:42] <kurushiyama> StephenLynx: Moved to Docker, and we reduced those times to a few hours.
[20:36:59] <kurushiyama> cpama: With Docker? or a real VM?
[20:37:09] <cpama> well, i'm going to take baby steps
[20:37:19] <cpama> going to try with what i have ... which is virtualbox
[20:37:34] <cpama> and then i will try out docker on a day when i have more time to play around.
[20:37:47] <cpama> don't feel like exerting too much brain power
[20:38:07] <jr3> if I have an arrays of object in a doc and I index that array does that help with property searchs on that array or is it still going to be collection scan
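[Editor's note] jr3's question went unanswered in the log. Indexing a dotted path into an array of subdocuments creates a multikey index, so property searches on the array can use the index rather than a collection scan. A sketch, with made-up names ("orders", "items.sku"):

```javascript
// Multikey index sketch: MongoDB indexes one entry per array element
// for the dotted path, so equality matches on that property are index
// lookups, not collection scans. Names are hypothetical.
const indexSpec = { "items.sku": 1 };   // db.orders.createIndex(indexSpec)
const query = { "items.sku": "A-100" }; // db.orders.find(query) can use it
```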
[20:38:18] <kurushiyama> StephenLynx: Don't ask why we needed to have 15 node types. It was due to some obscure supposedly legal requirements of the customer.
[20:41:09] <AMDPhenomX4Q> Hello, I need to access a property of an object which is an array, then access a property at a specific index of an array. I've been stuck at the array itself for a few hours and haven't figured out how to access a specific index of it. Any suggestions?
[20:42:15] <cpama> arg. ok so I downloaded ubuntu-14.04.4-desktop-amd64.iso and tried to set up the vm. still getting error "This kernel requires an x86-64 CPU, but only detected an i686 CPU"
[20:42:28] <cpama> now going to try again with 32 bit iso
[20:43:35] <AMDPhenomX4Q> I can't but it must be something easy. I have this code http://hastebin.com/hugakisozo.css and so I have the array. Basically I've been stuck at accessing an index of it.
[20:43:50] <kurushiyama> cpama: What machine do you have? Do not get me wrong, but it sounds a bit like archeology...
[20:44:32] <kurushiyama> AMDPhenomX4Q: Maybe you want to show us a sample doc and the complete query you do?
[20:45:33] <AMDPhenomX4Q> http://hastebin.com/obicuwaweq.vhdl is the actual database structure, and the query I want to do is adding an ID to the likecounter of a comment
[20:46:26] <kurushiyama> AMDPhenomX4Q: http://blog.mahlberg.io/blog/2015/11/05/data-modelling-for-mongodb/ Just in case. You overembedded, imho.
[20:46:28] <cpama> going home now. thanks everyone for your help
[20:46:45] <cpama> if i still feel inclined on monday to retry and am having troubles, i might be back.
[20:46:53] <cpama> hope that doesn't sound like a threat. ;)
[20:47:01] <kurushiyama> cpama: Good luck! If you happen to find some Roman coins: I collect them!
[20:47:22] <cpama> kurushiyama, wow that was pretty random!
[20:48:52] <AMDPhenomX4Q> kurushiyama: Unfortunately I cannot change it sadly.
[20:49:19] <kurushiyama> AMDPhenomX4Q: Well, the thing is that afaik, the order of arrays is not guaranteed.
[20:50:18] <AMDPhenomX4Q> All I want to do is access a specific comment inside the comments array. At this point I don't care if it doesn't work fully.
[20:50:51] <kurushiyama> AMDPhenomX4Q: So, calm down and describe the use case in full. On what criteria do you want to access said comment?
[20:53:22] <kurushiyama> AMDPhenomX4Q: Wait a sec, this http://hastebin.com/obicuwaweq.vhdl is all a _single_ doc?
[20:53:40] <AMDPhenomX4Q> So I have a comment ID which is the index of the comment that someone has clicked on. I need to add the user who clicked it to the likeCounter array inside that comment.
[21:02:25] <kurushiyama> AMDPhenomX4Q: And that is what I tried to tell you from the start: There is no guarantee in array order. It may well be that MongoDB returns the comments array in one order, and depending on what happens in node and/or react, it might have a totally different order then.
[21:03:07] <AMDPhenomX4Q> How would I return a comment without caring about the order?
[21:03:32] <kurushiyama> AMDPhenomX4Q: With a different data model, which does not rely on the rather implicit array index.
[21:05:12] <kurushiyama> AMDPhenomX4Q: you might even sort the array on output, say by date. In that case, you can halfway rely on the sort order in the backend (let's hope nobody made or deleted a comment meanwhile).
[21:05:55] <AMDPhenomX4Q> I could, but I still need to figure out how to get access to an index. I read things like comments.0 and such but those gave me errors.
[21:08:31] <AMDPhenomX4Q> So could you please see if there's a specific documentation on what I'm trying to do or what I'm doing wrong. Thanks.
[21:10:13] <kurushiyama> Well, I can not help you with node (I avoid JS as much as I possibly can). Digging through the docs because I literally never needed that.
[21:19:57] <kurushiyama> AMDPhenomX4Q: And all you have is the array index of the subdoc? Or do you have a value of the maindoc by which you can query?
[21:22:25] <AMDPhenomX4Q> So I already have the actual feeditem
[21:22:37] <AMDPhenomX4Q> so I've failed to push the element to the mini array in various ways such as
[21:26:05] <kurushiyama> ok, the problem is that we need to identify the array item somehow, since the so-called positional param "$" requires that. I think. Still trying to find a solution.
[21:26:39] <kurushiyama> Wait, that looks different.
[21:34:52] <kurushiyama> AMDPhenomX4Q: Hope this helps: http://pastebin.com/3y7v8LTx
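[Editor's note] The pastebin contents are not preserved. One way to do what AMDPhenomX4Q describes is to address the comment by its array index (the "comments.0" style he tried), assuming the structure from his paste; "feedItemId" and "userId" are placeholders, and as kurushiyama notes, the array position is a fragile identifier — a per-comment _id would be safer:

```javascript
// Push a user id onto the likeCounter array of the comment at a known
// index. The dotted path "comments.<i>.likeCounter" targets the nested
// array directly; all names here are hypothetical placeholders.
const commentIndex = 2;
const field = "comments." + commentIndex + ".likeCounter";
const update = { $push: { [field]: "userId" } };
// In the shell: db.feeditems.updateOne({ _id: "feedItemId" }, update)
```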
[21:38:45] <kurushiyama> AMDPhenomX4Q: Regardless, your data model has several problems, not the least of them being complicated CRUD, as proven.
[21:39:08] <kurushiyama> AMDPhenomX4Q: You should really think about that.
[21:39:10] <AMDPhenomX4Q> I'm still working on learning your example, but yea, I can't do much about the data model.
[21:39:22] <AMDPhenomX4Q> It's under someone else's control
[21:40:13] <kurushiyama> AMDPhenomX4Q: Then he or she should think about it. Really. Ok, I have to go to bed. That was a query I should have gotten right in seconds.