[00:58:11] <DrPeeper> That is a piece of the document I have in MongoDB. When I try db.alertElements.ensureIndex( {'alert.info.area.polygon':'2dsphere'}); it throws an error: "errmsg" : "Can't extract geo keys.
[00:59:37] <DrPeeper> I'm trying to set a geospatial index
[01:00:11] <DrPeeper> oh here's a piece of the puzzle: longitude/latitude is out of bounds, lng: 47.56 lat: -121.96",
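That bounds error is a strong hint that the polygon's coordinate pairs are stored as [latitude, longitude]: GeoJSON requires [longitude, latitude], with longitude in [-180, 180] and latitude in [-90, 90], and -121.96 is not a valid latitude. A minimal sketch of a fix-up (the ring values are made up for illustration):

```javascript
// GeoJSON polygons take coordinates as [longitude, latitude]; lat -121.96
// is outside [-90, 90], suggesting the pairs were stored as [lat, lng].
function swapRing(ring) {
  return ring.map(([a, b]) => [b, a]);
}

const storedRing = [
  [47.56, -121.96],
  [47.56, -121.90],
  [47.60, -121.90],
  [47.56, -121.96], // first and last point must match to close the ring
];

const fixed = swapRing(storedRing);
console.log(fixed[0]); // [-121.96, 47.56]
```

After rewriting the stored polygons this way, the 2dsphere index build should no longer reject them on bounds.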
[09:03:28] <silmeth> Are there any Morphia experts/devs here? :) I have a question: if a mapped class @Embeds objects of another class – do I need to tell Morphia to map those embedded classes, or will it map them automatically, following the @Embedded annotations?
[10:44:38] <silmeth> OK, from what I see, it seems that Morphia can map Embedded classes all by itself, that’s great. :)
[11:16:50] <ddowlingACT> It works on 2.6, but not 2.7 or 3, though I was under the impression that the localhost exception was still in place
[11:16:55] <ddowlingACT> 10:54:46.846 [main] DEBUG org.mongodb.driver.protocol.command - Sending command {createUser : BsonString{value='testUser'}} to database admin on connection [connectionId{localValue:2, serverValue:2}] to server localhost:12346
[11:16:57] <ddowlingACT> 2016-02-19T10:54:46.849+0000 I ACCESS [conn2] Unauthorized not authorized on admin to execute command { createUser: "testUser", pwd: "xxx", roles: [ { role: "dbOwner", db: "admin" }, { role: "dbOwner", db: "catalogue-service" } ] }
[11:17:16] <kurushiyama> ddowlingACT: And don't paste output ;)
[11:17:39] <ddowlingACT> I cannot get to pastebin or the like from work, so I have no other way of showing output :(
[11:17:42] <kurushiyama> ddowlingACT: So, one step after the other. You enabled auth and created a user?
[11:19:03] <ddowlingACT> I start a mongo with auth enabled. Then, I run the command to create a user in the admin database. This is all done programmatically with MongoClient.getDatabase().runCommand()
[11:19:58] <kurushiyama> Ok, here is the problem, if I get it right: After the first user is created, there is no localhost exception any more.
[11:20:18] <ddowlingACT> Yes, I understand that, but this is the command to create the first user
[11:20:29] <ddowlingACT> There are no users prior to the command being run
[11:20:37] <kurushiyama> Hence, the first user to be created should have root rights or userAdminAnyDatabase
[11:20:54] <ddowlingACT> ah, so the roles required have changed?
[11:21:30] <kurushiyama> ddowlingACT: I actually do not know, but it might well be that mongod is smart enough here.
[11:21:55] <kurushiyama> ddowlingACT: I'd try to create an admin user first, and then the dbadmin user.
[11:22:52] <kurushiyama> ddowlingACT: Oh, btw: for a user only having roles in a single DB, creating that user in the admin database is not really reasonable.
[11:23:30] <ddowlingACT> kurushiyama: It's legacy support from when you could only make users in the admin database
[11:24:53] <ddowlingACT> I'm not sure what version we were supporting at the start...
[11:25:40] <ddowlingACT> But this is test code and does not reflect the actual usage, where users will be created manually
[11:26:01] <kurushiyama> ddowlingACT: If it isn't 2.2 (and even there I am not sure), users are tied to _a_ database, but not necessarily "admin"
[11:26:51] <kurushiyama> ddowlingACT: Ok, back to your prob. Try to create a user with root or userAdminAnyDatabase before adding the dbadmin user.
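The ordering kurushiyama suggests can be sketched as two runCommand documents, issued in sequence while the localhost exception is still open. This is an illustrative sketch only: the admin user name and passwords are placeholders, not anything from the log.

```javascript
// First command: create a user-admin via the localhost exception.
// This is the user that "uses up" the exception, so it must be able
// to create further users.
const createAdmin = {
  createUser: "siteAdmin", // placeholder name
  pwd: "changeMe",         // placeholder password
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }],
};

// Second command: create the per-database owner, authenticated as
// siteAdmin rather than via the localhost exception.
const createDbOwner = {
  createUser: "testUser",
  pwd: "xxx",
  roles: [{ role: "dbOwner", db: "catalogue-service" }],
};
```

In the Java driver each document maps onto `getDatabase("admin").runCommand(new Document(...))`, with the second call made on an authenticated client.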
[11:30:07] <kurushiyama> ddowlingACT: Just checked. Even in 2.2, users in general were tied to _a_ database.
[11:30:20] <ddowlingACT> Okay, I can create the user now, thanks. Just have to figure out: "Failed to authenticate testUser@admin with mechanism MONGODB-CR: AuthenticationFailed: MONGODB-CR credentials missing in the user document"
[11:30:32] <ddowlingACT> I guess I can't auth in the same way in 3
[11:31:34] <ddowlingACT> kurushiyama: It may just have been the tutorial we used, or another dependency. I really can't remember, this is code I haven't touched in a year
[11:31:37] <kurushiyama> ddowlingACT: It is configurable which auth mechanism should be used.
[11:32:08] <kurushiyama> ddowlingACT: So, for your product, you should change it.
[11:34:02] <ddowlingACT> kurushiyama: Okay, I'll look into that
[11:36:36] <ddowlingACT> Ah, I think I am trying to use MONGODB-CR when new users only generate SCRAM-SHA-1 credentials in 3.0+ ?
[12:01:27] <ddowlingACT> kurushiyama: It essentially allows a managed mongo process for testing
[12:01:32] <kurushiyama> Ben_1: Indices are designed to live in memory and point to the data, which might or might not be in memory.
[12:02:16] <ddowlingACT> kurushiyama: but it runs it up with a temp location... and I'm not entirely sure how one would start it, stop it, change the command line args, and start it again!
[12:02:25] <Ben_1> kurushiyama: that's true but if I use other attributes as index I don't need _id as an index, and 33 million entries need a lot of memory just indexing _id
[12:03:01] <kurushiyama> Ben_1: The short answer: so make good use of _id ;)
[12:05:50] <Ben_1> after inserting 33 million entries several gigabytes are still in use
[12:06:10] <kurushiyama> Ben_1: Used RAM is good RAM
[12:06:11] <Ben_1> after stopping mongod they are freed
[12:06:25] <kurushiyama> Ben_1: Do you use wiredTiger?
[12:06:31] <Ben_1> kurushiyama: not if I wanna start another insert process of 33 million entries :P
[12:06:42] <kurushiyama> Ben_1: Actually, very much so
[12:07:20] <Ben_1> kurushiyama: never heard of wiredTiger so no, not intentionally
[12:07:33] <kurushiyama> Ben_1: Ok, answer my questions, please. A) Do you use WT? B) does the system swap, C) Which OS, D) How big is your overall data?
[12:09:10] <Ben_1> A.) I don't know but I don't think so B.) there is a swap partition so I think yes C.) fedora 22 D.) about 4 or 5 gigabyte, but I can look if you want the specific amount
[12:10:42] <Ben_1> if it's enabled per default I use it
[12:12:22] <kurushiyama> Ok, having a swap partition does not necessarily mean you swap. There is a paradigm in UNIX/(GNU-)Linux: put as many resources as possible to _good_ use. Unlike Windows, which swaps pages out of physical RAM although there is plenty left.
[12:13:18] <kurushiyama> So, WT takes 50% of the available RAM for its (badly named) cache.
[12:13:35] <Ben_1> I have 4 or 6 gigabytes of ram left so I think fedora will not swap
[12:13:39] <kurushiyama> Plus, there are indices, which for performance reasons are kept in RAM, too.
[12:14:44] <Ben_1> now I start the insert process again and I show you the memory workload
[12:14:48] <kurushiyama> So, why the heck do you want to fiddle with your RAM? Premature optimization is the root of all evil, as Donald Knuth correctly stated once.
[12:15:22] <kurushiyama> Especially on a Workstation?
[12:16:15] <ddowlingACT> kurushiyama: Thanks for your time, that seems to have worked!
[12:17:59] <Ben_1> kurushiyama: because this is a development machine and the future production machine should use as little RAM as possible. And as you said, it is not normal that mongod holds gigabytes of RAM minutes after the insertion process
[12:19:13] <Ben_1> that's why I'm searching for a bug in my applications database layer that causes this problem
[12:19:39] <kurushiyama> Ben_1: Wrong approach. I get worried if my RAM utilization drops below 75%
[12:20:12] <kurushiyama> Ben_1: It is because.... of the working set.
[12:20:17] <Ben_1> kurushiyama: what kind of attitude is that xD? If this RAM is not really needed it should not be used
[12:20:29] <Ben_1> other applications could use my free memory
[12:20:33] <kurushiyama> Ben_1: Knowing how MongoDB works, it should
[12:20:55] <Ben_1> you said it is not normal for mongodb holding so much memory
[12:21:18] <Ben_1> and if it's not normal there is a bug somewhere
[12:21:23] <kurushiyama> Ben_1: Nope. MongoDB should always run on dedicated machines. If you do not adhere to this principle, it is a conscious decision, and you need to live with the drawbacks
[12:21:34] <kurushiyama> And I like to use what I pay for.
[12:22:18] <kurushiyama> Ben_1: If your dimensions are right, the usual utilization of RAM is between 80 and 95%
[12:24:39] <kurushiyama> Ben_1: You are trying to optimize a system you do not really understand. My advice is: develop your application, follow MongoDB best practices and let it take care of itself.
[12:25:28] <kurushiyama> Ben_1: I have some collections which contain billions (note the s) of documents. Following your logic, those can't even exist.
[12:26:40] <Ben_1> kurushiyama: that's why I try to understand this behavior :P
[12:26:59] <kurushiyama> Ben_1: take a class in MongoU, then. A good starting point.
[12:28:02] <kurushiyama> Ben_1: Advanced deployment and operations is where you really learn it, but I suggest taking MongoDB for DBAs first, and the developer class suiting your programming language.
[12:49:56] <basiclaser> hi all, `can't create ActualPath object from path` appears when i try to mongorestore a docker mongo instance. any ideas?
[12:50:40] <basiclaser> normally i can run mongorestore at the root directory of the mongo DB dump folder, and it automatically restores from there.
[14:33:41] <sedavand> what do they call MongoRegex-class in the new PHP mongodb lib?
[14:55:21] <zell> hmmm, I think while I upgraded my mongodb version I overwrote my database
[14:56:22] <kurushiyama> zell: That has never happened to me, and I have done a couple hundred upgrades so far. What exactly did you do, and what problems are you experiencing?
[14:56:54] <zell> kurushiyama: I switched my deployment from capistrano to ansible
[14:57:36] <zell> kurushiyama: I'm using https://github.com/UnderGreen/ansible-role-mongodb
[14:57:40] <kurushiyama> zell: Which is not really a MongoDB problem ;)
[15:01:39] <kurushiyama> zell: Nope. Contribute to a project of choice. If you can not, make a donation to a charitable organisation of choice. If you can't: np. yvw.
[16:20:59] <kurushiyama> nils_: Sure. In a script. https://gist.github.com/mwmahlberg/dc128d557c0c3c13a0ec
[16:21:28] <kurushiyama> nils_: very crude version, but works
[16:22:07] <nils_> yeah it's not really ideal, probably best to leave the burden of authenticating up to the admin then.
[16:23:28] <kurushiyama> nils_: either way, you store the credentials somewhere. Not much difference to a dot file
[16:27:33] <nils_> yeah, I could of course then alias mongo to that script or something, however that would then potentially leave the credentials visible in ps...
[18:25:59] <Ben_1> StephenLynx: I have entries for devices with attributes like IP, serial, label etc (about 5000 device entries). Separated from them I have millions of sensor entries and I want to associate these entries with their related device. Is this reasonable in mongodb?
[18:26:17] <StephenLynx> what kind of relation is that?
[18:33:29] <Ben_1> StephenLynx: dunno some kind of storing the objectID of that device I think
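Storing the device's ObjectId in each sensor entry is a manual reference, and it is a reasonable shape for this ratio (a few thousand devices, millions of readings). A sketch with invented field names and a string standing in for the ObjectId:

```javascript
// Hypothetical manual reference: each sensor reading stores the _id of
// its device. Field names are illustrative, not from the log.
const device = {
  _id: "56c70a2fe4b0aa1d0c9baf01", // an ObjectId in a real deployment
  ip: "10.0.0.17",
  serial: "SN-0042",
  label: "rack-3-temp",
};

const reading = {
  deviceId: device._id, // manual reference back to the device document
  value: 21.4,
  takenAt: new Date("2016-02-19T12:00:00Z"),
};
```

Readings for one device are then fetched with `db.readings.find({ deviceId: ... })`; an index on `deviceId` (possibly compound with the timestamp) keeps that query cheap.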
[19:05:44] <dlam> hey how would you go about "make mongodb not use a lot of memory?" (im provisioning a server and know almost nothing about mongo)
[19:06:06] <dlam> maybe theres like a .conf file or --max-memory argument or something
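There is such a knob for the WiredTiger engine, which by default sizes its cache to roughly half of RAM: `--wiredTigerCacheSizeGB` on the command line, or the equivalent config-file setting. A mongod.conf sketch (the 1 GB value is purely illustrative, and too small a cache will hurt throughput):

```yaml
# mongod.conf sketch: cap the WiredTiger cache
storage:
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
```

Note this caps only the engine cache; mongod still uses additional memory outside it, and the MMAPv1 engine has no comparable limit.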
[22:29:14] <kurushiyama> I have a funky problem. On the shell, not a single collection of a database is shown any more. But: the data is accessible
[22:29:57] <kurushiyama> Engine is WT, version 3.0.9
[22:30:18] <StephenLynx> you are on the wrong database
[22:30:34] <synapse> Boomtime: I put the db from mongodump into the db folder, but when I log in to the shell it's not there. Is there something else I need to do to make mongo see the dbs?
[22:31:13] <Boomtime> it is the opposite of mongodump
[22:32:00] <synapse> I'm importing it to a windows mongo
[22:32:23] <Boomtime> given that the storage engine can change it is not a good idea to depend on the dbpath files - mongodump/mongorestore is dependable
[22:32:48] <synapse> meaning it wont work due to path incompatibilities?
[22:34:52] <synapse> mongoexport/mongoimport might be better
[22:37:36] <StephenLynx> because dump actually outputs bson
[22:37:53] <StephenLynx> so it is able to preserve types that are not possible for export to.
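The point about BSON vs. JSON can be shown with a plain JSON round trip, which is essentially what mongoexport/mongoimport do to your data: type information such as dates is flattened into strings.

```javascript
// A plain JSON round trip loses BSON-specific types such as dates,
// which is why mongodump/mongorestore (BSON) is the dependable pair.
const doc = { createdAt: new Date("2016-02-19T10:54:46Z") };

const roundTripped = JSON.parse(JSON.stringify(doc));

console.log(doc.createdAt instanceof Date);          // true
console.log(roundTripped.createdAt instanceof Date); // false — now a plain string
```

mongoexport works around some of this with Extended JSON wrappers, but BSON dumps avoid the problem entirely.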
[22:39:14] <kurushiyama> StephenLynx: I am certainly not. And the collections aren't shown in compass either. And in the "wrong" database, the queries work regardless of the fact that the collections aren't shown
[22:39:33] <kurushiyama> "wrong" database, of course
[22:39:57] <StephenLynx> show collections doesn't give you anything?
[22:45:50] <kurushiyama> Thanks, Boomtime and StephenLynx, I owe you one
[22:47:16] <Boomtime> kurushiyama: beer, i accept beerpal credits.. somebody needs to invent that
[22:54:59] <Tachaikowsky> can I get some help on making this work? https://www.npmjs.com/package/mongoose-friends I am using it on my node.js and keep getting callback not defined
[22:56:20] <StephenLynx> yeah, mongoose is bugged as hell
[22:56:32] <StephenLynx> I try to warn people to avoid it like the plague over #mongodb
[22:56:38] <StephenLynx> oh wait, this is #mongodb
[22:56:52] <Tachaikowsky> LOL, you are funny. But this is just me being ignorant
[23:07:28] <cruisibesares> hey does anyone know off hand if you can run a replica set and hang slaves off of that set? I'm having issues with putting all the nodes in the repl set for several reasons and I would love to be able to have an HA setup in my core and hang read-only slaves off the ends that can come and go without the master knowing
[23:12:57] <cruisibesares> i found this from a while back and it seems like the answer is no
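The closest supported equivalent to "slaves hanging off the set" is to keep those nodes in the replica set config but make them priority 0 (never electable) and optionally hidden (invisible to normal client reads). A sketch of such a config document, with placeholder host names, as you would pass it to rs.reconfig():

```javascript
// Replica set config sketch: two electable "core" members plus one
// hidden, non-electable member for reporting-style reads.
const rsConfig = {
  _id: "rs0",
  members: [
    { _id: 0, host: "core-a:27017" },
    { _id: 1, host: "core-b:27017" },
    { _id: 2, host: "reporting:27017", priority: 0, hidden: true },
  ],
};
```

Members still cannot come and go without the set knowing, which matches the "answer is no" finding: replication membership is tracked by the set itself.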
[23:31:30] <synapse> is anyone here familiar with the example restaurants dataset supplied on the mongo intro page?
[23:31:46] <synapse> Well as you will know it's very large
[23:33:55] <synapse> I have an AJAX call and a query using a regex that searches a field with a "contains" pattern match (so it is quite taxing). In fact if I do more than 4 queries rapidly it kills the entire server. The results are enormous, as a query occurs on each keyup. Besides some client-side limiting, what should be done server side? And is such a thing normal?
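One query per keyup is the first thing to fix on the client: a debounce fires the search only after the user pauses. A minimal sketch (the delay value is arbitrary):

```javascript
// Debounce sketch: call fn only after `delay` ms with no further
// invocations, collapsing a burst of keyup events into one search.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Usage: input.addEventListener('keyup', debounce(runSearch, 300));
```

Server side, an unanchored "contains" regex cannot use an index and scans the whole collection; an anchored prefix regex (`/^term/`) or a text index, plus a `limit()` on the results, keeps each query cheap.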
[23:35:33] <synapse> I know it's friday night and most normal people are out enjoying life and I'm sat here breaking my mongo server
[23:35:46] <synapse> But there must be at least one person as sad as me?