#mongodb logs for Friday the 19th of February, 2016

[00:34:38] <DrPeeper> my raid card didn't come in :(
[00:37:37] <DrPeeper> RAWR!
[00:52:31] <DrPeeper> am I missing something? is there something blatantly wrong with this: http://pastebin.com/pVyh9wx4
[00:57:20] <joannac> DrPeeper: umm, context?
[00:57:38] <DrPeeper> hahaha yeah, I don't do the whole "context" thing
[00:57:45] <DrPeeper> :D :D
[00:58:11] <DrPeeper> That is a piece of the document I have in mongoDB. when I try to db.alertElements.ensureIndex( {'alert.info.area.polygon':'2dsphere'}); it throws an error of: "errmsg" : "Can't extract geo keys.
[00:59:37] <DrPeeper> I'm trying to set a geospatial index
[01:00:11] <DrPeeper> oh here's a piece of the puzzle: longitude/latitude is out of bounds, lng: 47.56 lat: -121.96",
[01:00:15] <DrPeeper> :/
[01:00:47] <joannac> "The coordinate order is longitude, then latitude."
[01:00:51] <DrPeeper> yes
[01:02:48] <DrPeeper> I swapped on accident :)
[01:03:04] <DrPeeper> standard WGS84
[01:03:33] <DrPeeper> thanks joannac
[01:08:15] <DrPeeper> new error! oh, it seemed to work now! :D
[01:08:33] <DrPeeper> yay!
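A minimal shell sketch of the fix discussed above, assuming the collection and field name from DrPeeper's paste; the coordinate values are illustrative. GeoJSON stores positions as [longitude, latitude], which is why the swapped pair fell outside the valid latitude range:

    // polygon ring in [lng, lat] order; the first and last positions must be identical
    db.alertElements.insert({
        alert: { info: { area: { polygon: {
            type: "Polygon",
            coordinates: [[
                [-121.96, 47.56],
                [-121.95, 47.56],
                [-121.95, 47.57],
                [-121.96, 47.56]
            ]]
        } } } }
    })
    db.alertElements.ensureIndex({ "alert.info.area.polygon": "2dsphere" })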
[02:11:44] <Trinity> what is the operator for something like... "id" is not [object, object, object]
[02:11:51] <Trinity> where object is id values?
[02:12:31] <joannac> $nin ?
[02:16:19] <Trinity> ill check it out thanks
[02:16:42] <Trinity> awesome it is. thanks joannac :D
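A hedged sketch of the $nin query joannac pointed to; the collection name and ObjectId values are made up:

    // match documents whose _id is NOT in the given list of ids
    db.things.find({ _id: { $nin: [
        ObjectId("56c6f3a1e4b0a1b2c3d4e5f6"),
        ObjectId("56c6f3a1e4b0a1b2c3d4e5f7")
    ] } })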
[03:04:40] <SirFunk> Is there a mongodb conference going on this weekend?
[03:05:28] <SirFunk> Ahh, Elastic{ON} :) There are signs all over the hotel I'm at
[09:01:25] <silmeth> hi
[09:03:28] <silmeth> Are there any Morphia experts/devs here? :) I have a question: if there is a mapped class which @Embeds objects of another class, do I need to tell Morphia to map those embedded classes too, or will it map them automatically by following the @Embedded annotations?
[10:44:38] <silmeth> OK, from what I see, it seems that Morphia can map Embedded classes all by itself, that’s great. :)
[11:14:44] <ddowlingACT> Hello
[11:15:08] <kurushiyama> ddowlingACT: Hoi!
[11:15:33] <ddowlingACT> I was wondering if anyone could help me, having a slight problem with setting up an admin user.
[11:16:36] <kurushiyama> ddowlingACT: Dont ask to ask, just ask.
[11:16:49] <kurushiyama> ddowlingACT: ;)
[11:16:50] <ddowlingACT> It works on 2.6, but not 2.7 or 3, though I was under the impression that the localhost exception was still in place
[11:16:55] <ddowlingACT> 10:54:46.846 [main] DEBUG org.mongodb.driver.protocol.command - Sending command {createUser : BsonString{value='testUser'}} to database admin on connection [connectionId{localValue:2, serverValue:2}] to server localhost:12346
[11:16:57] <ddowlingACT> 2016-02-19T10:54:46.849+0000 I ACCESS [conn2] Unauthorized not authorized on admin to execute command { createUser: "testUser", pwd: "xxx", roles: [ { role: "dbOwner", db: "admin" }, { role: "dbOwner", db: "catalogue-service" } ] }
[11:17:16] <kurushiyama> ddowlingACT: And dont paste output ;)
[11:17:39] <ddowlingACT> I cannot get to pastebin or the like from work, so I have no other way of showing output :(
[11:17:42] <kurushiyama> ddowlingACT: So, one step after the other. You enabled auth and created a user?
[11:19:03] <ddowlingACT> I start a mongo with auth enabled. Then, I run the command to create a user in the admin database. This is all done programmatically with MongoClient.getDatabase().runCommand()
[11:19:58] <kurushiyama> Ok, here is the problem, if I get it right: After the first user is created, there is no localhost exception any more.
[11:20:18] <ddowlingACT> Yes, I understand that, but this is the command to create the first user
[11:20:29] <ddowlingACT> There are no users prior to the command being run
[11:20:37] <kurushiyama> Hence, the first user to be created should have root rights or userAdminAnyDatabase
[11:20:54] <ddowlingACT> ah, so the roles required have changed?
[11:21:30] <kurushiyama> ddowlingACT: I actually do not know, but it might well be that mongod is smart enough here.
[11:21:55] <kurushiyama> ddowlingACT: I'd try to create an admin user first, and then the dbadmin user.
[11:22:52] <kurushiyama> ddowlingACT: Oh, btw: for a user that only has roles in a single DB, creating that user in the admin database is not really reasonable.
[11:23:30] <ddowlingACT> kurushiyama: It's part of legacy support from when you could only make users in the admin database
[11:23:49] <kurushiyama> ddowlingACT: In 2.6?
[11:24:26] <ddowlingACT> kurushiyama: Yes? I think...
[11:24:32] <kurushiyama> ddowlingACT: Nope
[11:24:53] <ddowlingACT> I'm not sure what version we were supporting at the start...
[11:25:40] <ddowlingACT> But this is test code and does not reflect the actual usage, where users will be created manually
[11:26:01] <kurushiyama> ddowlingACT: If it isnt 2.2 (and even there I am not sure), users are tied to _a_ database, but not necessarily "admin"
[11:26:51] <kurushiyama> ddowlingACT: Ok, back to your prob. Try to create a user with root or userAdminAnyDatabase before adding the dbadmin user.
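A hedged shell sketch of the order kurushiyama suggests here; user names and passwords are placeholders:

    // while the localhost exception is still open, create an admin-capable user first
    use admin
    db.createUser({
        user: "siteAdmin",
        pwd: "changeMe",
        roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    })
    // then authenticate as that user and create the per-application user
    db.auth("siteAdmin", "changeMe")
    db.getSiblingDB("catalogue-service").createUser({
        user: "testUser",
        pwd: "xxx",
        roles: [ { role: "dbOwner", db: "catalogue-service" } ]
    })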
[11:30:07] <kurushiyama> ddowlingACT: Just checked. Even in 2.2, users in general were tied to _a_ database.
[11:30:20] <ddowlingACT> Okay, I can create the user now, thanks. Just have to figure out "Failed to authenticate testUser@admin with mechanism MONGODB-CR: AuthenticationFailed MONGODB-CR credentials missing in the user document"
[11:30:32] <ddowlingACT> I guess I can't auth in the same way in 3
[11:31:06] <kurushiyama> ddowlingACT: Oh, you can
[11:31:34] <ddowlingACT> kurushiyama: It may just have been the tutorial we used, or another dependency. I really can't remember, this is code I haven't touched in a year
[11:31:37] <kurushiyama> ddowlingACT: It is configurable which auth mechanism should be used.
[11:32:08] <kurushiyama> ddowlingACT: So, for your product, you should change it.
[11:34:02] <ddowlingACT> kurushiyama: Okay, I'll look into that
[11:36:36] <ddowlingACT> Ah, I think I am trying to use MONGODB-CR when new users only generate SCRAM-SHA-1 credentials in 3.0+ ?
[11:52:42] <Ben_1> good morning
[11:53:42] <kurushiyama> ddowlingACT: There should be a possibility to use the old authentication schema.
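If the old mechanism really is needed, one possibility (an assumption to verify against the 3.0 auth documentation, not something confirmed in this conversation) is to set the auth schema version back to MONGODB-CR before the first user is created:

    // run against the admin database on a fresh deployment, before any user exists
    use admin
    db.system.version.update(
        { _id: "authSchema" },
        { $set: { currentVersion: 3 } },   // 3 = MONGODB-CR, 5 = SCRAM-SHA-1
        { upsert: true }                   // the document may not exist yet on a fresh server
    )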
[11:53:45] <ddowlingACT> kurushiyama: I think I will need to use the first user to create a second user with credentials for the required databases?
[11:53:49] <kurushiyama> Ben_1: Good morning.
[11:53:59] <kurushiyama> ddowlingACT: Correct.
[11:54:46] <ddowlingACT> kurushiyama: As I am now prevented from creating the first admin with the correct credentials in one operation?
[11:58:18] <Ben_1> Is there a way to not store the _id index in memory?
[11:58:53] <kurushiyama> ddowlingACT: Restart without auth enabled. Ben_1: no
[11:59:43] <ddowlingACT> kurushiyama: Trickier than it sounds with this embedded mongo I'm using...
[11:59:58] <ddowlingACT> Looks like I can now create the correct user though! :D
[12:00:44] <kurushiyama> ddowlingACT: Embedded Mongo?...
[12:01:06] <ddowlingACT> kurushiyama: de.flapdoodle.embed.mongo
[12:01:27] <ddowlingACT> kurushiyama: It essentially allows a managed mongo process for testing
[12:01:32] <kurushiyama> Ben_1: Indices are designed to live in memory and point to the data, which might or might not be in memory.
[12:02:16] <ddowlingACT> kurushiyama: but it runs it up with a temp location... and I'm not entirely sure how one would start it, stop it, change the command line args, and start it again!
[12:02:25] <Ben_1> kurushiyama: that's true, but if I use other attributes as indexes I don't need _id as an index, and 33 million entries need a lot of memory just to index _id
[12:03:01] <kurushiyama> Ben_1: The short answer: so make good use of _id ;)
[12:03:52] <kurushiyama> Ben_1: 33m*12b
[12:05:10] <Ben_1> kurushiyama: mh you're right it seems it's something else that needs gigabytes of memory
[12:05:23] <kurushiyama> Ben_1: The long answer: Find a field suitable to make the _id.
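A small sketch of that long answer: if the data already carries a unique natural key, using it as _id avoids paying for an ObjectId index plus a second unique index. The collection and values here are made up:

    // the natural key (device id + timestamp) doubles as _id
    db.sensorReadings.insert({
        _id: "device42:2016-02-19T12:00:00Z",
        deviceId: "device42",
        value: 17.3
    })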
[12:05:29] <kurushiyama> Ben_1: WT?
[12:05:50] <Ben_1> after inserting 33million entries several gigabytes are still in use
[12:06:10] <kurushiyama> Ben_1: Used RAM is good RAM
[12:06:11] <Ben_1> after stopping mongod they are freed
[12:06:25] <kurushiyama> Ben_1: Do you use wiredTiger?
[12:06:31] <Ben_1> kurushiyama: not if I wanna start another insert process of 33 million entries :P
[12:06:42] <kurushiyama> Ben_1: Actually, very much so
[12:07:20] <Ben_1> kurushiyama: never heard of wiredTiger so no, not intentionally
[12:07:33] <kurushiyama> Ben_1: Ok, answer my questions, please. A) Do you use WT? B) does the system swap, C) Which OS, D) How big is your overall data?
[12:07:42] <kurushiyama> Ben_1: 3.2?
[12:09:10] <Ben_1> A.) I don't know but I don't think so B.) there is a swap partition so I think yes C.) fedora 22 D.) about 4 or 5 gigabyte, but I can look if you want the specific amount
[12:09:21] <Ben_1> and I'm using mongoDB 3.2.3
[12:09:49] <Ben_1> found something in the documentation about wiredtiger
[12:09:52] <Ben_1> let me read it
[12:10:19] <kurushiyama> Ben_1: You use WT
[12:10:31] <Ben_1> kurushiyama: lol ok :P
[12:10:42] <Ben_1> if it's enabled per default I use it
[12:12:22] <kurushiyama> Ok, having a swap partition does not necessarily mean you swap. There is a paradigm in UNIX/(GNU-)Linux: put as many resources as possible to _good_ use. Unlike Windows, which swaps pages out of physical RAM although there is plenty left.
[12:13:18] <kurushiyama> So, WT takes 50% of the available RAM for its (badly named) cache.
[12:13:35] <Ben_1> I have 4 or 6 gigabytes of ram left so I think fedora will not swap
[12:13:39] <kurushiyama> Plus, there are indices, which for performance reasons are kept in RAM, too.
[12:14:44] <Ben_1> now I start the insert process again and I show you the memory workload
[12:14:48] <kurushiyama> So, why the heck do you want to fiddle with your RAM? Premature optimization is the root of all evil, as Donald Knuth correctly stated once.
[12:15:22] <kurushiyama> Especially on a Workstation?
[12:16:15] <ddowlingACT> kurushiyama: Thanks for your time, that seems to have worked!
[12:17:59] <Ben_1> kurushiyama: because this is a development machine and the future production machine should use as little RAM as possible. And as you said, it is not normal that mongod holds gigabytes of RAM minutes after the insertion process
[12:19:13] <Ben_1> that's why I'm searching for a bug in my applications database layer that causes this problem
[12:19:14] <kurushiyama> ddowlingACT: ;) yvw
[12:19:39] <kurushiyama> Ben_1: Wrong approach. I get worried if my RAM utilization drops below 75%
[12:20:12] <kurushiyama> Ben_1: It is because.... of the working set.
[12:20:17] <Ben_1> kurushiyama: what kind of attitude is that xD? If this RAM is not really needed it should not be used
[12:20:29] <Ben_1> other applications could use my free memory
[12:20:33] <kurushiyama> Ben_1: Knowing how MongoDB works, it should
[12:20:55] <Ben_1> you said it is not normal for mongodb holding so much memory
[12:21:18] <Ben_1> and if it's not normal there is a bug somewhere
[12:21:23] <kurushiyama> Ben_1: Nope. MongoDB should always run on dedicated machines. If you do not adhere to this principle, it is a conscious decision, and you need to live with the drawbacks
[12:21:34] <kurushiyama> And I like to use what I pay for.
[12:22:18] <kurushiyama> Ben_1: If your dimensions are right, the usual utilization of RAM is between 80 and 95%
[12:22:42] <Ben_1> ok example scenario
[12:23:22] <Ben_1> I insert 33million entries ok? afterwards about 10gb are used.
[12:23:40] <kurushiyama> Yes, because of the working set.
[12:23:43] <Ben_1> kurushiyama: how many more times could I insert 33 million entries?
[12:23:50] <Ben_1> the working set?
[12:23:59] <kurushiyama> Ben_1: Documents recently used.
[12:24:39] <kurushiyama> Ben_1: You are trying to optimize a system you do not really understand. My advice is: develop your application, follow MongoDB best practices and let it take care of itself.
[12:25:28] <kurushiyama> Ben_1: I have some collections which contain billions (note the s) of documents. Following your logic, those can't even exist.
[12:26:40] <Ben_1> kurushiyama: that's why I try to understand this behavior :P
[12:26:59] <kurushiyama> Ben_1: take a class in MongoU, then. A good starting point.
[12:28:02] <kurushiyama> Ben_1: Advanced deployment and operations is where you really learn it, but I suggest taking MongoDB for DBAs first, and the developer class suiting your programming language.
[12:49:56] <basiclaser> hi all, `can't create ActualPath object from path` appears when i try to mongorestore a docker mongo instance. any ideas?
[12:50:40] <basiclaser> normally i can run mongorestore at the root directory of the mongo DB dump folder, and it automatically restores from there.
[13:06:03] <DrPeeper> PEW PEW PEW
[13:06:05] <DrPeeper> oops wrong window
[13:37:42] <kurushiyama> basiclaser: version?
[14:33:41] <sedavand> what do they call MongoRegex-class in the new PHP mongodb lib?
[14:55:21] <zell> hmmm, I think while I upgraded my mongodb version I overwrote my database
[14:56:22] <kurushiyama> zell: That has never happened to me, and I have done a couple of hundred updates so far. What exactly did you do, and what problems are you experiencing?
[14:56:54] <zell> kurushiyama: I switch my deployment from capistrano to ansible
[14:57:36] <zell> kurushiyama: I'm using https://github.com/UnderGreen/ansible-role-mongodb
[14:57:40] <kurushiyama> zell: Which is not really a MongoDB problem ;)
[14:57:48] <zell> yeah sorry let me explain
[14:58:12] <zell> ansible-role-mongodb added a remote package source (different from my current source)
[14:58:19] <zell> and then proceeds to an apt-get install mongodb
[14:58:33] <zell> (upgrading from 2.4.xxx to 2.6.xxx
[14:58:36] <zell> )
[14:58:42] <kurushiyama> Well, most likely it is only the dbpath which differs.
[14:59:02] <zell> kurushiyama: I checked it, and from what I remember db was in /data/db
[14:59:26] <zell> I used locate to try to track another database
[14:59:28] <kurushiyama> check /var/lib/mongodb, just to make sure.
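A hedged one-liner for confirming where a running mongod actually keeps its data instead of guessing at paths:

    // shows the command line / config the server was started with, including the dbpath
    db.adminCommand({ getCmdLineOpts: 1 })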
[14:59:37] <zell> kurushiyama: thx, gonna check it
[15:00:12] <zell> seems that you're right and that you saved my day xD
[15:00:17] <kurushiyama> zell: ;)
[15:00:29] <zell> kurushiyama: I owe you a beer :)
[15:00:45] <Ben_1> in my case the database location is /var/lib/mongo
[15:00:51] <Ben_1> :P
[15:01:39] <kurushiyama> zell: Nope. Contribute to a project of choice. If you can not, make a donation to a charitable organisation of choice. If you can't: np. yvw.
[15:04:56] <zell> Well I made a typo in my locate
[15:05:05] <zell> which explains why it didn't work ^^
[15:05:11] <zell> "panic locate"
[15:11:42] <zell> My experience was hontodo kurushi(mimasu)
[15:12:01] <zell> (not sure if joke is working)
[15:12:18] <cheeser> not so far :)
[15:13:03] <zell> I meant almost painful :P, but my Japanese isn't really good
[15:37:42] <kurushiyama> zell: My Japanese is non-existent, except for a few words and a rough translation of my nick. ;)
[16:10:06] <nils_> hi, I'm wondering if I can store default credentials for the MongoDB client in a . file somewhere.
[16:16:29] <cheeser> sure. it's your app.
[16:16:35] <cheeser> mongo won't read them but your app can
[16:18:19] <nils_> cheeser, I mean the mongo shell specifically
[16:18:29] <nils_> poor choice of words
[16:20:04] <nils_> kinda like .my.cnf in mysql
[16:20:59] <kurushiyama> nils_: Sure. In a script. https://gist.github.com/mwmahlberg/dc128d557c0c3c13a0ec
[16:21:28] <kurushiyama> nils_: very crude version, but works
[16:22:07] <nils_> yeah it's not really ideal, probably best to leave the burden of authenticating up to the admin then.
[16:23:28] <kurushiyama> nils_: either way, you store the credentials somewhere. Not much difference to a dot file
[16:27:33] <nils_> yeah, I could of course then alias mongo to that script or something, however that would then potentially leave the credentials visible in ps...
[16:38:24] <kurushiyama> nils_: Nope:
[16:38:26] <kurushiyama> 39844 s004 S+ 0:00.13 mongo localhost:30000/fooDb -u superuser -p xxxxxxxx --authenticationDatabase admin
[16:38:39] <kurushiyama> Copied and pasted
[16:38:45] <kurushiyama> from my ps
[16:39:39] <kurushiyama> Where "xxxxxxxx" is NOT my pw
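An alternative, hedged sketch (not kurushiyama's gist): the mongo shell evaluates ~/.mongorc.js on startup, so credentials kept there never show up on the command line or in ps. The user name, password and database are placeholders:

    // ~/.mongorc.js -- loaded automatically by the mongo shell after it connects
    var adminDb = db.getSiblingDB("admin");
    adminDb.auth("superuser", "s3cret");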
[17:48:22] <driti> hey @dkuebric
[17:52:39] <dkuebric> hey driti
[17:53:10] <dkuebric> good talk
[17:55:20] <Ben_1> is it possible to reference document1 in document2 without nesting documents?
[17:55:44] <StephenLynx> what kind of reference?
[17:55:48] <rom1504> don't do it
[17:56:05] <rom1504> if you want to reference stuff, use sql
[17:56:14] <StephenLynx> there are valid use cases for references in mongo.
[17:56:21] <StephenLynx> as long as you don't abuse them.
[17:57:02] <StephenLynx> its a delicate path to tread, but a valid one nonetheless.
[17:57:08] <cheeser> agreed
[18:25:59] <Ben_1> StephenLynx: I have entries for devices with attributes like IP, serial, label etc. (about 5000 device entries). Separated from them I have millions of sensor entries and I want to associate these entries with their related device. Is this reasonable in mongodb?
[18:26:17] <StephenLynx> what kind of relation is that?
[18:33:29] <Ben_1> StephenLynx: dunno, some kind of reference that stores the objectID of that device, I think
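A hedged sketch of that manual-reference pattern; collection and field names are made up. Each sensor entry stores the _id of its device, and an index on that field keeps the lookups cheap:

    var dev = db.devices.findOne({ serial: "ABC123" })
    db.sensorEntries.insert({ deviceId: dev._id, value: 42.1, ts: new Date() })
    db.sensorEntries.ensureIndex({ deviceId: 1 })
    db.sensorEntries.find({ deviceId: dev._id })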
[19:05:44] <dlam> hey how would you go about "make mongodb not use a lot of memory?" (im provisioning a server and know almost nothing about mongo)
[19:06:06] <dlam> maybe theres like a .conf file or --max-memory argument or something
[19:06:16] <StephenLynx> afaik you can't.
[19:06:25] <StephenLynx> cheeser can you confirm or deny that?
[19:06:55] <StephenLynx> that is, aside from having less data or indexes, of course
[19:06:56] <dlam> ohh ok, if it starts out not using a lot of memory that's fine (i just want to make it "not use a lot)
[19:07:33] <StephenLynx> usually having too many collections or indexes will make it use too much RAM, I guess.
[19:11:28] <dlam> ooo ok ok, it has like a 5k JSON file in it right now of just users and their emails, im guessing i dont need to do anything then)
[19:12:42] <cheeser> it tries to keep the working set in memory. https://docs.mongodb.org/v3.2/core/wiredtiger/#memory-use
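For anyone who genuinely needs to cap it, a hedged example of limiting the WiredTiger cache on 3.0+/3.2 (the value is illustrative, and shrinking the cache trades memory for performance):

    # wiredTiger is the default engine in 3.2; this caps its cache at roughly 1 GB
    mongod --dbpath /data/db --wiredTigerCacheSizeGB 1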
[21:40:21] <dlam> hey is there a good way from a shell script, to check if ``db.accounts.find()`` returns no results?
[21:40:56] <cheeser> .count() != 0
[21:41:01] <Boomtime> db.accounts.count()
[21:41:05] <Boomtime> heh, snap
[21:41:30] <Boomtime> .find().count() works fine as cheeser said too
[21:41:31] <dlam> oh yeah but im in the BASH shell
[21:41:37] <dlam> like not the 'mongo' shell
[21:41:48] <dlam> wait i think i found it, you can do `mongo --eval`
[21:41:50] <Boomtime> so invoke the mongo shell and --eval on the command-line
[21:41:59] <Boomtime> :D
[21:49:21] <cheeser> echo "db.accounts.find({...}).count()" | mongo --quiet somedb
[21:55:46] <dlam> ooo cool, i wound up with... mongo --quiet --eval "db.accounts.find().count() === 0"
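A hedged expansion of that one-liner into a shell test; the database name is a placeholder:

    COUNT=$(mongo --quiet somedb --eval "db.accounts.count()")
    if [ "$COUNT" -eq 0 ]; then
        echo "no accounts found"
    fi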
[22:02:36] <Logicgate> hey guys
[22:02:55] <Logicgate> I'm trying to build a query to fetch basically only if the time between two dates is greater than a certain amount of seconds
[22:03:02] <Logicgate> what's the most graceful way to do this?
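One hedged way to do this on 3.2 with the aggregation pipeline; the collection and field names are made up, and $subtract applied to two dates yields the difference in milliseconds:

    db.sessions.aggregate([
        { $project: {
            startedAt: 1,
            endedAt: 1,
            diffMs: { $subtract: [ "$endedAt", "$startedAt" ] }
        } },
        { $match: { diffMs: { $gt: 30 * 1000 } } }   // only gaps greater than 30 seconds
    ])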
[22:21:48] <synapse> I need a dump of a db collection to import to another mongo db, do I take the file ending in 0 or .ns?
[22:22:45] <Boomtime> wat? why are you looking at the db files? use mongodump, be safe
[22:23:04] <synapse> oh ok
[22:27:13] <synapse> nice and easu
[22:27:15] <synapse> easy
[22:29:14] <kurushiyama> I have a funky problem. On the shell, not a single collection of a database is shown any more. But: the data is accessible
[22:29:57] <kurushiyama> Engine is WT, version 3.0.9
[22:30:18] <StephenLynx> you are on the wrong database
[22:30:34] <synapse> Boomtime I put the db from mongodump into the db folder, but when I log in to the shell it's not there. Is there something else I need to do to make mongo see the dbs?
[22:31:01] <Boomtime> mongorestore
[22:31:06] <synapse> thanks
[22:31:13] <Boomtime> it is the opposite of mongodump
[22:32:00] <synapse> I'm importing it to a windows mongo
[22:32:23] <Boomtime> given that the storage engine can change it is not a good idea to depend on the dbpath files - mongodump/mongorestore is dependable
[22:32:48] <synapse> meaning it wont work due to path incompatibilities?
[22:34:52] <synapse> mongoexport/mongoimport might be better
[22:34:56] <synapse> I'll try that
[22:35:57] <StephenLynx> nope
[22:35:58] <StephenLynx> they are not.
[22:36:38] <StephenLynx> dump/restore are the best tools for backing up and restoring data
[22:37:24] <synapse> why StephenLynx?
[22:37:36] <StephenLynx> because dump actually outputs bson
[22:37:53] <StephenLynx> so it can preserve types that mongoexport can't represent.
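A hedged sketch of the dump-and-restore flow being recommended; host and database names are placeholders:

    # taken from (or pointed at) the source server
    mongodump --host source-host --db mydb --out /tmp/dump
    # replayed into the target server
    mongorestore --host target-host /tmp/dump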
[22:39:14] <kurushiyama> StephenLynx: I am certainly not. And the collections aren't shown in Compass either. And in the wrong database, the queries work regardless of the fact that the collections aren't shown
[22:39:33] <kurushiyama> "wrong" database, of course
[22:39:57] <StephenLynx> show collection doesn't give you anything?
[22:40:02] <StephenLynx> collections*
[22:40:03] <kurushiyama> StephenLynx: Aye
[22:40:11] <StephenLynx> and how you know you can access it?
[22:40:18] <kurushiyama> nor does .getCollectionNames()
[22:40:35] <kurushiyama> StephenLynx: Since I do a db.mycoll.findOne() and it returns data
[22:40:40] <kurushiyama> Expected data
[22:40:43] <StephenLynx> eh
[22:40:45] <StephenLynx> for real?
[22:40:54] <StephenLynx> this is weird.
[22:41:05] <kurushiyama> StephenLynx: No, just kidding because I like working at 11pm ;)
[22:41:15] <kurushiyama> StephenLynx: Yes, for real
[22:41:34] <StephenLynx> what version you using?
[22:41:51] <kurushiyama> StephenLynx: 3.0.9
[22:42:55] <StephenLynx> either you got one hell of a bug there or you missed something.
[22:43:16] <kurushiyama> StephenLynx: Now it gets _really_ strange. I reopened the connection with compass: Collections shown.
[22:43:22] <kurushiyama> Shell? Still nada
[22:43:31] <StephenLynx> given its 23:00 and you are using a somewhat new version, I`d double check everything.
[22:43:40] <Boomtime> kurushiyama: update your shell
[22:43:40] <StephenLynx> tried restarting the shell?
[22:43:48] <StephenLynx> and the server?
[22:43:54] <Boomtime> you are using an old version (probably pre-2.6.9) shell
[22:43:56] <kurushiyama> Yes, multiple times. 3.0.9 shell
[22:44:13] <Boomtime> please type version() at the shell (NOT db.version())
[22:44:22] <kurushiyama> Boomtime: Aye
[22:44:25] <Boomtime> in fact, type both, provide both
[22:44:53] <kurushiyama> Boomtime: Got that
[22:45:00] <kurushiyama> Boomtime: You were right
[22:45:13] <kurushiyama> my shell on my machine is 3.0.9
[22:45:34] <kurushiyama> Old shell on that machine, which was what I used to connect in an ssh session: 2.6.5
[22:45:36] <kurushiyama> Phew
[22:45:50] <kurushiyama> Thanks, Boomtime and StephenLynx, I owe you one
[22:47:16] <Boomtime> kurushiyama: beer, i accept beerpal credits.. somebody needs to invent that
[22:54:59] <Tachaikowsky> can I get some help on making this work? https://www.npmjs.com/package/mongoose-friends I am using it on my node.js and keep getting callback not defined
[22:56:20] <StephenLynx> yeah, mongoose is bugged as hell
[22:56:32] <StephenLynx> I try to warn people to avoid it like the plague over #mongodb
[22:56:38] <StephenLynx> oh wait, this is #mongodb
[22:56:52] <Tachaikowsky> LOL, you are funny. But this is just me being ignorant
[22:56:54] <Tachaikowsky> https://www.npmjs.com/package/mongoose-friends
[22:57:12] <Tachaikowsky> Can you please tell me where this goes? User.requestFriend(user1._id, user2._id, callback);
[22:57:29] <StephenLynx> no, for real.
[22:57:32] <StephenLynx> mongoose is awful.
[22:57:44] <StephenLynx> bug ridden, slow and badly documented.
[22:58:13] <Tachaikowsky> StephenLynx, what alternative do you suggest then?
[22:58:20] <StephenLynx> mongodb
[23:03:22] <StephenLynx> https://www.npmjs.com/package/mongodb Tachaikowsky
[23:03:38] <Tachaikowsky> StephenLynx, okay sir, let me take a look
[23:03:40] <StephenLynx> docs: http://mongodb.github.io/node-mongodb-native/2.1/api/
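A hedged, minimal example of the native driver linked above (2.x API; the URL, database and collection names are placeholders):

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
        if (err) throw err;
        // plain collection handle, no schema layer in between
        db.collection('users').findOne({ name: 'alice' }, function (err, doc) {
            if (err) throw err;
            console.log(doc);
            db.close();
        });
    });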
[23:07:28] <cruisibesares> hey does anyone know offhand if you can run a real set and hang slaves off of that set? I'm having issues with putting all the nodes in the repl set for several reasons and I would love to be able to have an HA setup in my core and hang read-only slaves off the ends that can come and go without the master knowing
[23:12:57] <cruisibesares> i found this from a while back and it seems like the answer is no
[23:12:57] <cruisibesares> Re: hybrid (replSet + slaves) replication options for mongo ...
[23:13:10] <cruisibesares> sorry that was
[23:13:11] <cruisibesares> http://qnalist.com/questions/5233122/hybrid-replset-slaves-replication-options-for-mongo
[23:31:30] <synapse> is anyone here familiar with the example restaurants dataset supplied on the mongo intro page?
[23:31:46] <synapse> Well as you will know it's very large
[23:33:55] <synapse> I have an AJAX call and a query using a regex which searches a field with a "contains" pattern match (so it is quite taxing). In fact, if I do more than 4 queries rapidly it kills the entire server. The results are enormous, as a query occurs on each keyup. Besides some client-side limiting, what should be done server side? And is such a thing normal?
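A hedged note on the regex question: only an anchored, case-sensitive prefix regex can use an index on the field, while an unanchored "contains" match scans the whole collection, which is why a few rapid keyup queries hurt. The field name follows the sample dataset:

    db.restaurants.ensureIndex({ name: 1 })
    db.restaurants.find({ name: /^Pizz/ })            // prefix match, can use the index
    db.restaurants.find({ name: /Pizz/ }).limit(20)   // "contains" match scans; at least cap the results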
[23:35:33] <synapse> I know it's friday night and most normal people are out enjoying life and I'm sat here breaking my mongo server
[23:35:46] <synapse> But there must be at least one person as sad as me?
[23:35:49] <synapse> :)