[03:20:49] <symbol> When I call db.collection.find() and am iterating over a cursor...where are the documents stored that I haven't iterated over yet? Are they still in memory in MongoDB?
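No one answered this in the channel, but the documented behavior of the era is that the server returns results in batches (the first batch is roughly 101 documents or 1 MB): documents not yet iterated either sit in the driver's current batch in client memory or remain on the server until the cursor issues a getMore. A toy simulation in plain JavaScript (not the real driver, exact batch limits hedged):

```javascript
// Toy model of cursor batching: the "server" hands out documents in
// batches; the client only ever holds the current batch in memory.
function* serverCursor(docs, batchSize) {
  for (let i = 0; i < docs.length; i += batchSize) {
    yield docs.slice(i, i + batchSize); // one "getMore" round trip
  }
}

const docs = Array.from({ length: 10 }, (_, i) => ({ _id: i }));
const batches = [...serverCursor(docs, 4)];
// 10 documents at batch size 4 -> batches of 4, 4, 2
```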
[05:36:06] <sputnik13> quick question, if I'm using a 32 bit client to connect to a 64 bit server am I still limited to 2GB?
[05:48:05] <Boomtime> sputnik13: not on the server
[05:49:01] <sputnik13> Boomtime: right, so even using a 32 bit client from say an embedded arm device I can push data to and address data in databases greater than 2 GB in size as long as the server is 64 bit, correct?
[05:49:42] <sputnik13> I was confused because even connecting to a 64 bit server, the client throws a warning about a 2GB limit because the client is 32 bit
[05:51:00] <Boomtime> what is the warning? can you paste the whole line here?
[05:51:14] <Boomtime> also, what OS is the server?
[11:21:46] <adsf> is it possible to get the state of a connection in mongo 3 java driver?
[11:22:05] <adsf> really struggling with it in relation to Quartz
[11:22:15] <adsf> if i close a connection at the end of a job other jobs seem to be impacted
[12:10:18] <jamieshepherd> I have FILMS, and I have REVIEWS - should I make reviews part of my films collection (e.g. film { reviews: [ ] }), or their own collection which is related to films?
[12:10:26] <jamieshepherd> Not sure about best practices here
[12:33:59] <deathanchor> depends on what your user stories are.
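The two shapes being weighed above, sketched as plain documents (field names are made up for illustration). The usual trade-off: embedding is good when reviews are always fetched with the film and the array stays small; referencing is safer for unbounded review counts, since a single document cannot exceed 16 MB:

```javascript
// Embedded: reviews live inside the film document.
const embedded = {
  title: 'Alien',
  reviews: [{ author: 'bob', stars: 5 }]
};

// Referenced: reviews in their own collection, pointing back at the film.
const film = { _id: 'film1', title: 'Alien' };
const review = { film_id: 'film1', author: 'bob', stars: 5 };
```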
[12:39:28] <tejasmanohar_> how do i generate mongodb sslCert
[12:39:50] <tejasmanohar_> im trying to connect w/ an ssl cert to compose.io via mongoose... i have sslKey- it's their RSA public key file, i suppose?
[13:07:03] <tejasmanohar_> my mongodb managed host, compose, gave me an ssl public key but no cert.. can i generate this based on the key?
[13:11:24] <grkblood13> is there a way to use mongoimport to import a json file from mongoexport that has had the _id fields removed and have it only import entries if it's not a duplicate of an entry that already exists?
[13:11:30] <tejasmanohar_> https://www.compose.io/articles/going-ssl-with-compose-mongodb-plus/ compose.io is saying all you need to connect is -sslCAFile blah.pem in the command
[13:11:57] <tejasmanohar_> w/ the ssl pub key starting w/ `-----BEGIN CERTIFICATE-----` in the pem file
[13:12:26] <tejasmanohar_> but mongoose connect dboptions are sslCert and sslKey
[13:12:54] <tejasmanohar_> which one is this? the compose.io dashboard says "ssl public key" and the file contents say "-----BEGIN CERTIFICATE-----" so its hard to say
[13:12:59] <tejasmanohar_> and do i need the other value?
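What the thread converges on, hedged against the node-driver docs of the time: a file beginning `-----BEGIN CERTIFICATE-----` is a certificate, and when a host hands you one to trust it maps to the driver's `sslCA` option, not `sslCert`/`sslKey` (those are for client-certificate authentication, which compose.io's article doesn't require). A sketch of the mongoose/driver options object (exact option keys per the 2.x node driver; treat them as assumptions):

```javascript
// The PEM content would normally come from fs.readFileSync('compose-ca.pem');
// the string here is a stand-in.
const caPem = '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n';

const options = {
  server: {
    ssl: true,
    sslValidate: true,
    sslCA: [caPem]   // array of CA certs the driver should trust
  }
};
// mongoose.connect(uri, options) would then verify the server against caPem,
// the same role --sslCAFile plays for the mongo shell.
```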
[13:34:44] <trupheenix> Is it possible to do a multikey 2dsphere index?
[13:34:59] <trupheenix> and also be able to run a geoNear aggregation on such a collection?
[15:31:39] <ito> hi. are clients without the --ssl option set able to connect to a mongo server once ssl is turned on on the mongo server?
[15:35:17] <ito> so once ssl is enabled it's mandatory for all clients. alright... thanks for the info StephenLynx, Derick!
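For completeness (hedged against the 2.6/3.0 docs): whether non-SSL clients can still connect depends on `net.ssl.mode`. `requireSSL` rejects them, as StephenLynx and Derick said, but `allowSSL` / `preferSSL` permit a mixed fleet during a transition. A sketch of the relevant mongod.conf fragment (paths are assumptions):

```yaml
# mongod.conf (YAML format, MongoDB 2.6+)
net:
  ssl:
    mode: requireSSL          # allowSSL / preferSSL would still accept non-SSL clients
    PEMKeyFile: /etc/ssl/mongodb.pem
```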
[16:19:42] <symbol> Considering the dynamic nature of collections...would it be bad schema design to have a users collection with some users having a field admin set to true and then just leaving the admin field out of normal users?
[16:20:14] <symbol> I suppose that wouldn't be very explicit and could lead to confusion.
[16:26:25] <EXetoC> I try to not exclude fields conditionally
[16:27:17] <EXetoC> and querying for false is a little nicer than {$ne: true}
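The distinction EXetoC is drawing, demonstrated with plain array filters standing in for the query engine: `{admin: false}` only matches documents that store an explicit `false`, while `{admin: {$ne: true}}` also matches documents missing the field entirely. Storing the field explicitly is what makes the nicer query possible:

```javascript
const users = [
  { name: 'root',  admin: true },
  { name: 'alice', admin: false },  // field stored explicitly
  { name: 'bob' }                   // field omitted, symbol's scenario
];

// ~ db.users.find({admin: false}) -- misses bob
const explicitFalse = users.filter(u => u.admin === false);

// ~ db.users.find({admin: {$ne: true}}) -- matches alice AND bob
const neTrue = users.filter(u => u.admin !== true);
```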
[18:21:57] <schuranator> I have a question about authentication. I created a new user and password but when I enable authentication in the config and restart the user can't login
[18:26:31] <cheeser> you might need to specify the authentication database.
[18:28:29] <schuranator> So my goal is pretty basic. I want to be able to remote into my mongodb using robomongo that needs a login and password. But it seems like when I enable authentication providing the username and password fails.
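cheeser's hint about the authentication database, spelled out: a user created in `admin` must authenticate against `admin` even when working in another database, which is exactly what a GUI login form tends to get wrong. One way to express it is the `authSource` connection-string option (host and credentials below are made up):

```javascript
// Connection string naming the database the user was created in.
const user = 'appUser', pass = 'secret', host = 'db.example.com';
const uri = `mongodb://${user}:${pass}@${host}:27017/mydb?authSource=admin`;
// In robomongo this corresponds to setting the "auth database" field to admin;
// in the shell, to the --authenticationDatabase admin flag.
```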
[18:58:59] <akoustik> can i set my replica read preference in my mongod.conf, or does it have to be set by clients?
[19:01:17] <cheeser> servers don't have read preferences so you need to configure your clients
[19:02:21] <akoustik> oh yeah, i guess it doesn't even make sense to have the hosts specify read prefs themselves now that i think about it. thanks.
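Since read preference is client-side, one common place to set it is the connection string itself (hostnames and replica-set name below are made up for illustration):

```javascript
// Every client connecting with this URI will prefer secondaries for reads.
const uri = 'mongodb://rs1.example.com,rs2.example.com/mydb' +
            '?replicaSet=rs0&readPreference=secondaryPreferred';
```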
[19:19:09] <MacWinne_> is this a bug in mongo GridStore docs for the unlink method: https://mongodb.github.io/node-mongodb-native/api-generated/gridstore.html
[19:19:22] <MacWinne_> it's saying to open the file with the 'r' flag
[19:19:48] <MacWinne_> does this need to be the 'w' flag for being able to delete?
[19:24:53] <StephenLynx> let me see what I do when I delete stuff from mongo
[19:55:09] <m4k> Is it a good idea to keep a master collection? E.g. I have media of different types - front, gallery, pdf, video, etc. Do I create a reference master collection for the media type?
[19:55:58] <StephenLynx> can't you just have a field with a string that says the type?
[19:56:04] <StephenLynx> do you need metadata on the type?
[20:06:56] <m4k> That was for media. I have another field called Amenities: it has parents like outdoor/indoor/flooring etc., and for those we need a parent-child relationship.
[20:40:58] <Eloquence> Hi. Trying to get mongo working on Ubuntu 15.04, reinstalled packages to 2.6.3 with --reinstall, service always fails to start with "status=4/NOPERMISSION" in syslog - any ideas?
[20:42:29] <StephenLynx> are you installing from 10gen repositories?
[20:43:35] <Eloquence> nope, I think these are the default vivid packages
[20:48:33] <akoustik> Eloquence: so yeah, how important is 15.04 to you? heh
[20:49:08] <akoustik> if the answer is "very" then you can choose to be a cowboy and try to use the package anyway, dunno how that works.
[20:49:59] <akoustik> but with this NOPERMISSION thing, it might be an issue of file permissions/ownership in directories used by mongo. say, maybe /var/log, /var/lib/mongo, /var/run/mongodb or something like that.
[20:51:17] <Eloquence> *nod* will poke a bit more. the recommended workaround is to use debian wheezy debs, which I might try
[21:01:14] <Eloquence> OK, in case anyone runs into the same thing -- needed to upgrade with mongod --upgrade & fix permission errors tracked down via mongodb logs. seems fixed now but I take the point that 2.6.3 is ancient.
[21:26:31] <akoustik> so i'm curious. when my replica set is in a "bad" state, say maybe some hosts are down or something, i see the running mongod instances spitting out huge numbers of attempted connections, to the remote host as well as locally. should i be concerned? would it be smart to reduce the rate of connection attempts? does it matter?
[21:45:51] <fejes> Hello: I have a basic question. I’m doing a query against a large collection, and I need to do a query to find out if a given record exists ({_id:{$in:[a,b,c,d,e….]}}, {_id:1})
[21:46:13] <fejes> Is there a more efficient way to do that query?
[21:46:37] <fejes> I’m currently running with batches of 5000 at a time, but I could change that.
[21:47:29] <fejes> I also see that mongo is reading records from the disk, despite the fact that all I need to touch is the _id field, so it should be a covered query, though it doesn’t seem to be acting that way.
[21:47:42] <fejes> anyone have any insight they can share?
[21:50:47] <akoustik> so the array you're checking membership of - that has 5000 elements?
[21:53:15] <fejes> I can make it smaller, but I don’t expect checking 1-by-1 to be any faster
[21:53:57] <fejes> 5000 is probably too much, but is there a better way to do it that uses the idhack plan without doing it one-by-one?
[21:54:37] <akoustik> nah, i mean that's what you gotta do if that's what you gotta do. i would try to avoid it though. for example, if it were an array of numbers, i would see if i could use numerical range testing or comparison.
[21:57:43] <fejes> you’d think this would be fast and easy,
[21:58:08] <akoustik> so it depends on the datatype you're using as an ID. in general, i don't think there's a better method than just using $in and maybe trying to make sure your array is ordered. maybe somebody else has some wisdom. i'm a noob.
[21:58:08] <fejes> however, I keep seeing that mongo is reading from disk, and returning a much higher reslen than I expect,
[22:00:17] <fejes> It looks like a 20%-40% nscannedObjects ratio.
[22:00:37] <fejes> well, maybe I can help you out, if you have any issues.
[22:00:51] <fejes> This one, though, is stumping me because it’s so simple, and yet not performing as expected.
[22:01:41] <akoustik> heh, thanks. i have my stuff "working" at least, and right now i'm just concerned with getting it out the door. i just lurk and see if i can learn or help.
[22:05:17] <fejes> which means that mongo is fucking something up spectacularly.
[22:05:46] <akoustik> have you played with querying fields besides _id?
[22:06:12] <fejes> not for this particular context.
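A sketch of the batched existence check fejes describes: chunk the candidate ids, then each chunk becomes one `find({_id: {$in: batch}}, {_id: 1})`. For the query to be covered, the projection must return nothing but `_id`; whether the planner actually covers it is what fejes is fighting with, and nothing here changes that. The chunking itself is pure:

```javascript
// Split ids into query-sized batches.
function chunk(ids, size) {
  const out = [];
  for (let i = 0; i < ids.length; i += size) out.push(ids.slice(i, i + size));
  return out;
}

const ids = Array.from({ length: 12000 }, (_, i) => i);
const batches = chunk(ids, 5000);
// Each batch maps to one query: db.coll.find({_id: {$in: batch}}, {_id: 1})
```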
[22:25:03] <kryo_> i have a collection with a unique integer field called "num" with the requirement that no 4 elements will ever have adjacent "num" values
[22:25:24] <kryo_> (i.e. [1,3,4,5] is ok but not [1,2,3,4])
[22:25:43] <kryo_> sorry, pressed enter accidentally while typing :P... what's the best way to enforce this requirement?
[22:26:24] <fejes> in code before you send it to the db.
[22:27:15] <kryo_> right, but the only way i can think of is to execute 4 different "count" queries
[22:27:29] <kryo_> is there a way to do it in one query?
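One way to collapse the four counts into a single query: fetch the existing nums in the window around the candidate with `{num: {$gte: c - 3, $lte: c + 3}}`, then test the runs client-side. The window logic is pure (the range query is implied); note that a check-then-insert like this still races without some locking or unique-index trick, so treat it as a sketch:

```javascript
// Would inserting `candidate` create a run of 4 consecutive nums?
// `existing` is what the range query {num: {$gte: c-3, $lte: c+3}} returns.
function createsRunOfFour(existing, candidate) {
  const nums = new Set(existing);
  nums.add(candidate);
  // Only windows of 4 that contain the candidate can be newly completed.
  for (let start = candidate - 3; start <= candidate; start++) {
    if ([0, 1, 2, 3].every(d => nums.has(start + d))) return true;
  }
  return false;
}
```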
[23:34:50] <Doyle> What happens when the mongod.log volume hits 100%? All that's on it is the mongod.log file.
[23:35:05] <Doyle> I emptied it, but it's not showing any activity after that.
[23:46:11] <Doyle> Any assistance would be appreciated
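On the question above: truncating mongod.log in place can leave mongod writing at its old file offset, which is consistent with the "no activity after emptying it" symptom. The supported ways to reclaim the space (per the 2.6/3.0 docs) are `db.adminCommand({logRotate: 1})` or sending mongod SIGUSR1, both of which rename the log and open a fresh file. A logrotate(8) fragment sketching the SIGUSR1 route (paths are assumptions):

```
# /etc/logrotate.d/mongod -- a sketch, not a drop-in config
/var/log/mongodb/mongod.log {
    daily
    rotate 7
    compress
    missingok
    postrotate
        /bin/kill -SIGUSR1 $(pidof mongod) 2>/dev/null || true
    endscript
}
```

One caveat: SIGUSR1 makes mongod do its own rename, which can fight with logrotate's; on 3.0+ the `systemLog.logRotate: reopen` setting tells mongod to simply reopen the file logrotate moved instead.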