[00:27:57] <pylua> I have removed half of the data, but why does the available disk space not increase?
[00:30:51] <cheeser> mongodb doesn't release space back to the OS until you do a repair. at least with mmapv1. wiredtiger is better about that, iirc.
[00:39:05] <pylua> cheeser: if I have removed the data in mongodb, then when the disk space is not enough, will mongodb reclaim the unused space?
[00:42:25] <cheeser> yes, internally mongodb will reuse the space
[00:45:50] <pylua> so I do not need to worry about running out of disk space after removing part of the data?
[00:46:39] <cheeser> well, you can worry less for sure. :)
[00:53:55] <pylua> cheeser: my problem is the disk space cannot be repaired, because repairDatabase() cannot be executed when disk space is not enough
[01:00:55] <cheeser> pylua: yes. which is why i said "worry less" :)
[06:01:58] <m3t4lukas> in the java driver: should I rather use com.mongodb.client.MongoDatabase or com.mongodb.DB?
[06:08:03] <m3t4lukas> I mean, only DB seems to be working, but it seems a lot like unclean code to me :/
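For reference, com.mongodb.DB is the pre-3.0 legacy API kept for compatibility; com.mongodb.client.MongoDatabase is the one new code is expected to use. A minimal sketch against the 3.x Java driver (assumes the driver on the classpath and a local mongod; database and collection names are made up):

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class NewApiExample {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        // getDatabase() returns the new com.mongodb.client.MongoDatabase;
        // the legacy com.mongodb.DB comes from the older getDB() method.
        MongoDatabase db = client.getDatabase("test");
        MongoCollection<Document> coll = db.getCollection("things");
        coll.insertOne(new Document("name", "example"));
        client.close();
    }
}
```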
[06:38:46] <Jonno_FTW> hi, is mongo the right tool for my job? I need to store ~2TB of documents that have time, location, and an array of objects with ~8 elements and 2 fields. The data won't be updated, just inserted into and queried a lot, more queries than inserts by about 3 times
[06:39:30] <Jonno_FTW> 2TB is what I have now, but it will be streamed in the future, so maybe up to 10TB on a single machine
[08:47:32] <pdekker> I think my problems are caused by the keyfile. Is there a way to setup a secure replica set without using a keyfile?
[08:56:56] <joannac> pdekker: why do you think it's the keyfile?
[08:57:47] <pdekker> joannac: The problem is the following. If I have an empty db without users, I can use the keyfile option and issue rs.status() or rs.initiate() without problem
[08:58:21] <pdekker> However, if I have a database with users, and then add the replSet and keyfile options, it is not possible to issue rs.status() or rs.initiate() anymore. They give an error "not authorized as admin"
[08:58:45] <pdekker> When I then remove the keyfile option again (but leave replSet), it is again possible to issue rs.status() or initiate
[09:00:47] <pdekker> I want to replicate this existing database to a new, empty database, so they become a synced replica set. I do however not want other people to access the data in my existing database by creating a replica set with it
[09:01:46] <joannac> you can achieve that with firewalls, rather than authentication
[09:02:03] <joannac> although you should possibly do both
[09:02:15] <joannac> but once you turn authentication on, then... authentication is on
[09:02:28] <joannac> that means anyone who connects has to authenticate to do anything
[09:03:42] <pdekker> It seems the existing users do not combine with the keyfile. Is it possible to do replica set authentication based on the users, instead of the keyfile?
[09:04:05] <joannac> if you already had users, you can still use them
[09:06:34] <joannac> if you have --keyFile, whatever users you have are still usable
[09:06:41] <joannac> but you have to authenticate first
[09:06:46] <pdekker> Is this a gap that could be filled by using a firewall? Only allowing connections from the other members of the replica set, but not others?
[09:06:51] <joannac> once you authenticate, you then have access
[09:07:10] <pdekker> I know the users are still available, I can use them to log in via mongo -u
[09:07:27] <pdekker> However, the rs.*() commands cannot be executed, whichever user I use
[09:07:41] <joannac> then your users don't have the right roles
[09:07:41] <pdekker> they say "not authorized on admin to execute command ..."
[09:13:15] <pdekker> joannac: That sounds good, thanks a lot, I'm gonna try it
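For anyone finding this later: rs.status() and rs.initiate() need cluster-level privileges, and once keyFile auth is on, an existing admin user can be granted them. A mongo shell sketch (the user name and password are hypothetical; requires a running mongod):

```js
// authenticate as an existing user-admin first
use admin
db.auth("siteAdmin", "secret")   // hypothetical credentials
// clusterAdmin (or clusterMonitor, for read-only status) covers rs.status()/rs.initiate()
db.grantRolesToUser("siteAdmin", [ { role: "clusterAdmin", db: "admin" } ])
rs.status()
```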
[10:14:04] <sarathms> I see some unexpected behavior with aggregation using the $geoNear operator. There seems to be a limit of 100 items in the result set. I've isolated the problem in this piece of code: https://github.com/sarathms/mongodb-geo-aggregate-test. Am I missing something?
[10:16:43] <sarathms> from location A I'm able to find a doc at B say 20km away. When I added about 100 docs around or at A and repeat the same search, I don't find B anymore.
[10:18:04] <sarathms> works until I add 99 docs around A. From 100 and above, I get only the 100 nearest items in the results. There's no $limit stage in the aggregation pipeline.
[11:00:23] <joannac> sarathms: yes, the part where it says the limit is 100 in the docs ;) http://docs.mongodb.org/manual/reference/operator/aggregation/geoNear/
[11:01:17] <sarathms> yeah, I saw that a while ago. That seems to be the only place it's mentioned.
[11:02:32] <sarathms> nodejs native driver documentation shows a different default value. I happened to see that first :| https://mongodb.github.io/node-mongodb-native/2.0/api/Collection.html#geoNear
[11:04:17] <joannac> sarathms: hrm, I think those docs should say the same thing, i.e. that the default for "num" is 100
[11:05:45] <joannac> open a ticket at https://jira.mongodb.org/browse/NODE maybe?
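For reference, the 100-document cap can be raised with the num option inside the $geoNear stage itself; a mongo shell sketch (collection name and coordinates are placeholders):

```js
db.places.aggregate([
  { $geoNear: {
      near: { type: "Point", coordinates: [ 151.20, -33.87 ] },
      distanceField: "dist.calculated",
      spherical: true,
      num: 1000                 // overrides the default limit of 100
  } }
])
```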
[11:26:15] <pchoo> Derick: to reduce client side munging in my meteor app. I wasn't sure if it was possible, but thought I'd ask anyway, thank you :)
[12:15:36] <jamieshepherd> If I have a collection of reviews, I'd like the ability to be able to give a review a "thumbs up", obviously I only want to be able to give one per user account. Should I store this in the users collection, and then just store a total on the review? Not sure the best practice here.
[12:17:08] <jamieshepherd> Or alternatively just have a "likes" collection
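One common pattern for one-vote-per-user is exactly that likes collection, with a unique compound index enforcing the constraint and a cached counter on the review. A mongo shell sketch (collection and field names are made up; requires a running mongod):

```js
// one doc per (user, review) pair; the unique index rejects duplicates
db.likes.createIndex({ userId: 1, reviewId: 1 }, { unique: true })

// a second thumbs-up from the same user fails with a duplicate-key error
db.likes.insert({ userId: userId, reviewId: reviewId })
// only increment the cached total if the insert succeeded
db.reviews.update({ _id: reviewId }, { $inc: { likes: 1 } })
```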
[12:48:12] <aps> Do I need to restart mongod after config change? Is there a way to reload the config file without restart?
[12:48:33] <aps> P.S.: I'm running mongo as a service with config file already specified
[13:00:02] <deathanchor> aps, config file changes need a restart to take effect, but most config settings can also be changed as mongo commands in the shell
[13:00:19] <deathanchor> exceptions that I remember are auth, replSet, things like that
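For the settings that can be changed at runtime, the shell route looks like this (a sketch; requires a running mongod and appropriate privileges):

```js
// change a runtime parameter without touching the config file
db.adminCommand({ setParameter: 1, logLevel: 1 })
// list the current parameter values
db.adminCommand({ getParameter: "*" })
```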
[13:51:09] <StephenLynx> I don't run scripts on shell often
[13:52:01] <symbol> hmm I suppose it's because BSON stores dates in the ISO byte format so the wrapper makes sense.
[14:07:14] <JoshK_> Hello. I have a document that looks like http://www.hastebin.com/irevidimoq.coffee and I would like to insert something into the "logs" object using the java driver - how would I do this?
[14:08:09] <MANCHUCK> not sure how it is done with the java driver
[14:09:16] <MANCHUCK> in mongo it is db.coffee.update({_id: "foobar"}, {"$push": {logs: {bar: "bat"}}})
[14:09:28] <pdekker> I have two db's in a replica set which connected well before. I changed the setup of the db which should be secondary: I started running it as a service. Now, the "primary" db can still find it, but denies permission
[14:17:04] <lqez> JoshK_ - what a cherry picker kk he/she already left lol
[14:23:07] <leandroa> Hi, I have a doc like: {date: ISODate(), hours: {"0": 123, "1": 22, ..., "23": 21}}.. how can I aggregate to return: {date: ISODate(), hour: "2", value: sum_of_all_values_in_same_date_and_hour}?
[14:23:26] <lqez> pdekker: are you running mongodb on windows ? or linux ?
[14:23:53] <crised> About DB engines: if I have chronologically ordered data
[14:24:03] <crised> is a key-value store good for this?
[14:27:53] <crised> lqez: Which would be the second candidate for influxdb?
[14:28:00] <lqez> pdekker: did you run it as 'root' before?
[14:28:55] <lqez> I've used mongodb for inserting-only database very well - but it's up to your data
[14:29:23] <crised> lqez: model it's a timestamp, a string, and text
[14:29:25] <pdekker> lqez: Before, I ran it as local user, with some local directories. Now, I used the CentOS init script and it runs as the mongod user
[14:33:15] <pdekker> lqez: /var/lib/mongo, the standard which is mentioned in the init script. It is owned by mongod. Mongo does not give an error about the data dir. It even connects to the "primary" (actually not yet primary) db, where it is mentioned as STARTUP
[14:33:42] <pdekker> However, the primary says: "replSet couldn't elect self, only received 1 votes"
[14:34:05] <pdekker> And the secondary says: Failed to connect, reason: errno:13 Permission denied
[14:34:22] <pdekker> So some kind of connection is made, but it is cancelled
[15:04:24] <sachiman> I am looking for something for node.. specifically a loopback kind of solution
[15:06:11] <StephenLynx> I am working with node (now io, soon node again) with mongo for a year now.
[15:06:24] <StephenLynx> that is not something I would personally use.
[15:06:43] <pdekker> lqez: Do you have any thoughts? When I change it back to my local version of mongod on the secondary, it works again
[15:07:16] <StephenLynx> every example of a framework of sorts I saw for this environment is nothing but bloated trash that does nothing but hog performance for no good reason.
[15:07:38] <StephenLynx> especially when it comes to things that handle the database.
[15:08:10] <lqez> pdekker: what is the difference between two setups?
[15:09:03] <lqez> It looks like the primary (yes, not yet primary) works fine, but the secondary wasn't set up properly
[15:09:42] <pdekker> lqez: One runs mongod as a local user, with a local data dir and log. The other is an init script, started using "service mongo start". The config is almost the same, only different dirs and the fork option
[15:11:19] <lqez> pdekker: `Failed to connect to ip-address:port` -> did ip-address:port mean primary's ?
[15:12:02] <lqez> could you paste the actual log of it (+ more lines)?
[15:16:51] <mskalick> hi, rs.initiate prints "Config now saved locally. Should come online in about a minute." . How to find out that replication is ready? thanks...
[15:21:43] <deathanchor> anyone know how to aggregate $sum { "key" : { "a" : 9, "b" : 7 ...} } not knowing how many a,b,c,d etc.
[15:22:05] <pdekker> lqez: yes that is the address of the primary. I pasted it here: http://pastebin.com/MnPMEcpa
[15:23:10] <pdekker> mskalick: you can run rs.status() afterwards to check. Also, if everything worked out, in the mongo shells of the replica members you should see PRIMARY or SECONDARY in front of every line
[15:51:33] <cheeser> but your "example" query is not how mongo queries work.
[15:54:53] <StephenLynx> it is pretty standard to give an example using the shell syntax so you can then use it as a reference and translate it for your driver.
[15:55:40] <pdekker> lqez: It listened on 0.0.0.0:1234 (because the bindIp option was off). I tried setting the bindIp to the external IP address of the primary, but it still does not work
[15:56:10] <pdekker> I would say the problem is on the secondary...but I have to keep all options open
[16:05:40] <StephenLynx> afaik, $in will just compare the value, it will not evaluate the regex.
[16:05:57] <StephenLynx> from what I know, $in is just "see if the value equals to any of these"
[16:06:16] <StephenLynx> $or is "check if any of these conditions are true after evaluating them"
[16:10:40] <dj3000> StephenLynx: when I translate that into java code, it doesn't work.
[16:11:18] <StephenLynx> try with the terminal, check the driver documentation, translate it.
[16:43:37] <blizzow> I'm getting the WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always' warning. I followed the docs and put in the transparent_hugepage=never option for grub, still got the warning. So I did the secondary way and put options in rc.local and continue to get the warning after a reboot. Is there a boot option that I can throw in for the defrag or a way to force rc.local to be run before mongod starts?
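One workaround is a small helper that an init script (or a unit ordered to run before mongod) calls to force both THP files to `never`. A sketch, with the sysfs path parameterized (the real path is the default; the parameter just makes it testable against a scratch directory):

```shell
# Sketch: force transparent hugepages off before mongod starts.
disable_thp() {
  sysfs="${1:-/sys/kernel/mm/transparent_hugepage}"
  for f in enabled defrag; do
    # only write where the control file exists and is writable
    if [ -w "$sysfs/$f" ]; then
      echo never > "$sysfs/$f"
    fi
  done
}
```

Calling `disable_thp` early in the mongod init script avoids the ordering problem with rc.local entirely.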
[16:49:06] <Doyle> Hi folks. Readahead question. Setting readahead of 32 is recommended, or at least "found to work well". When setting the readahead for an LVM, should the RA for the devices be adjusted to match? I believe it shouldn't matter since it's the kernel that's accessing the devices, and the app goes through the LVM.
[16:49:47] <Doyle> I think it doesn't... but it's one of those things that i feel like I should verify.
[17:16:04] <eggsbinladen> Anyone here in charge of mongo docs?
[17:39:23] <Doyle> I have an issue with MongoDB on CentOS7. After boot mongo reports "WARNING: Readahead for /data is set to 4096KB" despite the readahead being set in the lv config as indicated by lvdisplay and specified with lvchange. If service mongod restart, then it's fine. It seems like the mongod process is launching before the lvm configs are in place... somehow...
[17:39:53] <Doyle> I'm considering putting blockdev --setra 32 ... directly into the init script.
[17:40:03] <Doyle> Unless someone has a better suggestion.
[17:43:19] <eggsbinladen> The mongo init scripts are woefully inadequate. A) I'm amazed that no mongos init script exists considering a major benefit is sharding and requires mongos processes. B) it's appalling the defaults for readahead, transparent_hugepage and transparent_hugepage/defrag are not set.
[17:45:20] <Doyle> I scripted my mongod build process. The section after the mongod installation is tuning. It sets up the limits for the mongod user, disables THP, sets TCPKeepAlive to 120, and sets the LVM RA to 32.
[17:46:00] <Doyle> But mongod inits before lvm... somehow... not sure how lvm launches in the boot process
[17:49:24] <ehershey> what I want to do to put into the official packages is create new init scripts that will run before the mongod init scripts and do that tuning
[17:49:56] <ehershey> because the other reason it's not in the official packages is changing those settings is probably a bad thing for any software package to do
[17:49:59] <Doyle> I was going to put this in at the top of the mongod init: for dev in dm-0 xvdb xvdc xvdd xvde xvdf xvdg xvdh xvdi ; do RA="$(blockdev --report | grep $dev | awk '{print $2}')" ; if [ "$RA" -ne 32 ] ; then blockdev --setra 32 /dev/$dev ; fi ; done
[17:50:04] <ehershey> so it should be turned off unless you explicitly enable it
[17:51:28] <Doyle> I think it'll only complain for the dm-0, so I could put just that in since the devices will catch up when the lvmetad hits.
[17:53:12] <Doyle> Going to put this in at the top of the start() function. RA="$(blockdev --report | grep dm-0 | awk '{print $2}')" ; if [ "$RA" -ne 32 ] ; then blockdev --setra 32 /dev/dm-0 ; fi
[17:53:28] <Doyle> See if it complains after boot.
[17:53:48] <ehershey> I don't think that's a bad idea
[18:08:26] <dadada> would it be possible to use mongodb as a "replacement" for a normal filesystem, I mean, let's say I want to create my own innovative and unique linux distribution, and I want files to be treated very differently from the "norm", I want to treat everything like a huge database, put files into it, put mails into it, put configs into it, allow tags for everything and so on
[18:09:00] <dadada> would mongodb scale in such a way that it's usable for the kinds of loads that filesystems need to handle
[18:09:19] <Doyle> Yea, putting this at the top of the start() function of the mongod init script fixed my issue.
[18:16:07] <dadada> that looks like a good start for what I want ^^
[18:46:52] <kaseano> hi, I'm using mongo with node, and I have to run two unrelated queries one after the other. Should I open two connections so that they can run simultaneously/asynchronously? Or will they do that on one connection? I tried two connections and it still seems to finish the first query before running the second which is weird.
[18:52:10] <StephenLynx> the driver uses a connection pool.
[18:52:42] <StephenLynx> so you don't need to open a second connection if you wish to run these two in parallel.
[18:53:06] <StephenLynx> btw, from what I know, mongo has an internal queue for queries.
[18:53:30] <StephenLynx> but I am not certain about that last part
[19:00:22] <kaseano> no not really lol just trying to build good habits. If they can be run in parallel I should try to set it up so they will be
[19:00:55] <StephenLynx> a good habit with mongo is to not try to deviate too much from the defaults.
[19:01:07] <kaseano> theoretically in case I ever did have a raid 10 mongo farm with multiple disks/processors
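The parallel-vs-sequential distinction above can be sketched without the driver at all: fire both operations before waiting on either. `queryA`/`queryB` below are hypothetical stand-ins for real driver calls sharing one pooled connection:

```javascript
// Hypothetical stand-ins for two independent database queries.
function queryA() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve('a'); }, 20);
  });
}
function queryB() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve('b'); }, 10);
  });
}

// Sequential: the second query does not start until the first finishes.
function sequential() {
  return queryA().then(function (a) {
    return queryB().then(function (b) { return [a, b]; });
  });
}

// Parallel: both queries start immediately, so total time is roughly
// the slower of the two rather than the sum.
function parallel() {
  return Promise.all([queryA(), queryB()]);
}
```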
[19:33:08] <m_e> is there some kind of form builder for mongodb? basically i want to define which fields i want and it generates the form and connects any changes to the DB.
[21:45:31] <jamiel> Anyone got any idea how this formula works? http://docs.mongodb.org/manual/reference/limits/#Sharding-Existing-Collection-Data-Size
[21:45:42] <jamiel> To calculate if it's "too late"
[22:51:20] <shortdudey123> anyone seen mongo complain about perms on a keyfile when run under systemd but works fine when run via a tty
[23:28:15] <stuntmachine> Does anyone have a sensible starting configuration to use in /etc/mongod.conf for a new Mongo 3.0.4 cluster? I know it creates a default one there but I want to make sure I'm using the best options, and I'm fairly unfamiliar w/ Mongo administration.
[23:41:08] <cheeser> well, the defaults are what they are for a reason. they're a reasonable approximation of the general use case.
[23:41:14] <shortdudey123> stuntmachine: probably good to skim this over so you know what it all does though, http://docs.mongodb.org/manual/reference/configuration-options/
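As a starting point alongside those docs, a minimal 3.0-style YAML config covering the common sections (paths follow typical RPM packaging defaults and may differ on your system; this is a sketch, not a tuned production config):

```yaml
# /etc/mongod.conf — minimal sketch
storage:
  dbPath: /var/lib/mongo
  engine: wiredTiger
  journal:
    enabled: true
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
net:
  port: 27017
  bindIp: 127.0.0.1
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
```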