PMXBOT Log file Viewer


#mongodb logs for Friday the 6th of May, 2016

[02:12:26] <MacWinner> how do you see the actual total space used by documents in collection? show dbs seems to show allocated disk.. but if I delete a bunch of documents, how would I see the difference before and after?
[02:13:04] <MacWinner> just stats() ?
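[For the record: yes, `db.collection.stats()` answers this. Its `size` field is the total size of the documents themselves, while `storageSize` is what is allocated on disk, so deletions show up as a growing gap between the two. A plain-JS sketch of the comparison; the numbers are invented samples, not real output:]

```javascript
// Compare logical vs allocated space from a collection-stats document.
// Field names match what db.collection.stats() returns.
function spaceSummary(stats) {
  return {
    logicalBytes: stats.size,                      // bytes used by documents
    allocatedBytes: stats.storageSize,             // bytes allocated on disk
    reusableBytes: stats.storageSize - stats.size, // freed but not yet released
  };
}

const afterDelete = spaceSummary({ size: 5000000, storageSize: 8000000 });
```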
[03:16:31] <MacWinner> as part of my plan to move from a replica set to a sharded cluster, is the first prudent step to introduce the mongos router and point it to the existing replica set? i'm trying to phase in different pieces and let them burn in independently
[03:24:24] <bruceliu> hi
[03:24:39] <bruceliu> i have mongodb question
[03:25:06] <bruceliu> when update MMAPV1Journal: { acquireCount: { w: 31288 }, timeAcquiringMicros: { w: 18472770 } }
[03:25:17] <bruceliu> timeAcquiringMicros is very high
[03:25:51] <bruceliu> does anyone know about this
[04:49:33] <bruceliu> anyone in here
[04:59:33] <joannac> bruceliu: yes. did you have a question?
[05:03:27] <bruceliu> i update my mongodb
[05:03:41] <bruceliu> MMAPV1Journal: { acquireCount: { w: 31288 }, timeAcquiringMicros: { w: 18472770 } }
[05:03:48] <bruceliu> very slowly
[05:04:06] <bruceliu> timeAcquiringMicros seems to take a long time
[05:06:30] <joannac> is that a single update or a multi-update?
[05:10:18] <bruceliu> @joannac
[05:14:33] <bruceliu> @joannac like this update({},{$pull:{ group_id: 1001 } },{multi:true})
[05:14:50] <bruceliu> i think it may be io
[05:14:57] <bruceliu> but can't make sure
[05:37:04] <InAnimaTe> hey all, got a quick question. I want to do an index where for a given key, another key is unique. so for my birthday(event), i can only have one yellow cake(type). Any ideas how to go about this? im guessing a unique compound index?
[05:49:31] <InAnimaTe> yep ok just figured it out
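[InAnimaTe's guess is right: a unique compound index on the pair allows only one document per (event, type) combination. A sketch, with field and collection names assumed from the example:]

```javascript
// In the mongo shell this would be:
//   db.cakes.createIndex({ event: 1, type: 1 }, { unique: true })
const keys = { event: 1, type: 1 };
const options = { unique: true };

// Tiny in-memory illustration of what the unique constraint enforces:
// a second document with the same (event, type) pair is rejected.
function violatesUnique(docs, doc) {
  return docs.some(d => d.event === doc.event && d.type === doc.type);
}

const existing = [{ event: "birthday", type: "yellow cake" }];
```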
[06:05:34] <joannac> bruceliu: it should tell you how many documents were updated as part of that one command. What's the number?
[06:14:05] <bruceliu> @joannac 30002
[06:14:39] <bruceliu> i also see backgroundFlushing is high
[07:28:39] <fleetfox> I'm trying to restore archive, it just hangs without output
[07:29:00] <fleetfox> https://dpaste.de/eSGJ/raw
[07:29:53] <fleetfox> in strace it's just waking up futexes and doing nothing
[07:36:49] <fleetfox> i see it connecting to the server but it does nothing
[07:36:55] <fleetfox> how do i debug this
[09:25:18] <nicolas_FR> hi there, I need some advice on mongoose schemas VS models (#mongoose is empty), so if anyone knows... (stackoverflow) : http://tinyurl.com/j24dbcw. I have the same problem (sub document CastError). The answer says to export the schema and not the model, but it doesn't work for me. Could someone explain to me how to export the schema like the answer says?
[09:30:26] <Keksike> What might be the easiest way to log and compare all the operations that go into my mongodb before and after a change in my software?
[09:33:01] <Keksike> or not even the easiest way, but the best way :)
[09:33:36] <Keksike> I updated my software's java-driver from 2.x to 3.0, and something started behaving differently. I want to find out what that something is.
[09:42:15] <aps> One of my replica-set members is stuck in ROLLBACK state with following logs. Any idea what could be wrong?
[09:42:21] <aps> https://www.irccloud.com/pastebin/T8Yj8Bww/
[09:42:45] <aps> this keeps repeating in the logs. Any ideas what's wrong here?
[11:30:17] <kbn2> hello guys, can someone help me with this. I need to update objects inside an array of a document, with the document's own _id... e.g. db.c.update({}, {"test.sample_id": _id})
[11:33:43] <SmitySmiter> hello guys, can someone help me with this. I need to update objects inside an array of a document, with the document's own _id... e.g. db.c.update({}, {"test.sample_id": _id})
[11:34:17] <SmitySmiter> hello
[11:39:14] <SmitySmiter> anyone here?
[11:42:04] <SmitySmiter> test
[11:45:28] <SmitySmiter> hello guys, can someone help me with this. I need to update objects inside an array of a document, with the document's own _id... e.g. db.c.update({}, {"test.sample_id": _id})
[11:53:15] <kurushiyama> SmitySmiter: In case you are asking whether this can be done by query only, the answer is no. But I would be pretty interested in the _reason_ for this rather unusual update.
[11:53:51] <SmitySmiter> kurushiyama, I'm doing a migration from an old data structure to new data structure
[11:53:54] <SmitySmiter> hence the requirement
[11:54:16] <SmitySmiter> I'm writing a function based on the answer posted here http://stackoverflow.com/questions/15846376/mongodb-copy-a-field-to-another-collection-with-a-foreign-key
[11:54:19] <SmitySmiter> hopefully that'll work
[11:56:28] <kurushiyama> SmitySmiter: Stennie is extremely knowledgeable, and I have the utmost respect. The mentioning of a foreign key, however, lets me doubt whether it is a good idea. And for data migrations, aggregations are probably the most suitable tool.
[11:56:56] <kurushiyama> SmitySmiter: Maybe I can help you a bit later, off for 45
[11:57:19] <SmitySmiter> kurushiyama, thanks a lot, but I think I figured how I can do this with a function like that, testing on a sample db
[11:57:49] <SmitySmiter> and yea, unfortunately there were some silly decisions taken during building this stuff that brought in some RDBMS stuff to NoSQL
[11:58:12] <Derick> it happens :-/
[12:10:41] <SmitySmiter> db.getCollection("users").find({}).forEach(function(user) {db.getCollection("users").update({_id: user._id}, {$set: {"roles.$.user_id": user._id}}, {multi: true});});
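[A caveat on the one-liner above: the positional `$` operator only updates the first matching array element, and it requires the array field to appear in the query, so as written it will not stamp every role. A plain-JS sketch of the per-document transform the migration needs, to be applied inside the forEach before writing the document back:]

```javascript
// Copy a user's _id into every element of its roles array.
function backfillRoleUserIds(user) {
  (user.roles || []).forEach(role => { role.user_id = user._id; });
  return user;
}

const migrated = backfillRoleUserIds({
  _id: 42,
  roles: [{ name: "admin" }, { name: "editor" }],
});
```

[On later MongoDB versions (3.6+) the all-positional `$[]` operator, `{$set: {"roles.$[].user_id": user._id}}`, does the same server-side.]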
[12:35:13] <jokke> hi
[12:35:22] <jokke> i have a question about $text searches
[12:36:06] <jokke> i want my search to behave so that it uses the logical AND operator instead of OR
[12:36:52] <jokke> so if i search for 'this is a test' it should only match documents with the words "this" and "test" (is and a are probably ignored due to the stemming)
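[`$text` ORs bare terms by default, but every quoted phrase must be present, so wrapping each required word in escaped quotes gives AND semantics. A sketch building such a query:]

```javascript
// Build a $text query that requires every word to be present by turning
// each word into a quoted "phrase".
function andTextQuery(words) {
  return { $text: { $search: words.map(w => `"${w}"`).join(" ") } };
}

const query = andTextQuery(["this", "test"]);
// query.$text.$search is '"this" "test"'
```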
[13:36:17] <m_e> lets say i have many books with a published date n my database. can i somehow select one random book for each distinct year?
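[There is no single operator for per-group sampling ($sample, available since 3.2, draws from the whole collection), so one approach is to $group books by year server-side and pick randomly per group in the application. A plain-JS sketch of the client-side part, with an injectable random source so it is testable:]

```javascript
// Group books by year, then pick one at random from each group.
function randomPerYear(books, rand = Math.random) {
  const byYear = new Map();
  for (const book of books) {
    if (!byYear.has(book.year)) byYear.set(book.year, []);
    byYear.get(book.year).push(book);
  }
  return [...byYear.values()].map(group => group[Math.floor(rand() * group.length)]);
}

const books = [
  { title: "A", year: 1999 },
  { title: "B", year: 1999 },
  { title: "C", year: 2005 },
];
const picks = randomPerYear(books, () => 0); // deterministic: first of each group
```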
[13:37:23] <eniac_petrov> Hello :)
[13:37:57] <eniac_petrov> Do you know what happened to the Debian/Ubuntu repository?
[13:42:22] <StephenLynx> no, what happened?
[13:42:48] <StephenLynx> you should use the official repositories, anyway.
[13:43:07] <StephenLynx> I am yet to see a distro repository that packages a useful version of mongo.
[13:43:43] <cheeser> yeah. it's pretty simple to add the mongo repo on debian/ubuntu
[13:43:49] <cheeser> instructions on the website
[13:44:57] <StephenLynx> keep in mind, if you are using debian or ubuntu, that their latest versions are not supported
[13:45:04] <StephenLynx> i would suggest using centOS
[13:45:12] <StephenLynx> for your db server at least
[13:45:23] <Derick> StephenLynx: "you should use" or "you should not use"? (By official, I would have understood the Debian "official" repo)
[13:45:38] <StephenLynx> by official I mean mongo's repositories.
[13:45:52] <StephenLynx> from mongo's point of view, debian is a 3rd party
[13:46:01] <cheeser> you can use debian 7 packages on debian 8, fwiw
[13:46:02] <StephenLynx> maintained by lord knows who
[13:46:13] <Derick> but from the OS user, the mongodb repo is 3rd party
[13:46:26] <StephenLynx> v:
[13:46:35] <StephenLynx> potato, pottohoes
[13:53:31] <eniac_petrov> StephenLynx, cheeser, the repository is empty since two days
[13:53:43] <eniac_petrov> http://repo.mongodb.org/apt/ubuntu/
[13:54:13] <kurushiyama> eniac_petrov: http://repo.mongodb.org/apt/ubuntu/dists/trusty/mongodb-org/3.2/multiverse/binary-amd64/
[13:54:26] <kurushiyama> eniac_petrov: Not so empty from my point of view.
[13:54:40] <cheeser> eniac_petrov: http://repo.mongodb.org/apt/ubuntu/dists/precise/mongodb-org/
[13:54:46] <cheeser> yeah. there's stuff there.
[13:55:09] <eniac_petrov> hmmm
[13:55:12] <eniac_petrov> strange
[13:55:16] <StephenLynx> >empty since two days
[13:55:17] <eniac_petrov> I made something..
[13:55:17] <StephenLynx> http://i.imgur.com/olB0hX5.gif
[13:55:20] <eniac_petrov> ok, thanks :D
[13:55:35] <eniac_petrov> hahaha
[13:55:49] <StephenLynx> i don't think I ever used any distro's default repositories.
[13:55:55] <kurushiyama> eniac_petrov: What did you make? What is the problem you experience (aside from using Ubuntu)?
[13:55:59] <StephenLynx> kek
[13:56:57] <eniac_petrov> kurushiyama, my puppet script just broke. I'll see what happened with ssh
[13:59:25] <kurushiyama> Well, no easy answer for that. You have to debug. Too bad. But please be so kind to let us know what happened.
[14:30:51] <eniac_petrov> kurushiyama, The problem is that there are no source packages
[14:30:57] <eniac_petrov> I'll just remove the sources option
[14:30:58] <eniac_petrov> W: Failed to fetch http://repo.mongodb.org/apt/ubuntu/dists/trusty/mongodb-org/3.2/Release  Unable to find expected entry 'multiverse/source/Sources'
[14:31:19] <kurushiyama> Oh.
[14:31:25] <kurushiyama> Interesting.
[14:31:38] <kurushiyama> eniac_petrov: ^ Thank you for the information!
[14:31:59] <eniac_petrov> :)
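[For anyone hitting the same failure: the repo serves binary packages only, so the fix is exactly what eniac_petrov did, drop the `deb-src` entry and keep only the binary line. The list-file path below is the conventional one from MongoDB's install instructions:]

```
# /etc/apt/sources.list.d/mongodb-org-3.2.list
# Binary packages only -- a deb-src line here triggers
# "Unable to find expected entry 'multiverse/source/Sources'".
deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.2 multiverse
```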
[15:04:35] <adnkhu> hi
[15:04:56] <adnkhu> I was just exploring replication using the java driver
[15:05:19] <adnkhu> and i have been having real difficulty getting it to work
[15:05:28] <adnkhu> i have a 3 instance replica set
[15:05:31] <adnkhu> with regular nodes
[15:05:46] <adnkhu> in my java program i am constantly inserting values in the db
[15:06:00] <adnkhu> now if i stop the primary
[15:06:23] <adnkhu> the java program just throws an exception and exits
[15:06:35] <adnkhu> whereas i can see using the mongo shell
[15:06:40] <adnkhu> that an election has taken place
[15:06:46] <adnkhu> and a new primary has been selected
[15:06:58] <adnkhu> i have put it on the forums as well
[15:07:11] <adnkhu> but have not gotten any response yet
[15:10:44] <kurushiyama> adnkhu: The driver notifies you that there is no primary atm. You still have to handle this situation, for example wait an acceptable time and redo your statement.
[15:13:35] <adnkhu> but when i do rs.stepDown() it waits for a little while and continues inserting
[15:13:41] <adnkhu> without throwing any exceptions
[15:13:58] <adnkhu> shouldn't the driver handle that
[15:14:02] <adnkhu> ?
[15:14:07] <Derick> it can't
[15:14:10] <kurushiyama> adnkhu: Not in my book.
[15:14:16] <Derick> it can't decide whether it should re-run the insert, or not
[15:14:24] <Derick> it might be time sensitive from the app's point of view
[15:14:37] <kurushiyama> Or unwanted at all.
[15:14:55] <adnkhu> I am just trying to understand the difference to the app in case of primary shutdown and rs.stepdown()
[15:15:26] <adnkhu> in case of rs.stepdown() the driver waits for the elections to finish
[15:15:31] <adnkhu> and then continue inserting
[15:15:57] <adnkhu> completely independent of the application
[15:16:32] <adnkhu> whereas in the case of me stopping the primary process it throws an exception and exits
[15:16:57] <Derick> adnkhu: change of primary means all connections die
[15:17:17] <adnkhu> yes i can see that in the logs
[15:17:43] <adnkhu> but how does the driver continue in case of rs.stepdown()
[15:17:56] <adnkhu> i can share the code if someone can help
[15:18:09] <kurushiyama> adnkhu: rs.stepDown() is meant to (sort of) gracefully make a primary a secondary, for example for maintenance work. Shutting down a primary is equivalent to a failover situation.
[15:19:37] <adnkhu> what would i need to do in code in case of hard failure?
[15:19:58] <adnkhu> as i have already provided all the three mongods in the client connection
[15:20:31] <adnkhu> MongoClient client = new MongoClient(Arrays.asList( new ServerAddress("AKhurramL2", 27017), new ServerAddress("AKhurramL2", 27018), new ServerAddress("AKhurramL2", 27019)));
[15:20:50] <cheeser> all on one machine?
[15:20:54] <adnkhu> yes
[15:20:55] <kurushiyama> adnkhu: As said above: catch the exception, and then do what you deem appropriate. Some people simply wait for some time and try to redo the operation, others check the replica set status and act according to the situation, the next might just ignore it.
[15:20:56] <adnkhu> just testing
[15:21:54] <adnkhu> my confusion comes from the mongodb courses
[15:22:00] <adnkhu> as in one of the videos
[15:22:16] <adnkhu> the instructor said
[15:22:19] <adnkhu> something like
[15:22:33] <cheeser> please finish a thought before hitting enter...
[15:23:05] <adnkhu> whats the point of the replication on mongods if the java driver can't see that and can't handle it
[15:23:25] <cheeser> the java driver can and does
[15:23:25] <Derick> it can see it, it just makes you chose what to do about it
[15:23:36] <Derick> it's not only the java driver btw - it's all of them IIRC
[15:23:36] <cheeser> what part do you think it doesn't handle?
[15:25:07] <adnkhu> @cheeser if i have given all three while connecting shouldn't it be able to see the elections and use the new primary
[15:25:17] <cheeser> it will
[15:26:41] <adnkhu> if i do rs.stepDown() it kinda waits like its buffering some inserts or something and then continues which is what i thought it should do in case of hard failure as well
[15:26:56] <cheeser> what waits?
[15:27:21] <adnkhu> my java program i can't get some logs from it to show here just give me a sec
[15:27:26] <adnkhu> i can
[15:27:54] <kurushiyama> adnkhu: The driver is notified of the imminent step down and the elections in case of an rs.stepDown(), iirc.
[15:28:06] <cheeser> well, the driver will wait for the election to finish and discover the new primary...
[15:29:19] <adnkhu> ok thats what i wanted to know, just so that i understand it correctly, in case of rs.stepDown() java driver is notified and it waits for the election, in case of hard failure i will have to connect to the replica set again as connection is lost
[15:29:32] <cheeser> right
[15:30:01] <adnkhu> thank you for your time
[15:30:55] <adnkhu> just another quick question, if i do rs.stepDown() on primary and then let the program continue for 2 3 mins and then do rs.stepDown() on the new primary as well then again it throws an exception and exits
[15:31:07] <adnkhu> any thoughts on what is happening in this case
[15:33:17] <kurushiyama> cheeser: Do you have to reconnect? Thought you only have to make sure that election is done...
[15:34:25] <cheeser> no, no need to reconnect since the connection is still valid.
[15:37:32] <kurushiyama> cheeser: Just checked. Might have something to do with the fact that I was connecting via mongos and there were multiple redundant application servers involved pulling in messages and saving them to MongoDB. Failing simply did not remove the message from the queue, hence I never had to check the mongos connection, really.
[15:39:02] <adnkhu> works and the connection remains valid, thanks guys
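[The catch-and-retry pattern kurushiyama describes (wait out the election, then redo the operation) can be sketched like this; `op` stands in for whatever driver call failed, it is not a specific driver API:]

```javascript
// Retry an operation a bounded number of times, pausing between attempts,
// instead of letting a failover exception kill the program.
async function withRetry(op, attempts = 5, delayMs = 100) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err; // e.g. "no primary" while an election is in progress
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // still failing after all attempts: surface the error
}
```

[Whether to retry at all is the application's call, as Derick notes above: a redone insert may be unwanted or time-sensitive.]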
[16:32:32] <crazyphil> is there any way to get mongos to quiet down the number of messages it generates in /var/log/messages?
[16:33:01] <kurushiyama> crazyphil: If it logs requests, you should have a _very_ close look on them.
[16:35:22] <crazyphil> kurushiyama: here's an example of what I'm seeing: http://pastebin.com/TyjPDM03
[16:36:16] <crazyphil> I'm seeing about 10k messages every 15 minutes from mongos
[16:36:47] <kurushiyama> crazyphil: Use syslog, rotate to your needs.
[16:37:13] <crazyphil> so those are all normal?
[16:39:19] <kurushiyama> crazyphil: well, there seems to be a problem with your cluster balancing... ;)
[16:39:57] <crazyphil> oh great, more crap to fix
[16:40:34] <crazyphil> but all those connection messages - is there a way to stop it from sending those at least?
[16:41:30] <kurushiyama> crazyphil: Not sure. I never bothered, since imho, using logfiles instead of syslog should be punishable by memory-scrambling or insta-segfault anyway.
[16:43:40] <crazyphil> those messages are from /var/log/messages
[16:44:10] <crazyphil> they get shipped via rsyslog into kafka, where logstash pulls, processes and pushes them into elasticsearch
[16:44:35] <crazyphil> so now ES is telling me I have a lot of messages coming from things running mongos
[17:03:42] <kurushiyama> crazyphil: I am not quite sure I understand you. You should be able to ignore unwanted messages in your syslog setup.
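[One way to do that with rsyslog (v7+ RainerScript): drop the connection-churn lines before they are forwarded to kafka. The match strings are assumptions; check them against the actual messages.]

```
# /etc/rsyslog.d/30-mongos-noise.conf
# Discard mongos connection open/close chatter; everything else still flows on.
if $programname == "mongos" and
   ($msg contains "connection accepted" or $msg contains "end connection")
   then stop
```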
[17:07:05] <sumobob> How can I execute a geoJson $near query if the $maxDistance is stored in the model?
[17:22:53] <kurushiyama> sumobob: You want to have your query done based on the _result_?
[17:32:46] <hardwire> any word on a 10gen go lib?
[17:33:22] <kurushiyama> mgo
[17:33:54] <kurushiyama> hardwire: Not by 10gen, but they are using it for the cli tools. Go figure.
[17:34:22] <hardwire> I saw that...
[17:34:27] <hardwire> mgo seems aaight.
[17:34:38] <hardwire> I'll have to give it a spin
[17:34:41] <cheeser> we sponsor that development
[17:34:50] <kurushiyama> hardwire: Alright? As far as I am aware, it is the fastest available driver.
[17:35:01] <hardwire> yeh.. aaight :)
[17:35:04] <cheeser> in go, at least. ;)
[17:35:23] <cheeser> if you ask Derick, he'll tell you the new php/hhvm driver is the fastest.
[17:35:27] <hardwire> is Dial a common function in other libs? Maybe I've been too pymongo for a while.
[17:35:37] <cheeser> i think it's a Go thing
[17:35:39] <kurushiyama> hardwire: It is a Go idiom
[17:35:47] <hardwire> ah.. aaight.
[17:36:17] <cheeser> because since go the language is a 70s throwback, why not resurrect decades old imagery, too? :D
[17:36:23] <kurushiyama> cheeser: I would not argue against that. But then, I... would not use php.
[17:36:52] <hardwire> Go has been off my radar until recently.
[17:37:14] <kurushiyama> cheeser: Huh? Didn't know that you dislike Go. Personally, it feels like a rather big advancement, compared to some other languages around nowadays.
[17:37:30] <cheeser> go is ok. the tooling around it is terrible and there is no consistent way to build apps.
[17:37:36] <hardwire> agreed
[17:37:45] <hardwire> I'm not a huge fan of the build environment.
[17:37:47] <cheeser> i did a project in Go here.
[17:37:50] <kurushiyama> Disagrees. Submodules do fine.
[17:37:57] <cheeser> changed the build bash script to use make. :)
[17:38:48] <kurushiyama> (git submodules, that is)
[17:39:59] <cheeser> i'll have to look in to that approach. that is, if i do any more go work.
[17:40:33] <kurushiyama> cheeser: niemeyers gopkg.in approach, combined with git-flow is what I tend to use, however.
[17:40:41] <hardwire> I've gotten pretty comfy with pip. It'll be a while until I feel more comfortable with using git submodules to lock things down.
[17:40:58] <kurushiyama> cheeser: If I do not have any dependencies outside stdlib, that is.
[17:41:18] <kurushiyama> cheeser: You'd put the submodules into the vendor folder, ofc.
[17:42:05] <cheeser> i've never used git modules either.
[17:42:11] <kurushiyama> hardwire: Well, there is godeps as an alternative, for example. Or gopkg.in, if you develop your own libs.
[17:42:21] <cheeser> godeps is the approach we use.
[17:42:34] <kurushiyama> cheeser: I just recently discovered them, myself. Probably I should write something together.
[17:42:48] <hardwire> lol
[17:42:51] <sumobob> kurushiyama: yes :(
[17:43:00] <cheeser> yip yip!
[17:43:06] <sumobob> i am filtering the results right now
[17:43:07] <hardwire> flying bison time?
[17:43:24] <cheeser> i wish (kudos for catching the reference :D )
[17:43:40] <sumobob> but I am not a mongo expert, is there like a subquery or something I could use?
[17:44:02] <kurushiyama> hardwire: Flying bisons? His noodly appendage should make that impossible... ;)
[17:44:09] <tokam> Are the com.mongodb packages supported? I'd like to know how to work with the WriteResult object of a remove call
[17:44:29] <kurushiyama> sumobob: That has nothing to do with MongoDB. You want to do a query based on the result.
[17:45:05] <kurushiyama> sumobob: Like defining query parameters based on documents you want to find with that query.
[17:45:21] <sumobob> yeah impossible then
[17:45:25] <sumobob> thats what i figured
[17:45:37] <kurushiyama> sumobob: Maybe you should explain the actual use case.
[17:46:05] <sumobob> basically you enter the distance you are willing to travel, when someone looks for people it needs to bring in anyone who will travel to their location
[17:46:14] <sumobob> I've got it all in there as geoJSON point
[17:47:03] <kurushiyama> sumobob: Usually, you have a _known_ distance. Let's say "find all Hardee's in a radius of 20k miles" (which is the distance I am starting to accept to get a Monster Thickburger).
[17:47:39] <sumobob> gotcha, when you signup you select your travel range from an array of 5, 10, 20, 30
[17:48:00] <sumobob> could I just do 4 queries and aggregate the results?
[17:48:19] <kurushiyama> sumobob: Nah, not quite. So a document would describe a user, his location and the distance he or she would be willing to travel?
[17:48:28] <sumobob> exactly
[17:49:12] <sumobob> the user who searches has the same schema and I'm doing a $near: with their location to at least sort the results
[17:54:39] <kurushiyama> sumobob: As per logic: if the user doing the search is willing to travel the distance between the users, isn't the problem already solved? The user doing the search is required to travel, then. No?
[17:55:38] <sumobob> the user who is searching needs to find people who will come to them
[17:56:00] <kurushiyama> sumobob: Ok. Hm.
[17:56:31] <sumobob> yeah its tricky, the way i have it now I just calculate the distance from each person to the user, then i filter that array based on the travel_range
[17:56:34] <sumobob> and it works
[17:56:40] <sumobob> but as you can see this will not scale
[17:57:14] <kurushiyama> I might have an idea. Gimme a few.
[17:59:23] <kurushiyama> Yup, should work: You could do an aggregation, use https://docs.mongodb.com/manual/reference/operator/aggregation/geoNear/, put out the distance as a field, then do a redact comparing distance and the returned docs "willingness to travel this distance" field.
[18:02:20] <sumobob> awesome that looks great, whats a redact?
[18:03:25] <sumobob> nvm see it in the docs
[18:03:29] <sumobob> kurushiyama :+1:
[18:04:15] <kurushiyama> sumobob: Aggregation pipeline is probably the feature in MongoDB I love the most.
[18:07:18] <kurushiyama> sumobob: But to answer your question: it is another aggregation pipeline stage command.
[18:07:51] <sumobob> so it looks like I do a $geoNear with distanceField: 'calculated_distance', then a $redact: { $cond: { if: { $gt: ['calculated_distance', 'travel_range'] }, then: '$$PRUNE', else: '$$DESCEND' } }
[18:10:14] <kurushiyama> sumobob: Looks about right. Not using redact too often, I am not sure about $$DESCEND, since you are not going to eval anything else. So $$KEEP might be better, here. Simply try.
[18:11:14] <sumobob> https://gist.github.com/RobAWilkinson/deca5bb7f41755f199599847b5f58852
[18:12:16] <kurushiyama> sumobob: Sorry, have to prep dinner. Simply try. If in doubt, remove the $redact stage first, to make sure the output of the first stage is correct.
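[A sketch of the pipeline being discussed, with the fixes kurushiyama hints at: `$geoNear` must be the first stage, the `$redact` field references need `$` prefixes (`'$calculated_distance'`, not `'calculated_distance'`), and `$$KEEP` replaces `$$DESCEND`. The coordinates are placeholders; `travel_range` and `calculated_distance` are the field names from the conversation:]

```javascript
const pipeline = [
  {
    // Must be the first stage; computes the distance into calculated_distance.
    $geoNear: {
      near: { type: "Point", coordinates: [-73.99, 40.73] }, // placeholder point
      distanceField: "calculated_distance",
      spherical: true,
    },
  },
  {
    // Drop anyone whose willingness-to-travel is less than the distance.
    $redact: {
      $cond: {
        if: { $gt: ["$calculated_distance", "$travel_range"] },
        then: "$$PRUNE",
        else: "$$KEEP",
      },
    },
  },
];
```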
[18:22:24] <MacWinner> hi, i'm trying to get a better idea of how deleted files in gridfs work (With wired tiger).. if I delete a file, then all the chunks are deleted, that space is not reclaimed I know.. however, if a new file is added, are the new chunks inserted into spots where the old chunks were? does the new file need to be exactly the same size?
[18:25:46] <kurushiyama> MacWinner: Basically, GridFS is only a convention.
[18:26:09] <kurushiyama> MacWinner: With all other respects, it behaves exactly the same as the underlying storage engine.
[18:26:15] <MacWinner> yeah.. I think the only reason I mention it is because by convention the chunk sizes are fixed
[18:26:35] <kurushiyama> MacWinner: As is the doc size ;)
[18:26:56] <cheeser> the max doc size anyway
[18:27:47] <kurushiyama> cheeser: Huh? A gridfs chunk of lets say 1kb does not take up 16MB, does it?
[18:27:52] <MacWinner> so if after deleting a 1MB file from GridFS (which deletes the fs.files and fs.chunks documents) then I add a 2MB file to GridFS.. will the original 1MB be reclaimed/used?
[18:28:14] <kurushiyama> MacWinner: Nafaik
[18:28:31] <kurushiyama> MacWinner: because the chunk doc would not fit the old location.
[18:28:52] <cheeser> kurushiyama: no, it doesn't
[18:29:23] <kurushiyama> cheeser: I would have been more than a little surprised.
[18:29:40] <cheeser> whether a particular patch of disk space gets reused is a function of document padding and the storage engine.
[18:33:05] <kurushiyama> Hm, which, as per "documents are never fragmented" paradigm would make it impossible for 1MB space to hold that new 2MB chunk doc, and hence the datafile would get expanded, if I am not mistaken. Well, with wT, compression might be of interest, too.
[18:42:04] <MacWinner> cheeser, sorry, could you give me an example of where the patch of disk would get reused?
[18:42:21] <MacWinner> or maybe a pointer to a doc that explains some examples for better understanding
[18:45:25] <cheeser> space on the disk is allocated based on availability: either in an already allocated slab on disk, or after the storage space is extended.
[18:45:58] <cheeser> mmapv1 won't return disk space to the OS (thus continuing to grow and reuse as needed) without running an explicit repair()
[18:46:13] <kurushiyama> MacWinner: https://www.mongodb.com/presentations/a-technical-introduction-to-wiredtiger
[18:46:16] <cheeser> WT will release unused space back to the OS when possible
[18:46:47] <MacWinner> thanks!
[18:47:13] <MacWinner> so basically with wiredtiger, I don't need to concern myself with massive inefficiencies if I have a lot of creation/deletion of files in gridfs?
[18:47:32] <MacWinner> kurushiyama, thanks. I'll check it out!
[18:47:49] <kurushiyama> MacWinner: not as far as disk space is concerned, as far as I know.
[18:48:18] <kurushiyama> MacWinner: One could question the application's efficiency if you have a lot of deletes ;)
[18:48:24] <cheeser> right. unused extents are returned to the OS.
[18:49:50] <MacWinner> gracias
[18:51:53] <kurushiyama> cheeser: To get this straight in my head: Said returning of unused extents only applies to extents at the end of the datafile, right?
[19:20:10] <cheeser> kurushiyama: what is the "end" of a collection of files?
[19:38:03] <kurushiyama> lets say we have like 5 blocks X|X|X|X|X, where X denotes a used MB in the data file. Now, if a file is deleted, say we have X|0|X|X|X. The 0 part would not be returned, if I get it right. Space would only be returned if the data at the end of the file would be deleted, resulting in X|X|X|X|0. Then, it would be "truncated" to X|X|X|X. But that is just a theory.
[19:39:58] <kurushiyama> cheeser: ^
[19:40:17] <cheeser> oh, sure.
[19:40:39] <cheeser> the exact mechanics of returning such space i leave to any docs on the storage engine of your choice.
[19:40:58] <cheeser> a repair will consolidate space and eliminate empty gaps
[19:41:18] <kurushiyama> cheeser: Not that I care too much. wT seems to be highly efficient regarding disk space, and I do not use mmapV1 any more ;)
[19:41:25] <cheeser> same here
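[The commands behind that last exchange, written as the command documents they would be passed to `db.runCommand()` as; `fs.chunks` is assumed from the GridFS discussion. Both operations block while they run, so they are maintenance-window material:]

```javascript
// repairDatabase: MMAPv1-era whole-database rewrite that consolidates gaps.
const repairCmd = { repairDatabase: 1 };

// compact: rewrites a single collection; under WiredTiger it can release
// free space in that collection's file back to the OS.
const compactCmd = { compact: "fs.chunks" };
```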
[19:42:08] <Sagar> anyone here?
[19:45:57] <hardwire> cheeser: you made me say "yip yip" to my dog.. who refused to get off my couch.
[19:47:12] <Sagar> anyone can help me
[19:47:27] <Sagar> i used to have php5 and my web apps use legacy driver functions like MongoClient and everything
[19:48:18] <Sagar> now i am on php7, is there any library or ext i can use to make the old web apps work
[19:49:26] <hardwire> Sagar: does phpinfo() show you have php_mongo?
[19:49:41] <Sagar> its has the new driver
[19:49:46] <hardwire> ah
[19:49:49] <Sagar> which i installed using pecl install mongodb
[19:50:02] <hardwire> sounds like you have a lot of work to do :)
[19:50:05] <hardwire> I bet it's new for a reason
[19:50:18] <hardwire> php7 is a big jump anyways
[19:50:22] <Sagar> yes
[19:50:35] <Sagar> isn't there any library or ext i can use with php7 to make the old ones work?
[19:50:39] <cheeser> hardwire: i say that to my kids all the time. ;)
[19:50:49] <hardwire> cheeser: what php are your kids running on?
[19:50:58] <cheeser> "yip yip"
[19:50:58] <hardwire> oh.. yip yip.
[19:50:58] <hardwire> nm
[19:51:00] <cheeser> :D
[19:51:43] <hardwire> Sagar: http://php.net/manual/en/class.mongoclient.php <- no worky?
[19:51:56] <hardwire> that's pecl mongo (not mongodb)
[19:52:25] <Sagar> yes
[19:52:36] <Sagar> php 5.x has mongoclient pecl mongo
[19:52:38] <hardwire> looks like the newer version is preferred.
[19:52:46] <Sagar> php 7.x has new pecl mongodb
[19:52:50] <Sagar> now i am on php 7.x
[19:52:50] <hardwire> pecl:mongodb vs pecl:mongo
[19:53:07] <hardwire> I imagine you should be able to install pecl:mongo ?
[19:53:09] <Sagar> how can i make pecl mongo work on new
[19:53:11] <hardwire> if not.. you're hozed.
[19:53:24] <Sagar> pecl/mongo requires PHP (version >= 5.3.0, version <= 5.99.99), installed version is 7.0.4-7ubuntu2
[19:54:37] <hardwire> Sagar: I'm guessing you'd only need to spend half a day replacing everything
[19:54:39] <hardwire> Give it a try.
[19:54:58] <Sagar> I think not
[19:55:05] <hardwire> I can tell you right off the bat.. php7 looks like a pile of guano.
[19:55:07] <cheeser> heh
[19:55:09] <Sagar> 3K lines of code, more than 40 files,
[19:55:16] <cheeser> bat. guano. well done, sir.
[19:55:20] <hardwire> Sagar: wut! That's why we love sed.
[19:55:37] <hardwire> Sagar: pay me .. $1m.. and I'll spend 2 afternoons on this.
[19:55:40] <Sagar> hardwire, isn't there any lib or ext i would use to run the old code?
[19:55:52] <hardwire> Sagar: in truth. I don't use PHP. I just like randomly supporting people.
[19:56:14] <Sagar> :|
[19:56:14] <hardwire> I use the power of what should be obvious and attempt to distill it down to IRC context.
[19:56:16] <Sagar> cheeser?
[19:56:27] <cheeser> Sagar: oui?
[19:56:31] <hardwire> It seems like there were big changes you're being forced into.
[19:56:32] <Sagar> can u help?
[19:56:38] <Sagar> hardwire: yes :/
[19:56:39] <cheeser> i don't php
[19:56:41] <hardwire> OH FINE! I'm of no use now!
[19:56:54] <hardwire> Like I said. $1m USD. Negotiable.
[19:56:54] <Sagar> how are u an op in here then? :3
[19:57:08] <hardwire> cheeser: did you cheet on your op test?
[19:57:09] <Sagar> $1m USD :3
[19:57:10] <cheeser> because this is a mongo channel and not a php channel?
[19:57:14] <Sagar> i woulld pay $1
[19:57:20] <Sagar> no m
[19:57:22] <hardwire> $1k?
[19:57:26] <Sagar> lel
[19:57:27] <hardwire> see.. what a savings!
[19:57:32] <Sagar> $1
[19:57:36] <hardwire> g?
[19:57:48] <Sagar> :|
[19:57:50] <hardwire> k and g appear to be the same in USD
[19:58:14] <hardwire> I would love a kilodollar or gigadollar.
[19:58:21] <kurushiyama> Sagar: Just out of curiosity: Why did you change to php7, the first place?
[19:58:23] <hardwire> Help me get my truck fixed!
[19:59:34] <Sagar> migrating server
[19:59:43] <Sagar> DS1 in France to DS2 in USA
[19:59:48] <Sagar> DS1 = Ubuntu + PHP5.x
[19:59:57] <Sagar> DS1 = Ubuntu 14.04 + PHP5.x
[20:00:07] <Sagar> DS2 = Ubuntu 16.04 + PHP7.x
[20:00:15] <Sagar> DS = Dedicated Server
[20:00:27] <hardwire> You sir.. need a server admin
[20:00:29] <hardwire> not a library.
[20:00:33] <hardwire> or maam
[20:00:39] <Sagar> Sir
[20:00:46] <hardwire> I'll go with 'there'
[20:00:47] <hardwire> you there.
[20:00:55] <kurushiyama> Sagar: Sysadmins have an adamantium rule "Never touch a running system". If 14.04 and php5 work, clone that setup.
[20:00:56] <Sagar> :3
[20:01:03] <Sagar> :|
[20:01:28] <hardwire> I need a tattoo.
[20:01:31] <hardwire> Full Stack 4 Life
[20:04:01] <Sagar> :|
[20:04:38] <kurushiyama> Sagar: Would you have any advantage of using 16.04 _now_?
[20:05:02] <hardwire> well it's not difficult to throw another version of PHP on 16.04.
[20:05:24] <hardwire> butchagotta compile it.
[20:05:25] <Sagar> kurushiyama: LTS
[20:05:29] <Sagar> 14.04 is coming to an end
[20:05:35] <kurushiyama> Sagar: whut?
[20:05:40] <hardwire> Sagar: it's not ended
[20:05:41] <hardwire> at all
[20:05:45] <Sagar> It will soon
[20:05:47] <hardwire> that's silly talk.
[20:06:02] <Sagar> Also, the major point is to opt with new stable updates
[20:06:06] <kurushiyama> Sagar: 2019
[20:06:19] <kurushiyama> Sagar: Thats not exactly "soon"
[20:06:22] <hardwire> Sagar: I'm not entirely convinced it'd be noticeable what those updates are.
[20:06:24] <Sagar> can this help me? https://github.com/mongofill/mongofill
[20:06:32] <kurushiyama> Sagar: In IT terms, 3 years translates to "ages"
[20:06:59] <hardwire> Sagar: stick with 14.04 LTS and PHP5 if that's what works.. it'd be folly to do anything else if you're up against a big rewrite.
[20:07:10] <hardwire> now if PHP5 is coming to an end.. that's different.
[20:08:00] <Sagar> It already has u can see
[20:08:28] <Sagar> This package has been superseded, but is still maintained for bugs and security fixes.
[20:08:39] <kurushiyama> Sagar: Let me sum it up: You'd rather do an "on-the-fly" rewrite of the persistence part of your application (hence most likely unplanned and untested) to use a version of the OS with no more of an advantage than "new updates" than to use an OS which will be supported until 2019, which is proven to work for you?
[20:10:09] <kurushiyama> Sagar: If I were you, I'd do the migration to a known good platform, plan a migration to php7 then and if you have done your rewrite and tested it, plan an according update of the server OS. Just a suggestion.
[21:27:27] <shlant> hi all. Can you LOWER log verbosity? as in only log warning/errors?
[21:27:46] <kurushiyama> adnkhu: You might want to either answer your own question on dba.so or remove it.
[21:28:26] <kurushiyama> shlant: Sure. Use syslog and configure it accordingly. Writing to logfiles directly is devil's work, anyway.
[21:34:06] <shlant> kurushiyama: yea I am currently running it through fluentd to ES, so I was asking if it can be done at the mongo level, but I guess I will have to do it at the fluentd level
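For reference on shlant's question: mongod's own verbosity levels (0-5) only add debug detail on top of the informational default, they cannot filter messages down to warnings/errors at the source, which is why kurushiyama points at syslog and shlant falls back to fluentd. A hedged mongod.conf sketch of the two knobs that do exist:

```yaml
# mongod.conf sketch -- verbosity (0-5) only ADDS debug output; real
# warning/error-only filtering has to happen downstream (syslog rules,
# or a fluentd filter as in shlant's setup).
systemLog:
  destination: syslog   # hand log routing/filtering off to syslog
  quiet: true           # suppresses some routine connection/command noise
```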
[21:44:21] <Derick> btw - for people using PHP 7 and not wanting to spend a lot of time upgrading from ext-mongo to ext-mongodb, there is: https://packagist.org/packages/alcaeus/mongo-php-adapter
[21:50:45] <kurushiyama> Derick: Thanks. Bookmarked for giving it as a hint.
[21:54:12] <Derick> no prob :)
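Installing the adapter Derick links is a two-step affair, since it sits on top of the new low-level driver. A sketch (package names from the links above; your PECL/Composer setup may differ):

```shell
# Sketch: run legacy ext-mongo code on PHP 7 via the compatibility adapter.
pecl install mongodb                          # the new low-level ext-mongodb driver
composer require alcaeus/mongo-php-adapter    # ext-mongo API on top of ext-mongodb
```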
[22:15:19] <irdan> I need to set one of the nodes in my mongodb cluster as primary, and I don't know the password to any mongodb admin account
[22:15:34] <irdan> I have root access on every node in the cluster, but the initial setup of the cluster wasn't well documented
[22:16:19] <irdan> can anyone tell me how to reset the admin account or create a new one with admin privs?
[22:17:21] <kurushiyama> irdan: replset or sharded cluster?
[22:17:31] <irdan> kurushiyama: replset
[22:17:49] <kurushiyama> irdan: Why do you need to set the primary manually?
[22:17:53] <irdan> auth is turned off in the mongo config, but I can't even list the users of the db
[22:17:59] <irdan> because it's not doing it by itself
[22:18:07] <irdan> every node in the cluster is secondary
[22:18:17] <kurushiyama> irdan: That has a reason.
[22:18:35] <kurushiyama> irdan: A primary gets elected automatically, unless something prevents that.
[22:18:56] <kurushiyama> irdan: And this something usually has nothing to do with authentication.
[22:19:44] <kurushiyama> irdan: furthermore, the auth info is not necessarily stored in the same DB as the one you want to access.
[22:20:02] <kurushiyama> irdan: Usually, administrative users are stored in the database admin
[22:20:26] <kurushiyama> irdan: so you have to do "use admin" and check for users there.
[22:20:37] <kurushiyama> irdan: There, you can reset the password.
[22:21:09] <kurushiyama> irdan: But still there is a reason why all of your nodes reverted to secondary. Can you pastebin the output of rs.status()?
[22:25:19] <Boomtime> @irdan: what are you going to do to "set one of the nodes ... as primary"?
[22:25:38] <kurushiyama> irdan: Sorry, got disconnected.
[22:26:21] <irdan> kurushiyama: all of my nodes are secondary, and rs.status gives me "errmsg" : "not authorized on admin to execute command { replSetGetStatus: 1.0 }",
[22:28:08] <kurushiyama> irdan: Gimme a sec
[22:29:42] <irdan> kurushiyama: thanks, I'm digging through the application logs right now.
[22:29:59] <irdan> and by app logs, I mean mongodb logs
[22:30:31] <kurushiyama> irdan: Ok, here is how I would reset the password
[22:31:15] <kurushiyama> irdan: Disclaimer: READ FIRST and NO GUARANTEES
[22:31:27] <kurushiyama> irdan: Stop all instances but 1
[22:31:43] <kurushiyama> restart that one instance, make sure auth is disabled
[22:31:52] <kurushiyama> dig up the database
[22:32:00] <kurushiyama> holding the auth info
[22:32:11] <kurushiyama> change the credentials
[22:32:36] <kurushiyama> restart with auth enabled.
[22:32:44] <kurushiyama> Restart other nodes.
[22:34:21] <irdan> kurushiyama: what do you mean by 'dig up the database'?
[22:34:59] <kurushiyama> irdan: Actually, every database can hold any credential, depending on your version.
[22:35:56] <kurushiyama> irdan: but "use admin" is a safe bet ;)
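kurushiyama's steps, collected into one sketch. His disclaimer applies doubly here: READ FIRST, NO GUARANTEES. The paths, the replica set name `rs0`, and the user name "admin" are placeholders to adapt:

```shell
# Password-reset sketch for a replica set member (assumptions: paths, rs0, "admin").
# 1. Stop every member except one; restart that one with auth DISABLED
#    (no --auth flag / security.authorization left out of the config):
mongod --dbpath /var/lib/mongodb --replSet rs0 --fork --logpath /var/log/mongodb/mongod.log

# 2. Dig up the database holding the credentials ("use admin" is the safe bet),
#    inspect the users, and reset the password:
mongo admin --eval 'printjson(db.getUsers())'
mongo admin --eval 'db.changeUserPassword("admin", "newStrongPassword")'

# 3. Restart this member with auth enabled, then restart the other members.
```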
[22:36:44] <irdan> thanks kurushiyama
[22:36:57] <irdan> I looked in the mongo logs and noticed that right after restarting I see: [IndexRebuilder] ERROR: mmap private failed with out of memory. (64 bit build)
[22:37:52] <irdan> and I definitely have 20+GB ram free and almost a T of free disk on that host
[22:38:05] <Lonesoldier728> Hey how do I make this query work... .find({$or: {spotlight: {$exists: false}}, {spotlight: 0}, {spotlight: null}})
[22:38:38] <kurushiyama> irdan: Well, I can not say much here. Remote debugging is kind of hard... ;)
[22:38:42] <Lonesoldier728> Trying to say give me back anything that is not spotlight: 1 might be easier
[22:39:08] <kurushiyama> Lonesoldier728: Well, there is a $not operator...
[22:39:26] <irdan> kurushiyama: hehe, no problem. thanks for your help. If I run into a dead end with this I'll try to do what you outlined above
[22:39:26] <Lonesoldier728> $not: spotlight: 1
[22:39:31] <Lonesoldier728> should work kurushiyama?
[22:40:08] <kurushiyama> Lonesoldier728: well, except for a bracket here and there: yes
[22:40:16] <Lonesoldier728> find({$not: {spotlight: 1}}) ? Am I including the wrong bracket
[22:40:30] <kurushiyama> Lonesoldier728: Try ;)
[22:40:49] <Lonesoldier728> I tried with the or gave me a bunch of errors
[22:41:59] <kurushiyama> Lonesoldier728: Such as...
[22:55:40] <Lonesoldier728> let me see trying this out
[22:57:39] <Lonesoldier728> Well that did not work
[22:58:16] <Lonesoldier728> It is returning nothing... now just to give you an idea spotlight could also not exist so is there a way to add or with the not for not exists
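Both attempts above have syntax problems: `$or` takes an array of clauses (not a bare object), and `$not` is not a top-level operator (it wraps a field's operator expression, e.g. `{spotlight: {$not: {$eq: 1}}}`). A sketch of two filters that express "anything that is not spotlight: 1", including the missing-field case Lonesoldier728 asks about, since `$ne` matches documents where the field is absent as well:

```javascript
// Simplest form: $ne matches docs where spotlight is 0, null, any other
// value, OR missing entirely -- all three branches in one clause.
const filter = { spotlight: { $ne: 1 } };

// The attempted $or, with corrected syntax ($or takes an ARRAY of clauses):
const orFilter = {
  $or: [
    { spotlight: { $exists: false } },
    { spotlight: 0 },
    { spotlight: null },
  ],
};

// Either object would then be passed as db.collection.find(filter).
```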
[23:21:46] <jokke> hi
[23:22:09] <jokke> i'm trying to start mongod on a server but it fails without any message and exit code 100
[23:22:13] <logix812> with WiredTiger does the Power of 2 allocation strategy still come into play as it did with MMAP?
[23:23:43] <jokke> here's my config file
[23:23:46] <jokke> https://p.jreinert.com/1NP/
[23:28:46] <Boomtime> @jokke: file permissions - check permissions of your dbpath and other paths
[23:29:34] <Boomtime> @logix812: no
[23:29:36] <Boomtime> -> https://docs.mongodb.com/manual/reference/command/collMod/#usePowerOf2Sizes
[23:29:48] <logix812> Boomtime: thanks!
[23:30:25] <logix812> LOL! the one spot I didn't check...
[23:30:27] <logix812> naturally
[23:31:00] <logix812> I was reading the storage engine docs and a I saw collMod but didn't follow the link
[23:38:23] <jokke> Boomtime: indeed. the dbpath didn't exist
[23:38:40] <jokke> thanks
[23:38:45] <Boomtime> cheers
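Boomtime's diagnosis as a sketch: exit code 100 is mongod's generic startup-error code, and a missing or unwritable dbpath (jokke's case) is the usual culprit. `DBPATH` below is a stand-in built under a temp dir so the snippet is self-contained; substitute the real path from your config (e.g. `/var/lib/mongodb`):

```shell
# Checks for a silent mongod exit with code 100: does the dbpath exist,
# and can the mongod user write to it?
DBPATH="$(mktemp -d)/db"    # stand-in: a configured-but-missing dbpath
if [ ! -d "$DBPATH" ]; then
  echo "dbpath missing: create it and chown it to the mongod user"
fi
mkdir -p "$DBPATH"          # jokke's fix: the directory simply didn't exist
[ -w "$DBPATH" ] && echo "dbpath exists and is writable"
```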
[23:51:43] <jokke> mongos doesn't seem to support multiple bindIp values
[23:51:58] <jokke> at least it's only listening on the first one
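One thing worth checking for jokke's symptom: in this era, `net.bindIp` is parsed as a single comma-separated string, and a YAML list or stray whitespace around the commas can leave only the first address bound. A sketch (addresses are examples):

```yaml
# mongos/mongod net section sketch -- bindIp as ONE comma-separated string,
# not a YAML sequence, and with no spaces after the commas.
net:
  port: 27017
  bindIp: 127.0.0.1,192.0.2.10
```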