#mongodb logs for Saturday the 7th of February, 2015

[00:12:44] <crackmonkey> I've got a really dumb, using-mongo-for-10-minutes kind of question. Can the $gt-style operators not be applied to nested documents?
[00:13:08] <crackmonkey> i want to do something like this: db.metrics.find({ heart: { beats: {$gt: 9} } }) but it gives me 0 results
[00:13:19] <crackmonkey> but if I do an exact search for 10, for example, I get a bunch of docs.
[00:14:59] <joshua> crackmonkey: You can use dot notation so you can do heart.beats
[00:15:34] <crackmonkey> joshua: I just found that, it works. It seems more like a "must" instead of a "can", though.
[00:17:13] <crackmonkey> is there a reasonable technical explanation for that, or just a weirdness of the tool?
[00:20:26] <joshua> It's been a while since I dealt with that kind of thing, so I can't really explain it myself :)
[00:22:11] <crackmonkey> heh, fair enough. thanks for the help.
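
What joshua is pointing at: a nested document literal in a query is an exact match on the whole embedded document, so the $gt inside it is compared literally rather than evaluated; dot notation is what actually reaches into the field. A minimal sketch, using the names from the conversation above:

    // exact-document match: only finds docs where heart is literally {beats: {$gt: 9}}
    db.metrics.find({ heart: { beats: { $gt: 9 } } })

    // dot notation: evaluates $gt against the nested heart.beats value
    db.metrics.find({ "heart.beats": { $gt: 9 } })
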
[00:30:55] <greenie_> Hello. Is Mongo ever used (as good practice) for small databases?
[00:31:27] <greenie_> I have begun using it for a project where the structure of each record can vastly differ, and it seemed much easier to do it with mongo than figuring out a relational schema
[00:32:30] <greenie_> The number of records in each collection, however, isn't likely to climb above 5,000 or so
[02:06:51] <morenoh149> hey all
[02:52:11] <jayjo> I'm using mongoengine to work with mongo, and I've updated my model. I already have documents that now don't adhere to this new change. Do I use 'mongo' to interact with the db (write quick code to add a field based on other fields) or does mongo only provide a cursor?
[02:52:39] <jayjo> Or do I use a driver, say pymongo and do the work directly with that
[02:52:59] <jayjo> At this point it's only double digit documents, but in the future this could be a problem
[03:23:56] <jayjo> I think pymongo is the way to go for this task
[03:58:14] <jayjo> What's an upsert?
[03:58:23] <jayjo> update/insert?
[03:58:52] <cheeser> update if it's there, insert otherwise
[03:59:37] <jayjo> cheeser: thanks. using this method, can I reference my document via self?
[04:00:13] <cheeser> what?
[04:00:52] <jayjo> I want to update a document with a new field that depends on another field. Is there some sort of self equivalent?
[04:01:23] <jayjo> bulk.find({}).upsert({'$set': {'slug': slugify(self.category)}}) is what I'm after I think
[04:02:56] <jayjo> is forEach the only way to do this?
[04:14:30] <cheeser> yeah, you'll have to forEach()
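
A sketch of the forEach() approach cheeser describes, built around jayjo's slug example; the collection name is made up and slugify is a hypothetical helper supplied by the caller:

    // illustrative only: backfill a derived field document by document
    function slugify(s) { return s.toLowerCase().replace(/\s+/g, "-"); }
    db.posts.find({ slug: { $exists: false } }).forEach(function (doc) {
        db.posts.update({ _id: doc._id },
                        { $set: { slug: slugify(doc.category) } });
    });
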
[05:38:58] <Roots47> Hey guys, I just installed mongo db on an ec2 instance, and opened the TCP port 27017, but when I try to connect in my express app via mongoose.connect('mongodb://ec2-52-0-106-233.compute-1.amazonaws.com/dbname'); it fails to connect.
[05:40:00] <morenoh149> Roots47: specify the port number
[05:40:57] <Roots47> morenoh149: same error
[05:41:26] <morenoh149> Roots47: check your firewall for that port
[05:41:49] <morenoh149> test connecting to a mongolab free instance to sanity check
[05:42:29] <cheeser> if your config is set up to bind to 127.0.0.1 or localhost, you won't be able to connect remotely
[05:44:13] <morenoh149> is mongodb suitable for real time chat application?
[05:44:22] <cheeser> sure
[05:44:36] <morenoh149> good
[05:44:53] <morenoh149> was about to use firebase for this project
[05:46:05] <Roots47> morenoh149: ok, it works with a mongolab instance
[05:46:17] <Roots47> cheeser: I think that's it... the default config uses localhost, right?
[05:46:43] <cheeser> correct
[05:49:12] <Roots47> cheeser: cool, just bound ip to 0.0.0.0 now it works :) thx dude.
[05:50:22] <cheeser> np
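
For reference, the change Roots47 made amounts to editing the bind address in the mongod config (ini-style syntax shown, as used by 2.x-era packages; path and exact format may vary). Binding to 0.0.0.0 exposes the server to the network, so it should be paired with a firewall or security-group rule, or authentication:

    # /etc/mongodb.conf (ini-style config of that era)
    bind_ip = 0.0.0.0    # listen on all interfaces instead of localhost only
    port = 27017
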
[05:54:54] <joannac> bye alex, ryder
[06:12:56] <morenoh149> db.zips.aggregate([{ $group: { _id: '$state', population: { $sum: '$pop' }}}])
[06:13:15] <morenoh149> ^ do the square brackets matter?
[06:17:13] <morenoh149> looks like array syntax is optional http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/#db.collection.aggregate but necessary if you choose to also pass an options arg
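
In other words, both invocations below are accepted, but only the array form can carry a second options argument (an illustrative sketch; allowDiskUse requires MongoDB 2.6+):

    // single-stage legacy form, no options possible
    db.zips.aggregate({ $group: { _id: "$state", population: { $sum: "$pop" } } })

    // array form, required when passing options
    db.zips.aggregate([{ $group: { _id: "$state", population: { $sum: "$pop" } } }],
                      { allowDiskUse: true })
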
[08:59:54] <esko> morning, i asked last night about updating a document if the array is empty, could someone help ;-) i'm trying to grasp the basics.
[09:05:02] <joannac> what's the question?
[11:02:16] <Forest> hello
[11:02:55] <Guest66200> may i ask you why mongod still uses resources after i closed the session?
[11:03:21] <joannac> Guest66200: ?
[11:04:04] <Guest66200> joannac: i was inserting data into mongodb, and even after i successfully inserted it and created the indexes, 8 GB of RAM is still used by the mongodb process
[11:04:16] <joannac> how are you determining it is still using resources?
[11:04:47] <Guest66200> when i look at the windows task manager i see it uses 6 GB of RAM
[11:04:54] <joannac> okay
[11:05:18] <joannac> and you think this is unexpected why? the process is still running, is it not?
[11:05:56] <Guest66200> i expected it to just insert data on disk and free up the memory
[11:06:23] <Guest66200> when mongodb is started it doesn't use much working memory
[11:06:33] <Guest66200> and the data is stored in binary form
[11:06:54] <joannac> Guest66200: that's not the way any program works
[11:07:08] <joannac> programs keep as much data in memory as they can
[11:07:51] <Guest66200> joannac: i understand, but why didn't it keep all the data in memory when i started the service?
[11:08:05] <joannac> um
[11:08:34] <Guest66200> it should keep data in memory when it needs to read from the disk, but not when it's idle
[11:08:37] <joannac> because when you start a program, it doesn't go "oh I'm going to take all the memory because I can"
[11:09:10] <joannac> because that's a jerk move
[11:09:29] <Guest66200> so you're telling me that if i inserted more than 15 million records my PC would blow up because i won't have enough memory?
[11:09:34] <joannac> no
[11:09:40] <joannac> are you serious?
[11:09:57] <joannac> maybe you should look up how computers, programs, and OS memory management work
[11:10:28] <Guest66200> i just wonder why i am using mongodb when i could just load all the data into memory using a plain C++ library
[11:10:52] <Guest66200> my point was storing it on disk, because bigger data doesn't all fit in memory, jesus
[11:10:55] <joannac> because you want a database?
[11:13:18] <Guest66200> i just don't get it, that's why we are all learning to use efficient data structures, so we don't have to read all the data into memory to find what we are looking for
[11:13:25] <joannac> correct
[11:13:26] <Guest66200> and you're telling me it's OK that it's all in memory
[11:13:30] <joannac> same with mongodb
[11:13:54] <joannac> reading from memory is faster than reading from disk
[11:13:59] <Guest66200> i just don't get why it's all in memory when i terminated the session
[11:14:13] <joannac> explain what you mean by "terminate the session"?
[11:14:17] <joannac> terminate the shell?
[11:14:23] <Guest66200> yes
[11:14:26] <joannac> okay
[11:14:31] <joannac> so your session has finished
[11:14:49] <joannac> the database server has 2 options
[11:15:10] <joannac> 1. no one will ever want this data again, so I'm going to free this memory right now
[11:15:24] <joannac> 2. maybe the next user will also want this data, i'll keep it in memory
[11:15:53] <Guest66200> okay, and how long is its timer before it realizes it should free it up?
[11:15:56] <joannac> never
[11:16:13] <Guest66200> isn't there any command i can use to force it to?
[11:16:38] <joannac> if another application says "i need memory", the OS will go "well that mongod process hasn't used its memory in a while, I'm going to take some off of it"
[11:16:44] <joannac> no. why would you need to?
[11:17:02] <Guest66200> okay, i'll tell you how i imagined it would work
[11:17:20] <Guest66200> 1) Insert data, take the memory it needs to do it, that's okay
[11:17:30] <Guest66200> 2) Free up the memory, data is stored on disk
[11:17:48] <joannac> okay. well your understanding is incorrect
[11:18:08] <Guest66200> 3) So no resources are used right now. When someone wants to look something up, take up the memory and keep it in memory for further use, that's all right.
[11:18:40] <joannac> so in 3, you would rather wait for another disk read?
[11:18:47] <Guest66200> this is all wrong, because i need to restart the server to free up the memory, and then it's working like it's supposed to
[11:19:05] <joannac> why do you need free memory?
[11:19:10] <kali> Guest66200: it does not cost you anything to leave the data in virtual ram forever
[11:19:28] <joannac> it costs you nothing and can only give you benefits
[11:19:48] <joannac> in the worst case, you get zero benefit but also zero cost
[11:19:51] <kali> Guest66200: the kernel's job is to provide you with as much virtual memory as you ask for, and then choose by itself what should actually be kept in physical memory or not
[11:19:59] <joannac> in the worst case, you keep reading from disk and writing to disk
[11:20:20] <joannac> in your method
[11:20:49] <Guest66200> joannac, kali: i had this in my OS course and i didn't understand it very well then, i got a C, i'm still studying. I know there is physical and virtual memory, and that virtual memory uses some kind of two tables to calculate the place in real memory.
[11:21:39] <Guest66200> joannac, kali: so i don't understand what it means when my OS (Windows) is telling me right now i have 7,82 GB memory usage, out of 12 GB total
[11:23:13] <Guest66200> kali: how do i know if it's in virtual memory or in physical? maybe that's the point i am misunderstanding
[11:24:10] <Guest66200> kali: the task manager says what i told you, but i also know it was created for the common user. is there any way i can actually find the answer to my question? please enlighten me, i want to learn something, that's the point of all i have been doing, it's my bachelor thesis :)
[11:24:17] <kali> Guest66200: you may want to read this: https://www.varnish-cache.org/trac/wiki/ArchitectNotes it talks about varnish but the memory management strategy is the same in mongodb
[11:25:41] <kali> and i'm sorry i really can't help with windows
[11:26:08] <mkode> is there a way to surpass 16 MB document size?
[11:26:42] <joannac> mkode: no
[11:30:14] <Guest66200> kali: okay i will read it, but i still don't understand why windows says Available memory 4,2 GB, cached 3,8 GB, free 460 MB
[11:30:45] <Guest66200> kali: he says the opposite, that in Windows it's actually all stored in physical memory, maybe now i get why windows is so wrong
[11:36:12] <Guest66200> kali: now i am really confused
[11:36:47] <kali> yeah, windows does that
[11:40:50] <Guest66200> kali: okay, now i understand how it works, it's reasonable and it's like you told me, that's what i wanted
[11:41:08] <Guest66200> kali: it's just stored in virtual memory when the program is not busy
[11:41:23] <Guest66200> kali: i still need to figure out if it's a bug in the windows task manager or what
[11:41:58] <Guest66200> kali: because this way i am really not sure if windows considers the application to be busy, or maybe i forgot something in my code, but i closed the session properly as far as i am aware
[11:44:20] <Guest66200> kali: hmm, according to a paper i have read, it is really using 8 GB of physical memory
[11:44:30] <Guest66200> kali: wtf
[12:14:50] <Guest66200> kali: are you here? i think i know what might be the problem. i am running MongoDB as a windows service
[12:15:38] <Guest66200> kali: couldn't it be that it assumes the service needs to respond to clients, so it keeps everything in physical memory? it doesn't know that i am a stupid admin who just tries to insert, and that the data doesn't need to be remembered
[13:17:08] <esko> hi guys! could someone help with a quickie.. im just starting out with mongo. http://pastebin.com/tzH88wfB i need to insert a placeholder value into the array that the tags field holds, if the array is empty
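
esko's question never gets answered in-channel; one plausible reading, sketched without having seen the pastebin (the collection name is assumed), is a multi-update keyed on an empty tags array:

    // $size: 0 matches documents whose tags array is empty
    db.posts.update({ tags: { $size: 0 } },
                    { $set: { tags: ["placeholder"] } },
                    { multi: true })
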
[15:04:17] <brano543> hello, anyone have an idea why mongod creates a cursor for the whole data set after an insert and doesn't close it after 10 minutes of inactivity like it should? Why does it allocate so much physical space?
[15:09:31] <brano543> i am closing the session after the session. Anyone have an idea?
[15:09:38] <brano543> *after the insert
[15:24:43] <brano543> anyone willing to help please?
[15:31:32] <brano543> I need all the resources to be freed after i close the session. Is that possible?
[15:34:26] <brano543> its strange, i have even read the API documentation again and it says that after the session is closed it may be either put back in the pool (i guess that's the virtual memory) or collected, depending on the case. I just don't understand when it is the case for it to be collected. http://godoc.org/labix.org/v2/mgo
[17:27:34] <hahuang61> anyone know what this means
[17:27:46] <hahuang61> failed with error 13396: "error creating initial database config information :: caused by :: DBConfig save failed: { ok: 0, code: 25, errmsg: \"could not verify config servers were active and reachable before write\" }"
[17:29:20] <rc3> hi guys, need some pointers on how to do data grouping by N weeks (N>1). I know how to do it when N==1, my data is basically tick price information (OHLC) beginning from year 1999
[17:30:23] <rc3> I am thinking about doing weekly grouping first, then grouping by N weeks; the first part is easy, the latter is not
[17:32:02] <rc3> one alternative solution I've tried so far is to get the (rounded) week number based on a reference date in 1970
[17:32:39] <rc3> but that can't guarantee the (grouped) dates fall on Fridays
[17:42:07] <rc3> see my alternative solution (which works but not perfect) at http://pastebin.com/uy3Ag0Yz
[17:46:13] <hahuang61> joannac: you around?
[17:47:57] <MacWinner> if i want to migrate a bunch of files that are organized into directories/subdirectories/images.png into gridfs, do I need to create my own filenaming convention for concatenating the directories/subdirectories/images?
[17:48:26] <MacWinner> I don't seem to see the concept of folders or directories in gridfs. just want to confirm that's the case before I embark on reinventing the wheel
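
Confirming MacWinner's reading: GridFS stores one metadata document per file in fs.files and has no folder concept, so the usual convention is to encode the path into the filename and/or keep it in the metadata field. A sketch with illustrative paths:

    // an fs.files document might look like:
    // { filename: "/dir/subdir/image.png", length: ..., metadata: { dir: "/dir/subdir" } }
    // which makes a "directory listing" a prefix query:
    db.fs.files.find({ filename: { $regex: "^/dir/subdir/" } })
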
[18:12:02] <rc3> I think I nailed it, I just need to combine both methods (group weekly data first, then use the alternative method mentioned above for N-week data grouping), final solution is here http://pastebin.com/n3HGdPhD
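
A sketch of the reference-date idea rc3 describes, not his pastebin solution: bucket each tick by floor((date - ref) / (N weeks)) and take OHLC per bucket. Collection and field names are assumptions; the floor is emulated with $mod, since $floor did not exist in servers of that era, and $first/$last rely on the preceding $sort:

    var N = 2, span = N * 7 * 24 * 3600 * 1000;   // bucket width in ms
    var ref = new Date("1999-01-01");             // reference date; shift it to align buckets on Fridays
    db.ticks.aggregate([
        { $sort: { date: 1 } },
        { $project: { open: 1, high: 1, low: 1, close: 1,
                      q: { $divide: [{ $subtract: ["$date", ref] }, span] } } },
        { $project: { open: 1, high: 1, low: 1, close: 1,
                      bucket: { $subtract: ["$q", { $mod: ["$q", 1] }] } } },  // floor(q)
        { $group: { _id: "$bucket",
                    open: { $first: "$open" }, high: { $max: "$high" },
                    low:  { $min: "$low" },   close: { $last: "$close" } } },
        { $sort: { _id: 1 } }
    ])
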
[18:35:21] <mrmccrac_> i have 36 shards in a cluster, and inserted 100,000 documents, any idea why the vast majority of them would go to shard1, and only a handful to shards 2-4? the remaining 32 shards never have docs inserted into them
[18:35:50] <mrmccrac_> all 36 are listed in a db.stats()
[18:36:50] <kali> mrmccrac_: how many chunks did it generate ?
[18:37:51] <mrmccrac_> 46 i think, 39 are in shard 0, 5 in shard 1, 1 in shard 2, 1 in shard 3
[18:38:33] <kali> mrmccrac_: look in the config database for the log collections. chances are the balancing is happening as we speak
[18:42:08] <mrmccrac_> no active migrations
[18:44:52] <mrmccrac_> http://pastie.org/9896021
[18:45:23] <mrmccrac_> i think my shard key should be very very random
[18:45:46] <mrmccrac_> and no chunks appear to be moving
[18:45:50] <mrmccrac_> dont understand
[18:47:10] <mrmccrac_> only ever uses the first 4 shards out of 36, very suspicious
[18:47:54] <kali> try to find out which mongos has the balancer lock and see if it says something in its log, maybe
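
Concretely, the checks kali is suggesting, run from a mongos shell against the standard config database (a sketch):

    use config
    db.locks.find({ _id: "balancer" })               // which mongos currently holds the balancer lock
    db.chunks.aggregate([                            // chunk counts per shard
        { $group: { _id: "$shard", n: { $sum: 1 } } }
    ])
    db.changelog.find().sort({ time: -1 }).limit(5)  // recent split/moveChunk activity
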
[18:52:30] <mrmccrac_> 2015-02-07T18:49:12.826+0000 I SHARDING [conn2] ChunkManager: time to load chunks for interface.data: 1ms sequenceNumber: 2 version: 4|28||54d3da8662b5d4bd3fd7ddc6 based on: (empty)
[18:52:48] <kali> i have no idea what this means :)
[18:52:56] <mrmccrac_> me either
[18:53:37] <mrmccrac_> but chunks are still 39 5 1 1
[18:56:01] <mrmccrac_> maybe ill try starting from scratch
[18:57:09] <mrmccrac_> was hoping having a lot of shards would help insert distribution
[19:38:23] <hahuang61> so, I just upgraded us from mongo 2.0.4 to 2.6.5 and our response time went up by 3x... any tips on what to start looking at?
[19:38:27] <hahuang61> no app code was changed in this process.
[19:39:16] <kali> start with the slow queries in the log
[19:40:16] <hahuang61> which log?
[19:40:34] <kali> the mongod log
[19:40:52] <hahuang61> it'll log slow queries?
[19:41:20] <kali> the default settings do
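
Specifically, mongod logs any operation slower than the slowms threshold, 100 ms by default; the threshold can be changed without turning the profiler on, e.g.:

    db.setProfilingLevel(0, 50)   // profiling stays off, but ops slower than 50 ms are logged
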
[19:47:08] <hahuang61> kali: thanks, I'll check it in a sec. Still in the process of bringing up subsystem availability.
[19:52:15] <hahuang61> kali: yeah, I see these queries. I guess I should .explain() them
[19:52:19] <hahuang61> are these pasteable?
[19:54:36] <genericpersona> when i run a query in the mongodb shell i get an answer back, but when i run the same query in pymongo i get nothing back
[19:54:43] <genericpersona> anyone else experienced something like this?
[19:55:19] <medmr> you probably didnt set up your client/connection right
[19:55:37] <medmr> is there an error response that you could have missed?
[19:57:54] <genericpersona> potentially, i'll look into that
[19:57:58] <genericpersona> thanks
[20:14:54] <kali> hahuang61: use a paste service, but yeah, sure
[20:15:29] <hahuang61> kali: sorry I should have been more clear: are they straight pastable into db.collection.find(<paste_here>).explain()
[20:15:51] <kali> hahuang61: i think yes
[20:16:04] <kali> hahuang61: if that's not the case, it should be quite obvious
[20:17:50] <hahuang61> kali: yeah
[20:44:01] <hahuang61> kali: ping
[20:44:26] <kali> hahuang61: yes ?
[20:45:17] <hahuang61> kali: would you have any insights on why this happens: https://gist.github.com/hahuang65/8e5761375806c3d048a0
[20:45:44] <hahuang61> kali: basically in the logs, it's really slow, and it scans a TON of objects, but in our explain, it's REALLY fast, and scans only 29 objects.
[20:45:51] <hahuang61> sorry in this case, it's 1 object
[20:46:13] <hahuang61> but we tried to explain() a bunch of queries in our mongod.log that look like this, and they ALL have very fast performance during the explain
[20:46:31] <kali> hahuang61: check if you're testing with the same sort() and limit() options
[20:47:19] <hahuang61> kali: how do I check that?
[20:51:19] <kali> just look at the log line if it says something about sort and/or limit
[20:51:35] <kali> and use the same setting in your explained find()
[20:51:54] <kali> another option (a little bit more invasive) is to use the built-in profiler
[20:52:06] <kali> hahuang61: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
[20:52:20] <kali> that way, you'll get execution plan for the actual queries
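
For reference, the profiler kali links to is enabled per database and records executed operations in system.profile (an illustrative invocation):

    db.setProfilingLevel(1, 100)                                 // capture ops slower than 100 ms
    db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()  // inspect the most recent entries
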
[20:54:29] <hahuang61> kali: looks like there is no sort or limit in the log.
[20:55:05] <kali> ok, built-in profiler comes next i think
[20:56:14] <MacWinner> is it possible to include server specific configs into my main.cf? I have a couple parameters that are different between servers, but otherwise main.cf is the same.. I'd like to be able to sync the main.cf across servers without worrying about the server specific pieces
[20:57:07] <kali> main.cf ? wrong # i guess
[20:57:44] <MacWinner> oops..
[20:59:05] <_John_W_> Hello all. I am having some problems with MongoDB corrupting the database upon shutdown and restart.
[20:59:50] <_John_W_> I am using Ubuntu 12.04 and start-up and shutdown are happening using the normal Upstart procedures.
[21:00:14] <_John_W_> I have had to repair the database three times now so it isn't an intermittent issue, but a chronic one.
[21:00:18] <_John_W_> Any ideas?
[21:02:35] <kali> _John_W_: what mongodb version ?
[21:03:42] <_John_W_> Good question... Let's see, the mongo shell is 2.0.4
[21:04:06] <kali> wow, this is... ancient
[21:04:12] <_John_W_> And db.version() returns 2.0.4
[21:04:24] <_John_W_> I prefer the term venerable.
[21:04:49] <_John_W_> Do you think updating mongo would solve this?
[21:05:14] <kali> i'm trying to remember when the journaling options became active by default :)
[21:05:31] <_John_W_> Journaling is enabled.
[21:05:53] <_John_W_> That does make recovery a bit less painful, but the fact that I have to do it is annoying.
[21:06:46] <kali> well, something that is relatively conservative would be to bump to the latest 2.0.x
[21:06:58] <_John_W_> On a related note, recovery seems to have changed permissions a bit.
[21:07:13] <kali> 2.0.9
[21:07:19] <kali> https://www.mongodb.org/dl/linux/x86_64
[21:07:45] <_John_W_> Should the contents of /var/lib/mongodb be mongodb.nogroup or mongodb.mongodb?
[21:08:09] <_John_W_> So you think that this may be a problem with MongoDB 2.0.4?
[21:09:20] <kali> well, i can't remember a time with chronic issues on shutdown/restart once journaling was there, and i was working with a full team of developers working on their laptops
[21:09:35] <kali> but that's all i have
[21:10:09] <kali> i don't think nogroup can make a big difference, as mongodb will run as the mongodb user
[21:10:35] <kali> just make sure you sudo to mongodb to perform the recovery
[21:10:51] <_John_W_> Yes, I would think that if this were a problem with 2.0.4 that it would have been documented. I suspect it is something I did at some point.
[21:11:08] <_John_W_> If that is true, then upgrading will probably not solve the problem.
[21:11:29] <kali> yes
[21:11:44] <_John_W_> MongoDB was working fairly reliably two years ago when 2.0.4 was new, and I know a number of instances where 2.0.4 is working reliably right now.
[21:12:00] <_John_W_> Despite your advice, I would prefer to understand the problem before I upgrade.
[21:12:14] <_John_W_> I agree about the permissions. I was just looking for validation.
[21:12:48] <_John_W_> One more issue is that this is a prototype application that I am running on my laptop, so it gets shut down a bit more than in a more traditional installation.
[21:13:02] <_John_W_> There is no sharding or replication in the prototype.
[21:13:35] <_John_W_> Meaning it is a very simple Mongo deployment and it is very vexing to not be able to diagnose what is going wrong.
[21:14:12] <kali> do you have a log output of a failed startup ?
[21:14:20] <_John_W_> Sure...
[21:15:17] <_John_W_> **************
[21:15:18] <_John_W_> old lock file: /var/lib/mongodb/mongod.lock. probably means unclean shutdown,
[21:15:20] <_John_W_> but there are no journal files to recover.
[21:15:21] <_John_W_> this is likely human error or filesystem corruption.
[21:15:23] <_John_W_> found 1 dbs.
[21:15:25] <_John_W_> see: http://dochub.mongodb.org/core/repair for more information
[21:15:26] <_John_W_> *************
[21:15:29] <_John_W_> See, even Mongo thinks it is my fault.
[21:16:21] <_John_W_> *************
[21:16:22] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
[21:16:24] <_John_W_> Fri Feb 6 20:48:16 dbexit:
[21:16:25] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: going to close listening sockets...
[21:16:27] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: going to flush diaglog...
[21:16:28] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: going to close sockets...
[21:16:30] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: waiting for fs preallocator...
[21:16:31] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: lock for final commit...
[21:16:33] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: final commit...
[21:16:34] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] shutdown: closing all files...
[21:16:36] <_John_W_> Fri Feb 6 20:48:16 [initandlisten] closeAllFiles() finished
[21:16:37] <_John_W_> Fri Feb 6 20:48:16 dbexit: really exiting now
[21:16:43] <_John_W_> After that second bit, that is to say, after --repair, all is well and I can connect as normal.
[21:16:52] <medmr> dont paste directly _John_W_
[21:16:57] <_John_W_> Sorry.
[21:16:58] <medmr> use pastebin and link it
[21:17:00] <medmr> np
[21:17:00] <hahuang61> would $orderby and .sort() be the same operation in mongo or does it handle them differently?
[21:17:01] <kali> don't paste more than one line on IRC johnnyfive
[21:17:11] <kali> and _John_W_
[21:17:27] <_John_W_> Sorry...
[21:18:16] <kali> _John_W_: what happens when you shut down mongodb using the scripts ? does the /var/lib/mongodb/mongod.lock file disappear ?
[21:19:24] <_John_W_> Yes, when I do a manual shutdown either from the mongo shell or with upstart, all is well.
[21:19:54] <_John_W_> It only seems to have a problem when I rely on upstart to handle things automatically, which clearly I do all the time.
[21:20:35] <kali> _John_W_: can you check the log just before a failed restart ? is there the dbexit: really exiting now line there ?
[21:21:14] <kali> hahuang61: yes, sort in the shell translates to the $orderby modifier
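
That is, the two spellings name the same operation; the shell's cursor method is sugar for the legacy query-document modifier (illustrative collection and field names):

    db.coll.find({ a: 1 }).sort({ b: -1 })
    // is sent to the server as the modifier form:
    db.coll.find({ $query: { a: 1 }, $orderby: { b: -1 } })
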
[21:21:54] <_John_W_> No, the 'really exiting' line only appears when I do a manual shutdown.
[21:22:26] <_John_W_> That is why I was looking at the upstart script.
[21:22:28] <kali> and does the log mention anything about shutting down ?
[21:22:38] <kali> because, yeah, it looks like a script issue from there
[21:23:28] <_John_W_> Yes, I agree. That is why I couched the problem in relation to Ubuntu and Upstart.
[21:23:51] <_John_W_> No, I have to go back to a clean shutdown in the log before I get the normal shutdown and really exiting messages.
[21:25:35] <_John_W_> The mongo upstart script looks like all of the other upstart scripts. It starts on runlevels 2, 3, 4 and 5 (even though 4 isn't used) and it shuts down on 0 and 6.
[21:25:48] <_John_W_> Pretty standard stuff.
[21:27:28] <hahuang61> kali: thanks
[21:27:39] <hahuang61> kali: looks like this is basically what we're seeing: https://jira.mongodb.org/browse/SERVER-13866
[21:27:49] <_John_W_> Do you think it is possible that the system shutdown outruns the MongoDB shutdown, meaning that Mongo dawdles a bit after it gets the shutdown signal and the OS shuts down before it is done?
[21:28:19] <kali> _John_W_: it is possible
[21:29:19] <kali> hahuang61: ok...
[21:30:12] <hahuang61> kali: this query performs TERRIBLY in prod, but perfectly in mongo console with .explain().
[21:30:15] <hahuang61> makes me sad
[21:30:26] <_John_W_> Hmmm... I guess I can "instrument" the upstart script to see what is happening. Do you have any other ideas?
[21:30:41] <kali> hahuang61: I haven't read the whole case, but maybe hinting the index would help
[21:30:53] <hahuang61> kali: I'm not quite sure what that means
[21:30:57] <kali> _John_W_: nope, i think this is the way to go
[21:31:10] <hahuang61> kali: oh, like, with the query, specifically tell it which index?
[21:31:18] <kali> hahuang61: you can override the query optimizer and specify an index to use for the query
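
The override kali means is cursor.hint(); a sketch with made-up names:

    // force the query to use the {a: 1} index regardless of the planner's choice
    db.coll.find({ a: 1, b: 2 }).hint({ a: 1 }).explain()
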
[21:31:35] <hahuang61> I see. yeah that'd require some app code changes, and not even sure if mongoid supports that
[21:31:40] <_John_W_> Well, I really appreciate your help, kali. Thanks for taking the time.
[21:31:57] <_John_W_> I'll be back later and if you are online, I will let you know what I discover.
[21:32:03] <_John_W_> If anything.
[21:32:03] <kali> hahuang61: that's for your rails guys to look at, i guess
[21:32:09] <_John_W_> Thanks again.
[21:32:16] <kali> _John_W_: you're welcome
[21:33:01] <hahuang61> kali: thanks. still doesn't make sense to me why the console explain works perfectly; wouldn't the optimizer cache which index to use across both, so that it'd use the same index?
[21:33:57] <kali> hahuang61: i don't know. to be honest, i haven't gone to prod with 2.6, so i'm not completely aware of its idiosyncrasies
[21:34:27] <hahuang61> okay, thanks for paying attention to me and my ramblings :) much appreciated!
[21:34:37] <kali> hahuang61: you're welcome
[21:50:56] <brano543> Hello. Can anyone describe to me how mongodb caching works? I need to insert a few million records, but they all get stored in RAM during the process, and stay there even after it successfully inserts the last one.
[22:14:41] <brano543> Is anyone online? I need help with freeing up cached resources on every insert.
[22:29:35] <joannac> brano543: i have no idea what that means
[22:30:48] <joannac> so your few million records are stored in RAM... and you don't want them to be?
[22:31:13] <joannac> why?
[23:03:21] <FunnyLookinHat> How do I free up the space this database is using? http://pastebin.ubuntu.com/10117051/
[23:03:33] <FunnyLookinHat> I did a db.[collection].drop() but the filesize is still huge
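
With the MMAPv1 storage engine (the default at the time), dropping a collection frees space inside the data files but does not shrink them on disk. Two era-appropriate ways to actually reclaim it (a sketch; repairDatabase needs scratch disk space roughly the size of the data set):

    db.dropDatabase()      // if the whole database can go, its files are removed outright
    db.repairDatabase()    // otherwise: rewrites the data files compactly
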
[23:10:25] <andreassaebjoern> Hi! Is there a way to load a related document when returning results in mongodb? I have embedded the id of the related document.
[23:10:56] <andreassaebjoern> I am interested in not having to query the database for the related document
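
At the time of this log MongoDB had no server-side join, so the options were embedding the related document outright or accepting a second query client-side (a sketch with invented names):

    var post = db.posts.findOne({ _id: postId });
    var author = db.authors.findOne({ _id: post.authorId });   // the related-document fetch
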