[00:04:43] <pbryan> On 2.6.6, I seem to be successfully using an index with multiple keys, two of which contain arrays of strings. This seems to contradict the documentation. The query plan (explain) shows it is using the index. How is this working?
[00:06:50] <morenoh149> mordof: the docs are pretty good. I think you store your geo data as normal and then use a 2d/2dsphere index to make looking them up faster
[00:09:15] <Boomtime> pbryan: can you provide a query with explain as a gist/pastebin?
[00:17:58] <pbryan> So, it's clearly using the index.
[00:18:04] <pbryan> And there are two arrays in the query.
[00:19:08] <mordof> i'm seeing a lot of lat/lon used with the more recent mongodb indexes - i don't want earth lat/lon coordinates, is it still possible for me to use newer indexing?
[00:21:48] <pbryan> According to the docs, with an index on two fields, you can't even insert if both fields have arrays.
[00:22:04] <pbryan> If you attempt to insert such a document, MongoDB will reject the insertion, and produce an error that says cannot index parallel arrays.
[00:22:15] <pbryan> (last line is a quote from http://docs.mongodb.org/manual/core/index-multikey/)
[00:22:37] <Boomtime> yes, that is normally true, not sure yet what is going on in your case
[00:26:33] <Boomtime> if you index without the 2dsphere, the index build is rejected if the document exists already, and the insertion is rejected if the collection was empty first
[00:33:25] <mordof> trying to find out if i can use the newer mongo 2dsphere / geojson indexes with coordinates that aren't lat/lon, and shouldn't be calculated based on it being on earth... or .. whatever it does
[00:34:02] <mordof> the "legacy" 2d indexes would seem to be what i actually want, but some articles have suggested those might be getting removed since they're being labelled as legacy
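What mordof is after does exist: the legacy 2d index uses flat (Euclidean) geometry rather than spherical, and it accepts explicit min/max bounds, so it is not tied to Earth lat/lon. A minimal sketch with a recent pymongo, assuming a local mongod (the collection name and bounds are made up):

    from pymongo import MongoClient, GEO2D

    client = MongoClient()                 # localhost:27017 by default
    coll = client.test.flat_geo
    coll.drop()

    # Legacy 2d index over a custom flat plane instead of [-180, 180):
    coll.create_index([("pos", GEO2D)], min=0, max=1024)

    coll.insert_one({"pos": [512, 300]})

    # $near on a 2d index uses flat geometry, no Earth involved:
    for doc in coll.find({"pos": {"$near": [500, 300]}}).limit(5):
        print(doc)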
[00:34:03] <Boomtime> pbryan: will you raise a server ticket? alternatively, do you mind if i use your data with minor changes to raise one?
[00:34:52] <Boomtime> p.s. the time field is irrelevant
[00:36:48] <pbryan> Boomtime: I'll be happy to open a ticket.
[00:37:13] <pbryan> Yeah, I experimented with dropping the time field as well.
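For reference, the documented behaviour pbryan quotes is easy to reproduce. A sketch, assuming a local mongod and a recent pymongo (names are made up):

    from pymongo import MongoClient, ASCENDING
    from pymongo.errors import OperationFailure

    client = MongoClient()
    coll = client.test.multikey_demo
    coll.drop()

    # Compound index on two fields:
    coll.create_index([("a", ASCENDING), ("b", ASCENDING)])

    # One array plus one scalar is fine -- only "a" becomes multikey:
    coll.insert_one({"a": [1, 2, 3], "b": "x"})

    # Two parallel arrays should be rejected by the server:
    try:
        coll.insert_one({"a": [1, 2], "b": ["x", "y"]})
    except OperationFailure as exc:
        print(exc)  # expect: "cannot index parallel arrays"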
[12:06:17] <gundas> Hi all, I have one node js application which is generating random data and putting it into a capped collection and an uncapped collection. I also have another node js application which is reading from the same database, tailing the capped collection, and then sending data through sockets to the UI. I'm using mongoose but confused about recreating the schema - any ideas on the best way to set this up?
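The tailing half of that setup boils down to a tailable cursor on the capped collection; mongoose wraps the same mechanics. A sketch in pymongo rather than Node, assuming a local mongod (the collection name and size are made up):

    import time
    from pymongo import MongoClient, CursorType

    client = MongoClient()
    db = client.test
    # Capped collections must be created explicitly, with a fixed size:
    if "events" not in db.list_collection_names():
        db.create_collection("events", capped=True, size=1024 * 1024)

    # Tailable cursors die against an empty collection, so seed one doc:
    db.events.insert_one({"seq": 0})

    cursor = db.events.find(cursor_type=CursorType.TAILABLE_AWAIT)
    while cursor.alive:
        for doc in cursor:
            print(doc)      # forward each document to the UI socket here
        time.sleep(1)       # nothing new yet; the cursor stays open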
[12:08:13] <Tailor> Social-networking: Hadoop, HBase, Spark over MongoDB or Postgres? - http://stackoverflow.com/q/27730628
[12:09:46] <gundas> dimon222: what do you mean two arrays?
[15:38:10] <capleton> Hey, does anyone here have experience using "subdocuments"?
[15:50:36] <cheeser> ask your question and see if anyone knows.
[19:25:15] <Industrial> Hi. Using the NodeJS driver for mongodb, can I get an update event when something in selected collections updates?
[19:25:42] <Industrial> I would like to create a mechanism for a browser to subscribe to a collection or model for updates, and get this streamed through a websocket.
[19:31:28] <Industrial> or would that be totally counter-intuitive to scaling?
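There is no built-in per-collection update event in the driver; a common workaround is to tail the oplog, which is itself a capped collection (this is how Meteor-style live queries work). A sketch, assuming a replica set member and a recent pymongo (the namespace is a placeholder):

    from pymongo import MongoClient, CursorType

    client = MongoClient()
    oplog = client.local["oplog.rs"]   # only exists on replica set members

    # Watch inserts ("i") and updates ("u") on one namespace:
    query = {"ns": "mydb.mycollection", "op": {"$in": ["i", "u"]}}
    cursor = oplog.find(query, cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        print(entry["op"], entry.get("o"))   # push this over the websocket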
[19:59:14] <capleton> I'd like to create new subdocuments using mongo, but when I try to use insert() with $set, i can only manage to overwrite the existing subdocument. Can someone point me in the right direction?
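If the subdocuments live in an array, the operator capleton is probably after is $push (which appends) rather than $set (which replaces); $set with a dotted path edits one field inside an existing subdocument. A sketch, assuming a local mongod and a recent pymongo (names are made up):

    from pymongo import MongoClient

    client = MongoClient()
    coll = client.test.people
    coll.drop()

    coll.insert_one({"name": "ann", "addresses": [{"city": "x"}]})

    # $push appends a new subdocument instead of replacing the field:
    coll.update_one({"name": "ann"},
                    {"$push": {"addresses": {"city": "y"}}})

    # $set with a dotted path changes one field of one subdocument
    # without clobbering its siblings:
    coll.update_one({"name": "ann"},
                    {"$set": {"addresses.0.city": "z"}})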
[20:21:51] <polhemic> I have a random and possibly noobish question about journal files. Why do mine persist for long (hours) periods of time?
[20:22:23] <polhemic> I thought they only needed to exist until the changes were applied to the shared view and remapped every minute or so.
[20:23:38] <polhemic> But for me it keeps growing - currently the journals are twice the size of the sum of the *.[012] files
[20:46:05] <edrocks> polhemic: mongodb allocates twice the amount of your current data size
[20:46:25] <edrocks> polhemic: and every time you hit the amount it allocated it doubles again
[20:46:52] <polhemic> even in the journal files? They appear to be slowly increasing in size
[20:47:30] <polhemic> but I was expecting them to hit a timeout or threshold and then drop to zero again when the entries were committed to the database files
[20:48:11] <polhemic> The journal directory started at 0 and has been climbing slowly
[20:48:25] <edrocks> not exactly an expert on this but i will see if i can find the right docs about the files doubling - i think they should have more info
[20:55:46] <polhemic> thanks, edrocks, I'll look at it now
[21:05:30] <polhemic> There's nothing very helpful on that page. Following through to the journaling page, there's the sentence "Once MongoDB applies all the write operations in a particular journal file to the database data files, it deletes the file, as it is no longer needed for recovery purposes."
[21:05:46] <polhemic> Which, to me, indicates that the journal file shouldn't be there
[21:11:14] <polhemic> So, my concern is that something I'm doing client side is stopping the journal getting processed and removed after the write operation is complete
[21:18:05] <Boomtime> polhemic: what is the problem you are observing? (i did not see the start of the conversation)
[21:19:43] <polhemic> no problem as such, just concerned about the size of my journal directory continuously climbing.
[21:20:32] <polhemic> I'm running a write heavy app, and the journal just keeps growing - it's at almost 1GB after 3 hours of runtime
[21:21:23] <polhemic> Once the writes have gone through to the database files, I was expecting it to drop in size - containing only the very latest changes that haven't gone into the shared view yet
[21:26:20] <Boomtime> polhemic: afaik journal files are constantly recycled while mongodb is running
[21:26:39] <polhemic> My current db directory looks like this:
[21:27:40] <polhemic> journal just keeps going up and, because I've got an update heavy dataflow, the db size will asymptote to a final value, but the journal will just keep growing and growing
[21:31:29] <polhemic> As it stands, I'm creating a persistent pymongo.MongoClient(), then I just keep calling update() on a collection (with upsert true)
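With a recent pymongo that pattern looks roughly like this (a sketch; collection, filter, and payload are placeholders):

    from pymongo import MongoClient

    client = MongoClient()            # one persistent client for the app
    coll = client.mydb.readings

    # Repeated upserts: insert when the key is new, update when it isn't.
    coll.update_one({"sensor_id": 42},
                    {"$set": {"value": 3.14}},
                    upsert=True)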
[21:32:54] <Boomtime> i expected it to do that much sooner
[21:33:46] <polhemic> I've just tried closing my client and the journals haven't been removed. I'm just going to restart mongod to check they get removed on shutdown.
[21:34:12] <Boomtime> also, do a permissions list of the main db files
[21:35:11] <polhemic> Every 2.0s: sudo ls -lR /var/lib/mongodb/ Thu Jan 1 21:33:25 2015
[21:42:11] <polhemic> The docs talk about running a write flush every 60 seconds, so you have to be shifting some serious data to hit 1GB with 60s of writes. Unless you're me.
[21:42:28] <Boomtime> polhemic: no, the journal is write only
[21:42:45] <opmast> Boomtime: $in is only for arrays, no? is _id an array?
[21:43:17] <Boomtime> opmast: $in specifies an array of items to match, you have 10 items to match, thus $in is perfect
[21:45:45] <Boomtime> polhemic: the journal is not read except at start-up (crash recovery), so it always writes ahead - wait for a minute or two after a new file is allocated and see if the old one disappears then
[21:46:23] <Boomtime> note it may take more than a minute after the new file is created, due to overlap - wait at least two minutes (or a bit longer) to be sure
[21:46:43] <polhemic> It's still there and it's still growing. Up to 45MB now since the restart
[21:48:15] <Boomtime> yeah, that amount is normal - i have just learned that the limit is 1GB - i thought it was less than that
[21:48:51] <polhemic> But at 1GB it just creates a new journal file. The only time it has ever removed the journals is when the server was restarted.
[21:49:43] <polhemic> This means that either my server has to have a periodic restart to clean out used journals, or I've got to turn journaling off (which I really don't want to do)
[21:49:59] <polhemic> I'll leave it running overnight, no problem.
[21:51:27] <polhemic> VPS has 26GB free, so that's around 75 hours before the drive fills up
[21:53:56] <opmast> Boomtime: how do I delete by _id? when i use _id : 54a57bd848177e240d8b4567 it doesn't work
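The likely cause of opmast's problem: _id values are ObjectIds, so a bare 24-character hex string never matches and has to be wrapped. A sketch with a recent pymongo (the collection name and the second id are made up):

    from bson import ObjectId
    from pymongo import MongoClient

    client = MongoClient()
    coll = client.test.things

    # Delete one document by its ObjectId:
    coll.delete_one({"_id": ObjectId("54a57bd848177e240d8b4567")})

    # The $in form Boomtime described: match any of several values at once.
    ids = [ObjectId("54a57bd848177e240d8b4567"),
           ObjectId("54a57bd848177e240d8b4568")]
    coll.delete_many({"_id": {"$in": ids}})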
[22:22:59] <Boomtime> polhemic: it may end up using the same amount if that is what is required, but it will also cause the files to be available for recycling sooner
[22:23:46] <Boomtime> what is happening is that the previous journal file is not being made available for recycling until the current one is full
[22:24:17] <Boomtime> thus it sits around taking up space until a new one is needed, then it is reset and recycled for use as the new journal file
[22:24:48] <Boomtime> thus, 2GB is not a hard limit - but it's unlikely you'll be able to push it above that
[22:26:38] <polhemic> I'll let it run tonight and see what happens
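If the ~1 GB preallocated journal files are the pain point, one documented knob for the MMAPv1 engine is smallFiles, which caps journal files at 128 MB (and data files at 512 MB) at the cost of more, smaller files. In mongod.conf (YAML format, MongoDB 2.6+):

    storage:
      smallFiles: true

or the equivalent --smallfiles command-line flag.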
[22:40:21] <Nitax> does Mongoose/MongoDB persist the order of an Array SchemaType?
[22:43:51] <Boomtime> Nitax: mongodb persists the order of arrays, but i do not know about mongoose
[22:44:30] <Nitax> Hmm. I would assume that Mongoose wouldn't change that but it would be nice to be 100% sure before I assume I don't need to keep track of it myself
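A quick way to be sure at the server level (a sketch, assuming a local mongod and a recent pymongo; mongoose is an extra layer this does not cover):

    from pymongo import MongoClient

    client = MongoClient()
    coll = client.test.order_demo
    coll.drop()

    coll.insert_one({"_id": 1, "items": ["c", "a", "b"]})
    print(coll.find_one({"_id": 1})["items"])   # ['c', 'a', 'b'], order kept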