[11:13:26] <kenalex> and was wondering if my use case is a good fit for it
[11:14:44] <kenalex> I am designing an inventory application for cell sites at work. Each cell site has similar attributes, plus attributes that uniquely define it
[11:15:39] <kenalex> I am also interested in logging technician work against these sites, and various system statistics for these sites from external systems.
[12:51:47] <jamiel> The documentation states "When an index covers a query, the explain result has an IXSCAN stage that is not a descendant of a FETCH stage, and in the executionStats, the totalDocsExamined is 0." ... why would db.foo.find({_id: ObjectId("55be26451900f15070fbeb46") }).explain("executionStats"); say totalDocsExamined: 1 ?
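A likely explanation: a plain find() returns the whole document, so even an _id lookup needs a FETCH stage and examines 1 document; totalDocsExamined only drops to 0 when the projection can be satisfied from the index alone. A minimal sketch, assuming a hypothetical field x on foo:

    // Not covered: the full document must be fetched to be returned.
    db.foo.find({ _id: ObjectId("55be26451900f15070fbeb46") }).explain("executionStats")
    // Covered: query and projection are both answerable from the index, so
    // explain shows an IXSCAN with no FETCH and totalDocsExamined: 0.
    db.foo.createIndex({ x: 1 })
    db.foo.find({ x: 5 }, { x: 1, _id: 0 }).explain("executionStats")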
[13:00:48] <defk0n> can someone help me out with creating this query, details >> http://pastebin.com/6cbYtqEf
[13:05:26] <defk0n> seems kind of confusing. If you say {"$unwind" : "$tags.collection"} is wrong and I should do "tags.collection", the value of that is still a list, so it resolves that automatically? But $group needs a $
[13:06:34] <Derick> you're right, it needs a $ too
[13:06:44] <Derick> (because as you say, it's a list of values)
[13:08:30] <defk0n> oh okay, and if I may ask another question: how does mongo index such a structure? Taking the documents in the pastebin as an example, will it store all these documents in memory keyed by their index key, or will it store just the index key plus a pointer to the value it indexes?
[13:08:54] <Derick> in A/F, only in the first stage (the first $match) is the index used
[13:09:36] <Derick> (otherwise, I don't understand your question)
[13:09:41] <defk0n> so i cannot index nested structures?
[13:11:18] <defk0n> I haven't got one yet, just thinking about where to put an index. Because if every document has multiple tags.collection.type = tag fields, how will mongo index that?
[13:12:13] <jr3> is there an efficient way of counting documents with a bool flag that's true?
[13:12:20] <jr3> will an index on the bool flag help
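For jr3's question: yes, an index on the flag lets the count be answered from index keys alone, without scanning documents. A sketch with hypothetical collection and field names:

    db.items.createIndex({ active: 1 })
    db.items.count({ active: true })
    // explain should show the count answered from the index (no document scan):
    db.items.explain("executionStats").count({ active: true })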
[13:13:35] <Derick> defk0n: it will create an index entry for each of the elements, both pointing to the full *document*
[13:14:09] <Derick> but in your A/F query, the only index that would be used is one on tags.collection.name, because that's what is used in the first $match
[13:15:55] <defk0n> Derick: but then that would mean there would be a lot of duplicate index entries, right? For each document an extra entry eating up memory. Isn't there something like a reverse index, where the index key points to every document that has such a field with that value?
[13:17:00] <defk0n> say every document has 100 tags and I have 1000 documents; now it's 100k index entries
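What Derick describes is a multikey index: one index entry per array element, each pointing back at the containing document, so documents are never duplicated, only keys are. A sketch, assuming the pastebin's field names:

    db.docs.createIndex({ "tags.collection.name": 1 })
    // A document whose tags.collection array holds 100 elements gets 100
    // index entries (key + pointer), not 100 copies of the document; 1000
    // such documents means roughly 100k entries in this one index.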
[13:41:44] <jr3> What does it mean that the working set should fit in memory?
[13:42:36] <jr3> My current mongo database size is 26gigs, I only have 8 gigs of memory on my server.
[13:44:38] <davidhadas> hi, I just upgraded a mongodb from 2.6 to 3.0. It is a single server configured with a replica set (I was hoping to add two more servers to it once I completed the upgrade)
[13:45:27] <davidhadas> use admin and then show users gets me an "E QUERY Error: not authorized on admin to execute command"
[13:46:08] <davidhadas> the log is full of "I QUERY [conn1] assertion 13 not authorized for query... " errors
[13:46:41] <davidhadas> How do I get back to a working state?
[13:49:00] <defk0n> Derick: you think I should denormalize that structure so I can index the collection itself, instead of the embedded document?
[13:49:20] <Derick> defk0n: can you link me to the docs again?
[13:49:54] <Derick> defk0n: those are not the whole documents, are they?
[13:50:02] <Derick> i mean, there are other fields in the docs
[13:56:14] <defk0n> Derick: technically no, practically yes. The fields are grouped by model and responsibility, so Tags is a model; having a Meta field alongside it would mean Meta is a model too. The document has different models (fields) like Tags, Meta, Thumbnail, Article. Example document: http://pastebin.com/HP2xhTEX - any feedback regarding indexes, or just the best strategy to index the tags etc., would be helpful.
[13:57:32] <davidhadas> Seems like any command I give gets an error of "not authorized on admin to execute command { authSchemaUpgrade: 1.0 }"
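The 2.6-to-3.0 auth schema upgrade has to be run by a user with user-admin privileges, which is what these "not authorized" errors point to. A hedged sketch (the user name is hypothetical):

    use admin
    db.auth("siteRootAdmin", "<password>")        // a user with userAdminAnyDatabase/root
    db.adminCommand({ authSchemaUpgrade: 1 })     // upgrades credentials to SCRAM-SHA-1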
[14:05:36] <Derick> defk0n: can't really see another way
[14:08:58] <defk0n> Derick: what do you mean by that exactly?
[14:54:57] <SomeAdmin> Hi gents, I'm collecting info on MongoDB... Question: is it safe to use btrfs? The posts I've read about MongoDB + btrfs are quite old... (and people wrote that btrfs is still experimental and should be avoided for prod)
[14:55:49] <deathanchor> love when the dev says, "TTL Index isn't deleting old stuff" I'm like "look works for me. Show me." and they can't show me.
[14:56:11] <jamiel> Hi all, just wondering if anyone can provide clarity on totalDocsExamined in explain(), as it doesn't seem to match my understanding that it should report the number of docs that could not be answered directly from the index... The documentation states "When an index covers a query, the explain result has an IXSCAN stage that is not a descendant of a FETCH stage, and in the executionStats, the totalDocsExamined is 0." ... however db.foo.find({_id: ObjectId("55be26451900f15070fbeb46") }).explain("executionStats"); says totalDocsExamined: 1 ?
[14:56:13] <StephenLynx> tell him it deletes everything every minute or so
[15:29:31] <davidhadas> cheeser: I was able to get things back to working order (after the upgrade) by returning to a standalone configuration. When I try to move back to a replica set, it seems as if the configuration is messed up - status: "Our replica set config is invalid or we are not a member of it"
[15:34:25] <davidhadas> cheeser: Can one do mongodump from a standalone to a cluster configured with a replica set? If so, I can create a new DB and simply load it with the data, then start using it instead
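Dumping the standalone and restoring into a freshly initiated replica set does work as a fallback. A sketch from the OS shell, with hypothetical hosts and paths:

    mongodump --host standalone.example.com --port 27017 --out /backup/dump
    mongorestore --host primary.example.com --port 27017 /backup/dump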
[18:01:47] <RWOverdijk> One more question if you don't mind... I'm looking at: http://docs.mongodb.org/manual/reference/command/geoNear/
[18:02:15] <RWOverdijk> This also returns the distance, which is very useful. I'm currently running my own haversine after fetching with $near or $nearSphere
[18:02:37] <RWOverdijk> But my results don't match the ones returned by mongo (they're off somewhere around 1 km)
[18:02:54] <RWOverdijk> So I'd prefer mongo returning the distance; which is what geoNear does.
[18:03:46] <RWOverdijk> Now here's my question... I want to sort on a date field. I know it's possible to specify a query argument for geoNear to do filtering, and do a .sort() afterwards. But that sorts the response.
[18:04:12] <RWOverdijk> I want it to filter all documents within X, and sort those based on a date field.
[18:05:47] <RWOverdijk> "If you also include a sort() for the query, sort() re-orders the matching documents"
[18:06:04] <RWOverdijk> I'm not sure if this reorders all results and then applies the limit, or applies the sort on the limited result
[18:06:34] <RWOverdijk> @cheeser, Yes. That's what I have working right now (snippet coming) but it's not returning the distance.
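One way to get the computed distance and a date sort in a single round trip is the $geoNear aggregation stage: it must be the first stage, its output starts distance-ordered, and a later $sort can re-order by date. A sketch with hypothetical collection and field names (requires a 2dsphere index on the location field):

    db.places.aggregate([
      { $geoNear: {
          near: { type: "Point", coordinates: [ 4.895168, 52.370216 ] },
          distanceField: "distance",   // meters with GeoJSON + spherical: true
          maxDistance: 5000,
          spherical: true,
          query: { createdAt: { $gte: ISODate("2015-08-01") } }
      } },
      { $sort: { createdAt: -1 } }     // re-order the nearby matches by date
    ])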
[18:06:38] <tejasmanohar> http://mongoosejs.com/docs/2.7.x/docs/schematypes.html - i used to have a normal index on a field phoneNumber in my users collection and now i have unique index
[18:06:41] <tejasmanohar> these are both set via mongoose
[18:06:48] <tejasmanohar> >Note: if the index already exists on the db, it will not be replaced.
[18:07:09] <tejasmanohar> unfortunately, mongoose won't change the index already in my db, which I'd think is a normal index, not a unique index, since I changed it to unique afterwards.
[18:07:24] <tejasmanohar> can i do this directly in mongo? is there a nice tool for these kinda migrations and schema changes?
[18:09:06] <RWOverdijk> cheeser, The distanceMultiplier should be 6378. I changed it before pasting... So I know that bit is wrong in the gist, sorry.
[18:12:34] <tejasmanohar> I'd rather not re-create the collection for this change; there must be a better way. I guess I could also drop and re-add the index via the shell, but that sounds less than optimal
[18:12:42] <tejasmanohar> is there a GUI or administration tool that can do this?
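This can be done directly in the shell by dropping and recreating the index; note the unique rebuild fails if duplicate phoneNumber values already exist. A sketch:

    db.users.dropIndex({ phoneNumber: 1 })
    // Throws a duplicate-key error if existing documents collide:
    db.users.createIndex({ phoneNumber: 1 }, { unique: true })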
[19:40:19] <deathanchor> jacksnipe: suck it all into memory :D
[19:41:18] <deathanchor> I need to write some code to handle no timeout cursor and to close the cursor if the program dies or gets killed.
[19:45:44] <cheeser> if the program dies, you can't really close the cursor.
[19:55:34] <deathanchor> well, I meant die in the Perl sense of die()
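In mongo-shell terms (the Perl driver version is analogous), the pattern is a no-timeout cursor paired with a close that runs even when the program dies mid-loop. A sketch with a hypothetical collection:

    var cursor = db.bigcoll.find().addOption(DBQuery.Option.noTimeout);
    try {
        while (cursor.hasNext()) {
            printjson(cursor.next());
        }
    } finally {
        cursor.close();   // also runs if the loop body throws
    }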
[20:05:07] <bakhtiya> Hey guys - does anyone know what will happen if a remove overlaps with an update (upsert) - so while the remove is running, an upsert occurs - will that new upsert (even though it happened AFTER the remove) still exist?
[20:08:13] <deathanchor> only one write op at a time
[20:10:27] <bakhtiya> @deathanchor: you mean only one operation per record at a time?
[20:11:16] <bakhtiya> @deathanchor: so in this case - if I have two async systems trying to modify the same record (identified by the same _id value) - only one of them will happen at a time?
[20:11:54] <deathanchor> actually it is one write per collection at a time, I believe
[20:13:15] <bakhtiya> ah, so if I do a remove with some condition, is there a time gap between mongo finding that data and then removing it? If there is, and I hit mongo with an insert right in that time gap, will that data be lost?
[20:13:48] <bakhtiya> given that data matches that condition I put in the remove()
[20:14:50] <bakhtiya> effectively in this case mongo is removing data that may or may not match the condition in the remove()
[20:19:17] <deathanchor> .remove(search) locks the collection; .update(search, update, { upsert: true }) will wait until that is done before it does what it does
[20:19:24] <proteneer> I'm implementing a centralized spinlock of sorts using TTLs in Mongo; practically, what's a reasonable polling frequency? 100ms?
[20:26:41] <Doyle> Hey. How does the mongostat lockdb % report over 100%? I googled, but only found a jira stating it's a known sampling issue.
[20:38:41] <deathanchor> proteneer: you know it drops the expired stuff about every 60 seconds
[20:43:07] <deathanchor> bad for short-lived things like locks
[20:44:01] <deathanchor> funny they removed the 60 second part in the V2.6+ docs for that feature
[20:44:37] <deathanchor> in V2.6+ they say: The TTL feature relies on a background thread in mongod that reads the date-typed values in the index and removes expired documents from the collection.
[20:44:58] <deathanchor> it's a zombie that eats expired brains
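For reference, a TTL index is declared with expireAfterSeconds, and since the background sweep deathanchor mentions runs only about once a minute, expiry is approximate. A sketch with hypothetical names:

    db.sessions.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
    // Documents with createdAt older than an hour become eligible for removal,
    // but the TTL monitor sweeps roughly every 60 seconds, which is why TTLs
    // are a poor primitive for short-lived locks.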
[20:45:50] <Doyle> If you add a member to a replica set that has 2 members, it'll start replicating off the slave. But say the slave has a short oplog: at the end of the initial sync, will it then attempt to catch up from the primary, which has a longer oplog?
[21:06:06] <jpfarias> can I use $text search for matching documents that have all of the words I want to search?
[21:06:24] <jpfarias> basically I want to AND the terms instead of OR
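Quoting each term turns it into a phrase, and $text ANDs phrases with the rest of the search string, so quoting every word gives AND semantics. A sketch, assuming a hypothetical collection with a text index:

    db.articles.createIndex({ body: "text" })
    // Unquoted terms are ORed; quoted phrases are ANDed:
    db.articles.find({ $text: { $search: "\"replica\" \"oplog\"" } })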
[21:39:23] <dddh> deathanchor: what was in the interview?
[22:06:44] <dddh> does anybody care about mongo certifications?
[22:06:59] <kexmex> how significant is this warning? ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
[23:06:29] <kexmex> looks like the startup script set limits high for both; I changed it to be 0.5x, although the warning says "at least 0.5x"
[23:06:48] <Doyle> well, as far as perf goes, even if it doesn't die, it won't be able to complete tasks. Under continuous load it'll just queue up forever.
[23:07:02] <Doyle> Why wouldn't you just set the limits properly? Why is that even a question?
[23:07:26] <Doyle> You'll see errors in the log stating limit issues if they're encountered.