[00:19:24] <Bioblaze> I wanna store my Date information in ISO 8601 format. Is there any way to do that? XD @.@
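A minimal sketch of the usual approach, assuming the mongo shell: store a BSON Date (shown as ISODate in the shell) rather than a string, and it round-trips as ISO 8601; the events collection below is a hypothetical placeholder.

    // Store dates as BSON Date values; the shell displays them as ISODate(...)
    db.events.insert({ name: "example", createdAt: new Date() });
    db.events.insert({ name: "example", createdAt: ISODate("2015-03-13T16:00:00Z") });
    // If the ISO 8601 string form is needed explicitly:
    db.events.findOne().createdAt.toISOString();   // e.g. "2015-03-13T16:00:00.000Z"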
[00:44:08] <happyken> Hi steffan, can you help me with this question: I have a MongoDB setup with multiple databases sharing a common prefix. I want to use common account credentials for the dbs with that prefix. Is this use case possible by any chance?
[00:53:10] <steffan> happyken: There is not, afaik, any way to do this automatically. You could write something to generate / synchronize credentials, perhaps, or possibly use external authentication
[00:53:53] <happyken> steffan ok got it. thanks for the response.
[00:54:20] <steffan> (You will need the Enterprise version for external auth)
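A minimal sketch of what steffan suggests (generating / synchronizing the credentials yourself), in the mongo shell; the prefix, user name, password, and role below are all hypothetical placeholders.

    // Create the same user on every database whose name starts with a given prefix
    var prefix = "myapp_";
    db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
        if (d.name.indexOf(prefix) === 0) {
            db.getSiblingDB(d.name).createUser({
                user: "shared_user",
                pwd: "shared_password",
                roles: [ { role: "readWrite", db: d.name } ]
            });
        }
    });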
[00:55:12] <StephenLynx> are you talking about mongodb?
[01:10:24] <steffan> StephenLynx - either that or one of us is in the wrong channel
[01:19:54] <steffan> Well, there is this: https://www.mongodb.com/products/mongodb-enterprise-advanced
[01:20:14] <steffan> There are some features that only exist at this time in the commercial version. SSL used to be that way
[01:21:24] <steffan> Naturally, there is freedom for someone to independently develop, e.g., Kerberos support for the open-source version, but to date, I don't believe anyone has done so
[01:24:29] <StephenLynx> afaik, the SSL thing was an issue with cross compiling.
[01:29:24] <StephenLynx> that's the only actual feature out of all the stuff that's listed.
[01:44:22] <steffan> Ops Manager / Kerberos & LDAP / SNMP support are not in the Community version
[01:44:42] <steffan> Yes, using an external Kerberos / LDAP / Active Directory server is what I was referring to by external auth
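A hedged sketch of the external auth steffan refers to, assuming the Enterprise build with Kerberos; the principal, database, and role here are hypothetical.

    // Users for external auth live on the $external database and carry no password in MongoDB
    db.getSiblingDB("$external").createUser({
        user: "appuser@EXAMPLE.COM",
        roles: [ { role: "readWrite", db: "myapp_db" } ]
    });
    // Clients then authenticate with GSSAPI, e.g.:
    //   mongo --authenticationMechanism GSSAPI --authenticationDatabase '$external' -u 'appuser@EXAMPLE.COM' --host example-host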
[10:50:35] <braqoon> Hi, I have a replica set whose ID I need to change. I know this is not supposed to be possible, but the ID has to be changed anyway. Would removing all nodes from the replica set and re-adding them under a new replica set, without removing any data in the DB, be possible?
[11:23:51] <joannac> braqoon: do you mean changing the replica set name?
[15:22:20] <dhanasekaran> Hi guys, I am new to MongoDB. Currently I have an issue with mongod using the wiredTiger storage engine: it keeps everything in memory but is not reclaiming/releasing memory back to the OS. Please guide me.
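A hedged first-pass check for this kind of memory complaint, in the mongo shell and mongod config; the 1 GB cap is only a placeholder, and mongod typically holds on to memory rather than returning it to the OS on its own.

    // See resident / virtual memory as mongod itself reports it
    db.serverStatus().mem
    // On WiredTiger the cache can be capped in the mongod config file (YAML):
    //   storage:
    //     wiredTiger:
    //       engineConfig:
    //         cacheSizeGB: 1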
[15:50:33] <bdiu> I'm using the aggregation framework to sum 15 or so fields on ~2 million records... the performance here seems slow, 30+ seconds per query. Anyone interested in providing some suggestions or advice for this type of reporting?
[15:51:38] <bdiu> I'm not sure if Mongo is the appropriate solution to our problem... I'm considering alternatives such as a Postgres (with or without the foreign data source connector) or elasticsearch
[16:00:55] <bdiu> @StephenLynx here's an example of one use: https://gist.github.com/bendalton/e7b5a7a781764cfc291b
[16:01:28] <bdiu> we have appropriate indexes on the match portions
[16:01:55] <StephenLynx> any particular reason you have multiple consecutive matches?
[16:03:38] <StephenLynx> why do you query for a particular user and then group?
[16:03:51] <StephenLynx> that could be done with a findOne
[16:03:52] <bdiu> a byproduct of the code generating the query... not intended. good eye... also the text of that query changed the date format to a string... so not, in itself, a valid query
[16:03:59] <StephenLynx> >code generating the query
[16:12:42] <StephenLynx> what I would do is to have a third pre-aggregated collection.
[16:13:30] <StephenLynx> it would add an additional update when you add a log, but since you've got so many logs, it would be much more efficient for reads.
[16:13:56] <StephenLynx> or you could just fetch this data for a single user at a time, based on user demand.
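A minimal sketch of the pre-aggregated collection StephenLynx describes: each log insert also bumps a running total, so reads never have to scan the raw logs. The tripTotalsDaily collection, the distance field, and userId are hypothetical.

    var userId = ObjectId();   // placeholder for the real user id
    // Insert the raw log as before...
    db.triplogs.insert({ user: userId, mode: "mode1", dateTime: new Date(), distance: 12.4 });
    // ...and bump the matching pre-aggregated bucket in the same code path
    db.tripTotalsDaily.update(
        { user: userId, mode: "mode1", day: ISODate("2015-03-13T00:00:00Z") },
        { $inc: { trips: 1, distance: 12.4 } },
        { upsert: true }
    );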
[16:14:24] <bdiu> okay.. glad to know you've got the same instinct. the trick here is that the selection criteria for the logs is variable, i.e., the $match portion here changes based on external, dynamic criteria
[16:18:09] <bdiu> so, triplogs has properties mode and dateTime, we need to be able to basically say, I want user totals of triplogs between dates X and Y, where triplog.mode is $in:['mode1','mode2']...
[16:20:31] <bdiu> though not the full 2mm documents, frequently the resulting count from the match is 1 million + records...
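For reference, a hedged sketch of the kind of pipeline bdiu is describing; the user and distance field names and the date bounds are assumptions.

    db.triplogs.aggregate([
        { $match: {
            dateTime: { $gte: ISODate("2014-01-01T00:00:00Z"), $lt: ISODate("2015-03-14T00:00:00Z") },
            mode: { $in: [ "mode1", "mode2" ] }
        } },
        { $group: {
            _id: "$user",
            trips: { $sum: 1 },
            distance: { $sum: "$distance" }   // one of the ~15 summed fields
        } }
    ]);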
[16:21:01] <dhanasekaran> StephenLynx: I have inserted data into MMAPv1 and it's still holding memory, not releasing it to the OS. Any idea? Please guide me
[16:21:17] <StephenLynx> tried rebooting the whole thing?
[16:21:50] <dhanasekaran> I have tried a separate mongo instance
[16:24:03] <StephenLynx> no idea then what it might be.
[16:24:19] <StephenLynx> it shouldn't by any means be using more RAM than it needs to store the whole dataset.
[16:24:25] <StephenLynx> I would double check everything.
[16:24:46] <bdiu> StephenLynx: if you were faced with the requirement of looking at this data, dynamically, for each user on millions of records without a reasonable way to pre-aggregate, would you have an alternative technology choice?
[16:25:04] <ViolentR> hi, has anyone here worked with the map_reduce function?
[16:25:26] <bdiu> ViolentR: I'm sure many people have here, what's your question?
[16:31:04] <bdiu> the date range and selection criteria for the triplogs change frequently... it's easy to pre-aggregate some of the common cases, but not all by any stretch. We already have pre-aggregation in place, but can't use it for most of the queries needed for our reporting purposes.. there are a dozen potential values for the modes property and we need to support queries against every combination. so short of pre-aggregating every combination (2^12 of them, right?) I'm kind of stuck with dynamically aggregating these individual documents
[16:37:58] <StephenLynx> the whole operation took too long, so if one could request enough times per second
[16:38:07] <StephenLynx> the server would crash because of RAM usage
[16:38:26] <StephenLynx> my solution? remove log search, make static HTML pages one for each day.
[16:38:37] <bdiu> yeah, we pre-aggregate per user on day, month, year and choose the appropriate granularity based on the search, combining them together as best we can... e.g., searching from 1/1/2014-3/13/2015, we'd select the values for all of 2014 + the month values for jan, feb, and finally the 13 day values for march
[16:39:13] <bdiu> it works well where we can use the pre-aggregates, but where we can't, we have to resort to scanning the collection and grouping by user
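A rough sketch of the granularity mixing bdiu describes for 1/1/2014-3/13/2015; the pre-aggregate collections and their field layout are hypothetical.

    var userId = ObjectId();   // placeholder
    var totals = []
        .concat(db.totalsYearly.find({ user: userId, year: 2014 }).toArray())
        .concat(db.totalsMonthly.find({ user: userId, year: 2015, month: { $in: [1, 2] } }).toArray())
        .concat(db.totalsDaily.find({ user: userId, year: 2015, month: 3, day: { $lte: 13 } }).toArray());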
[16:39:34] <StephenLynx> sometimes you have to make compromises, IMO
[16:40:45] <bdiu> back to my former question though... if you HAD to provide the full selectability and 30+sec queries were not reasonable, is there an alternative technology you might turn to?
[16:40:58] <bdiu> StephenLynx: thanks for your help, it's greatly appreciated
[19:42:06] <cheeser> is this an array of documents?
[20:01:56] <chrisOlsen> Hi everyone, is it possible to update a document with $set where the value is a further subdocument? e.g. { $set: {"My.Object.Chain": {"SubKey": "SubValue"}}}
[20:03:43] <chrisOlsen> Alright, thanks. I was hoping to not have to iterate over the subkeys
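For the record, both forms are valid; the difference is that setting the whole subdocument replaces it, while dot notation into a subkey leaves the other subkeys intact. The coll collection and someId are placeholders.

    var someId = ObjectId();   // placeholder
    // Replaces whatever object currently sits at My.Object.Chain
    db.coll.update({ _id: someId }, { $set: { "My.Object.Chain": { SubKey: "SubValue" } } });
    // Touches only the named subkey
    db.coll.update({ _id: someId }, { $set: { "My.Object.Chain.SubKey": "SubValue" } });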
[20:49:39] <jr3> cheeser: I have a document that has an array of integers, I want to sort this array then remove any elements in the array after index 10
[20:54:53] <cheeser> you can maybe $unwind in an aggregation then limit 10
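A hedged sketch of cheeser's $unwind suggestion (the aggregation only computes the trimmed array; writing it back is a separate step), plus a $push with $each/$sort/$slice variant that trims the stored array in place. The numbers field and someId are placeholders.

    var someId = ObjectId();   // placeholder
    db.coll.aggregate([
        { $match: { _id: someId } },
        { $unwind: "$numbers" },
        { $sort: { numbers: 1 } },
        { $limit: 10 },
        { $group: { _id: "$_id", numbers: { $push: "$numbers" } } }
    ]);
    // In-place alternative: push no new elements, sort ascending, keep the first 10
    db.coll.update({ _id: someId }, { $push: { numbers: { $each: [], $sort: 1, $slice: 10 } } });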
[21:00:40] <shlant> does the secondary readPreference for a replica set mean that even if the secondary is down, it won't read from the primary?
[21:00:53] <shlant> or what is the diff between secondary and secondaryPreferred
[21:02:36] <shlant> it says "In most situations, operations read from secondary members but if no secondary members are available, operations read from the primary."
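A short sketch of the practical difference, assuming the mongo shell; the connection string hosts are placeholders.

    // readPreference=secondary fails when no secondary is available;
    // readPreference=secondaryPreferred falls back to the primary in that case, e.g.:
    //   mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred
    db.getMongo().setReadPref("secondaryPreferred");
    db.coll.find();   // served by a secondary when one is up, otherwise by the primary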
[23:05:07] <fxmulder> so I have 2 sharded replica sets going, I added the second replica set earlier this year and it migrated data from the first replica set to the second replica set. The first replica set seemed to retain the original records it migrated, so we are running cleanupOrphaned on it to clean it out.
[23:05:49] <fxmulder> I would expect that the empty space left by the removed orphans would be reused by new entries into the collection but disk space keeps climbing
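For reference, a hedged sketch of the cleanupOrphaned loop fxmulder mentions, following the documented pattern of repeating until stoppedAtKey comes back empty; it is run against the primary of the shard holding the orphans, and the namespace is a placeholder.

    var ns = "mydb.mycollection";   // placeholder namespace
    var nextKey = {};
    var result;
    while (nextKey != null) {
        result = db.adminCommand({ cleanupOrphaned: ns, startingFromKey: nextKey });
        if (result.ok != 1) { print("cleanupOrphaned failed: " + tojson(result)); break; }
        nextKey = result.stoppedAtKey;
    }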