[00:02:00] <Freman> probably a document with lots of named arrays in it, php hates those
[00:02:31] <Freman> my apologies for blaming mongo
[00:03:20] <Boomtime> no worries, i guess you can figure out ways to combat this now
[00:03:21] <Freman> in the real world this will never be run, it's just a proof of concept to convince the bosses to let us stop sending meaningless data to mongo that would be better stored in influx or something to get pretty graphs
[00:03:42] <Freman> plan being to reduce the size of these documents and the quantity of them
[00:09:54] <Freman> for reference it stops dying at 512 meg
[00:11:14] <Boomtime> well.. at least you know what it takes
[04:38:40] <grandy> confused about the syntax of using aggregation to count users based on a sub-document's attribute...
[04:55:03] <Boomtime> hi grandy, that description sounds like it could be done with a query, but aggregation would be fine too - can you pastebin example document and what you want to count?
[04:55:20] <Boomtime> also, let us know what you've tried
[05:03:14] <grandy> Boomtime: thanks much, just figured it out, I think the idea of the structure of method: {... args ... } was a bit counter-intuitive when looking at the stuff i typed in
[06:45:04] <angular_mike_> I'm struggling with finding data on length and content constraints for different data types (string, date, document, etc.) in the documentation. Anyone can help?
[09:17:09] <pamp> are there any tests comparing .net driver vs java driver performance?
[10:45:37] <iszak> Can I create a database after setting up a replica set?
[11:27:30] <_Rarity> Hello. Can someone help me solve a problem when connecting to mongodb remotely?
[11:29:11] <_Rarity> When on the host machine, I can easily "$ mongo localhost:27017 -u USER -p PASS". But when trying to log in remotely, I get the error " 18 { ok: 0.0, errmsg: "auth failed", code: 18 }"
[11:30:11] <_Rarity> I have made sure that I specifically use the "admin" database for login. I have also made sure that the port is open and that the server responds on it
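A hedged guess at the usual cause of "code: 18" over the network: the remote client authenticating against a different database than the one the user was created in. Assuming the user lives in admin (REMOTE_HOST, USER, PASS are placeholders), the invocation would look like:

```shell
# Connect straight to the admin database, where the user was created:
mongo REMOTE_HOST:27017/admin -u USER -p PASS

# Or connect to another database but authenticate against admin:
mongo REMOTE_HOST:27017/mydb -u USER -p PASS --authenticationDatabase admin
```

Another possibility worth ruling out: an older remote mongo shell speaking MONGODB-CR against a 3.0 server whose users are SCRAM-SHA-1 will also fail with "auth failed", even with correct credentials.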
[12:00:38] <adsf> if i have a mongocursor in java that i have made changes to, how do i go about updating the records?
[12:23:19] <adsf> i think i may have figured it out, happy days
[13:00:45] <thikonom> hi everyone, I am currently planning to upgrade to mongo 3.0. Does anybody know if WiredTiger needs extra space during the restore phase?
[13:20:17] <fxmulder> how much memory does mongodb use when initializing a new replica member? I have 32G of ram and 64G of swap in this thing and it died from being out of memory
[13:47:42] <adsf> cheeser: what would be amazing would be a json2model for codecs :p
[14:28:38] <d-snp> Axy: that step "replace the binaries" was exactly what I meant when I said installing mongodb 3.0 will automatically remove mongodb 2.6
[14:44:52] <GothAlice> scottbessler: Alas, none of my own clusters noticed the rather unusual situation last night. (Also so happy that today is a holiday, in case there were issues.)
[14:55:57] <pamp> Is there any benchmark comparing .net and java driver performance?
[15:00:52] <grandy> hmm, trying to figure out how to sum the price and group by item.name .. any advice? document looks like: {name: 'henry', items: [{name: 'a', price: 1}, {name: 'b', price: 3}] } ...
[15:07:35] <adsf> has the logger class changed for java in mongo 3?
[15:07:47] <adsf> having trouble setting a level to not show a lot of the chatter
[15:33:46] <grandy> ^^ wondering if someone could help me understand that query
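The sum-price-grouped-by-item.name question above is the classic $unwind-then-$group pattern. A sketch under assumptions (the collection name "people" is hypothetical): the shell pipeline is shown in the comment, and below it a plain-JS simulation of what the two stages compute on sample documents, so the semantics can be checked without a server.

```javascript
// In the mongo shell, against a hypothetical "people" collection:
//   db.people.aggregate([
//     {$unwind: "$items"},
//     {$group: {_id: "$items.name", total: {$sum: "$items.price"}}}
//   ])
// Plain-JS simulation of those two stages (second document is made up
// to show summing across documents):
const docs = [
  {name: 'henry', items: [{name: 'a', price: 1}, {name: 'b', price: 3}]},
  {name: 'joan',  items: [{name: 'a', price: 2}]}
];

// $unwind: emit one document per array element
const unwound = [];
for (const d of docs) {
  for (const item of d.items) unwound.push({name: d.name, items: item});
}

// $group with $sum: accumulate totals keyed by items.name
const totals = {};
for (const d of unwound) {
  totals[d.items.name] = (totals[d.items.name] || 0) + d.items.price;
}

console.log(totals); // → { a: 3, b: 3 }
```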
[16:55:49] <schmichael> is there a way to log all queries hitting a server w/2.6.7? i tried using setProfilingLevel(2,0), but it still only seems to log queries that take >=1ms
[16:56:24] <GothAlice> schmichael: If your server is configured for operation in a replica set (i.e. has an oplog) you should be able to tail that to watch all activity.
[16:56:52] <honigkuchen> can mongodb also be used as a triple store database, and if yes, does it have disadvantages compared to normal triple store databases?
[16:57:03] <schmichael> GothAlice: oplogs include queries? i assumed they only included mutations
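On the profiling question: my understanding (worth verifying against the 2.6 docs) is that level 2 records every operation, including reads, in the capped system.profile collection, even when the server's text log only prints operations slower than slowms. A shell fragment illustrating that, assuming a running mongod and a hypothetical database name "mydb":

```javascript
// mongo shell fragment -- needs a live mongod, not runnable standalone
var mydb = db.getSiblingDB("mydb");  // "mydb" is a hypothetical name
mydb.setProfilingLevel(2);           // level 2: profile ALL operations

// Every operation (reads included) now lands in the capped
// system.profile collection, regardless of how fast it was:
mydb.system.profile.find().sort({ts: -1}).limit(5).pretty();
```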
[16:57:29] <GothAlice> honigkuchen: MongoDB can, technically, be used as a method of storing n-tuples. It's not optimized for this use, however.
[16:58:07] <GothAlice> honigkuchen: No effort was spent to optimize that use case. (None.)
[16:58:26] <honigkuchen> speed or practical programming?
[16:58:45] <GothAlice> Just that there are multiple ways one could implement that, each with specific trade-offs unique to MongoDB.
[16:59:25] <GothAlice> {_id: ObjectId(…), tuple: [1, 2, 3]} vs. {_id: …, tuple: {foo: 1, bar: 2, baz: 3}}, etc.
[17:00:05] <GothAlice> One could simply treat a document as a "named tuple".
[17:01:16] <GothAlice> The difficulty comes down to exactly how you want to use those values. Set operations? (I.e. intersections between records?)
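For the set-operation angle: arrays get multikey indexes in MongoDB, so membership-style queries come cheaply with the array layout. A sketch (the "tuples" collection name is hypothetical); the shell queries are in the comment, followed by plain-JS equivalents of the two matchers so the semantics are checkable without a server.

```javascript
// Shell sketch, hypothetical "tuples" collection:
//   db.tuples.createIndex({tuple: 1})        // multikey index over elements
//   db.tuples.find({tuple: {$all: [1, 3]}})  // tuple contains BOTH 1 and 3
//   db.tuples.find({tuple: {$in:  [2, 9]}})  // tuple contains 2 OR 9
// Plain-JS equivalents of the two matchers:
const docs = [{tuple: [1, 2, 3]}, {tuple: [4, 5, 6]}];
const matchesAll = (doc, vals) => vals.every(v => doc.tuple.includes(v));
const matchesIn  = (doc, vals) => vals.some(v => doc.tuple.includes(v));

const both   = docs.filter(d => matchesAll(d, [1, 3])); // only [1,2,3] matches
const either = docs.filter(d => matchesIn(d, [2, 9]));  // only [1,2,3] matches

console.log(both.length, either.length); // → 1 1
```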
[17:01:40] <honigkuchen> is mongodb ok for ontologies?
[17:02:03] <honigkuchen> because I have heard that ontologies need triple store databases
[17:02:28] <fxmulder> is there a recommended amount of memory for initial replication with a replica set member?
[17:03:25] <GothAlice> honigkuchen: It is until they become extremely hierarchical (no recursive queries in MongoDB, so careful data design is needed) or if the structure is better modelled as a graph, for which a real graph database like Neo4j _will_ do a better job. A triplestore is just a form of entity-attribute-value, that is, a hack to work around the limitations of SQL. Limitations that don't apply to MongoDB.
[17:04:12] <GothAlice> Why have a series of records each describing one attribute of an object instead of having a single object with multiple attributes? (Which MongoDB does natively. That's the entire point, actually. ;)
[17:05:09] <honigkuchen> GothAlice: this was very good information
[17:05:38] <fxmulder> or is there a way of calculating memory requirements?
[17:06:15] <GothAlice> fxmulder: MongoDB uses memory mapped files, so the "optimal" available memory is the same as the on-disk size, + room for "working memory" to answer queries and manage connections.
[17:06:34] <GothAlice> Working memory size will fluctuate with connection count and activity.
[17:06:56] <fxmulder> well I've been trying to get a new replica set member going and mongodb keeps dying due to OOM
[17:10:06] <GothAlice> Then I'd open a JIRA ticket to get some upstream assistance diagnosing that. An OOM event with that much hardware should simply not happen.
[17:12:14] <GothAlice> fxmulder: I have to ask, 'cause it's a possibility: are you running 32-bit mongod?
[17:12:35] <GothAlice> That would certainly run into memory issues.
[17:13:36] <GothAlice> 64-bit machine, 64-bit kernel, 64-bit userland, and 64-bit service. Each of those can drop down into 32-bit on most architectures. ;P
[17:13:46] <GothAlice> I.e. running a 32-bit application on a multilib 64-bit host.
[17:14:11] <fxmulder> ii mongodb-org 3.0.4 amd64 MongoDB open source document-oriented database system (metapackage)
[17:17:14] <GothAlice> Definitely open a ticket, then. If you have commercial support, make sure you submit the ticket that way to get the attention it deserves. :)
[17:17:31] <fxmulder> hmm, I had plenty of swap space free when this died
[17:17:47] <fxmulder> Total swap = 64802808kB, Free swap = 64125816kB
[17:18:26] <fxmulder> I will open a ticket though, thanks
[18:18:46] <Havalx> when indexing documents; can I index by key name and/or value name?
[18:20:38] <cheeser> indexes are created based on the values of the fields...
[18:33:05] <cittatva> hello! can anyone help me mount some EBS snapshots from mongolab backup on an ec2 machine so I can verify the backup? I've gotten as far as creating volumes and mounting them, but mdadm doesn't see any recognisable superblocks
[18:53:13] <cittatva> I figured it out - needed to use lvm2 instead of mdadm
[19:01:01] <ericmj> according to this: https://github.com/mongodb/specifications/blob/master/source/crud/crud.rst#update-vs-replace-validation drivers should validate update/replace documents
[19:01:16] <ericmj> but when testing the node driver it doesn't seem to do any validation
[19:22:52] <adsf> so i have a query whose result i turn into a List<My_model>, then i do some work on the list, then i want to bulkwrite by passing a List<WriteModel<My_Model>> but im getting duplicate key errors (because of an index)
[19:23:01] <adsf> doesnt bulkwrite do kind of an upsert
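As I understand it, a bulk write only upserts if each operation asks for it; in the Java 3.x driver that's a ReplaceOneModel constructed with new UpdateOptions().upsert(true), rather than InsertOneModel (which will hit the unique index on re-inserts). A plain-JS sketch of the replace-or-insert semantics (function and variable names are made up for illustration; the node-driver call shape is in the comment):

```javascript
// Node-driver shape, for reference:
//   coll.bulkWrite([{replaceOne: {filter: {_id: doc._id},
//                                 replacement: doc, upsert: true}}])
// Simulation of replaceOne-with-upsert against an in-memory "collection":
function replaceOneUpsert(collection, filterKey, doc) {
  const i = collection.findIndex(d => d[filterKey] === doc[filterKey]);
  if (i >= 0) collection[i] = doc;  // matched: replace, no duplicate-key error
  else collection.push(doc);        // no match: upsert inserts a new document
  return collection;
}

const coll = [{_id: 1, v: 'old'}];
replaceOneUpsert(coll, '_id', {_id: 1, v: 'new'});    // replaces existing doc
replaceOneUpsert(coll, '_id', {_id: 2, v: 'fresh'});  // inserts a new doc

console.log(coll.length, coll[0].v); // → 2 new
```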