[03:46:08] <sheki> has anyone used mongos as a proxy just to pool connections
[03:46:29] <sheki> does mongos open one outgoing connection per 1 incoming connection?
[04:11:42] <QbY> Can anyone point me to an example of using Java and Mongo, specifically storing an org.w3.Document?
[06:16:21] <quattro_> i enabled auth on my replicaset and added a first user thru the localhost bypass i didn't give the user "userAdminAnyDatabase" role, can i grant this anyway now?
[07:58:42] <jiffe98> so I'm rebuilding indexes and it is working on stage 2/3, completes it and starts back over on 1/3
[08:42:00] <efazati> how can I create an index for this query? http://codepad.org/sPyfBGDt
[11:21:17] <therion> Is it possible to aggregate on a working subset without reducing the pipeline working set ? For example, I want to get the $max aggregate value grouped by an ID. I want to store that value in a variable (without reducing the set) and then perform a $match using that ID.
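therion's two-pass idea can be simulated in plain JavaScript (data and field names below are made up): compute the per-ID max first, then match the original, unreduced set against it. In the aggregation framework of that era this generally meant a $group pass to get the max, followed by a second query using that value.

```javascript
// Made-up sample data standing in for a collection.
const docs = [
  { id: 'a', v: 1 },
  { id: 'a', v: 5 },
  { id: 'b', v: 3 },
];

// Pass 1: $group-style max per id.
const maxById = {};
for (const d of docs) {
  maxById[d.id] = Math.max(maxById[d.id] ?? -Infinity, d.v);
}

// Pass 2: $match the original (unreduced) docs against the computed max.
const matched = docs.filter(d => d.v === maxById[d.id]);
console.log(matched); // one "max" doc per id
```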
[13:53:25] <efazati> how can I create an index for this query? http://codepad.org/sPyfBGDt
[14:26:00] <smolinari> Hi. I have a database design question. Although Mongo says it doesn't bind you to a schema, there is often mention of still keeping data normalized to keep up database performance. No normalization = poor performance. For instance, increasing document size by adding fields later in the game. I've read this causes the data to become fragmented and thus slower.
[14:27:01] <smolinari> A different problem with large tables in MySQL is that ALTER TABLE kills database performance when a new column/field is added. Now we are looking at a similar issue with Mongo and increasing document sizes.
[14:27:14] <smolinari> Sorry, a similar problem....
[14:27:32] <smolinari> So I guess the question is, how can we give our customers the flexibility of creating new fields at will, without killing database performance?
[14:29:10] <smolinari> Or if we decide more fields are necessary in certain collections (a change in schema), what is the process to make sure Mongo doesn't get affected negatively?
[14:35:55] <rickibalboa> smolinari, we had a similar problem and took an approach like this: https://ghostbin.com/paste/y9v2h
[14:36:08] <rickibalboa> if that makes any sense, hope it helps you
[14:39:17] <smolinari> Great thanks for that. But doesn't adding the dynamic section also increase the original collection's document size? I've also read about pre-allocating. Not sure that is a great way to go, as you'd also be taking up space for data you might never use.
[14:41:01] <rickibalboa> smolinari, the dynamic section is kept completely separate from the original document; any of the dynamic items are in a separate collection and are fetched with dynamiccollection.find({parentId: 'original-document-id'}), then looped through there
[14:41:06] <smolinari> But it would be easy to preallocate a set number of additional fields and just set that as an overall limit.
[14:41:58] <rickibalboa> But it's only good if you've got a dynamic object or array that's going to be changing; not really good for normal fields, or when you're increasing the size of the original document
[14:42:44] <rickibalboa> For example, I have an app that stores users in an IRC channel that way. It originally grabbed the document and modified it, but now the users are in a separate collection. You can imagine it got fairly inefficient for large freenode channels.
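A minimal in-memory sketch of the layout rickibalboa describes, with made-up collection and field names: fixed fields stay on the parent document, while dynamic attributes live in a second collection keyed by parentId, so adding an attribute inserts a small new document instead of growing (and possibly relocating) the parent.

```javascript
// Two arrays standing in for two collections.
const users = [
  { _id: 'u1', name: 'alice' }, // fixed fields only; never grows
];
const userAttributes = [
  { parentId: 'u1', key: 'birthdate', value: '1990-01-01' },
  { parentId: 'u1', key: 'hobbies',   value: ['chess'] },
];

// Stand-in for dynamiccollection.find({ parentId: ... }).
function findAttributes(parentId) {
  return userAttributes.filter(a => a.parentId === parentId);
}

// A customer adding a new field later just inserts a new small document.
userAttributes.push({ parentId: 'u1', key: 'gender', value: 'f' });
console.log(findAttributes('u1').length); // 3
```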
[14:46:53] <smolinari> Yes. A user collection is a good example. Let's say a user collection has some basic fields. These will never change. But a customer uses that same collection data model for their user collection, but also wants to add birthdate, hobbies and favorite films and even later, they remember they want to have the sex of each user. Each added field creates bigger documents.
[14:47:31] <smolinari> Whereas, all of this information could easily be kept in a document.
[14:48:42] <rickibalboa> Just depends how big the document can get; in my scenario it could have gone from 1 to 1000 properties. For that example I probably wouldn't separate the individual fields into a new collection; it would probably be safe to store that all in a document
[14:51:08] <smolinari> Ok. But when is it safe and when isn't it safe? I haven't found any information on making that kind of decision, and I've read a book and searched the Internet for an answer. It is a fundamentally important decision for our system design (at least I think it is, with my very limited knowledge of mongo).
[14:52:28] <rickibalboa> You could probably safely assume, for say a social networking site, that settings like that don't often change: gender, birthdate, even hobbies and favourite films. If you were storing an object containing everything they've commented on, though, that for me would be a good use case.
[14:53:50] <smolinari> Ok. yes. That would be an obvious use case for creating a new collection for the data.
[14:54:14] <efazati> what is problem of my query ? > db.books.find({ $query: { "$or" : [ { "items.producer.id" : ObjectId("52283b214c283a571303e5f6") }, { "items.dio" : { "$in" : [ "574.076" ] } } ] }, "$orderby" : { "circulation" : -1 }}).explain()
[14:54:15] <efazati> Sat Dec 28 18:21:41.355 error: { "$err" : "invalid operator: $or", "code" : 10068 } at src/mongo/shell/query.js:128
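efazati's query uses the legacy `$query`/`$orderby` envelope; rewriting it with `.sort()` avoids the envelope entirely and sidesteps operator-parsing quirks like the `invalid operator: $or` error above. The helper below is a hypothetical illustration of how the envelope maps onto a plain filter plus sort; it is not driver code. In the shell the rewritten query would look like (not run here): `db.books.find({ $or: [...] }).sort({ circulation: -1 }).explain()`.

```javascript
// Hypothetical helper: unwrap a legacy $query/$orderby envelope into
// the { filter, sort } pair a modern find().sort() call would take.
function unwrapLegacyQuery(q) {
  if (q && typeof q === 'object' && '$query' in q) {
    return { filter: q.$query, sort: q.$orderby || {} };
  }
  return { filter: q, sort: {} };
}

// Shape of the query from the log (ObjectId replaced with a string
// placeholder so this runs standalone).
const legacy = {
  $query: {
    $or: [
      { 'items.producer.id': '52283b214c283a571303e5f6' },
      { 'items.dio': { $in: ['574.076'] } },
    ],
  },
  $orderby: { circulation: -1 },
};

const { filter, sort } = unwrapLegacyQuery(legacy);
console.log(filter.$or.length, sort.circulation); // 2 -1
```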
[14:55:08] <smolinari> I am just worried about the increase document size issue and database performance.
[15:09:48] <smolinari> Ok, found some more information. http://docs.mongodb.org/manual/core/record-padding/#write-operations-padding-factor
[15:12:32] <smolinari> Quite interesting stuff. A general (probably totally noob) question:
[15:14:02] <smolinari> But if we do add a field to a collection, it would only be added to newer documents. If an older document is read, the field would be missing or come back as null, right? And if data is updated, that is when possible issues with document size might arise.
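On smolinari's question: documents written before the field existed simply lack it, so reads return the document without the key; in JavaScript the property reads as `undefined` rather than `null`. A plain-JS demonstration with illustrative field names:

```javascript
// Stand-ins for two documents in the same collection, written before
// and after a new "hobbies" field was introduced.
const oldDoc = { _id: 1, name: 'alice' };            // pre-change
const newDoc = { _id: 2, name: 'bob', hobbies: [] }; // post-change

console.log(oldDoc.hobbies);        // undefined, not null
console.log('hobbies' in oldDoc);   // false
console.log('hobbies' in newDoc);   // true
```

Document growth only becomes a concern when an update actually adds the field to an existing document and the new size exceeds the allocated record space.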
[15:23:01] <DH4> hey I'm having a little issue with using --auth on MongoDB
[15:23:16] <DH4> if i specify users per database, with the correct roles, everything works fine
[15:23:34] <DH4> but I can't seem to add a user that has permissions to modify every database
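For DH4's case: the `...AnyDatabase` roles can only be granted in the `admin` database, so the all-databases user has to be created there. A sketch of the user document (the username and password are placeholders); in a real shell you would run `use admin` and then `db.createUser(adminUser)`, which is not done here:

```javascript
// Shape of a user document with cluster-wide roles. Both roles must
// name the admin database, which is also where the user is created.
const adminUser = {
  user: 'superadmin',   // placeholder name
  pwd: 'changeme',      // placeholder password
  roles: [
    { role: 'userAdminAnyDatabase', db: 'admin' },
    { role: 'readWriteAnyDatabase', db: 'admin' },
  ],
};

console.log(adminUser.roles.every(r => r.db === 'admin')); // true
```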
[20:26:30] <AWAW> kali: ooh, maybe that would work
[22:10:16] <RoryHughes> When writing an API with mongodb+mongoose and needing to send back only, for example, 10 out of the 20 fields in a document, would it make more sense to select the 10 fields with the query, or instead just have a schema method which picks out the 10 before sending a response?
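Both options RoryHughes mentions can be sketched without a live database (field names are made up). Query-side projection ships only the needed fields over the wire, e.g. `collection.find({}, { name: 1, email: 1 })` in the shell of that era (not run here); an app-side pick fetches the whole document and trims it in one place, which suits a mongoose schema method:

```javascript
// App-side option: pick a whitelist of fields before sending a response.
function pick(doc, fields) {
  const out = {};
  for (const f of fields) {
    if (f in doc) out[f] = doc[f];
  }
  return out;
}

// Hypothetical document with both public and private fields.
const user = {
  name: 'rory',
  email: 'r@example.com',
  passwordHash: 'x',
  role: 'admin',
};

const publicUser = pick(user, ['name', 'email']);
console.log(publicUser); // { name: 'rory', email: 'r@example.com' }
```

The trade-off: projection saves bandwidth and memory on large documents, while the schema-method approach keeps the field whitelist in one reusable place.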
[23:27:24] <OliverJAsh> when i do `var ObjectID = require('mongodb').ObjectID; new ObjectID()`, the output in my console appears to be a string, when in fact the ObjectID is an object. how does this object change the appearance of its value when logged to the console?
[23:50:59] <arvidkahl> which browser is that? some call toString() on objects when logging
[23:53:16] <OliverJAsh> arvidkahl: i think it might be something BSON is doing
[23:54:47] <arvidkahl> http://stackoverflow.com/questions/18347719/nodejs-using-mongodb-native-driver-how-do-i-convert-objectid-to-string second reply replicates your issue
[23:55:33] <arvidkahl> that person had the problem of NOT getting a string it seems :P