[11:23:58] <SvenDowideit> I have some stored js that I'm trying to debug
[11:24:17] <SvenDowideit> and, that js is normally called from a $where clause in a query
[11:25:02] <SvenDowideit> I have added some print() statements to it, and when I call the stored js from the mongo shell, the code does what I expect and prints the expected output into the mongodb.log file
[11:25:35] <SvenDowideit> but when it's called from a query issued by the perl app, nothing is printed to the log file, and the results are not what I expect
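For readers of the log, a minimal sketch of the setup being described (requires a running mongod; the collection and function names here are illustrative, not SvenDowideit's actual code). Stored JavaScript lives in `db.system.js`, and `print()` inside server-side JS goes to the log of whichever mongod actually evaluates it — which may not be the node whose log you are tailing:

```javascript
// Store a server-side function (mongo shell):
db.system.js.save({
  _id: "isInteresting",
  value: function (doc) {
    print("checking " + doc._id);   // lands in that mongod's log, not the client
    return doc.status === "active";
  }
});

// Reference the stored function from a $where clause:
db.things.find({ $where: "isInteresting(this)" });
```

One hedged explanation for the symptom above: if the perl app connects to a different mongod (another replica member or shard) than the shell does, the print() output ends up in that other node's log file.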
[14:14:22] <fotoflo> ron: i believe so… but maybe i just killed the client
[14:15:43] <fotoflo> yes, i just killed the client
[14:15:50] <fotoflo> thanks ron… now to find the default install directory
[14:21:12] <fotoflo> ok working!! now one more question: i have a collection with an array of tags in the schema, how do i use aggregate to get the list of unique tags?
[14:25:52] <fotoflo> close, but still have overlapping
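For the archive: two standard ways to get the unique values of an array field (mongo shell; the collection name `items` is illustrative). The `$unwind`/`$group` pair is what removes the "overlapping" fotoflo mentions, since grouping on the tag value collapses duplicates:

```javascript
// 1. simplest: distinct
db.items.distinct("tags");

// 2. aggregation: flatten the array, then group on each tag value
db.items.aggregate([
  { $unwind: "$tags" },            // one document per array element
  { $group: { _id: "$tags" } }     // duplicates collapse into one group
]);
```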
[15:29:26] <timeturner> what is the most appropriate way to model a one to many relationship in mongodb?
[15:30:01] <timeturner> The only ways that I know of are that all of the "many" documents have a field that references the "one" document via objectId or something
[15:30:19] <timeturner> or the "one" document has an array of objectIds that reference the "many" documents
[15:30:36] <timeturner> the problem with option 2 is that it doesn't scale indefinitely
[15:30:58] <timeturner> I don't know after how many objectIds the max document size will be reached
[15:37:32] <ron> timeturner: well, it's a bit more complicated than that.
[15:37:57] <ron> you should model your data based on your queries and not what 'feels' right.
[15:38:37] <ron> also, the document size limitation isn't really that big of a problem, and if you really fear hitting the limit, there are various ways to deal with it.
[16:08:09] <timeturner> ron: so should I put the many objects' objectIds in the one document as an array, and if it exceeds the limit, adjust the system to become more decentralized, where each of the many docs references the one doc?
[16:08:29] <timeturner> the many and the one are in different collections
[16:20:18] <ron> timeturner: I'd love to go into it with you, but I gotta run. maybe we could talk about it in a few hours.
[16:21:46] <Vile> timeturner: if there's going to be a huge number of documents and you are going to use a small subset of them tied to the main document => use the first type of connection
[16:22:28] <Vile> if you would need to have access to all of them (and don't expect many) at any time => store them as array
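A sketch of the two patterns Vile describes, in mongo shell (collection names `posts`/`comments` are illustrative stand-ins for the "one" and "many" sides):

```javascript
// Pattern 1: each "many" doc references the "one" doc.
// Scales to any number of children; reading them is a query.
db.comments.insert({ postId: post._id, text: "..." });
db.comments.find({ postId: post._id });

// Pattern 2: the "one" doc embeds an array of references.
// All ids come back in a single read, but the array is bounded
// by the 16 MB document size limit.
db.posts.update({ _id: post._id }, { $push: { commentIds: comment._id } });
```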
[16:47:22] <Vile> timeturner: it could be a good option if it would be used often
[16:48:46] <timeturner> but the problem is that either I have to set a limit on the number of characters per document or I have to guess a hard limit so that those 10 docs don't exceed the main doc's size limit
[16:49:37] <mrpro> when are you guys going to start adding new features instead of fixing same bugs/regressions over and over
[16:49:52] <timeturner> I was also thinking about calculating how much space a particular document will take up in the one doc via BSONSize and then checking how much space is left in the one doc
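The size check timeturner is describing can be done from the mongo shell with `Object.bsonsize()` (a sketch; `posts` and `someId` are placeholders):

```javascript
// Measure a document's BSON size against the 16 MB document cap
var doc = db.posts.findOne({ _id: someId });
var used = Object.bsonsize(doc);              // size in bytes
var remaining = 16 * 1024 * 1024 - used;
print("bytes remaining before the document limit: " + remaining);
```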
[16:51:41] <Vile> timeturner: just add references as ID's, not the whole documents
[16:52:55] <Vile> this way you will not have to care about what to do with a sub-doc that goes over the limit
[16:55:08] <Vile> depends on your usage scenario, though. But usually you would not want the whole sub-doc, only the part of it that makes sense and allows you to easily get it from the main storage collection
[19:57:24] <noordung> Anyone here a user of Mongoose?
[19:57:49] <wereHamster> noordung: ask your actual question.
[19:58:26] <noordung> Right, I'm having some weird trouble. I open a connection, I try to save a document, and nothing happens... Never had this before...
[21:03:46] <taf2> when upserting on a sharded cluster… if my shard key is aid and _id
[21:04:03] <taf2> is it enough to upsert on aid and another key? or do i need both aid and _id?
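For the log: on a sharded collection an upsert's query must contain the full shard key, otherwise mongos cannot route the write. A sketch assuming taf2's compound shard key `{ aid: 1, _id: 1 }` (collection and field values are illustrative):

```javascript
db.events.update(
  { aid: 42, _id: someId },            // full shard key present in the query
  { $set: { lastSeen: new Date() } },
  { upsert: true }
);
```

Querying on `aid` plus some other non-shard-key field is not enough for the upsert case; both shard key fields need to appear in the query document.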
[21:10:02] <driscoll> quick q about interpreting mongostat output. not sure what the "locked db" is measuring exactly.
[21:10:55] <driscoll> i'm currently importing some very large JSON files... and locked db is reporting "collection:900%" and higher. this seems odd but my inserts are still around 4k/s
[21:11:22] <driscoll> any insight (or suggested reading) much appreciated!
[21:21:31] <noordung> Anyhow, my Mongoose problem seems to be in calling mongoose.model vs connection.model... :/
[21:21:33] <chovy> how do i tell what version of mongodb i have?
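Answering chovy's question for the archive, either from the OS shell or from a connected mongo shell:

```javascript
// OS shell:
//   $ mongod --version
//   $ mongo --version

// From a connected mongo shell:
db.version()
```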
[21:26:29] <noordung> Right, so note this: mongoose.model(modelName, modelSchema); only stores it in the mongoose implementation, you need to call connection.model(modelName) to get the model constructor...
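A sketch of the distinction noordung hit, in Mongoose of this era (model and connection names are illustrative; needs `mongoose` installed and a running mongod):

```javascript
var mongoose = require('mongoose');
var conn = mongoose.createConnection('mongodb://localhost/test');

var catSchema = new mongoose.Schema({ name: String });

// Registering on the default mongoose connection...
mongoose.model('Cat', catSchema);

// ...does not bind it to a separately created connection. Register (or
// look up) the model on the connection you actually save through:
var Cat = conn.model('Cat', catSchema);
new Cat({ name: 'Whiskers' }).save(function (err) { /* ... */ });
```

This is why a save can silently do nothing: the model built via `mongoose.model()` is attached to the default connection, which may never have been opened.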
[21:36:54] <chovy> that's the one problem with debian. everything is outdated.
[22:36:38] <driscoll> for the sake of the log: here's a bug report regarding the "locked db" question i posted earlier: https://jira.mongodb.org/browse/SERVER-6507
[22:57:41] <voldial> Anyone familiar with #ask in semantic-mediawiki? Basically you can embed typed vars in mediawiki pages, and also embed queries (like return all pages that have a GPS coordinate, or sum some var across a subset of pages) that generate a view of the results (table, number, graph, map, whatever). It's cool, but it's this big PHP app... and the functions are somewhat limited... I want to roll my own replacement (not in PHP, not tied to media
[22:57:42] <voldial> wiki) and expose a lower level of access to the data... my first thought was to just let users enter SQL directly, but that can't be made safe... so I am looking at mongo's query language... is there some general solution that lets users submit mongo queries in a safe (read-only) way?
[23:07:16] <voldial> as far as I can tell, mongodb queries are read-only... but I suspect a user could still DoS the system
[23:09:01] <voldial> found http://docs.mongodb.org/manual/faq/developers/#how-does-mongodb-address-sql-or-query-injection
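One defensive layer for voldial's use case, sketched in plain JavaScript: reject user-supplied filter documents that contain `$where` (which executes arbitrary server-side JS and is the main injection/DoS vector the linked FAQ discusses). This is an illustration, not a complete security solution; it should be combined with a read-only database user and query time limits:

```javascript
// Recursively scan a user-supplied query document and refuse any filter
// that smuggles in a $where clause at any nesting depth.
function isSafeFilter(filter) {
  if (filter === null || typeof filter !== "object") return true;
  for (const [key, value] of Object.entries(filter)) {
    if (key === "$where") return false;       // arbitrary JS execution
    if (!isSafeFilter(value)) return false;   // recurse into nested operators/arrays
  }
  return true;
}
```

Only filters that pass the check would be forwarded to `find()`; everything else is rejected before it reaches the database.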
[23:31:59] <timeturner> how do you do full text search in mongo?
[23:32:08] <timeturner> does it work by querying the index with every word
[23:32:24] <timeturner> or do I query for all the docs in the index
[23:32:50] <timeturner> and then just use client side js to display a sub amount of that?
[23:37:03] <vsmatck> timeturner: Search mongodb.org for "full text search" and click first result.
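For completeness in the log: MongoDB of this era had no built-in full-text search (text indexes arrived later, in 2.4). The commonly documented workaround, which is what the page vsmatck points at described, is a multikey keyword array (collection name illustrative):

```javascript
// Store a lowercased keyword array alongside each document and index it:
db.articles.ensureIndex({ keywords: 1 });
db.articles.insert({
  title: "Full text search in mongo",
  keywords: ["full", "text", "search", "mongo"]
});

// Match documents containing all the query words:
db.articles.find({ keywords: { $all: ["text", "search"] } });
```

So the answer to timeturner's question is neither of the two options he guessed: the index is queried with the search words directly, and the server returns only the matching documents.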