[02:37:45] <joannac> MLM: np. the docs say "you can pick where to install. We assume you installed to c:\mongodb". Not the same as "this is the default" :p
[02:38:06] <joannac> bros: what's your app written in? what connector is it using?
[02:38:10] <MLM> I am curious what the default data directory is as well
[02:38:15] <joannac> bros: are you sure it's exactly the same
[10:30:17] <h4rdik> I'm new to mongodb. I have a database dump created from a remote server which I would like to restore onto my local PC. I have tried mongorestore with no arguments. A bunch of collections got imported, but the process aborted with a 'bad index key pattern' error.
[10:30:39] <h4rdik> The dump is a folder full of json and bson files
[10:31:08] <h4rdik> How can I import all the collections directly without much hassle?
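A likely fix, assuming the dump folder has the standard layout mongodump produces (one subdirectory per database containing .bson data files and .json metadata): point mongorestore at the dump directory, and if a malformed index definition keeps aborting the run, skip index restoration and rebuild the indexes by hand afterwards.

```shell
# Restore everything under ./dump (the default mongodump layout).
mongorestore ./dump

# If a corrupt index definition causes the 'bad index key pattern' abort,
# restore the data only and recreate the indexes manually later.
mongorestore --noIndexRestore ./dump
```

Adding --drop would also replace any collections left half-imported by the earlier failed run.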
[10:43:51] <KettleCooked> How much disk space does a mongodb document take? If it contains something along the lines of 1,000 bytes of text and a UUID? Trying to calculate future disk usage with many, many small documents.
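For a rough answer to KettleCooked's question, BSON sizes can be computed from the format itself: a 4-byte length prefix, then per element a type byte, the key name as a NUL-terminated string, and the encoded value, then a trailing NUL. A pure-Python estimator for a hypothetical document with an ObjectId `_id`, a 1,000-byte `text` field, and a 16-byte binary `uuid`:

```python
# Rough BSON size estimator (pure Python, no driver needed).
# Assumes a hypothetical document {_id: ObjectId, text: <n-byte string>, uuid: <16-byte binary>}.

def bson_string_size(key: str, n_bytes: int) -> int:
    # type byte + key cstring + int32 length + payload + trailing NUL
    return 1 + len(key.encode()) + 1 + 4 + n_bytes + 1

def bson_binary_size(key: str, n_bytes: int) -> int:
    # type byte + key cstring + int32 length + subtype byte + payload
    return 1 + len(key.encode()) + 1 + 4 + 1 + n_bytes

def bson_objectid_size(key: str) -> int:
    # type byte + key cstring + 12-byte ObjectId
    return 1 + len(key.encode()) + 1 + 12

def estimate_doc_size(text_bytes: int) -> int:
    # int32 document-length prefix + elements + terminating NUL
    return (4 + bson_objectid_size("_id")
              + bson_string_size("text", text_bytes)
              + bson_binary_size("uuid", 16) + 1)

print(estimate_doc_size(1000))  # 1060 bytes of BSON per document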
[10:58:42] <auzty> we must set the roles manually, right? if a user only gets readWrite on a db, they won't be able to create a db / user, right?
[10:59:31] <auzty> i just hesitate: if a user only gets the readWrite role, can that user create a user / a new db ._.
[11:27:55] <arussel> using mongo 2.6, in aggregate, is there a way to cast a string to an int to be able to use arithmetic functions on it later on?
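For arussel: 2.6's aggregation pipeline has no string-to-int operator ($toInt and $convert only arrived in MongoDB 4.0), so the usual workarounds are to store the field as a number in the first place (a one-off migration) or to convert client-side before doing the arithmetic. A pure-Python sketch of the client-side route, with hypothetical field names:

```python
# Documents as they might come back from the collection, with the
# numeric field stored as a string (hypothetical "qty" field).
docs = [
    {"_id": 1, "qty": "12"},
    {"_id": 2, "qty": "30"},
]

# Cast client-side, then do the arithmetic the pipeline couldn't.
total = sum(int(d["qty"]) for d in docs)
print(total)  # 42
```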
[12:01:56] <brano543> Hello, can anyone tell me the difference between a document store and graph-oriented storage? I just don't get why people think MongoDB is not suitable for graph traversal. The problem i am facing is: if you can get the data into memory, you just create some kind of map where you keep track of all neighbours of the current node and everything is fine. But when you can't fit the data into memory, you just need a database; you have to query it for the
[12:02:20] <brano543> node you are looking for. i think you also need to do that in neo4j, or am i wrong?
[12:07:25] <brano543> i looked at how one guy did this using a traditional database http://www.dupuis.me/node/27 but that's more complicated in my opinion, because there are a lot of joins, so it needs to traverse a lot of data every time to get the correct neighbors.
[12:09:52] <brano543> What i am trying to say is: he also preprocesses the data, so in the end he knows for every node whether he can go left or right, but he still has to look up the id, because he doesn't keep the whole set of relationships in memory.
[12:11:20] <brano543> Is anyone here able to back me up that MongoDB is a good choice for creating an edge-expanded graph?
[12:15:08] <StephenLynx> afaik, mongo can query by using geo location.
[12:15:16] <StephenLynx> have you looked into that?
[12:18:47] <pamp> how can i disaggregate an array in a collection??
[12:19:00] <brano543> StephenLynx: yes, i did, i was just asking if it's correct to solve it like this: {"node_id": something, "neighbours": [{"id": "id1", "cost": cost1}, {"id": "id2", "cost": cost2}]} and create an index on neighbours.id
[12:19:49] <StephenLynx> Don't know, I never used geolocation nor implemented something like that.
[12:20:34] <StephenLynx> it is an aggregation stage.
[12:21:18] <pamp> thanks Stephen.. can i copy the result into another collection?
[12:21:22] <brano543> StephenLynx: Forget about geolocation for a while. I asked if you can imagine using MongoDB for representing a graph with edges and traversing it
[12:21:36] <pamp> i need to re-model the collection
[12:21:41] <StephenLynx> never used an edge graph either :v
[12:21:53] <StephenLynx> pamp sure, but you will need a second query for that.
[12:30:17] <brano543> StephenLynx: no, you will need many queries to traverse along the route until you find the final destination. Another way may be to create the graph in memory, find the path, and get the information about the path from mongodb.
[12:32:03] <brano543> StephenLynx: it will be slow if i query mongodb that much, won't it?
[12:32:39] <StephenLynx> Don't know if it will be slower than a SQL query with a lot of joins.
[12:33:08] <StephenLynx> I don't have enough experience with any of the stuff you are dealing with to be able to guess that.
[12:34:29] <brano543> StephenLynx: Hmm, i don't know how i will solve it. i realized just today that it isn't the best idea to traverse the graph using hundreds of queries on mongodb. I guess i will need to use something else for this, maybe redis or something like that.
[12:35:16] <StephenLynx> yeah, probably there is something made for situations like that.
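For what it's worth, the adjacency-list schema brano543 sketched does work for traversal; the cost is the round trips. A pure-Python Dijkstra over documents of that shape, with an in-memory dict standing in for the collection (every `graph[node]` lookup would be one find_one() query in the real thing, which is exactly why a long route means hundreds of queries):

```python
import heapq

# Each "document" mirrors the schema discussed above:
# {"node_id": ..., "neighbours": [{"id": ..., "cost": ...}]}.
# A dict stands in for the MongoDB collection here.
graph = {
    "a": [{"id": "b", "cost": 1}, {"id": "c", "cost": 4}],
    "b": [{"id": "c", "cost": 2}],
    "c": [],
}

def shortest_path_cost(start, goal):
    """Plain Dijkstra over the adjacency lists; returns None if unreachable."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for edge in graph.get(node, []):  # would be one DB query per expanded node
            nd = d + edge["cost"]
            if nd < dist.get(edge["id"], float("inf")):
                dist[edge["id"]] = nd
                heapq.heappush(heap, (nd, edge["id"]))
    return None

print(shortest_path_cost("a", "c"))  # 3 (a -> b -> c beats the direct cost-4 edge)
```

Whether the per-hop latency is acceptable depends on route length; caching hot regions of the graph in memory, as discussed above, is the usual mitigation.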
[15:49:31] <GothAlice> I'll have to slap a migration on this thing at some point to pivot: source: [{_id: ObjectId(…), …}, …]
[15:49:47] <GothAlice> Oh, nearly two year old code, you give me nostalgia. And make me want to slap myself.
[15:52:26] <StephenLynx> from what I searched you can't use mongo itself to do that, you would have to do that in the application code.
[15:52:54] <GothAlice> Yeah, for now I'll be doing it application-side. (Of all of the queries needed for my "dashboard", this one is the most painful.)
[20:23:31] <StephenLynx> actually scrap that too because you can only query for specific sizes and not ranges :v
[20:24:06] <StephenLynx> mordonez: from the official docs: "To select documents based on fields with different numbers of elements, create a counter field that you increment when you add elements to a field."
[20:27:05] <mordonez> What I want is to match a row that has a record in days like this { "year" : 2015, "day" : 5, "time" : 0 }
[20:27:15] <mordonez> if it has { "year" : 2015, "day" : 4, "time" : 0 } for example
[20:32:29] <StephenLynx> you want to query for array elements?
[20:33:30] <mordonez> Let me explain a little bit more
[20:33:55] <mordonez> the record in the pastebin contains, in days, objects that represent a week
[20:34:10] <mordonez> in that case I have the end of 2014 and start of 2015
[20:34:22] <mordonez> the next week record will contain just 2015 records
[20:34:29] <mordonez> the thing is I want to filter by week
[20:34:46] <mordonez> but the query always returns that record that has both years 2014 and 2015
[20:35:33] <StephenLynx> http://stackoverflow.com/questions/8835757/return-query-based-on-date you can query for dates. but then you would have to record the date properly.
[20:35:35] <mordonez> I supposed filtering with $and would be enough
[20:35:53] <mordonez> that's why I put the $and, so both must match
[20:36:41] <mordonez> I want to filter something like "give me all records that contain in days an element with day >= 5 and day <= 11 and year = 2015"
[20:36:58] <mordonez> if you look at the record, none of the days elements match
[20:37:08] <mordonez> but I don't know how to represent that in a query
[20:43:51] <mordonez> and it returns a record that does not match
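This is the classic array-matching gotcha, and $elemMatch is the operator for it: separate conditions on days.day and days.year (whether written implicitly or inside $and) may each be satisfied by a *different* array element, whereas {"days": {"$elemMatch": {...}}} requires one single element to satisfy all of them. A pure-Python sketch of the two semantics, using a week document shaped like the ones described above (the concrete day values are assumed for illustration):

```python
# A week straddling the year boundary: late-December 2014 plus early-January 2015.
doc = {"days": [
    {"year": 2014, "day": 29, "time": 0},
    {"year": 2014, "day": 30, "time": 0},
    {"year": 2014, "day": 31, "time": 0},
    {"year": 2015, "day": 1,  "time": 0},
    {"year": 2015, "day": 4,  "time": 0},
]}

def and_semantics(days):
    # Like {$and: [{"days.day": {"$gte": 5}}, {"days.day": {"$lte": 11}},
    #              {"days.year": 2015}]}: each clause may be satisfied
    # by a DIFFERENT array element, so this week matches by accident.
    return (any(d["day"] >= 5 for d in days)
            and any(d["day"] <= 11 for d in days)
            and any(d["year"] == 2015 for d in days))

def elem_match_semantics(days):
    # Like {"days": {"$elemMatch": {"day": {"$gte": 5, "$lte": 11},
    #                               "year": 2015}}}: a SINGLE element
    # must satisfy every clause at once.
    return any(5 <= d["day"] <= 11 and d["year"] == 2015 for d in days)

print(and_semantics(doc["days"]))        # True  -- the false positive mordonez sees
print(elem_match_semantics(doc["days"])) # False -- correctly rejects this week
```

So the query mordonez wants is roughly db.weeks.find({days: {$elemMatch: {day: {$gte: 5, $lte: 11}, year: 2015}}}).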
[21:04:26] <hahuang61> seeing a ton of context switches and interrupts on our cluster right now, but no slow queries. anything besides the yielding that might cause this?
[22:12:45] <jrbaldwin> anyone know why this aggregate $geoNear + $text query isn't working http://stackoverflow.com/questions/28684188/mongo-aggregate-geonear-and-text-no-results
[22:18:32] <Boomtime> jrbaldwin: $text requires a text index - and only one index can be used for an aggregation
[22:20:05] <jrbaldwin> Boomtime: i have a text index on $text and a 2dsphere index for $geonear - is there a best practice for this type of query ?
[22:21:37] <Boomtime> only one index can be used for an aggregation
[22:24:34] <Boomtime> jrbaldwin: sorry, but i don't think you can combine $text with geo queries, you can try your $match stages the other way around but i don't think it will work
[22:26:52] <jrbaldwin> Boomtime: thanks! have you heard of an alternative way to reduce the results, or can you think of another way to approach it?
[22:27:53] <Boomtime> keywords/tags instead of $text
[22:30:48] <Nepoxx> My TTLs are never expiring, indexes are set correctly though, this is strange...