[07:52:59] <k_sze[work]> I'm trying to tear down a test sharded cluster. I figure the first things to stop are the mongos routers, right? But I can't find in the documentation how to stop mongos properly.
[08:24:02] <stefancrs> given that I have an array of embedded documents, like so: http://hastebin.com/qegotoqile.sm
[08:24:16] <stefancrs> is it possible to match documents that have a certain price for a given country?
[08:24:52] <stefancrs> or do I have to $unwind to do that? :)
[08:27:04] <stefancrs> what is this $elemMatch thing, hm
[08:32:24] <stefancrs> seems to do the trick, now if they only didn't use an ODM for the models in this project... :)
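stefancrs's `$elemMatch` question can be sketched as follows. The field names (`prices`, `country`, `price`) are assumptions, since the original hastebin paste is gone. The key point: `$elemMatch` requires a *single* array element to satisfy all conditions, whereas the dotted form `{"prices.country": ..., "prices.price": ...}` can match across two different elements. The tiny matcher below models that semantics without needing a server:

```python
def elem_match(doc, field, conditions):
    """Mimic $elemMatch: True if some single element of doc[field]
    satisfies every condition at once."""
    return any(
        all(elem.get(k) == v for k, v in conditions.items())
        for elem in doc.get(field, [])
    )

doc = {"prices": [{"country": "SE", "price": 10},
                  {"country": "NO", "price": 20}]}

elem_match(doc, "prices", {"country": "SE", "price": 10})  # one element has both
elem_match(doc, "prices", {"country": "SE", "price": 20})  # no single element does
```

With pymongo the equivalent query document would be `{"prices": {"$elemMatch": {"country": "SE", "price": 10}}}` passed to `collection.find(...)`.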
[09:32:11] <Mohyt> "When removing a shard, the balancer migrates all chunks from the shard to other shards. After migrating all data and updating the metadata, you can safely remove the shard."
[09:32:45] <Mohyt> Does that mean, before shutting down any shard, I first have to migrate the chunks?
[09:34:36] <Nomikos> Mohyt: I've no clue but this page describes the process http://docs.mongodb.org/manual/tutorial/remove-shards-from-cluster/ maybe that helps?
[09:35:33] <Nomikos> ah, you were looking at the intro
[10:36:53] <Mohyt> I want to deploy a sharded cluster
[11:21:12] <Derick> as you can use Unix domain sockets instead of TCP/IP
[11:22:22] <joannac> Derick: Surely you wouldn't want the 2 nodes in rs, you'd want them both to be standalone? Else you only have 1 shard and then, why shard at all?
[11:22:35] <Mohyt> Actually I just want to see how sharding works in mongo
[11:24:15] <Derick> it doesn't really matter in that case
[11:24:15] <Mohyt> I think it matters for this one https://jira.mongodb.org/browse/SERVER-9332
[11:24:57] <Mohyt> it will help give a broader view of things
[11:39:40] <Nomikos> Does MongoDB have any kind of cache you need to clear if you want to find out which of two (types of) query is faster?
[11:41:08] <joannac> Erm, on the contrary. I think you want to get all of your data into memory so you don't have to wait for it to be paged in.
[11:42:09] <Derick> only "cache" I can think of that might influence benchmarking otherwise is the "explain cache"
[11:42:18] <Derick> MongoDB remembers which query plan works best
[11:42:34] <Derick> you can just "fix" that by running a query more than once and discarding the first and last timings, usually.
[11:42:40] <Derick> and of course, what joannac says
[11:42:43] <Nomikos> I mean, in MySQL you have to add a flag that tells it not to cache the results (or use the cache). I'm wondering whether, for a certain search, it's faster to store the values I'm searching on in an array field or in separate fields
[11:43:08] <Derick> MongoDB doesn't have a query cache
[11:43:14] <Nomikos> then I wanted to try it out, and was wondering if there was some thing I needed to clear in between searches
[11:52:55] <clarkk> does anyone here use mongoose/node? The #mongoosejs channel is dead. I hope people won't mind me posting my question here
[11:53:14] <clarkk> I get this error. Can anyone give me some suggestions, please? "500 TypeError: Object function model() { Model.apply(this, arguments); } has no method 'mapReduce'" http://pastebin.ca/2466901
[11:56:31] <joannac> Erm, that class doesn't have a mapReduce method
[12:00:05] <joannac> oops, sorry, not schemaname, collection_name
[12:05:59] <clarkk> joannac: it would be helpful if the docs actually told you how to define "User" in the example http://mongoosejs.com/docs/api.html#model_Model.mapReduce
[12:06:26] <clarkk> to me it looks like mapReduce is available on the model - Model.mapReduce(o, callback)
[12:11:12] <joannac> clarkk: Sure, and I agree, that's a bit deceptive. File a bug report (or see if there is one already)
[12:14:56] <stefuNz> Hi. I'm using mongodb for some time now and i was using it for only a few queries per minute with a replicaset of 3 nodes. Since yesterday, i'm using it for a few thousand queries per minute and some nodes sometimes get stuck and don't accept new connections until i kill them and restart them. What am i doing wrong?
[12:15:11] <stefuNz> Machine Ubuntu 12.04, current mongodb version
[12:18:32] <clarkk> joannac: this seems like a much more concise interface.. http://stackoverflow.com/a/18836261 but again, I have no idea how db or foo are defined
[12:19:09] <Nomikos> I've a collection with a field categories which is an array, but searching using c.find({categories: {$type: 3}}) yields 0, as does $type 4. $type 2 (string) finds them all. Why is this?
[12:19:39] <Nomikos> the items in that field are indeed strings, but ..
[12:19:47] <Nomikos> they sit in an array type thing
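What Nomikos is seeing matches how `$type` treated arrays in MongoDB of that era: the operator is applied to each *element* of the array, not to the array container itself. So an array of strings matches `$type: 2` (string), and `$type: 4` (array) only matches when an element is itself a nested array. A small model of that rule (BSON type codes are the real ones; the helper names are made up):

```python
BSON_STRING, BSON_OBJECT, BSON_ARRAY = 2, 3, 4

def type_of(value):
    if isinstance(value, str):
        return BSON_STRING
    if isinstance(value, dict):
        return BSON_OBJECT
    if isinstance(value, list):
        return BSON_ARRAY
    return None

def matches_type(field_value, bson_type):
    """Mimic legacy $type on arrays: test each element, not the array."""
    if isinstance(field_value, list):
        return any(type_of(v) == bson_type for v in field_value)
    return type_of(field_value) == bson_type

categories = ["news", "sports"]
matches_type(categories, BSON_STRING)       # True: elements are strings
matches_type(categories, BSON_ARRAY)        # False: no nested array
matches_type(["news", ["a", "b"]], BSON_ARRAY)  # True: one element is an array
```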
[12:21:11] <joannac> stefuNz: check ulimits and mongod logs
[12:22:15] <stefuNz> joannac: oh, right… didn't check the logs .. oh man :D ulimits are set to 65000 in the init-script (ulimit -n 65000) -- isn't that enough?
[12:22:32] <joannac> clarkk: that looks like it's the mongo shell. which means db is whatever db you're using, and foo is the collection
[18:12:35] <dragenesis> I'm using the java driver. After I call dbCollection.insert or some other operation, is there ever a time when the WriteResult will be null?
[18:15:07] <cheeser> dragenesis: shouldn't be, no.
[18:17:51] <dragenesis> cheeser, is there a way it can happen? My code is just: WriteResult result = collection.insert(document)
[18:18:15] <dragenesis> with document being the DBObject and collection is the DBCollection.
[18:45:51] <testing22_> so i see the section on structuring documents for pre-aggregated reporting, but now i'm wondering how i would structure the same thing to support multiple counters instead of just "hits" in the example. i'd like to have essentially several types of hits (or just generic counters) per timeframe. reference: http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/ any ideas how i would best go about this?
[18:50:51] <testing22_> the best way i could think of was to include an embedded object within each time bucket that contains the counters such as.. hourly: { "0" : { "foo" : x, "bar" : y } }. would this be feasible for aggregation? it seems so, but i'd like to make sure that is the best way
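testing22_'s layout (an embedded object of counters per time bucket) works well with `$inc` and dot notation, because `$inc` creates missing intermediate objects on upsert. The sketch below models that dot-notation behavior in plain Python; the counter names `foo`/`bar` are from the message, the helper is invented:

```python
def apply_inc(doc, inc_spec):
    """Minimal model of $inc with dot-notation paths, creating
    intermediate objects on demand (as MongoDB does on update)."""
    for path, amount in inc_spec.items():
        parts = path.split(".")
        node = doc
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = node.get(parts[-1], 0) + amount
    return doc

day_doc = {"hourly": {}}
apply_inc(day_doc, {"hourly.0.foo": 1, "hourly.0.bar": 1})
apply_inc(day_doc, {"hourly.0.foo": 1})
# day_doc["hourly"]["0"] is now {"foo": 2, "bar": 1}
```

With pymongo the real call would look like `collection.update({"_id": day_id}, {"$inc": {"hourly.0.foo": 1}}, upsert=True)`, one document per day as in the pre-aggregated-reports use case.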
[18:52:56] <tkeith> I'm trying to do an update with a $setOnInsert, but it's failing. My code is: db.users.update({'username': user}, {'$set': {'token': token}, '$setOnInsert': {'username': user}}, upsert=True)
[18:53:02] <tkeith> and the error is: OperationFailure: Invalid modifier specified $setOnInsert
[18:53:28] <Goopyo> tkeith: I think what you want is an upsert
[18:55:17] <tkeith> Goopyo: I'm setting upsert=True, but if it's inserting a new document I want it to set the username as well
[18:55:30] <Goopyo> it will, upserts insert the query params
[18:55:49] <Goopyo> basically if the document didnt exist it will have username and token fields
[18:55:54] <Goopyo> if it does the token field will update
[18:56:14] <Goopyo> thats what an upsert is, you dont need setoninsert anymore
[18:57:48] <tkeith> Goopyo: Ok, but what if I also want to add other fields only on insert, without changing for an already existing document? For example, I might want to set balance to 0 on insert, but leave it as-is if it already exists.
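tkeith's last question is exactly what `$setOnInsert` is for (it requires MongoDB 2.4+, which is why the older server rejected it at 18:53). A minimal in-memory model of the semantics, assuming the field names from his example:

```python
def upsert(store, query, update):
    """Minimal model of update(query, update, upsert=True) with
    $set and $setOnInsert modifiers, store being a list of dicts."""
    for doc in store:
        if all(doc.get(k) == v for k, v in query.items()):
            doc.update(update.get("$set", {}))   # existing doc: $setOnInsert is ignored
            return doc
    new_doc = dict(query)                        # upserts insert the query params
    new_doc.update(update.get("$set", {}))
    new_doc.update(update.get("$setOnInsert", {}))  # only applied on insert
    store.append(new_doc)
    return new_doc

users = []
upsert(users, {"username": "tk"},
       {"$set": {"token": "abc"}, "$setOnInsert": {"balance": 0}})
# inserted with balance 0
upsert(users, {"username": "tk"},
       {"$set": {"token": "xyz"}, "$setOnInsert": {"balance": 0}})
# token updated, balance left as-is
```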
[18:59:34] <cheeser> dragenesis: i can't think of a case where it would be null, no. i'd think that'd be a bug if that were ever the case.
[19:32:59] <elux> hey just wondering but whats the difference between $orderby and sort() ?
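elux's question goes unanswered in the log. To my knowledge, under the legacy wire protocol there is no semantic difference: `cursor.sort()` is a shell/driver helper that rewrites the find into a `{$query: ..., $orderby: ...}` envelope before sending it, so both spellings reach the server as the same query. A sketch of the rewritten document (collection and field names invented):

```python
# db.pets.find(filter_doc).sort(sort_spec) is sent to the server as the
# legacy query envelope below; $orderby is the raw form of sort().
filter_doc = {"pet_type": "cat"}
sort_spec = {"age": 1}

wrapped = {"$query": filter_doc, "$orderby": sort_spec}
```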
[21:06:36] <Pinkamena_D> has anyone ever had any luck directly converting mongodb collections into tabular data?
[21:07:52] <Pinkamena_D> I am making a program and the old (access 2 db) program showed "reports" in the form of sql tables that could be printed
[21:08:24] <Pinkamena_D> I am wondering what the most user friendly way to display the bson would be without having to write a different function for each collection
[21:08:25] <cheeser> having once tried that very approach, i can tell you that documents don't often map/render well as tables.
[21:09:37] <Pinkamena_D> so you think that the best way would be to make custom functions, rather than trying to design some kind of "folding table"
[21:10:05] <Pinkamena_D> (where you can expand a column into more columns, etc. )
[21:10:27] <cheeser> i just rendered the documents as json and let the window scroll vertically as necessary
[21:11:15] <Pinkamena_D> I have to make it at least a little bit more user friendly than that... but thanks for the suggestion.
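A middle ground between cheeser's raw-JSON rendering and per-collection custom functions is one generic flattener: turn each document into a row keyed by dot-notation paths, then feed the rows to any table widget. This is only a sketch of the idea, not anything from the log:

```python
def flatten(doc, prefix=""):
    """Flatten a nested document into a single row of dot-notation
    columns; arrays are joined into one cell for display."""
    row = {}
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, path + "."))
        elif isinstance(value, list):
            row[path] = ", ".join(str(v) for v in value)
        else:
            row[path] = value
    return row

flatten({"name": "a", "addr": {"city": "x"}, "tags": [1, 2]})
# -> {"name": "a", "addr.city": "x", "tags": "1, 2"}
```

The union of keys across all flattened rows gives the column set, so one function handles every collection, which was Pinkamena_D's goal.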
[21:13:51] <mst1228> hi. can someone help me dial in a query using the $nin operator?
[21:45:06] <tkeith> I'm trying to do the following pymongo query, and it's failing: db.users.update({'username': user}, {'$set': {'token': token, 'username': user}, '$setOnInsert': {'groups': []}}, upsert=True)
[21:45:13] <tkeith> The error message is: OperationFailure: Invalid modifier specified $setOnInsert
[21:47:23] <tkeith> Nevermind, just solved it -- looks like my mongodb version is too old
[22:00:15] <tkeith> It looks like setOnInsert can't set the _id... if this is the case, how can I atomically insert a document with a unique username and custom _id?
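One known workaround for tkeith's custom-`_id` problem: put a unique index on `username`, then do a plain insert that carries its own `_id` and catch the duplicate-key error. The unique index makes the insert atomic with respect to the uniqueness check (with pymongo that would be `DuplicateKeyError` from `pymongo.errors`). A server-free model of the pattern, names invented:

```python
class DuplicateKeyError(Exception):
    """Stand-in for pymongo.errors.DuplicateKeyError."""

def insert_unique(store, doc, unique_field):
    """Model of a plain insert against a unique index on `unique_field`:
    the document (including its custom _id) is inserted whole, or the
    whole operation fails; there is no partial state to clean up."""
    if any(d.get(unique_field) == doc[unique_field] for d in store):
        raise DuplicateKeyError(doc[unique_field])
    store.append(doc)
    return doc

users = []
insert_unique(users, {"_id": "custom-id-1", "username": "tk"}, "username")
```

In real code the caller handles `DuplicateKeyError` as "that username is taken" and retries or reports accordingly.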
[22:19:44] <clarkk> is there a way to limit the number of documents retrieved for each match of a property value? For example, if the property is "pet_type" and it contains the value "cat", "dog" or "rabbit", is there any way to ensure that only 5 cat records, 5 dog records and 5 rabbit records are returned?
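clarkk's question also goes unanswered. In MongoDB of that era there was no single-query "top N per group" operator; the usual options were one `find({"pet_type": t}).limit(5)` per distinct value, or grouping server-side and trimming client-side. The client-side trim looks like this (sample data invented):

```python
def top_n_per_group(docs, key, n):
    """Keep only the first n documents per distinct value of `key`."""
    groups = {}
    for doc in docs:
        bucket = groups.setdefault(doc[key], [])
        if len(bucket) < n:
            bucket.append(doc)
    return groups

pets = [{"pet_type": t, "i": i}
        for t in ("cat", "dog", "rabbit") for i in range(8)]
result = top_n_per_group(pets, "pet_type", 5)
# result has 5 docs each for "cat", "dog", and "rabbit"
```

Adding a sort before the loop (or sorting in the query) controls *which* five documents survive per group.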