[00:48:05] <joannac> "cursor" : "BtreeCursor a_1 multi" <--- that's using an index
[02:05:45] <IVBaker> Hello, I'm running a replica sharded cluster. My problem is for the mongos instance, how can I change the default directory for the journal? I tried --dbpath but this option is not recognized for mongos
[02:07:30] <kelseyhightower> Can any one help with killing cursor without timeouts
[02:08:13] <kelseyhightower> Bunch of bad clients opened tons of cursors and did not clean up or close cursor
[02:11:31] <harttho> Got a sharding/indexing question: I have an object with 3 fields (A B and C) and the collection is sharded on A and B, and indexed on C. I want to query for all C's where C=c
[02:12:05] <harttho> If I go query on just C, I don't get to use the shard key. if I query on A and B, I don't get to use the C index. What happens if i query on A, B, and C
[02:13:01] <harttho> TLDR: Can I use query on an index that isn't a shard key, but still benefit from querying with the data in the shard key
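A sketch of what harttho could check, assuming a hypothetical collection `events` sharded on `{A: 1, B: 1}` with a secondary index on `C`. Querying on A, B, and C together lets mongos target the single shard that owns that (A, B) range; within that shard the query planner can still pick the `{C: 1}` index if it is more selective. The names and the explain fields to look at are illustrative, not taken from the channel:

```
// events is sharded on {A: 1, B: 1}; C has its own secondary index
db.events.ensureIndex({C: 1})

// Including the full shard key lets mongos route to one shard;
// the chosen shard can then use whichever index wins locally.
db.events.find({A: "a1", B: "b1", C: "c"}).explain()
// In the explain() output, check how many shards were queried
// (ideally 1) and which cursor/index each shard reports.
```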
[03:17:11] <IVBaker> Hello, I'm running a replica sharded cluster. My problem is for the mongos instance, how can I change the default directory for the journal? I tried --dbpath but this option is not recognized for mongos
[03:18:17] <IVBaker> my problem is that I'm really limited in the /var/lib/mongodb folder
[03:20:07] <joannac> Umm, I didn't think mongos kept a journal
[03:37:21] <a|3x> i seem to have issues using eval() in auth = true mode, http://docs.mongodb.org/manual/reference/user-privileges/#auth-role-combined says some stuff about this but it seems no matter what i give, i always get the unauthorized error, would someone please shed some light on the matter?
[03:40:21] <joannac> what privileges does your user have in the admin database?
[04:38:54] <a|3x> joannac, none, i only added the user to this one particular database
[05:53:10] <joannac> a|3x: that's why it doesn't work
[05:53:46] <joannac> check the docs "... on the admin database"
[05:54:38] <a|3x> i believe i tried it on that one as well without any luck, but i'll try again to double check
[06:14:32] <joannac> a|3x: use admin; db.system.users.find()
[06:14:47] <joannac> make sure it has all 4 of those permissions
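A sketch of joannac's suggestion in 2.4-era shell syntax: the user running eval() needs roles granted on the admin database, not just on the target database. The exact role list below is illustrative; the linked user-privileges page is the authority on which combination is required:

```
// Roles for eval() must live on the admin database (MongoDB 2.4 syntax).
use admin
db.addUser({user: "evalUser", pwd: "secret",
            roles: ["readWriteAnyDatabase", "dbAdminAnyDatabase",
                    "userAdminAnyDatabase", "clusterAdmin"]})

// Verify what was actually stored, as joannac suggests:
db.system.users.find()
```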
[07:22:52] <Scud_> hey guys I'm trying to get a baseline benchmark for my setup using benchRun and was wondering two things. 1) can I dispatch concurrent operations without using threads in benchRun and 2) if so, can I limit the number of concurrent operations to a fixed amount. thanks in advance
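A minimal benchRun sketch addressing both questions: the `parallel` option fixes the number of concurrent workers the shell spawns internally, so you get bounded concurrency without writing any threading code yourself. The namespace, host, and op are made up for illustration:

```
// benchRun manages its own workers; "parallel" caps them at a fixed number.
res = benchRun({
  ops: [{op: "insert", ns: "test.bench",
         doc: {x: {"#RAND_INT": [0, 1000]}}}],
  parallel: 4,        // fixed number of concurrent workers
  seconds: 10,
  host: "localhost:27017"
});
printjson(res);       // per-op throughput summary
```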
[08:13:48] <Mech0z> When using the c# driver, after I have used the classmap, is it possible to delete a object in a collection by id without knowing the id fields name?
[08:14:21] <Mech0z> I am using an abstract generic class to delete with, so I dont know the type
[10:28:03] <agend> what is the best way to make something like a bulk upsert? i have plenty of updates with $inc to documents which may or may not exist - and so far I have to upsert them one by one. Was thinking about using mongoimport - but there is no way to use $inc. Is it possible to make some js script which would be executed by mongo and make it fast?
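One way to sketch this with the Bulk API introduced in MongoDB 2.6 (on older servers the only option is looping over individual upserts). Collection and field names here are invented; `updates` stands in for whatever array of pending increments agend has:

```
// Unordered bulk op: the server can batch the writes instead of
// round-tripping one upsert at a time.
var bulk = db.counters.initializeUnorderedBulkOp();
updates.forEach(function (u) {           // e.g. [{_id: ..., n: 3}, ...]
  bulk.find({_id: u._id}).upsert().update({$inc: {n: u.n}});
});
bulk.execute();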
[10:37:38] <_boot> are there any known issues with removing shards on 2.5.4?
[10:56:11] <mefyl> short question: using an aggregation with $group, I can't figure out how to access documents themselves and not one of their field with '$field'
[10:56:44] <mefyl> if I want to group transactions per day for instance: transactions.aggregate({'_id': '$date', 'transactions': {'$push': <the transaction itself>}})
[10:57:35] <mefyl> only way I found to do it is to re-list all transaction fields {'name': '$name', 'user': '$user', ...} which kinda sucks
[11:32:34] <_boot> mefyl, I think I was trying something similar a while back, and I needed to wait for a 2.6 feature where you can push $$ROOT or something as the whole document
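The 2.6 feature _boot is recalling is the `$$ROOT` system variable, which refers to the whole input document inside an aggregation stage. A sketch of mefyl's grouping using it (note `$group` also needs to be wrapped in a pipeline stage, which the earlier one-liner omitted):

```
// Push each full transaction document into its day's group,
// without re-listing every field by hand.
db.transactions.aggregate([
  {$group: {_id: "$date", transactions: {$push: "$$ROOT"}}}
])
```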
[11:54:51] <hierbabuena> _boot: They are pretty lightweight. Use the cheapest you can get.
[11:55:40] <_boot> any idea of their memory requirements? can't find much on this on Google other than one person saying they had an issue with 1GB of RAM
[11:57:43] <Nodex> Derick : you have a typo in one of your blogs http://derickrethans.nl/mongodb-arbitrary-key-names.html
[11:58:26] <hierbabuena> _boot: Memory requirements would depend on your dataset, but take a look at what is stored: http://docs.mongodb.org/manual/core/sharded-cluster-metadata/
[11:59:15] <_boot> hmm, i'll go and get db stats from the testing config server
[12:03:37] <Mech0z> any reason why http://pastebin.com/YZJfyi25 is saved as an empty array when I save an object that has for example TradeStatus = TradeStatus.AwaitingConfirmation ?
[12:03:44] <Derick> pesky PHP uses [] for both {} and [] :-)
[12:04:17] <Nodex> you can force it to use a {} in the json_encode iirc
[12:05:11] <liquid-silence> can someone tell me why db.assets.find({ parent: null, deleted: false, 'group_permissions.group_id': ObjectId('52fc9c920000008d06000002') }) returns the correct data but with mongoose Asset.find({ parent: null, deleted: false, 'group_permissions.group_id': new mongoose.Types.ObjectId('52fc9c920000008d06000002')}) returns nothing?
[12:23:32] <Mech0z> Can't you save an enum in mongo which has non-default values in C#?
[12:24:03] <Mech0z> every object I try to save where values are defined as Created = 1, Saved = 2 gives me {} in my collection
[12:26:56] <trbs2> is there a way to inspect the freelist ? mydb.stats().extentFreeList as per jstests/extent2.js does not seem to exist in my database
[12:28:18] <trbs2> likewise mydb["$freelist"].stats() gives "ns not found"
[12:30:52] <Derick> trbs2: it might be a debug-only feature
[14:30:25] <RandomIRCGuy> Good day. Can someone clear me up about a good collection design for someone who tries to learn mongodb? I want to save logs from applications into mongodb and write a logviewer for it. I've thought about creating a new collection for each server, or should I save every log from every server in one big collection and then filter as I want?
[14:30:29] <talbott> but all of the tutorials i've found seem to use examples where the 3 mongo instances are on the same host
[14:36:48] <RandomIRCGuy> Nodex: Alright. So I really should just add the server-name in each log-document?
[14:44:14] <talbott> ah so it's done with keyfiles
[14:46:26] <Nodex> RandomIRCGuy : the general consensus is write your app to your database not vice versa, fit the data in the best possible way to query it
[14:46:58] <Nodex> for example if your reads are heavier than your writes you should optimise the reads to have the least amount of queries to reach an endpoint
[14:51:16] <RandomIRCGuy> Nodex: Yeah that should be obvious. I want to learn how I should use the MongoDB, it's a new start for me after working ~10 years with MySQL.
[14:51:49] <Nodex> best advice I can give you is forget everything you know about relational databases
[14:52:31] <RandomIRCGuy> I want to filter between servers and customers, and I think it's getting hard to filter all these stuff when documents are stored in different collections. :p
[14:52:38] <RandomIRCGuy> Yeah I guess you are right. Thanks for the tip :)
[14:53:25] <Nodex> personally I would have one collection for the logs, perhaps a day's worth of logs per document per server (dependent on the size of the logs)
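One possible shape for Nodex's suggestion: a single `logs` collection with one document per server per day, entries pushed into an array via upsert. All names here are illustrative, not from the channel:

```
// Upsert the day's bucket for a server and append one log entry.
db.logs.update(
  {server: "web01", day: "2014-02-13"},
  {$push: {entries: {ts: new Date(), level: "error", msg: "..."}},
   $inc: {count: 1}},
  {upsert: true}
)

// Filtering by server/day is then one indexed query:
db.logs.find({server: "web01", day: "2014-02-13"})
```

One caveat with this design: unbounded `$push` makes documents grow, which causes moves on disk and can hit the 16MB document limit on very chatty servers, so the bucket size needs to match the log volume.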
[14:55:09] <RandomIRCGuy> Well I want to document a lot.. We have ~20 servers, and 300 customers. Each request should get logged. You see why I'm struggling a bit here with the design pattern ..
[14:57:22] <Nodex> I had a similar situation, I ended up with a "history" collection but it grew rapidly, every single request was logged and it grew about 5 million a day or something
[14:57:55] <Nodex> I ended up having to aggregate the logs down on an hourly basis and removing the aggregations after backing up the raw json export of it
[14:59:38] <RandomIRCGuy> I think it's not that much, thankfully we fix errors pretty fast. I think a week of errors, or a pre-defined amount of them, should still keep the server alive. But I don't know if I can really trust a single server to handle that kind of volume..
[15:01:19] <RandomIRCGuy> are you still using this logging feature and is it working properly?
[15:13:38] <Nodex> yes, but I had to stop logging crawlers as they were killing my disk usage
[15:14:26] <Nodex> sometimes the aggregation job would lag behind due to locking on the database when writing and certain parts of the system would freeze in a lock state
[15:21:49] <RandomIRCGuy> Oh yes the crawlers, you're right. I have to ignore them too. Thanks for that. I guess we don't have that many requests. We will see. Learning by doing. :)
[17:35:49] <ghais> hi, this morning one of our shards in a 3 shard cluster started showing locks of 100% under very very light load for both primary and secondary node in that shard. The cluster used to handle ~6000 op/s on average with no problem. The logs don't show any keyupdates that would cause long locks and instead almost every write/update is taking 5 seconds to finish. Anyone ever experience anything like this. Can you point me at something to
[18:06:57] <Somatt_wrk> I am new to mongo and fiddling with it. I found out how to find an item A by the id of an item B in a collection it contains. Now I'd like to get a direct reference to that item B. Is there an easy way to do so ?
[18:09:26] <Mau_> Hi guys, I'm wondering if I'm the only one with strange "toPercentStr" and "zeroPad" values in my MapReduce results set and how to solve my problem..
[22:10:07] <thedodd> Anyone?? In-memory mongo? Perhaps something similar to Django's test DB??
[22:17:09] <JustAnyone> In-Memory mongo implies some things: non-persistence and speed. I can envision if you set up a ramdisk and make that the storage location, it would be in-memory only.
[22:18:35] <JustAnyone> Otherwise, you could (for testing purposes) instantiate a mongod where the storage location was something like /tmp/deleteMe.Now in your test startup, run your tests, then tear down the mongod and rm the /tmp/deleteMe.Now.
[22:28:58] <thedodd> JustAnyone, I can agree with your last statement.
[22:29:17] <thedodd> I use the pymongo driver to interface with mongodb via python.
[22:30:11] <thedodd> I find myself in need of a uniform and consistent way to run tests without having to stand up and tear down mongo instances all the time.
[22:31:02] <thedodd> JustAnyone, Django has a test-db system, which is uber useful, and I've been considering how useful something of that nature would be for mongo + python.
[23:18:35] <JustAnyone> Manganeez: I was prepared for the Urban dictionary to return something nsfw, but it just comes down to moving fast. Alas, nothing creative.
[23:20:47] <Manganeez> I have a problem. I have a celery app, and a constraint is that the broker has to be MongoDB (via ObjectRocket). celery depends on a message-queue library called kombu, and due to previous bugs, we have to be on the current version. That depends on pymongo>=2.6.3. We also use MongoDB for other storage needs. Meanwhile, the front-end for this thing is tornado, which needs to access that storage, too, so we have motor. motor has a strange URL dependency on a
[23:20:47] <Manganeez> specific, unreleased, commit that works out to saying it's 2.5. So basically, motor's weird, outdated dependency breaks the dependency graph of the whole app.
[23:32:06] <JustAnyone> condolences. If all else fails (as they say), bring it all in-house and make your own mods. Or, make a github fork and fix it yourself, then depend on your github versions.
[23:41:58] <Manganeez> That's what I feared. No release on the horizon to resolve that weirdness, I guess.
[23:48:20] <cheeser> isn't there a motor mailing list?
[23:48:29] <cheeser> at the least mongodb-user could help