PMXBOT Log file Viewer


#mongodb logs for Thursday the 13th of February, 2014

[00:22:52] <tongcx> Jonathan1cClare: so the problem i had for combining data before query is more like to know the ability of mongo
[00:41:04] <sheki> hey
[00:41:21] <sheki> a query with $in does not use an index
[00:41:25] <sheki> if the value is a string
[00:41:34] <sheki> -> is this true
[00:41:34] <sheki> ?
[00:44:22] <joannac> sheki: http://pastebin.com/4JYK1Ry9
[00:47:15] <sheki> so indexOnly is false
[00:47:26] <sheki> that means that the index is not being used right?
[00:47:30] <joannac> No
[00:47:35] <joannac> indexOnly is false because you can't just use the index
[00:47:42] <joannac> the index doesn't store the _id field
[00:48:01] <sheki> ok, makes sense.
[00:48:05] <joannac> "cursor" : "BtreeCursor a_1 multi" <--- that's using an index
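For anyone reading later, a sketch of what joannac is pointing at in the 2.4-era explain() output: the "cursor" field names the index in use, while "indexOnly" only says whether the query was *covered* by the index. Collection and index names here are illustrative, not from sheki's actual data.

```javascript
// Hypothetical collection with an index on { a: 1 }.
db.foo.find({ a: { $in: ["x", "y"] } }).explain();
// {
//   "cursor" : "BtreeCursor a_1 multi",  // an index IS being used
//   "indexOnly" : false,                 // not covered: _id isn't in the index
//   ...
// }
// To make it covered, return only indexed fields and exclude _id:
db.foo.find({ a: { $in: ["x", "y"] } }, { a: 1, _id: 0 }).explain();
```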
[02:05:45] <IVBaker> Hello, I'm running a replica sharded cluster. My problem is for the mongos instance, how can I change the default directory for the journal? I tried --dbpath but this option is not recognized for mongos
[02:07:30] <kelseyhightower> Can any one help with killing cursor without timeouts
[02:08:13] <kelseyhightower> Bunch of bad clients opened tons of cursors and did not clean up or close cursor
[02:11:31] <harttho> Got a sharding/indexing question: I have an object with 3 fields (A B and C) and the collection is sharded on A and B, and indexed on C. I want to query for all C's where C=c
[02:12:05] <harttho> If I go query on just C, I don't get to use the shard key. if I query on A and B, I don't get to use the C index. What happens if i query on A, B, and C
[02:13:01] <harttho> TLDR: Can I use query on an index that isn't a shard key, but still benefit from querying with the data in the shard key
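No one answered harttho in-channel, but for the record, a hedged sketch of the usual answer (field and collection names assumed): a query that includes the full shard key lets mongos target a single shard, and on that shard the planner is still free to use the separate index on C.

```javascript
// Sharded on { A: 1, B: 1 }, with a secondary index on { C: 1 }.
db.things.find({ A: a, B: b, C: c });
// mongos routes this to a single shard using { A, B };
// that shard can then satisfy the C predicate with its local C index
// (a compound index { A: 1, B: 1, C: 1 } would serve both at once).
// A query on C alone is scatter-gather across all shards,
// though each shard still uses its local C index.
```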
[03:17:11] <IVBaker> Hello, I'm running a replica sharded cluster. My problem is for the mongos instance, how can I change the default directory for the journal? I tried --dbpath but this option is not recognized for mongos
[03:18:17] <IVBaker> my problem is that I'm really limited in the /var/lib/mongodb folder
[03:20:07] <joannac> Umm, I didn't think mongos kept a journal
[03:37:21] <a|3x> i seem to have issues using eval() in auth = true mode, http://docs.mongodb.org/manual/reference/user-privileges/#auth-role-combined says some stuff about this but it seems no matter what i give, i always get the unauthorized error, would someone please shine some light on the matter?
[03:40:21] <joannac> what privileges does your user have in the admin database?
[04:38:54] <a|3x> joannac, none, i only added the user to this one particular database
[05:53:10] <joannac> a|3x: that's why it doesn't work
[05:53:46] <joannac> check the docs "... on the admin database"
[05:54:38] <a|3x> i believe i tried it on that one as well without any luck, but i'll try again to double check
[06:14:32] <joannac> a|3x: use admin; db.system.users.find()
[06:14:47] <joannac> make sure it has all 4 of those permissions
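A sketch of joannac's suggestion in 2.4 shell syntax. The role list is my reading of the combined-access section of the page a|3x linked; the user name and password are placeholders.

```javascript
// eval() under auth needs roles granted in the *admin* database.
use admin
db.system.users.find()   // the user must exist here, not only in the app db
// Grant the four combined-access roles that the linked page describes:
db.addUser({ user: "evalUser", pwd: "...",
             roles: [ "readWriteAnyDatabase", "userAdminAnyDatabase",
                      "dbAdminAnyDatabase", "clusterAdmin" ] });
```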
[07:22:52] <Scud_> hey guys im trying to get a baseline benchmark for my setup using benchRun and was wondering two things. 1) can i dispatch concurrent operations without using threads in benchrun and 2) if so, can i limit the number of concurrent operation to a fixed amount. thanks in advance
[08:13:48] <Mech0z> When using the c# driver, after I have used the classmap, is it possible to delete a object in a collection by id without knowing the id fields name?
[08:14:21] <Mech0z> I am using an abstract generic class to delete with, so I dont know the type
[10:26:15] <agend> hi
[10:28:03] <agend> what is the best way - to make something like bulk upsert? i have plenty of updates with $inc to documents which may exist or not - and so far I have to make one by one upsert of them. Was thinking about using mongoimport - but there is no way to use $inc. Is it possible to make some js script which would be executed by mongo and make it fast?
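Yes, a shell script is one way: pre-2.6 there is no bulk-write API, but running the upserts inside a script executed by the mongo shell at least avoids a client round trip per document. A hedged sketch, with collection and field names assumed:

```javascript
// Run with: mongo mydb upserts.js   (db is bound to "mydb" inside the script)
var updates = [ { key: "a", n: 3 }, { key: "b", n: 7 } /* ... */ ];
updates.forEach(function (u) {
  // upsert: true creates the document if it doesn't exist yet
  db.counters.update({ _id: u.key }, { $inc: { total: u.n } }, { upsert: true });
});
// From 2.6 on, the unordered Bulk API batches these properly:
// var bulk = db.counters.initializeUnorderedBulkOp(); ... bulk.execute();
```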
[10:37:38] <_boot> are there any known issues with removing shards on 2.5.4?
[10:55:06] <mefyl> Hi everyone
[10:56:11] <mefyl> short question: using an aggregation with $group, I can't figure out how to access documents themselves and not one of their field with '$field'
[10:56:44] <mefyl> if I want to group transactions per day for instance: transactions.aggregate({'_id': '$date', 'transactions': {'$push': <the transaction itself>}})
[10:57:35] <mefyl> only way I found to do it is to re-list all transaction fields {'name': '$name', 'user': '$user', ...} which kinda sucks
[11:32:34] <_boot> mefyl, I think I was trying something similar a while back, and I needed to wait for a 2.6 feature where you can push $$ROOT or something as the whole document
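The 2.6 feature _boot is recalling is the $$ROOT system variable, which lets $push capture the whole document instead of a hand-listed set of fields. A sketch of mefyl's per-day grouping with it (assumes 2.6+):

```javascript
db.transactions.aggregate([
  { $group: {
      _id: "$date",                      // one bucket per date value
      transactions: { $push: "$$ROOT" }  // push the entire document
  } }
]);
```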
[11:38:48] <MmikePoso> Hi, all.
[11:38:48] <mefyl> _boot: thanks, at least I know it's not me being an idiot :)
[11:39:01] <MmikePoso> If I have mongodb replica secondary that lags cca 60k seconds - is that node serving stale data?
[11:39:06] <mefyl> I'm quite surprised though, seems to be a very common use case
[11:49:58] <_boot> what kind of hardware specs should I use for config servers in a sharding setup?
[11:50:22] <ron> something with a cpu comes to mind.
[11:53:00] <_boot> not very helpful
[11:54:40] <ron> sorry :)
[11:54:51] <hierbabuena> _boot: They are pretty lightweight. Use the cheapest you can get.
[11:55:40] <_boot> any idea of their memory requirements? can't find much on this on Google other than one person saying they had an issue with 1GB of RAM
[11:57:43] <Nodex> Derick : you have a typo in one of your blogs http://derickrethans.nl/mongodb-arbitrary-key-names.html
[11:58:21] <Derick> Nodex: https://github.com/derickr/derickrethans-articles/edit/master/201402110941-mongodb-arbitrary-key-names.rst ;-)
[11:58:26] <hierbabuena> _boot: Memory requirements would depend on your dataset, but take a look at what is stored: http://docs.mongodb.org/manual/core/sharded-cluster-metadata/
[11:59:15] <_boot> hmm, i'll go and get db stats from the testing config server
[11:59:24] <Nodex> done ;)
[12:00:01] <Derick> Nodex: i don't see the PR?
[12:00:37] <ron> Nodex: pm?
[12:00:50] <Nodex> ron: sure
[12:01:04] <Derick> Nodex: I don't think you're right there
[12:01:13] <Nodex> sorry Derick : forgot to send the pr
[12:01:26] <Derick> but you're wrong too :P
[12:01:30] <Nodex> it's invalid json
[12:01:44] <Derick> yes, but your change is not right either
[12:01:47] <_boot> datasize + index size is less than 2MB for the `config` db lol, does 512MB of RAM sound alright?
[12:01:59] <Nodex> it is
[12:01:59] <hierbabuena> Yup :)
[12:02:06] <_boot> cool
[12:02:06] <Derick> Nodex: no, it is not what I intended :P
[12:02:08] <_boot> cheers
[12:02:44] <Nodex> foo : [{"20140101":"1"},{....}] ... foo.20140101
[12:02:48] <Derick> no
[12:02:54] <Derick> {
[12:02:54] <Derick> person: "derickr",
[12:02:54] <Derick> steps_made: {
[12:02:54] <Derick> "20140201": 10800,
[12:02:54] <Derick> "20140202": 5906,
[12:02:57] <Derick> }
[12:02:59] <Derick> is what I meant
[12:03:01] <Nodex> [1:1] <--- invalid json
[12:03:14] <Nodex> yes but it's an invalid structure
[12:03:22] <Nodex> oh sorry yes I see now
[12:03:29] <Nodex> {} vs []
[12:03:33] <Derick> :-)
[12:03:37] <Mech0z> any reason why http://pastebin.com/YZJfyi25 is saved as an empty array when I save an object thas has for example TradeStatus = TradeStatus.AwaitingConfirmation ?
[12:03:44] <Derick> pesky PHP uses [] for both {} and [] :-)
[12:04:17] <Nodex> you can force it to use a {} in the json_encode iirc
[12:04:26] <Derick> yeah, but it was an array
[12:04:31] <Derick> PHP arrays are [ ]
[12:05:11] <liquid-silence> can someone tell me why db.assets.find({ parent: null, deleted: false, 'group_permissions.group_id': ObjectId('52fc9c920000008d06000002') }) returns the correct data but with mongoose Asset.find({ parent: null, deleted: false, 'group_permissions.group_id': new mongoose.Types.ObjectId('52fc9c920000008d06000002')}). returns nothing?
[12:06:19] <Nodex> echo(json_encode(array('2014'=>1,'2013'=>2),JSON_FORCE_OBJECT)); -> {"2014":1,"2013":2}
[12:06:20] <Nodex> :)
[12:07:07] <Derick> sure, but it was originally PHP *syntax*
[12:07:11] <Derick> not encoded/decoded JSON
[12:07:24] <Nodex> ah sorry, I thought it got run through some json encoding, my bad
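The distinction under discussion, restated outside PHP: JSON (and BSON) draws a hard line between an object with string keys and an array with positional indices, even though PHP writes both with []. A quick JavaScript illustration:

```javascript
// Date-keyed step counts belong in an object ({}), not an array ([]).
const steps = { "20140201": 10800, "20140202": 5906 };
console.log(JSON.stringify(steps)); // {"20140201":10800,"20140202":5906}

// An array forces numeric positions and loses the date keys entirely:
console.log(JSON.stringify([10800, 5906])); // [10800,5906]
```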
[12:08:27] <liquid-silence> @Derick care to look at my issue
[12:08:28] <liquid-silence> ?
[12:09:40] <Nodex> liquid-silence : have you logged the output of the ObjectID?
[12:11:29] <liquid-silence> ok this is strange
[12:11:30] <liquid-silence> 52fc9c920000008d06000002
[12:11:33] <liquid-silence> that is what it logs?
[12:11:47] <Nodex> then it's not casting it properly
[12:12:32] <liquid-silence> wtf
[12:23:32] <Mech0z> Can't you save an Enum in mongo which has non-default values in C#
[12:24:03] <Mech0z> every object I try to save where values are defined so Created = 1, Saved = 2 gives me {} in my collection
[12:26:56] <trbs2> is there a way to inspect the freelist ? mydb.stats().extentFreeList as per jstests/extent2.js does not seem to exist in my database
[12:28:18] <trbs2> likewise mydb["$freelist"].stats() gives "ns not found"
[12:30:52] <Derick> trbs2: it might be a debug-only feature
[12:33:46] <trbs2> Derick, ok
[12:35:11] <trbs2> is diskspace of consecutive deleted documents free'd as a whole or always per document
[12:36:52] <Derick> always per document
[12:37:33] <trbs2> ok so if on average the documents are always growing it makes sense that mongodb is not reusing previously free'd space
[12:37:52] <trbs2> i turned powerof2 on, hoping that that will help
[12:38:26] <trbs2> but i guess i need to compact first to reuse the previously free'd space
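A sketch of the two steps trbs2 is describing, with the collection name assumed; note that on 2.4, compact blocks the database it runs against while it works:

```javascript
// Make future record allocations power-of-two sized, so freed
// slots are more likely to fit later (grown) documents:
db.runCommand({ collMod: "mycoll", usePowerOf2Sizes: true });
// Defragment and reclaim the space already freed:
db.runCommand({ compact: "mycoll" });
```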
[12:45:29] <liquid-silence> Nodex this is making no sense :(
[12:46:12] <liquid-silence> although the group id comes from an object in redis
[12:46:21] <liquid-silence> I cannot see why its not working
[12:58:11] <Nodex> liquid-silence : your mongoose is probably not casting it correctly
[12:58:28] <Nodex> Redis won't store the type definition of it as that resides in memory at object creation time
[14:29:34] <talbott> hello
[14:29:40] <talbott> how are you guys?
[14:29:55] <talbott> i have a bit of a newb question
[14:30:04] <talbott> i am trying to setup rep
[14:30:25] <RandomIRCGuy> Good day. Can someone clear me up about a good collection design for someone who tries to learn mongodb? I want to save logs from applications into the mongodb and write a logviewer for it. Ive thought about creating a new collection for each server, or should I save every log, from every server in one big collection and then filter as I want?
[14:30:29] <talbott> but all of the tutorials i've found seem to use examples where the 3 mongo instances are on the same cost
[14:30:33] <talbott> cost = host
[14:30:41] <RandomIRCGuy> I only know a good design pattern from mysql, which is now a big change for me
[14:30:55] <talbott> if your dbs are on different hosts - and you specify the fqdn in the rep config
[14:31:09] <talbott> how does mongo authenticate between the instances?
[14:34:04] <talbott> i cant find a tutorial that talks about setting that up
[14:34:23] <talbott> but maybe it's obvious and im just a numpty
[14:35:47] <Nodex> RandomIRCGuy : think of collections as tables in an SQL set up
[14:35:56] <Nodex> and documents as rows
[14:36:48] <RandomIRCGuy> Nodex: Allright. So I really should just add the server-name in each log-document?
[14:44:14] <talbott> ah so it's done with keyfiles
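For anyone finding this later, a hedged sketch of the keyfile setup talbott means: every replica set member shares one key file and uses it to authenticate to its peers. Paths and the set name are examples.

```shell
# Generate a shared key once and copy the same file to every member:
openssl rand -base64 741 > /etc/mongodb-keyfile
chmod 600 /etc/mongodb-keyfile

# Each mongod authenticates to the other members with it:
mongod --replSet rs0 --keyFile /etc/mongodb-keyfile --dbpath /var/lib/mongodb
```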
[14:46:26] <Nodex> RandomIRCGuy : the general consensus is write your app to your database not VV, fit the data in the best possible way to query it
[14:46:58] <Nodex> for example if your reads are heavier than your writes you should optimise the reads to have the least amount of queries to rech an endpoint
[14:47:01] <Nodex> reach*
[14:51:16] <RandomIRCGuy> Nodex: Yeah that should be obvious. I want to learn how I should use the MongoDB, it's a new start for me after working ~10 years with MySQL.
[14:51:49] <Nodex> best advice I can give you is forget everything you know about relational databases
[14:52:31] <RandomIRCGuy> I want to filter between servers and customers, and I think it's getting hard to filter all these stuff when documents are stored in different collections. :p
[14:52:38] <RandomIRCGuy> Yeah I guess you are right. Thanks for the tip :)
[14:53:25] <Nodex> personally I would have one collection for the logs, perhaps a days worth of logs per document per server (dependent on the size of the logs)
[14:55:09] <RandomIRCGuy> Well I want to document a lot.. We have ~20 servers, and 300 customers. Each request should get logged. You see why I'm struggling a bit here with the design pattern ..
[14:57:22] <Nodex> I had a similar situation, I ended up with a "history" collection but it grew rapidly, every single request was logged and it grew about 5 million a day or something
[14:57:55] <Nodex> I ended up having to aggregate the logs down on an hourly basis and removing the aggregations after backing up the raw json export of it
[14:59:38] <RandomIRCGuy> I think it's not that much, thankfully we fix errors pretty fast. I think a week of errors, or a pre-defined amount of them should still keep the server alive. But I don't know if I can really trust a simple server about handling that kind of amount..
[15:01:19] <RandomIRCGuy> are you still using this logging feature and is it working properly?
[15:13:38] <Nodex> yes but I had to stop logging crawlers as they were killing my disk usage
[15:14:26] <Nodex> sometimes the aggregation job would lag behind due to locking on the database when writing and certain parts of the system would freeze in a lock state
[15:21:49] <RandomIRCGuy> Oh yes the crawlers, you're right. I have to ignore them too. Thanks for that. I guess we don't have that much requests. We will see. Learning by doing. :)
[17:35:49] <ghais> hi, this morning one of our shards in a 3 shard cluster started showing locks of 100% under very very light load for both primary and secondary node in that shard. The cluster used to handle ~6000 op/s on average with no problem. The logs don't show any keyupdates that would cause long locks and instead almost every write/update is taking 5 seconds to finish. Anyone ever experience anything like this. Can you point me at something to
[17:35:49] <ghais> investigate?
[17:39:02] <ghais> here is the relevant part of an update operation in the log "idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) w:732 5582ms"
[17:39:19] <ghais> the object being updated is generally small and less than 500 bytes
[18:05:42] <Somatt_wrk> hi
[18:06:57] <Somatt_wrk> I am new to mongo and fiddling with it. I found out how to find an item A by the id of an item B in a collection it contains. Now I'd like to get a direct reference to that item B. Is there an easy way to do so ?
[18:09:26] <Mau_> Hi guys, I'm wondering if I'm the only one with strange "toPercentStr" and "zeroPad" values in my MapReduce results set and how to solve my problem..
[19:09:12] <Mau_> Thanks :')
[19:20:40] <Moon_Man> I'm using mongoose, and I'm wondering how I can declare multiple nested subodocuments in a schema. Can anyone help me?
[19:20:52] <Moon_Man> I asked in mongooses, they're pretty dead over there.
[19:38:38] <Moon_Man> StackExchange question regarding mongoose/mongodb. Can anyone help? Please? http://stackoverflow.com/questions/21763827/adding-support-for-multiple-subdocuments-within-a-mongoose-schema
[21:14:38] <JustAnyone> Question: Can I update a field by adding text to an existing string field ?
[21:18:12] <cheeser> you have to replace the old value with the appended value
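cheeser's point as a sketch: before 2.6 there is no server-side string concatenation in updates, so appending is a read-modify-write. Collection, field, and id names here are placeholders.

```javascript
// someId: placeholder for the target document's _id
var doc = db.notes.findOne({ _id: someId });
db.notes.update(
  { _id: doc._id },
  { $set: { text: doc.text + " appended text" } }
);
```

Note this is not atomic: a concurrent writer can slip in between the findOne and the update.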
[22:02:18] <thedodd> Is anyone familiar with an in-memory mongo which could be used for unit testing in Python?
[22:02:42] <thedodd> It looks like there are a few small things here and there, like Ming's mim.
[22:03:01] <thedodd> I'm looking for something generic, however.
[22:03:05] <thedodd> Any ideas??
[22:10:07] <thedodd> Anyone?? In-memory mongo? Perhaps something similar to Django's test DB??
[22:17:09] <JustAnyone> In-Memory mongo implies some things: non-persistence and speed. I can envision if you set up a ramdisk and make that the storage location, it would be in-memory only.
[22:18:35] <JustAnyone> Otherwise, you could (for testing purposes) instantiate a mongod where the storage location was something like /tmp/deleteMe.Now in your test startup, run your tests, then tear down the mongod and rm the /tmp/deleteMe.Now.
[22:28:58] <thedodd> JustAnyone, I can agree with your last statement.
[22:29:17] <thedodd> I use the pymongo driver to interface with mongodb via python.
[22:30:11] <thedodd> I find myself in need of a uniform and consistent way to run tests without having to standup and teardown mongo instances all the time.
[22:31:02] <thedodd> JustAnyone, Django has a test-db system, which is uber useful, and I've been considering how useful something of that nature would be for mongo + python.
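JustAnyone's throwaway-mongod suggestion, sketched as a test fixture. Paths and the port are arbitrary, and it assumes a mongod binary on PATH:

```shell
# Stand up a disposable mongod for one test run:
mkdir -p /tmp/mongo-test-db
mongod --dbpath /tmp/mongo-test-db --port 27018 --nojournal \
       --fork --logpath /tmp/mongo-test-db/mongod.log

# ... run the test suite against localhost:27018 ...

# Tear it all down afterwards:
mongod --dbpath /tmp/mongo-test-db --shutdown
rm -rf /tmp/mongo-test-db
```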
[22:34:12] <jfhernandeze> Hi
[22:34:55] <jfhernandeze> I want to start using mongo db but I am confused about scheme design
[22:35:40] <jfhernandeze> for example: a POS application in mysql I would have a table named sales and another named sales_products
[22:36:42] <jfhernandeze> In mongo i am confused about if I have to do a db.sales.insert({total:10,date:'2014-01-01', products:[{},{},{}.....]});
[22:37:46] <raptor_> After setting up replication servers and connecting the replica my configuration server
[22:37:49] <jfhernandeze> or do I have to create another db.saleproducts in order to query that db
[22:37:51] <raptor_> I am receiving the following message
[22:37:54] <raptor_> { "ok" : 0, "errmsg" : "server is not running with --replSet"
[22:39:08] <raptor_> I am on a centos box
[22:39:15] <raptor_> and I started up the first replica with the option
[22:39:15] <raptor_> mongod --config /etc/mongod.conf
[22:39:26] <raptor_> inside of mongod.conf I set the replSet option
[22:39:31] <raptor_> I am not sure what I am missing here
[22:39:44] <raptor_> I already tried to restart all the services and checked the permissions across the cluster
[22:41:01] <raptor_> Anyone able to help me with something basic that I missed
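The usual cause of raptor_'s error is that the mongod being asked to join the set was started without the option, e.g. because the config file spells it differently or a different file is in use. A 2.4-era sketch:

```shell
# Every member's /etc/mongod.conf needs the set name (2.4 ini syntax):
#   replSet = rs0
mongod --config /etc/mongod.conf

# From the mongo shell, confirm the running process actually picked it up:
#   > db.adminCommand({ getCmdLineOpts: 1 })   // look for the replSet value
```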
[23:13:11] <Manganeez> Hello, all. Is this an appropriate place to ask a motor question?
[23:15:37] <JustAnyone> Manganeez: If you're talking about motorboating, this is the wrong chatroom. I suggest 4chan.
[23:15:47] <Manganeez> :)
[23:17:00] <JustAnyone> "motoring...." http://www.azlyrics.com/lyrics/nightranger/sisterchristian.html
[23:18:35] <JustAnyone> Manganeez: I was prepared for the Urban dictionary to return something nsfw, but it just comes down to moving fast. Alas, nothing creative.
[23:20:47] <Manganeez> I have a problem. I have a celery app, and a constraint is that the broker has to be MongoDB (via ObjectRocket). celery depends on a message-queue library called kombu, and due to previous bugs, we have to be on the current. That depends on pymongo>=2.6.3. We also use MongoDB for other storage needs. Meanwihle, the front-end for this thing is tornado, which needs to access that storage, too, so we have motor. motor has a strange URL dependency on a
[23:20:47] <Manganeez> specific, unreleased, commit that works out to saying it's 2.5. So basically, motor's weird, outdated dependency breaks the dependency graph of the whole app.
[23:32:06] <JustAnyone> condolences. If all else fails (as they say), bring it all in-house and make your own mods. Or, make a github fork and fix it yourself, then depend on your github versions.
[23:41:58] <Manganeez> That's what I feared. No release on the horizon to resolve that weirdness, I guess.
[23:48:20] <cheeser> isn't there a motor mailing list?
[23:48:29] <cheeser> at the least mongodb-user could help
[23:51:39] <ramnes> hi
[23:52:13] <ramnes> host:PRIMARY> db.user.update({"subdoc":{$exists:1}}, {$set:{"subdoc.subsubdoc":{"field":true}}}, {multi:1})
[23:52:28] <ramnes> LEFT_SUBFIELD only supports Object: subdoc not: 10
[23:53:13] <ramnes> any idea ?
[23:53:54] <ramnes> note: I got the error with mongo 2.4.8, but it's working with 2.4.1
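A hedged reading of that error: the number after "not:" is a BSON type code, and type 10 is null, so at least one matched document has subdoc holding null rather than an embedded document (null fields still satisfy $exists). Restricting the match to actual sub-objects sidesteps it:

```javascript
// $type 3 = embedded document, so null-valued subdoc fields are skipped:
db.user.update(
  { subdoc: { $exists: 1, $type: 3 } },
  { $set: { "subdoc.subsubdoc": { field: true } } },
  { multi: 1 }
);
```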