#mongodb logs for Monday the 17th of February, 2014

[00:01:40] <Waheedi> ok now this is my state
[00:01:50] <Waheedi> */dev/xvda1 1008G 957G 351M 100% /
[00:02:00] <Waheedi> df -h
[00:03:23] <Waheedi> one question here. why, when i delete content from mongo, does it not actually free up the disk space?
[00:03:38] <Waheedi> is that a disaster recovery kind of thing?
[00:03:54] <joannac> Waheedi: yes, that's right. allocation is expensive
[00:04:17] <Waheedi> joannac: man honestly that is the worst way to solve it
[00:04:32] <Waheedi> sorry for expressing myself here but you see this /dev/xvda1 1008G 957G 351M 100% /
[00:04:37] <joannac> Waheedi: well, your data is fragmented
[00:04:54] <Waheedi> 950G of the space is for mongo
[00:05:08] <Waheedi> while 70% of the content is deleted
[00:06:06] <joannac> find some more space and repair
[00:06:23] <Waheedi> some more space means 1TB
[00:06:44] <Waheedi> for only 200G of real data needed to recover
[00:07:02] <Waheedi> the rest is bullshit as you know there is more bullshit these days
[00:07:27] <joannac> usePowerOf2Sizes might help with better space reuse in the future
[00:08:46] <Waheedi> Good to know about it
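
joannac's suggestion refers to the per-collection powerOf2Sizes flag, which in 2.4 is set with collMod; a minimal sketch, assuming a hypothetical collection name mycoll:

    // make record allocations power-of-2 sized so freed space is reused more readily
    db.runCommand({ collMod: "mycoll", usePowerOf2Sizes: true })
    // space that is already fragmented is only reclaimed by a repair, which
    // needs free disk roughly equal to the current data size plus 2GB
    db.repairDatabase()
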
[04:18:11] <user90218> does anyone know if there's a table of which mongo PHP driver versions work with a given version of mongodb?
[04:23:30] <Waheedi> you can get started here user90218 http://docs.mongodb.org/ecosystem/drivers/php/
[04:24:16] <Waheedi> and u can use a better nick name using /nick mynewfancynickname
[04:30:21] <user90218> thanks, waheedi :)
[06:29:55] <russum> Hi, I just did a db.setProfilingLevel(2) on a small collection (~1k documents) and then when I did db.system.profile.find() - mongo.exe would just crash, that was 2.4.4, I now updated to 2.4.9 and it does not crash anymore but the following error shows up:
[06:29:55] <russum> > db['system.profile'].findOne()
[06:29:55] <russum> Mon Feb 17 01:19:47.695 Error: 16863 Error converting /sta/ui in field CRN to a JS RegExp object: SyntaxError: Invalid flags supplied to RegExp constructor 'ui' at src/mongo/shell/types.js:612
[06:30:42] <russum> the CRN field has data in the following format: "123.201400" if that matters…
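
Profiling level 2 records every operation into the capped system.profile collection, and the error russum describes appears to come from the 2.4 shell failing to rebuild a stored regex whose flags its JavaScript engine rejects. A minimal sketch of turning profiling on and reading it back:

    db.setProfilingLevel(2)   // profile all operations on the current database
    // newest profile entries first; each document describes one operation
    db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
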
[07:01:44] <daniel-s> Do people normally use mongodb from the executable, or through bindings from another language, like PHP or Python?
[07:02:05] <Garo__> daniel-s: what do you mean?
[07:02:30] <Garo__> daniel-s: php, python etc have their own mongodb drivers which are used to connect to the database from code
[07:02:51] <daniel-s> I think that's the word I should have used, driver, not bindings.
[07:03:20] <daniel-s> I mean, I will sometimes access my databases through the mongo executable, if I want to manually debug something or look through data.
[07:03:29] <Garo__> yeah sure, that's normal
[07:03:31] <daniel-s> Otherwise, I only ever access it through the Python driver.
[07:05:23] <Garo__> daniel-s: you are doing it right then
[07:05:44] <daniel-s> OK.
[07:06:13] <Garo__> may I ask why you thought that that wasn't normal?
[08:06:32] <rgawdzik> In linux, on htop, there are multiple processes of mongod; are these individual processes, or some sort of processor sharding?
[08:15:46] <Garo__> rgawdzik: they're threads from your mongodb instance
[08:16:03] <Garo__> q1~
[08:16:13] <rgawdzik> Garo__: Thanks, I thought for a second that multiple mongodb's were up
[08:16:40] <Garo__> rgawdzik: you see that they all have the exact same amount of VIRT, RES and SHR memory
[08:17:13] <Garo__> that tells you that they are threads as they share the memory of their parent process
[08:39:23] <[AD]Turbo> yo all
[10:34:50] <orweinberger> I just pushed a few million documents to a collection and only then enabled sharding on it. I would like to check the status of the sharding. I'm using sh.status() and I can see that I have chunks: shard0001 - 1, shard0002 - 229. The third shard does not appear on that list. When I run sh.isBalancerRunning() I get 'true'. What should these outputs be when shard balancing is complete?
[10:37:36] <kali> orweinberger: well, it should be roughly even, but it can take hours to balance
[10:38:01] <kali> orweinberger: there is a changelog collection in the config database that you can check
[10:39:57] <orweinberger> kali, is it possible to know how many documents I have on each mongod instance? When I try to connect to one (using umongo) I see the entire lot of documents, not only what's on that specific shard
[10:41:29] <kali> orweinberger: well, db.collection.getShardDistribution() should give you that
[10:42:13] <kali> orweinberger: but what you describe sounds weird. if you connect to the mongod hosting a shard and query a sharded collection, you should only see a part of the collection
[10:44:07] <orweinberger> kali, yes, you're right I was wrong about that part.
[10:44:12] <orweinberger> I can see it working now, thanks :)
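
For reference, the commands discussed above look roughly like this in the shell (the collection name mycoll is hypothetical):

    sh.status()                        // chunk counts per shard
    sh.isBalancerRunning()             // true while the balancer is migrating chunks
    db.mycoll.getShardDistribution()   // document count and data size per shard
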
[10:50:08] <Gleb`> Can I query a model and select objects referenced by id at the same time? So that i can do something like user.name.first_name where name is an object referenced by ObjectId on the user
[11:56:48] <gmg85> I would like to query for a field in several documents of the same type and get the field values returned in an array
[11:57:01] <gmg85> is it possible to do this without looping?
[12:01:54] <BurtyB> gmg85, depending on what you want distinct would return them in an array
[12:08:17] <gmg85> BurtyB, is it possible to use distinct like this db.collection.distinct('field1',{field2:field2Val},function(err,arr){})
[12:08:17] <gmg85> ?
[12:10:05] <BurtyB> gmg85, it just takes a field and the query
[12:13:21] <orweinberger> I have a 14GB mongo data dump taken from a separate database. Now I have a new set of 3xcfg servers, 1xmongos, 3xmongod that I want to load the data dump to. However, I would like to do so in a way that will push all the data to the shards respectively. I want to avoid pushing all data to 1 mongod and then to have the mongos rebalance everything. Is there a way for me to do it?
[12:13:54] <gmg85> BurtyB, Thanks
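
In the mongo shell, distinct takes a field name plus an optional query document and returns an array directly; the callback form gmg85 pasted is the Node.js driver signature, not the shell's. A sketch with the hypothetical names from the question:

    // array of field1 values across all documents matching the query
    db.collection.distinct("field1", { field2: "field2Val" })
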
[12:34:06] <MmikePoso> Hello! Is there an easy way to 'replay' traffic recorded against mongod instance? I'm recording the traffic with mongosniff.
[12:47:12] <Garo__> MmikePoso: the mongodb wire protocol is quite simple. if you know node.js then you can take a look this tool which I've written. it's not ready for your problem, but you should be able to modify it quite easily to your needs: https://github.com/garo/node-mongodb-wire-analyzer
[12:51:03] <orweinberger> I have a 14GB mongo data dump taken from a separate database. Now I have a new set of 3xcfg servers, 1xmongos, 3xmongod that I want to load the data dump to. However, I would like to do so in a way that will push all the data to the shards respectively. I want to avoid pushing all data to 1 mongod and then to have the mongos rebalance everything. Is there a way for me to do it?
[12:53:25] <kali> orweinberger: i think that if you call enableShard and shardCollection before running mongorestore, it should do just that (might work better without the --drop option but i'm not sure)
[12:54:41] <orweinberger> kali but the db is not yet created on the new infrastructure, so I cannot run shardCollection. If I create it before restoring and add a 'dummy' record with the same structure as the real data, would that solve it? Will I be able to restore it if I do so?
[12:56:44] <kali> orweinberger: I don't think you need to go through creating dummy stuff. just making the call on a non-existing database should be enough to create the metadata in the configuration database
[12:58:31] <orweinberger> kali, thanks a lot, I'll give it a try now! :)
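
kali's suggestion as a sketch: declare the sharding metadata first, then restore through the mongos so documents are routed to the right shards from the start (database, collection, and shard key names are hypothetical):

    // run against the mongos before mongorestore
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.mycoll", { myShardKey: 1 })
    // then, from the command line: mongorestore --host <mongos> dump/
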
[13:04:54] <mboman> Anyone know of any (cheap) mongodb hosting for large collections (TB of data)
[13:05:41] <kali> TB and cheap in the same sentence ? :)
[13:06:45] <mboman> kali, one can always dream...
[13:07:01] <kali> sure
[13:07:11] <mboman> kali, and cheap is a relative question
[13:18:22] <Cahu> hi, I need to apply a function (javascript) to a particular entry in a document matching a query and update the value with the output of that function
[13:18:29] <Cahu> what would be the best way to do this?
[13:23:20] <Nodex> Cahu : in your appside code
[13:24:12] <Cahu> Nodex, I'd like to avoid that, if the whole thing happened within mongo that would be perfect
[13:24:25] <MmikePoso> Garo__: thnx, I'll check it out. I'm trying to figure out why my datadir goes kaputt, so I have to repair it quite often
[13:24:31] <Cahu> the 'eval' method could be what I am looking for
[13:32:39] <Nodex> Cahu : not possible
[13:33:44] <Cahu> Nodex, why?
[13:35:55] <Nodex> because evaluating javascript is insecure and mongo doesn't do it
[13:38:49] <orweinberger> kali, it works! thanks
[13:47:13] <|Lupin|> Hello everybody
[13:47:22] <|Lupin|> I'm trying to enable text search on my server
[13:47:28] <|Lupin|> I tried adding
[13:47:34] <|Lupin|> enableTextSearch=true
[13:47:35] <|Lupin|> and
[13:47:49] <|Lupin|> setParamter enableTextSearch
[13:47:54] <|Lupin|> to /etc/mongodb.conf
[13:47:58] <|Lupin|> but none works..
[13:48:07] <|Lupin|> Could somebody please help to find the right syntax?
[13:55:27] <kali> luca3m: i think: "setParameter enableTextSearch=true"
[13:56:00] <kali> |Lupin|: that was for you, not for luca3m (sorry)
[14:03:19] <|Lupin|> kali: thanks. I think I already tried it but let's retry just to make sure...
[14:05:43] <|Lupin|> kali: nope, does not work...
[14:06:59] <Nodex> what mongod version?
[14:07:11] <kali> luca3m: i think: "setParameter = enableTextSearch=true"
[14:07:34] <kali> rhaaaaa :)
[14:07:42] <kali> |Lupin|, not luca3m, again
[14:17:49] <|Lupin|> Nodex: 2.4.7
[14:17:55] <|Lupin|> kali: okay let me try that one...
[14:20:39] <|Lupin|> ok that still does not work but with a different error message:
[14:20:39] <|Lupin|> Illegal --setParameter parameter: "enableTextSearch"
[14:21:18] <|Lupin|> ii mongodb-10gen 2.4.7 An object/document-oriented database
[14:27:10] <_boot> has anyone else experienced terrible export speeds when talking to a mongos?
[14:28:25] <_boot> it's not showing in the currentOp either, I'm running with a query and it's indexed
[15:06:20] <bobinator60> is it possible to make a covering index for this document http://bpaste.net/show/VOZu6n4hntSaNdSHgo5g/ with $elemMatch (kind:foo, value:bar) $AND owner_id:someObjectID?, and return just the _ids of those that match?
[15:08:10] <bobinator60> i have this index, but it does a table scan http://bpaste.net/show/7ryocDFMcD6TURWRgjTG/
[15:12:06] <Nodex> the index should be {kind:1,value:1,owner_id:1}
[15:12:28] <Nodex> sorry
[15:12:42] <Nodex> the index should be {"attributes.kind":1,"attributes.value":1,owner_id:1}
[15:13:16] <bobinator60> i tried that too, and it still scanned the disk for the ids
[15:13:56] <bobinator60> i'm looking at this in the docs: " If an indexed field is an array, the index becomes a multi-key index index and cannot support a covered query"
[15:14:32] <bobinator60> but I can't tell if its referring to a plain array [a,b,c] or an array of embedded docs, or both
[15:14:48] <Nodex> an array of embedded docs is an object not an array
[15:15:04] <Nodex> attributes : {} vs attributes:[]
[15:15:47] <bobinator60> Nodex: so are you saying it should work? then why is indexOnly: False ?
[15:16:21] <bobinator60> here's the query & the query plan http://bpaste.net/show/0mDdh2ZBPXbvOm9hwcjP/
[15:17:53] <Nodex> I am saying it needs to be an object not an array
[15:19:27] <bobinator60> sorry to be so dumb here, but what needs to be an object not an array? i still don't understand what I'm doing wrong
[15:20:15] <Nodex> attributes
[15:21:07] <Nodex> but I imagine that wont fit your data very well
[15:21:30] <bobinator60> no
[15:23:01] <Nodex> is _id in your index?
[15:23:08] <bobinator60> yes
[15:23:21] <bobinator60> and in fact it's the only thing i'm interested in in the projection
[15:23:42] <Nodex> try removing it from the index
[15:23:50] <bobinator60> i just did
[15:27:21] <bobinator60> u'indexOnly': False,
[15:28:02] <bobinator60> its using the right BtreeCursor
[15:28:34] <Nodex> according to one set of docs it's possible, according to another - it's not
[15:29:18] <bobinator60> i will file a support request
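
The docs passage quoted above is the operative one: indexing the array field attributes makes the index multikey, and a multikey index cannot cover a query, so indexOnly stays false regardless of the projection. A sketch of what explain shows for the query in the pastes (the collection name and the someObjectId variable are hypothetical):

    db.mycoll.find(
        { owner_id: someObjectId,
          attributes: { $elemMatch: { kind: "foo", value: "bar" } } },
        { _id: 1 }
    ).explain()
    // -> isMultiKey: true, indexOnly: false (covering is impossible here)
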
[15:29:46] <Killerguy> hi
[15:29:57] <Killerguy> how can I reload shard config on a cluster?
[15:30:02] <bobinator60> or move to FoundationDB
[15:30:14] <Killerguy> because I added a shard that is a 1-instance replica set
[15:30:19] <Nodex> best to use the right tool for the job ;)
[15:30:24] <Killerguy> and now my replica set is 3 mongo instances
[15:30:53] <Killerguy> but the mongo router keeps the shard with 1 instance in the replica set
[15:30:53] <bobinator60> Nodex: i have been bitten over and over again by these kinds of cases with mongo.
[15:31:51] <|Lupin|> mongoexport --host=elmongodbdev.cines.fr --collelestore.json xmodule 2>&1 | less
[15:32:40] <Nodex> bobinator60 : in 4 or so years I have never had such problems, I model my data differently though so that's probably why
[15:34:18] <bobinator60> Nodex: i would denormalize the data into master/detail if mongo supported transactions
[15:34:38] <quickdry21> Question - when I add a new shard (replica set) to an existing cluster, will the usernames/passwords for each database get set from the other shards, or do I have to do it manually?
[15:35:01] <Nodex> I don't know your data structure so I can't comment. Mongodb doesn't claim to be a silver bullet, perhaps its model just doesn't fit your app
[15:38:03] <Joeskyyy> quickdry21: The recommended security for sharding and replsets would be a keyfile
[15:38:30] <Joeskyyy> see this doc: http://docs.mongodb.org/manual/core/inter-process-authentication/#replica-set-security
[15:38:41] <Joeskyyy> also this for sharding
[15:38:41] <Joeskyyy> http://docs.mongodb.org/manual/core/sharded-cluster-security/#sharding-security
[15:40:10] <Joeskyyy> To answer your question though, users should be synced when they're added to the replset
[15:43:55] <quickdry21> Yeah, I have a keyFile set up, but when I mongo host:port, the admin credentials for the rest of the cluster fail to auth
[16:01:03] <|Lupin|> have to go
[16:01:09] <|Lupin|> thanks for everything, guys
[17:09:07] <sflint> trying to import a .csv
[17:09:09] <sflint> ./mongoimport --port 27018 --db acumen --collection clicks --type csv --file ~/Downloads/4320_CountryOutfitter_CMM_Report_10404059_tot20140129.csv --headerline
[17:09:10] <sflint> connected to: 127.0.0.1:27018
[17:09:26] <sflint> but after the connection to mongo...nothing....doesn't create the collection either
[17:09:38] <sflint> this .csv is large around 1.2GB.....
[17:13:35] <jaraco> We have a server in ROLLBACK state. It's stuck there because it has a bunch of data we care about that didn't get synced with the current primary.
[17:13:52] <jaraco> We want to make that host primary again (and forget the other primary was ever promoted).
[17:13:54] <jaraco> Can we do that?
[17:14:25] <sflint> jaraco...you can but you will lose data
[17:14:41] <jaraco> Understood - if you mean we will lose the data that's on the current primary.
[17:14:48] <sflint> correct
[17:14:54] <jaraco> That's what we want.
[17:15:07] <sflint> are you able to have downtime?
[17:15:24] <jaraco> We might try to recover that data manually later, but that's a known risk.
[17:15:36] <jaraco> Yes, downtime is authorized.
[17:15:40] <jaraco> Though we want to make it as short as possible.
[17:15:49] <jaraco> We do have a commercial support contract too.
[17:17:01] <sflint> there are 2 options that I can think of...I have never tried them in practice so you might want to test somehow. But I believe you could change the rs.conf() so that the rollback node has the highest priority.
[17:17:15] <sflint> then take down the cluster
[17:18:13] <sflint> question...how much data?
[17:18:21] <sflint> 500GB? 1TB?
[17:18:46] <jaraco> < 500GB but close.
[17:20:39] <sflint> I think you are going to have to recreate the cluster separately starting with the rollback node.....you could take it out of the set and build a new replica set starting with it as the primary
[17:20:39] <jaraco> sflint: if we set that host to highest priority, will it know not to be in the rollback state?
[17:20:48] <sflint> no
[17:21:13] <jaraco> Can we change its state manually not to be rollback, such that it starts up as primary (if we've taken the other replicas down)?
[17:21:27] <sflint> i was thinking you could force it to take over...but it won't....mongod will see that it is behind and it won't take over
[17:21:37] <sflint> that is possible i believe
[17:21:42] <sflint> using arbiters
[17:21:49] <jaraco> We have one arbiter already.
[17:21:59] <jaraco> And only one other replica (the current primary).
[17:22:10] <jaraco> If we take that primary down, we want the rollback node to come up as primary.
[17:22:35] <sflint> I have never tried that but it makes sense that it would come up as primary
[17:22:54] <jaraco> we'll try that
[17:23:01] <sflint> is there anyway you can take the rollback down
[17:23:12] <sflint> stand it up by itself and recover the data?
[17:23:55] <sflint> if you start it outside of replica set mode you will be able to get to the data on it
[17:26:08] <jaraco> We've done that. Now it's in STARTUP2 state
[17:26:23] <Veejay> Hi, is it possible to clone a collection from one mongo instance to itself? i.e. making a copy of a given collection with another name?
[17:26:40] <jaraco> That is, restarted the rollback node with the other node offline.
[17:26:55] <sflint> Veejay...mongodump
[17:26:56] <Veejay> I've tried using cloneCollection but I got the error message: { "ok" : 0, "errmsg" : "can't cloneCollection from self" }
[17:27:01] <jaraco> Can't elect itself.
[17:27:03] <Veejay> sflint: It's the only way eh?
[17:27:12] <jaraco> Can we force the election for that node?
[17:27:13] <sflint> mongo-connector
[17:27:41] <sflint> you can force a reconfig...that should trigger an election jaraco
[17:27:53] <sflint> rs.reconfig(conf)
[17:27:59] <sflint> conf = rs.conf()
[17:28:12] <sflint> may work
[17:28:40] <jaraco> replSetReconfig command must be sent to the current replica set primary
[17:28:41] <sflint> Veejay : mongo-connector is a python script that will let you pipe data from mongo -> mongo
[17:29:02] <sflint> jaraco...rs.reconfig(conf, force=True)
[17:29:05] <sflint> i think that is the command
[17:29:08] <sflint> let me check
[17:29:32] <sflint> rs.reconfig(conf, { force: true } )
[17:30:14] <jaraco> thanks. the command took
[17:30:23] <jaraco> still it won't become primary
[17:30:25] <groundup> I have a collection called "features" and in that, each document has {_id, user, place, feature, value} I tried to do feature.ensureIndex({user: -1, place: -1, feature: -1}, {unique}) but I still get duplicates when I do save()
[17:30:38] <sflint> jaraco...still in startup?
[17:30:40] <jaraco> startup2
[17:31:09] <jaraco> errmsg: "still syncing, not yet to minValid optime 530241e1:1a"
[17:31:09] <sflint> i would try to take down that node and start it up outside of replicaset mode...
[17:31:20] <groundup> How do I make it so that save() overwrites documents with the unique key without having to know the _id
[17:33:29] <sflint> groundup....only thing i see is {unique:true}
[17:33:39] <sflint> check to make sure the index is unique
[17:33:41] <groundup> d'oh!
[17:33:47] <sflint> feature.getIndexes()
[17:34:06] <jaraco> sflint: started up standalone.
[17:34:15] <jaraco> what next - can we force it to be primary in the RS again?
[17:34:24] <jaraco> rs.reconfigure again? Or restart?
[17:34:31] <sflint> no but I think you could build a new replica set
[17:34:38] <sflint> starting with that node
[17:35:14] <sflint> just wanted to see if it would come online without being in replicaset
[17:35:33] <sflint> bring down the other members of the replica set
[17:35:35] <sflint> all of them
[17:35:48] <sflint> then start with the rollback node first..stand it up
[17:35:56] <sflint> then arb
[17:36:01] <sflint> then 2nd arb
[17:36:08] <sflint> it should become primary
[17:36:37] <groundup> Okay, unique: true works. Now I get a duplicate key error when I do save() :(
[17:39:01] <groundup> hmm... update with upsert?
[17:41:18] <sflint> save() should be an upsert....
[17:41:31] <sflint> but yes you should be doing an upsert
[17:42:17] <sflint> what are you using to identify the document you want to update?
[17:42:47] <groundup> user, place, feature are the keys
[17:43:04] <groundup> value is what I want updated
[17:43:38] <sflint> what does you update statement look like?
[17:43:47] <groundup> It's in PHP
[17:43:53] <groundup> So, I will show you the calls.
[17:45:09] <groundup> The save() call is just $collection->save($vote); Where $vote is an array($user->getId(), $place->getId(), $feature, $vote);
[17:45:13] <groundup> Trying to do an upsert now instead.
[17:48:56] <groundup> sflint, http://pastebin.com/fCpZ8Vzc
[17:49:20] <groundup> With an upsert I am still getting the error
[17:51:31] <groundup> Here's the parameters: user: "2th7y", place: "AKdsJ", feature: "Good for kids", value: 1
[17:53:16] <jaraco> sflint - thanks for the help. I have our commercial support rep now.
[17:59:45] <sflint> jaraco....let me know what they advise
[18:01:52] <groundup> I'm using 2.4.3, so it has upsert. Searching for the answer on Google.
[18:08:23] <groundup> Switched back to save() from update w/ upsert. Shows the duplicate key error (now it doesn't show nulls, but the actual values).
[18:09:49] <jaraco> sflint: We needed to run db.replset.minvalid.remove() on the target host's local DB (in standalone mode) to force it to disregard the minimum valid timestamp. After doing that, we restarted it and the arbiter and the repl set came back online.
[18:11:14] <sflint> thanks
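
For the record, the fix jaraco reports amounts to clearing the minimum-valid optime marker in the node's local database while it runs standalone; a sketch of the sequence as described, untested and to be used with care since it bypasses a replication safety check:

    // 1. restart the stuck node WITHOUT --replSet, then in its shell:
    use local
    db.replset.minvalid.remove({})
    // 2. restart with --replSet again; with the old primary down, the node
    //    can now win the election and come up as PRIMARY
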
[18:23:20] <groundup> hmm... I think the PHP driver is adding _id
[18:24:19] <cheeser> i know the java driver will
[18:24:42] <groundup> Wouldn't that cause it to look for that _id?
[18:25:44] <groundup> yep - https://github.com/mongodb/mongo-php-driver/blob/master/collection.c#L1398
[18:27:00] <groundup> Err... read that wrong
[18:31:09] <groundup> Tried adding _id to vote{} but that didn't change anything.
[18:42:57] <groundup> This is around the point where I want to take my computer outside and put bullets in it
[18:43:46] <groundup> Her name is normally Adell, but I'm about to change it to Old Yeller
[19:02:05] <groundup> So save() isn't working but I finally got update() to work.
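
The underlying reason: save() upserts by _id only, and since the PHP driver adds a fresh _id to each new array, every save() becomes an insert that trips the unique index. Matching on the unique key with an upsert is the pattern that works; a shell sketch of what the working update() amounts to, using the values from the log:

    db.features.update(
        { user: "2th7y", place: "AKdsJ", feature: "Good for kids" },  // the unique key
        { $set: { value: 1 } },                                       // field to change
        { upsert: true }                                              // insert if no match
    )
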
[19:11:25] <rpcesar> hello, I am running into this issue with an aggregate query in the group statement. paste for the function here: http://pastie.org/8742971 , the error I am getting is "A pipeline stage specification object must contain exactly one field.", but I don't understand the reason for this as it is similar to code I have written in the past
[19:11:57] <rpcesar> note that function just builds the pipeline array. removing the "group" statement works (well, with the expected duplicates due to not being grouped)
[19:16:19] <rpcesar> anyone able to explain this to me?
[19:16:30] <rpcesar> (or is anyone even online)?
[19:16:47] <rafaelhbarros> wait
[19:17:03] <rafaelhbarros> I need to read it, give me a moment rpcesar
[19:17:12] <rpcesar> np
[19:19:06] <rafaelhbarros> I believe that your _group should be inside an array
[19:19:09] <rafaelhbarros> which language is that?
[19:19:23] <rafaelhbarros> js?
[19:19:25] <rpcesar> lol, i think you figured it out. good find. and its nodejs
[19:19:27] <rpcesar> correct
[19:19:40] <rpcesar> that should probably be $group
[19:20:03] <rafaelhbarros> in python it would be group=[] something
[19:20:17] <rpcesar> wouldn't that be $group = []?
[19:20:41] <rafaelhbarros> $ before any parameter/variable would raise a syntax error in python specifically
[19:21:19] <rpcesar> ah, yea its been a while since using python.
[19:21:26] <rpcesar> that worked though
[19:21:49] <rpcesar> modified function for the record: http://pastie.org/8742996
[19:22:07] <rpcesar> thank you very much, was really having problems seeing it
[19:22:07] <rafaelhbarros> worked as intended?
[19:22:14] <rafaelhbarros> np rpcesar
[19:22:22] <rpcesar> yep. figured it was silly error but I was blind to it
[19:22:45] <rpcesar> thank you VERY much
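
The error message means each element of the aggregate pipeline array must be an object containing exactly one stage operator; the $group stage had been folded into a neighbouring stage object instead of being its own array element. The corrected shape, with hypothetical field names:

    db.mycoll.aggregate([
        { $match: { status: "active" } },                      // one stage per object
        { $group: { _id: "$category", total: { $sum: 1 } } }   // its own element
    ])
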
[19:35:41] <Mmike> Hi, all. When I do rs.stepDown() on a primary, only connections made to that primary will be abandoned, right?
[20:08:21] <jaraco> !logs
[20:08:21] <pmxbot> http://chat-logs.dcpython.org/channel/mongodb
[23:04:47] <bcx> Hi there, I have a few questions. I am doing a query on an indexed field in a subdocument. The query consistently takes about 100ms.
[23:04:56] <joannac> okay?
[23:05:09] <rafaelhbarros> now ask the questions...
[23:05:19] <bcx> 100ms seems way too slow
[23:05:37] <bcx> Are there common things I should check to understand why it's slow?
[23:05:53] <bcx> Is there a way to know what's going on here? { Site.users.analytics_id: "2f7d84ef4995da2f2c8d202217c9f9d622be76df" } ntoreturn:1 nscanned:1 nreturned:1 reslen:166 110ms
[23:06:13] <bcx> is what I see in the logs, it doesn't look like it's scanning more than 1 doc.
[23:06:36] <rafaelhbarros> what's the size of that document, if you don't mind me asking
[23:09:59] <bcx> Let me take a look.
[23:13:04] <joannac> run an explain() on it
[23:13:31] <bcx> rafaelhbarros: Doc size is 14K bson encoded
[23:13:45] <bcx> joannac, how do I run explain, I am kind of new to mongo
[23:13:51] <bcx> actually I'll read docs :-)
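
In the 2.4 shell, explain() chains onto the query cursor; using the query from the log line above (the collection name sites is hypothetical):

    db.sites.find(
        { "Site.users.analytics_id": "2f7d84ef4995da2f2c8d202217c9f9d622be76df" }
    ).explain()
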
[23:19:07] <bcx> https://gist.github.com/jaminben/3c2c79f4cd907e4a5118
[23:19:20] <bcx> is what explain outputs, I don't see anything too weird, other than there's a multi-key index
[23:19:29] <bcx> which I could imagine being slow
[23:49:36] <joannac> that says 0 millis