PMXBOT Log file Viewer


#mongodb logs for Tuesday the 8th of July, 2014

[06:07:21] <dandv> I have ensured indexes on the fields ct and companyIdsDetected but db.content.find({ct: 'article', companyIdsDetected: {$exists: true}}).explain(); is still way slow (6+ seconds)
[06:08:23] <dandv> I have a composite index on the two as well. What's going on?
[06:09:11] <joannac> run an explain()
[06:09:17] <joannac> wait, you did
[06:09:22] <joannac> is it using the index?
[06:10:31] <dandv> Only the ct index: http://pastebin.com/THfydywD
[06:11:06] <joannac> what version?
[06:11:13] <joannac> oh, 2.4.10
[06:11:31] <joannac> explain(true)?
[06:12:58] <joannac> also, I think you'd get better results from indexing on {ct:1, companyIdsDetected:1}
[06:13:21] <joannac> also, explain(true) and hint the index you want it to pick
[06:15:36] <dandv> I believe I already have that index? Line 93
[06:16:32] <dandv> explain(true) is at http://pastebin.com/GETyNRAG
[06:16:55] <dandv> Is there some more user-friendly interpreter of this explain output? Can't do much with it as it is.
[06:17:21] <joannac> nope, you have the fields in the opposite order
[06:17:30] <joannac> explain(true) while hinting the index?
[06:17:48] <joannac> it's not even considering that index when evaluating query plans...
[06:18:37] <dandv> does the order of the fields in an index matter? I created that index like this: db.content.ensureIndex({ct: 1, companyIdsDetected: 1});
[06:18:57] <joannac> yes it does
[06:19:11] <joannac> http://docs.mongodb.org/manual/core/index-compound/
[06:19:55] <dandv> Same deal with hint: http://pastebin.com/wHbLNkqU
[06:20:51] <joannac> "nscanned": 93294,
[06:20:57] <joannac> it scans more docs
[06:24:19] <joannac> try {ct:1, companyIdsDetected: 1}
[06:24:35] <joannac> it should then find the ct: "article" bucket, and then read from there
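For reference, a minimal shell sketch of what joannac is suggesting, assuming the content collection from the pastebins: create the compound index with ct leading so the planner can jump straight to the ct: 'article' bucket, then hint it and run a verbose explain to confirm it is used.

    // compound index with ct first, companyIdsDetected second
    db.content.ensureIndex({ct: 1, companyIdsDetected: 1});

    // force the planner onto that index and inspect the plan
    db.content.find({ct: 'article', companyIdsDetected: {$exists: true}})
              .hint({ct: 1, companyIdsDetected: 1})
              .explain(true);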
[06:26:47] <dandv> dropped all indexes, created the compound one again, ran hint & explain true - nope. http://pastebin.com/cRihKqBH
[06:28:37] <joannac> getIndexes()?
[06:29:22] <joannac> are you using something other than the mongo shell?
[06:29:40] <joannac> it doesn't look like your index is preserving field order
[06:30:03] <dandv> I use mongohacker on top of the shell
[06:31:02] <dandv> Added http://pastebin.com/g73QXFDL at the end, the index keys look sorted!
[06:31:31] <joannac> yeah
[06:31:34] <joannac> what
[06:31:57] <dandv> Like, { "key": { "companyIdsDetected": 1, "ct": 1 }, instead of ct: 1 first
[06:32:25] <joannac> yeah, i see it
[06:32:34] <joannac> the "what" was directed at mongohacker
[06:32:39] <joannac> :(
[06:33:13] <joannac> although...
[06:33:36] <dandv> I'm about to file a bug with mongohacker. Although?
[06:33:40] <joannac> i dunno, i didn't think mongohacker got in the way quite so much
[06:36:31] <dandv> probably not, since it's likely a display issue (I do hope it hasn't sorted the keys on creation!)
[06:38:36] <dandv> bug filed, https://github.com/TylerBrock/mongo-hacker/issues/91
[06:40:51] <joannac> sort_keys: true, // sort the keys in documents when displayed
[06:40:59] <joannac> pretty sure it's display
[06:42:46] <dandv> it is, I tried mongo --norc
[06:42:59] <dandv> Do we have a mongo index usage bug then? Maybe for $exists?
[06:44:11] <joannac> how many results do you get just for db.content.find({ct: 'article'})
[06:45:49] <dandv> we apparently do have that bug. db.content.find({ct: 'article', companyIdsDetected: {$in: ['4wti8J4enxGWwL68u']}}).hint({ct: 1, companyIdsDetected: 1}).explain(true); is instant
[06:46:03] <dandv> 93233
[06:47:15] <joannac> hmm
[06:47:16] <dandv> Yup... https://jira.mongodb.org/browse/SERVER-10608?jql=text%20~%20%22index%20%24exists%22
[06:47:20] <dandv> super annoying
[06:47:50] <dandv> "This fix was made to the new query framework introduced in 2.6." and I' on 2.4
[06:57:26] <nonrecursive> Is there an alternative to querying nested documents using the dot notation? I'm trying to build queries programmatically and the dot notation seems to require string formatting (which seems strange).
[06:57:51] <joannac> nonrecursive: not really
[06:58:39] <nonrecursive> joannac: thank you
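Since nonrecursive is building queries programmatically, a small sketch (plain JavaScript with hypothetical field names) of what the "string formatting" amounts to: the dot-notation path is just a dynamically built key.

    // build the "address.city" path at runtime instead of hard-coding it
    var parent = 'address', child = 'city';
    var query = {};
    query[parent + '.' + child] = 'Berlin';   // { "address.city": "Berlin" }
    db.users.find(query);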
[07:26:51] <nfroidure_> i'm wondering if it is possible to aggregate a subset of a collection. Can i do find() then aggregate() in a single query?
[07:28:55] <joannac> $match?
[07:29:36] <nfroidure_> it seems that $match operates on the aggregated data
[07:29:48] <nfroidure_> i'd like to filter the data before it gets aggregated
[07:30:08] <rspijker> …
[07:30:21] <rspijker> how is $match as a first step different than first find then aggregate?
[07:31:44] <nfroidure_> rspijker, you mean match placed before group will operate before the aggregation ?
[07:31:52] <rspijker> yes
[07:32:03] <nfroidure_> in this case, how can i also use $match afterwards?
[07:32:05] <rspijker> all of the operators work in order
[07:32:11] <rspijker> you can use $match as often as you want
[07:32:25] <rspijker> that’s kind of the point of the aggregation pipeline
[07:32:33] <rspijker> pipeline being the important word there
[07:32:54] <nfroidure_> ok, good to know
[07:34:07] <nfroidure_> i hadn't seen that there was an array form for aggregation
[07:34:53] <nfroidure_> do you know where i can find real world examples with mongoose? I find the doc a bit useless
[07:35:26] <rspijker> Sorry, I’m not familiar with mongoose _at all_
[07:35:47] <rspijker> mongodb has some examples here: http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set/
[07:36:40] <nfroidure_> yep, in fact, we chose mongoose but i'm continuously looking at the mongodb doc :)
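A minimal pipeline sketch of what rspijker describes, with a hypothetical orders collection and field names: the first $match filters documents before they reach $group, exactly like a find(), and a second $match can then filter the grouped output.

    db.orders.aggregate([
        {$match: {status: 'shipped'}},                             // filter before grouping
        {$group: {_id: '$customerId', total: {$sum: '$amount'}}},  // aggregate
        {$match: {total: {$gte: 100}}}                             // filter the grouped results
    ]);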
[07:43:58] <dandv> thanks joannac
[07:44:17] <dandv> Joanna d'Arc?
[07:44:27] <dandv> If you're a woman, happy to see more women in computing.
[07:51:37] <nfroidure_> ya noob question. how to perform a count of distinct field values in an aggregate?
[07:51:47] <nfroidure_> The following doesn't work: db.users.aggregate([{$match: { $text: {$search: 'Nicolas'}}},{$group:{_id:'firstname', $count: { $distinct: 'firstname'}}}]);
[07:52:47] <nfroidure_> s/$distinct: 'firstname'/$distinct: 'lastname'/
[07:57:04] <joannac> I didn't think there was a $distinct aggregation operator?
[08:00:53] <torgeir> Can a db.collection.update() with { multi: true } not update _several_ fields of all documents using $set in one query on mongodb 2.4.8?
[08:01:40] <torgeir> does it have to be one field, in the $set: { here: 1 }, when there's a multi: true?
[08:02:20] <nfroidure_> i found the answer here http://stackoverflow.com/questions/18501064/mongodb-aggregation-counting-distinct-fields
[08:02:35] <nfroidure_> the distinct command threw me off
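The pattern from the linked answer, sketched against nfroidure_'s users query (there is no $distinct aggregation operator, as joannac suspected): group on the field once to collapse it to its distinct values, then group again to count them. This assumes a text index for the $text stage, as in the original query.

    db.users.aggregate([
        {$match: {$text: {$search: 'Nicolas'}}},
        {$group: {_id: '$lastname'}},            // one result document per distinct lastname
        {$group: {_id: null, count: {$sum: 1}}}  // count those distinct values
    ]);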
[08:04:21] <salty-horse> hey. 2.6 has a mergeChunks command to merge/remove empty chunks. If I have 2.4, is there a way to remove empty chunks?
[09:04:29] <ManicQin> Hello everybody , I'm getting "exception: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in." even when I pass allowDiskUse:true.
[09:04:54] <ManicQin> my shell is latest 2.6.3
[09:07:04] <ManicQin> http://pastebin.com/52KWA6rC
[09:11:21] <rspijker> why are you sorting before grouping?
[09:11:45] <rspijker> ManicQin: ^^
[09:17:45] <rspijker> torgeir: you can just put multiple fields in the $set. $set:{field1:”x”, field2:”y”} will work fine. Also with multi:true. multi does not indicate updating multiple fields though, it indicates updating multiple documents. The first argument of an update is the query part. If you don’t specify multi:true only a single document matching the query will be updated. If you specify multi:true, all documents matching the query will be updated
[09:17:52] <ManicQin> Hey, no reason. it reproduces without sort too
[09:18:44] <torgeir> yes, thanks
[09:19:16] <torgeir> got it working
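A quick sketch of what rspijker describes, with hypothetical collection and field names: one $set can carry several fields, and multi: true makes the update touch every matching document rather than only the first.

    db.items.update(
        {category: 'book'},                  // query: which documents to update
        {$set: {inStock: true, price: 10}},  // several fields in a single $set
        {multi: true}                        // apply to all matches, not just the first
    );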
[09:23:02] <rspijker> ManicQin: that’s kinda weird...
[09:23:31] <rspijker> are you sure you’re getting the exact same error when you use allowDiskUse:true ?
[09:28:20] <ManicQin> rspijker: http://pastebin.com/nSyZpwLw
[09:31:30] <ManicQin> rspijker: it's a collection of ~600,000 documents, and the fields that I $push are basically all there is. I made heavier queries (I think)
[09:33:15] <rspijker> maybe it’s the wrapper function somehow...
[09:34:05] <rspijker> could you try it like this: http://pastebin.com/vVWvAXvx
[09:34:18] <ManicQin> rspijker: I'm running it in shell ... (robomongo to be exact)
[09:34:46] <rspijker> hmmmm… not sure robomongo has updated their shell to be fully 2.6 compliant
[09:35:04] <rspijker> and I think passing the allowDiskUse like that to the helper was introduced in 2.6
[09:35:08] <rspijker> so it might be that
[09:35:14] <rspijker> try the runcommand version I pasted
[09:35:28] <rspijker> if you still have issues, try using a real 2.6 shell instead of the robomongo one
[09:36:28] <rspijker> yeah… Have a look at the first comment here: https://github.com/paralect/robomongo/issues/520
[09:37:15] <rspijker> first reference I mean, not comment
[09:42:45] <ManicQin> Ok, yes it worked now. And thank you for linking me to the github
[09:42:53] <ManicQin> Thank you very much
[09:47:07] <rspijker> no problem
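For reference, the shape of the runCommand form rspijker pasted, with placeholder collection and pipeline contents; older shells such as robomongo's may not forward allowDiskUse through the aggregate() helper, but the raw command accepts it on a 2.6 server.

    db.runCommand({
        aggregate: 'myCollection',            // hypothetical collection name
        pipeline: [
            {$group: {_id: '$someField', docs: {$push: '$$ROOT'}}}
        ],
        allowDiskUse: true                    // let $group spill to disk
    });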
[11:10:40] <Left_Turn> whats MongoDBs package name for linux?
[11:11:06] <kees_> what flavor of linux?
[11:11:19] <Left_Turn> kees_, opensuse
[11:12:46] <kees_> i have no experience with suse, but 'opensuse mongo' in google seems to give enough information
[11:13:01] <Left_Turn> ok thanks
[11:15:58] <Left_Turn> you're right kees_ .. got it on 1st link
[11:18:26] <rspijker> there are multiple packages out there Left_Turn. The “old” (<= 2.4) ones are marked with 10gen-… and the new ones (2.6+) are mongodb.org
[11:18:32] <rspijker> so look carefully
[11:19:15] <Left_Turn> oh ok rspijker
[11:59:48] <cek> test.
[12:20:22] <Left_Turn> anyone familiar with deployd ? i cant get the dashboard up.. I doubt this is relevant to mongodb
[12:22:44] <bensons> hi there, we are using a sharded cluster and want to backup the data. i know we can stop balancer and dump each individual shard, but whats bad about dumping directly via mongos? i havent found anything about that...
[12:36:40] <adamcom> no real difference to dumping each one, except that it will take longer - mongodump will walk the _id index to do its dump, single threaded, so it will only hit one shard at a time, effectively dumping them serially
[12:37:43] <adamcom> generally far quicker to use something else (filesystem snapshot, or similar) for backups, unless you have quite a small data set
[12:38:06] <adamcom> or, of course, MMS Backup - but you have to pay for that
[12:38:11] <bensons> :)
[12:39:03] <bensons> adamcom: yes snapshots would be an idea but for that i would need to shut down the instance in order to be consistent(?) and to do so i will need to change the write concern inside my app and so on and so forth... so mongos is the smallest hassle
[12:39:22] <bensons> ok it might not be 100% consistent, but thats ok for us..
[12:40:59] <adamcom> ignoring the cluster wide consistency point (only really possible with MMS backup or if you stop writes) - at the shard/replica set level: as long as your snapshot is point-in-time, and includes the journal, there are no special steps needed
[12:42:06] <adamcom> it's basically the equivalent of restarting after a crash (journal will be replayed for consistency)
[12:42:32] <bensons> adamcom: and from available linux filesystems/volume managers - the only one able to do snapshots is lvm and thats at least from my experience not 100%
[12:43:01] <bensons> zfs would be neat but for that i would need to migrate to slowlaris or fbsd :)
[12:43:19] <bensons> and placing mongo db on a backend fc storage is out of scope for us..
[12:44:49] <adamcom> you can take extra steps to make things less susceptible to errors - do the snapshot on a secondary - shut down or fsyncLock that secondary (one in each shard at the same time) and then snapshot that - to be even more paranoid about it you could combine it with xfs_freeze or similar - it's really about how far you want to go to guarantee a good snapshot
[12:45:19] <adamcom> I've seen the LVM snapshot (and EBS, and NetApp etc.) work as long as they included the journal
[12:46:29] <adamcom> the real issue is getting cross-cluster consistency because you need a backup from multiple shards, and you need a corresponding backup of the cluster meta data from the config servers
[12:47:58] <adamcom> and, of course, the key to any backup solution is to regularly do restores and make sure they work, even when not needed - periodically re-seeding a secondary from backup snapshots within the oplog window would be my recommendation if using snapshots
[12:48:41] <adamcom> re-seed, make sure it catches up, compare data with other nodes for consistency
[12:55:10] <bensons> ok thanks adamcom, think we will stick to mongos backup for the beginning.. should be enough for us, but thanks a lot for your thoughts :)
[12:58:41] <adamcom> oh, one last thing - whether you are using mongodump directly, or via mongos (assuming you have secondaries - it defaults to secondary reads) then you are likely to dump orphaned documents with the mongodump approach
[12:59:07] <adamcom> so, you may want to look into http://docs.mongodb.org/manual/reference/command/cleanupOrphaned/
[13:08:19] <bensons> adamcom: hm i don't get it 100%. when i dump + restore via mongos (assuming sharding is already enabled, with the downside of pretty slow restore speed), how can i get orphaned documents? or to be more precise, how would orphaned documents end up on my shards, if the balancer already balanced them during the restore?
[13:09:06] <bensons> i know this is far away from being optimal especially because of the slow restore speed, but the backup should be something like a last resort for us anyway...
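The command adamcom linked, for reference, run from the admin database against each shard's primary (namespace is a placeholder); it deletes orphaned documents from one contiguous chunk range at a time, so it is re-run until the whole shard is covered.

    // run on a shard primary, not through mongos
    use admin
    db.runCommand({cleanupOrphaned: "mydb.content"})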
[13:13:00] <rspijker> as long as you can guarantee that your dump process completes before the oplog window runs out
[13:13:25] <rspijker> that was an issue for us, so we went with FS snapshots
[13:14:58] <bensons> rspijker: so you did a restore while data was still inserted?
[13:15:34] <bensons> because as mentioned, for us a restore would just take place in case everything is already somewhat los and the app is stopped + no data will be modified during the restore
[13:15:45] <bensons> *lost
[13:17:29] <rspijker> bensons: the dump, not the restore
[13:17:44] <rspijker> o, wait, you propose to dump from the mongos…
[13:17:53] <bensons> yeah right...
[13:17:57] <rspijker> that will lock your DB during the length of the dump :/
[13:18:16] <bensons> mongodump via mongos will lock my db?
[13:20:13] <rspijker> let me check into it… I know we decided against it for some reason, now I’m no longer 100% sure...
[13:21:06] <bensons> rspijker: would be nice :) i did not (yet) experience any db locks and we do have dbs > 50gb that take quite a while to get backed up...
[13:24:18] <rspijker> bensons: my mistake, it’s not locking. Which makes sense, since it’s basically just a query
[13:25:03] <bensons> rspijker: yeah :) it is not consistent or lets say - far away from being consistent but at least better than nothing..
[13:25:24] <rspijker> you can get fairly close with --oplog
[13:25:51] <rspijker> the problem is, syncing them across shards and making sure they are consistent amongst each other
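A sketch of the two dump styles being compared (host names are placeholders): --oplog gives a point-in-time dump of a single replica set, but nothing coordinates that point in time across shards, which is the consistency problem rspijker mentions.

    # dump the whole cluster through a mongos (no cross-shard consistency)
    mongodump --host mongos.example.com --port 27017 --out /backup/cluster

    # dump one shard's replica set with an oplog slice for point-in-time restore
    mongodump --host shard1-secondary.example.com --oplog --out /backup/shard1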
[13:48:31] <salty-horse> can a 2.4 mongos connect to a 2.6 database?
[13:51:20] <saml> mongos or mongo shell?
[13:51:34] <saml> it's better to keep versions the same. i got segfault
[13:54:10] <rspijker> I recall there being a warning in the upgrade docs that said not to upgrade any of the mongod instances before all of the mongos instances were upgraded
[13:54:25] <rspijker> so I’d guess there could be some issues in connecting a 2.6 mongos to a <2.6 mongod
[14:09:34] <ep1032> I have a c# windows service that is connecting to my mongo instance. I have a log file with thousands of instances of the error message: Unable to connect in the specified timeframe of '00:00:00'.
[14:09:47] <ep1032> I found the exact driver code that is throwing the exception
[14:09:47] <ep1032> https://searchcode.com/codesearch/view/26539632/
[14:10:04] <ep1032> if you ctrl+f that page for the string "specified timeframe"
[14:10:25] <ep1032> but have no idea why my service will run fine for a few hours, and then suddenly just start throwing that message every following time
[14:10:31] <ep1032> help please?
[14:11:44] <ep1032> is there a way to connect to mongo without using DiscoveringMongoServerProxy ?
[14:11:48] <ep1032> whatever that is?
[14:15:17] <ep1032> anyone?
[14:16:10] <cheeser> probably should try the mongodb user list
[14:16:15] <ep1032> where is that?
[14:16:31] <ep1032> what is that? lol
[14:16:33] <cheeser> https://groups.google.com/forum/#!forum/mongodb-user
[14:16:41] <cheeser> you've heard of mailing lists, yes?
[14:17:06] <ep1032> okay, I'll give it a shot
[14:17:08] <ep1032> thank you
[14:26:04] <darkchanter> hi
[14:33:42] <salty-horse> saml, rspijker, thanks
[15:14:39] <Nostalgeek> Hello. I just apt-get'ed mongodb on Ubuntu 14.04. Authentication seems to be turned off by default allowing for admin login without password. How do I secure my MongoDB by disabling this? I already created a new user with db.addUser but I want to disable / set an admin password?
[15:15:08] <cheeser> i think localhost might always be allowed in some cases.
[15:15:33] <Nostalgeek> cheeser, Humm good hint. Yeah, I'm logging in locally.
[15:28:37] <adamcom> once you add a user in later versions the localhost exception is disabled
[15:29:08] <adamcom> so, always add a userAdmin first, or you can end up with no permissions to add more
[15:29:44] <adamcom> and, you can explicitly disable the localhost exception, if you don't mind potentially locking yourself out and having to restart the process if you do :)
[15:30:30] <adamcom> this tutorial walks it through well: http://docs.mongodb.org/manual/tutorial/enable-authentication/
[15:32:14] <adamcom> cheeser: just make sure you've added and are using the MongoDB packages rather than the Ubuntu ones or you will be on 2.4 forever (or until you do a distro upgrade from 14.04)
[15:32:43] <cheeser> i *do* need to upgrade actually
[15:32:51] <cheeser> i'll wait until i'm back from my trip, though.
[15:37:20] <Nostalgeek> cheeser, adamcom: I did create a user but was still able to login without authentication. Could it be that by default auth=false in mongodb.conf (at least under Ubuntu 14.04 MongoDB 2.4). I changed auth=yes in mongodb.conf and restarted Mongo and it seems to be working now.
[15:37:35] <cheeser> sounds like a winner
[15:37:46] <Nostalgeek> i mean auth=true of course
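A sketch of the 2.4-era sequence the linked tutorial walks through, with placeholder credentials: add a user administrator on the admin database first (so you cannot lock yourself out), then enable auth in the config file and restart.

    // in the mongo shell, 2.4 syntax
    use admin
    db.addUser({user: 'siteAdmin', pwd: 'changeme', roles: ['userAdminAnyDatabase']})

    // then in /etc/mongodb.conf:  auth = true   (and restart mongod)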
[16:42:31] <uehtesham90> hello, i wanted to know what is the best way to migrate a mongodb database from one server to another?? i have looked at two options: 1) using mongodump and mongorestore 2) copying the database files from the dbpath directory e.g. from /data/db and pasting them into the dbpath on the new server
[17:19:39] <netQt> @uehtesham90 it's better to use mongodump and then use mongorestore
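A sketch of the dump-and-restore route netQt recommends (host names and paths are placeholders); unlike copying the raw dbpath files, it does not require stopping the source mongod for a consistent copy.

    # taken from any machine that can reach the old server
    mongodump --host old-server.example.com --port 27017 --out /tmp/dump

    # replayed into the new server
    mongorestore --host new-server.example.com --port 27017 /tmp/dump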
[17:30:15] <zhodge> evaluating options for a log store, and mongo has the plus of storing documents, which would be very convenient for my app (node.js)
[17:30:53] <zhodge> but I’m not savvy enough with my database technical knowledge to determine whether a lot of the Internet opinions discouraging mongo are worth heeding or not
[17:31:05] <cheeser> most of them are not.
[17:31:18] <cheeser> most are either old or they have an axe to grind
[17:31:59] <zhodge> it’d be nice to be able to use SQL to query against a few log tables, but setting those up seems unnecessary compared to just tossing a few objects at a collection
[17:32:10] <zhodge> cheeser: that’s what it can feel like yeah
[17:32:59] <zhodge> I’m already running a postgres server and I thought about using that for logging but that seemed like a terrible idea since the burden of serving app data and handling log writes would be shared by one server
[17:35:03] <cheeser> well, writes will go to one mongod, too, unless you shard.
[17:36:19] <zhodge> though it would avoid burdening the primary db
[17:40:57] <rspijker> zhodge: what kind of volume are we talking about here?
[17:58:41] <mango_> Hi, I'm starting MongoDB training next week M102 and M202
[17:58:54] <daidoji> mango_: nice
[17:58:57] <mango_> I'm just firing up a 20GB CentOS VM with MongoDB installed, is that enough?
[17:59:37] <mango_> I'm going to try it with CentOS 7
[17:59:52] <cheeser> that'll be plenty
[18:00:18] <mango_> cheeser: I did try with 2.2GB remaining space, but mongod failed to start
[18:00:29] <mango_> something about the journal needing more space.
[18:02:59] <obiwahn> Riobe: usb works?
[18:03:06] <cheeser> use --smallfiles
[18:12:37] <mango_> cheeser: yes it did suggest that, but wasn't sure how to use --smallfiles with the service .. command
[18:14:46] <zhodge> rspijker: 0 as of now ;)
[18:15:15] <zhodge> more seriously, I’m not too sure as of yet, but my goal is to have a reasonable logging solution for a relatively small site in terms of traffic
[18:17:37] <Riobe> obiwahn, Maybe? I tried a fix last night to grub stuff that has helped some people that have my motherboard. .
[18:18:00] <rspijker> zhodge: if the volume isn’t huge, which it doesn’t sound like it is, it’s not going to matter that much… Just pick what you are most familiar and comfortable with.
[18:18:02] <Riobe> obiwahn, But I won't know if it worked till it doesn't break for a while. So fingers crossed. Thanks for remembering and asking. :D That's awesome.
[18:18:17] <rspijker> Unless you have an ulterior motive, like wanting to learn mongo :)
[18:19:15] <zhodge> rspijker: always looking to learn new things and mongo isn’t exempt from being an eligible candidate ;)
[18:20:10] <rspijker> mango_: M202 has some exercises with sharded clusters, they can require a bit of free space. I did it with a 20GB ubuntu VM and that was usually fine. Had to remove some of the older weeks a few times, though
[18:20:55] <rspijker> zhodge: then mongo is as good a choice as any for this :)
[18:22:34] <mango_> rspijker: thanks for the info, ok, I'll see if I increase the space just to avoid cleaning space.
[18:23:26] <zhodge> rspijker: true that :)
[18:23:51] <zhodge> rspijker: any warnings regarding mongo and how it handles its data?
[18:24:30] <rspijker> if you’re used to SQL, get to grips with the fact that you generally replace joins with embedding
[18:24:50] <rspijker> and that you ‘have to’ organize referential integrity in your app layer
[18:27:36] <zhodge> okay, that makes sense
[18:27:46] <zhodge> collections are generally self-contained too right?
[18:27:51] <zhodge> as in they don’t reference other collections
[18:28:35] <rspijker> they can, but there is no join concept in the DB
[18:30:29] <rspijker> embedding is not always feasible
[18:31:10] <zhodge> is querying usually seen as a pain point or a positive?
[18:31:46] <zhodge> at first glance it seems nice to avoid a separate query language that the app language doesn’t understand natively
[18:32:00] <zhodge> but sql is pretty powerful so I could see how that benefit might be overrated
[18:32:02] <rspijker> I’m fairly happy with it, but it’s all due to good schema design
[18:33:43] <cheeser> mango_: in /etc/mongod.conf
[18:34:11] <mango_> cheeser: ok thanks, I'm starting from scratch now anyway with a larger disk drive
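For the service-style start mango_ asked about, the flag goes into the config file rather than onto the command line; a sketch of the 2.4-style setting, in the file cheeser named:

    # /etc/mongod.conf -- shrinks preallocated data and journal files
    smallfiles = true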
[18:34:35] <zhodge> rspijker: which forgoes the benefit of “schemaless” right? haha
[18:35:06] <rspijker> the benefit of schemaless is that different documents can have different fields in the same collection
[18:35:29] <zhodge> why yes that seems a lot more reasonable
[18:35:58] <zhodge> that’s a familiar pain point coming from relational considering that eventually one fat table with a lot of nullable fields gets a bit much
[18:36:17] <rspijker> still important to think about how you organize your data though
[18:36:17] <rspijker> standard example is a blog with comments on each post
[18:36:31] <rspijker> in sql you’d have a post and comment table and they’d be joined by ids
[18:36:38] <cheeser> you wouldn't embed comments on a post in either relational or mongo.
[18:36:53] <rspijker> why not cheeser?
[18:37:20] <mango_> rspijker: collection = table? in relation world?
[18:37:31] <cheeser> document size constraints. documents move when they grow. etc.
[18:39:06] <rspijker> 16MB… that’s a LOT of comments. As far as the moving goes, it really depends on your access patterns whether that’s worse than having to do multiple queries to get some info
[18:39:27] <rspijker> mango_: yeah, close enough at least
[18:40:23] <mango_> rspijker: ok, curious to see that in practice.
[18:40:40] <cheeser> rspijker: that 16M includes the post and all its metadata, the fieldnames for each and then all the comment documents with all their metadata and fieldnames.
[18:40:53] <rspijker> cheeser: I know
[18:41:19] <cheeser> you'll hit it sooner than you think. i've worked on a large CMS and it's not that uncommon to have hundreds of comments on a popular blog site.
[18:46:44] <rspijker> We used to keep a history on some entities in our DB, not even a log of the full document, but only who changed it, and when did they change it. We hit the limit with that. So I realize that there are limits and that they can be hit quicker than you might think. The comment example is not that bad though…
[18:49:19] <rspijker> If we set aside 1MB for the post, which is a lot for plain text even including metadata, you can still have 1000 comments of 15000 characters each, which should be plenty for any comment
[18:50:15] <cheeser> well, feel free to go that route. it's suboptimal at the least.
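To make the trade-off concrete, a sketch of the two shapes being debated, with hypothetical fields: comments embedded in the post document versus comments in their own collection referencing the post.

    // embedded: one read returns post plus comments, but the document grows with every comment
    db.posts.insert({title: 'My post', body: '...',
                     comments: [{author: 'alice', text: 'nice'}, {author: 'bob', text: '+1'}]});

    // referenced: posts stay small; comments point back at the post and need a second query
    db.posts.insert({_id: 1, title: 'My post', body: '...'});
    db.comments.insert({postId: 1, author: 'alice', text: 'nice'});
    db.comments.find({postId: 1});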
[19:03:21] <kali> mango !
[19:09:05] <netQt> hi guys, i have a data loss problem by using replica sets. at some point i have to make my primary to step down, and it takes some time to select new primary, and that where i lose data
[19:09:13] <netQt> any ideas how to avoid it?
[19:09:20] <rspijker> kali: like the deserts miss the rain?
[19:09:35] <cheeser> netQt: implement retries in your app
[19:10:06] <netQt> hmm, i wanted to avoid that, but if there is no other way
[19:10:33] <kali> rspijker: can you take the color blue out of the sky ?
[19:10:59] <netQt> can't i force to find new candidate for primary and only after that demote the current one?
[19:11:25] <netQt> because now primary demotes itself to secondary and then starts election process
[19:11:31] <kali> poor chris kattan
[19:11:46] <kali> nobody remembers :/
[19:11:52] <cheeser> netQt: until the election is over, there is no primary to write to
[19:12:05] <cheeser> kali: i do. but i actively try not to.
[19:12:27] <rspijker> kali: great song, not such an SNL fan
[19:14:03] <netQt> @cheeser thanks, so the way to go is to retry whenever primary is not available
[19:14:09] <cheeser> yeah
[19:14:16] <cheeser> when it makes sense to
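A minimal retry sketch of what cheeser suggests, written for the mongo shell with a hypothetical events collection (an application driver would do the same thing with its own exception types): while the election is running there is no primary, so the write is retried with a growing back-off.

    function insertWithRetry(doc, attempts) {
        for (var i = 0; i < attempts; i++) {
            try {
                db.events.insert(doc);       // fails while no primary is elected
                return true;
            } catch (e) {
                sleep(500 * (i + 1));        // back off, then try again
            }
        }
        return false;                        // give up after the last attempt
    }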
[19:18:25] <kali> ho my... poor guy. he is reduced to cameo appearance in seth meyers' show... and ads
[19:19:05] <kali> and DUI arrests.
[19:19:27] <cheeser> which is about the limit of his ability
[19:20:13] <kali> come on... not even Mr Peepers ?
[22:14:16] <yutong> i think there's a leak in the node mongo app
[22:14:20] <yutong> if i disconnect the wire
[22:14:30] <yutong> i see my memory use spike up by like 700KN
[22:14:32] <yutong> 700KB
[22:14:33] <yutong> each time*
[23:42:33] <Pulpie> hello
[23:43:18] <Pulpie> in my database I have a document that contains a hash filled with data
[23:43:45] <Pulpie> lets call this hash "poptart": { "stuff": 1}
[23:44:00] <Pulpie> how could I make sure that I found poptart in a query?
[23:44:52] <Pulpie> I thought it would just be find({"poptart" => Hash.new()}) but that doesn't seem to be it
[23:46:19] <Pulpie> probably looking for a blank hash instead of a populated one
[23:58:22] <Pulpie> ahh $exists is what I wanted
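In shell terms, the query Pulpie ended up with, against a hypothetical collection: $exists matches documents that have the poptart field at all, whatever the embedded document contains, whereas matching {} only finds an exactly empty subdocument.

    // matches any document where the poptart field is present
    db.things.find({poptart: {$exists: true}});

    // by contrast, this only matches documents whose poptart is exactly {}
    db.things.find({poptart: {}});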