PMXBOT Log file Viewer


#mongodb logs for Thursday the 20th of August, 2015

[00:15:22] <appleboy> has anyone run into an issue with auth enabled where you can authenticate via code or command line but not with mongovue or robomongo? both those give invalid credentials, but the creds are copy/pasted in
[00:23:13] <joannac> appleboy: v3.0.x of mongodb?
[00:23:16] <appleboy> yeah
[00:23:25] <appleboy> 3.0.5
[00:23:29] <joannac> the tools you're using probably don't support scram-sha-1
[00:23:35] <appleboy> :(
[00:23:42] <appleboy> know of any tools out that do?
[00:23:54] <joannac> there's a list of tools in the docs
[00:24:16] <joannac> other than the ones written by mongodb, I don't :(
[00:24:17] <appleboy> those two tools are both from the docs. wasn’t sure if you knew which ones actually support scram-sha-1
[00:24:41] <joannac> I don't, sorry. they're third party and I don't keep track of them
[02:19:09] <samsamsams> heya all
[02:20:40] <samsamsams> hey all i am trying to have a new node join an existing replica set. when i do rs.add(<addr>) i get some weird output on my new machine: https://gist.github.com/samuraisam/c7ff7f14b49e1b710891
[02:21:11] <samsamsams> ultimately it never joins with "cannot find self in new replica set configuration; I must be removed; NodeNotFound No host described in new configuration 11 for replica set gReplSetWest10 maps to this node"
[03:58:07] <hemmi> Hey there. I'm building an ad platform that has a notion of publishers, advertisers, sites, zones, creatives, campaigns and flights. My app won't actually be serving any ads because I'm just configuring a third-party ad network. This seems like a pretty good fit for a more traditional (rdbms) database. I am however interested in mongodb for its flexible
[03:58:07] <hemmi> schema for quicker development. Is there any reason against using mongo for something like this?
[05:02:37] <oromongo> Hello, simple question from a beginner: Can I add an element at a specific index? In my case I want to insert a big number of lines and add them at the right place by alphabetical order
[05:03:30] <oromongo> without keeping all the database in memory during the insert, of course
[07:44:37] <parallel21> When creating an index in mongodb
[07:44:42] <parallel21> How does one search on that index
[07:45:10] <parallel21> Or does that happen automatically?
[07:45:45] <parallel21> so when searching on a field that is not indexed, it will search the entire collection
[07:45:55] <parallel21> Whereas if it were indexed, it would search the index?
[07:46:10] <rkgarcia> the index is used automatically
[07:46:28] <rkgarcia> you need an index to search faster
[07:47:12] <MadWasp> Hello guys, I have a collection of contacts and I need to give every single contact a hash. This should be a sha1 out of the contact’s email and a secret field of another entity referenced with DbRef. Is there a command I can use for that or will I have to write a migration in some language?
[07:49:13] <rkgarcia> MadWasp, the mongodb shell doesn't have a sha1 function, as far as I know
[07:49:25] <MadWasp> that
[07:49:33] <rkgarcia> you need a little migration in any language
[07:49:36] <MadWasp> ok
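The kind of migration rkgarcia suggests could look roughly like this in Node.js. This is only a sketch: the connection URL, the `contacts`/`accounts` collection names, and the `accountId`/`secret`/`hash` field names are illustrative assumptions, not taken from the channel.

```javascript
// One-off migration sketch: give every contact a sha1 of its email plus a
// secret taken from the document its reference points at.
var crypto = require('crypto');
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  var contacts = db.collection('contacts');
  var accounts = db.collection('accounts');   // stands in for the referenced entity

  contacts.find({ hash: { $exists: false } }).forEach(function (contact) {
    accounts.findOne({ _id: contact.accountId }, function (err, account) {
      if (err || !account) return;
      var hash = crypto.createHash('sha1')
                       .update(contact.email + account.secret)
                       .digest('hex');
      contacts.updateOne({ _id: contact._id }, { $set: { hash: hash } });
    });
  }, function (err) {
    if (err) console.error('migration failed:', err);
    // a real migration would wait for the outstanding updates before closing the connection
  });
});
```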
[09:06:06] <Kosch> hey guys. I need to build my own rpms with ssl support, I compiled the 2.6.11 using scons on centos6 (scons -j 2 --64 --ssl all) and I'm wondering why the created executables are pretty big (~350MB). Did I miss something?
[09:53:10] <m4k> How do I store data into redis after fetching it from mongo ? I'm using pymongo
[10:07:58] <Derick> m4k: better to ask in a redis channel
[12:22:53] <ggoodman> I have an existing replica set cluster running behind a firewall and would like to add authentication + authorization so that I can connect from machines not on the same private network.
[12:24:29] <ggoodman> I understand that the first step is to add root / admin users, then add a common keyFile to each server.
[12:24:49] <ggoodman> Is there a procedure where this can be done without downtime?
[12:27:17] <deathanchor> ggoodman: you have to have downtime because you have to turn on the auth option on the mongod and add the authentication to your client apps
[12:28:53] <ggoodman> deathanchor: thanks. Can you clarify if adding root and admin users will automatically propagate across the cluster and whether this will 'turn on' the requirement for client authentication?
[12:29:09] <deathanchor> ggoodman: I don't know if this would work, but add auth to your client apps, add users to db, restart secondary with auth, stepdown primary, restart primary with auth.
[12:29:42] <deathanchor> ggoodman: no auth isn't enforced until you turn on the auth option via commandline/conf file
[12:30:16] <ggoodman> It occurs to me that perhaps adding the keyFile will require down-time, what do you think?
[12:30:23] <deathanchor> I would test it out on a simple replicaset before doing anything in production
[12:30:26] <deathanchor> dry run the process
[12:31:24] <deathanchor> ggoodman: I'm not sure about the SSL options
[12:32:05] <ggoodman> Haha, you suggesting no cowboy ops?
[12:32:49] <deathanchor> ggoodman: I'm only cowboy about something I'm absolutely sure about.
[12:32:59] <deathanchor> hence the dry run before cowboying around prod.
[12:33:26] <ggoodman> Wise words. Thanks for the tips.
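The rolling procedure deathanchor sketches above (which he himself suggests dry-running on a test replica set first) might be written out like this; the user name, password, and file paths are placeholders, and note that starting a member with a keyFile also switches auth on for that member:

```javascript
// 1. On the current primary, create the first admin user (this replicates to the set):
use admin
db.createUser({ user: "siteRoot", pwd: "changeme", roles: [ "root" ] })

// 2. Generate one keyfile and copy it to every member (OS shell):
//      openssl rand -base64 741 > /etc/mongodb-keyfile
//      chmod 600 /etc/mongodb-keyfile

// 3. Restart each secondary in turn with the keyfile configured (mongod.conf):
//      security:
//        keyFile: /etc/mongodb-keyfile

// 4. Finally step the primary down and restart it the same way:
rs.stepDown()
```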
[12:55:32] <seiyria> hey all, I'm working with nodejs and I have a really long running operation. I'm lazyloading my data from mongo via a cursor, but the cursor gets exhausted after processing about 17k items (after about an hour or so of being open)
[12:55:47] <seiyria> there's about a million items so that's kindof a problem
[12:55:49] <seiyria> any ideas what I can do?
[12:57:56] <jamiel> Use noTimeout for long running scripts: http://docs.mongodb.org/manual/core/cursors/#closure-of-inactive-cursors
[12:58:44] <seiyria> that's what I was looking at too, but I can't seem to figure out how to set it in nodejs.
[12:58:55] <seiyria> I also wanted to mention that before the cursor gets exhausted, I see "connection to host was destroyed"
[13:02:30] <seiyria> ah, I think I found the option, I'll try that out
[13:05:54] <seiyria> anyhow, thanks. hopefully that works out better.
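For reference, with the 2.x Node.js driver the option jamiel points at can be set on the cursor roughly like this (a sketch; the exact spelling differs between driver versions, and it only disables the server's idle-cursor reaping rather than making the cursor immortal):

```javascript
var cursor = db.collection('Apps')
  .find(query)
  .project({ storeId: 1 })
  .sort({ ratingCount: -1, rating: -1 })
  .addCursorFlag('noCursorTimeout', true);   // the "timeout: false" / noTimeout flag
```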
[13:51:26] <seiyria> so, I have my cursor set to never timeout, but after a while, I get "connection to host was destroyed" after 6k items exactly.
[13:51:51] <seiyria> unsurprisingly, I can't find any resources on that error message
[13:55:18] <d-snp> maybe its your driver?
[13:55:23] <d-snp> what client do you use?
[13:55:25] <seiyria> nodejs
[13:55:53] <d-snp> have you 'ack'ed for that message in its source?
[13:56:47] <seiyria> d-snp, no
[14:01:41] <seiyria> d-snp, it just surprises me that there are no stackoverflow, etc resources for this particular error
[14:01:49] <seiyria> but this isn't the first time I've run into that with mongo
[14:04:01] <d-snp> talking about errors with little to no stackoverflow results
[14:04:24] <d-snp> conn199330] will not perform auto-split because config servers are inconsistent
[14:04:28] <d-snp> how screwed am I?
[14:04:44] <d-snp> servers mongo-main-config-1:27019 and mongo-main-config-2:27019 differ
[14:04:48] <seiyria> on a scale of one to screwed.. better get the screwdriver
[14:05:24] <d-snp> could not verify that config servers are in sync :: caused by :: config servers mongo-main-config-1:27019 and mongo-main-config-2:27019 differ: { chunks: "f7cb87489701a0d48d7937a3fc81346e", collections: "56d343775451318c30204f93012b94a8", databases: "ad4d1c6e39fc63c2c8cfdfd4b28d3f50", shards: "7e5d93853282128782a8c0d2aaf2436b", version: "7f88f40e752a34474bd18e3ac6db8371" } vs { chunks: "b052fdc0049db74960b6c475bde18879", collections: "56d343775451318c30204f93012b94
[14:06:28] <d-snp> hmm only the chunks differ
[14:06:51] <d-snp> I mean 'only' the chunks differ...
[14:25:50] <d-snp> ok the fix is relatively simple
[14:25:56] <d-snp> it happened before according to my colleague
[14:26:04] <d-snp> doesn't exactly inspire confidence
[14:40:24] <Lonesoldier728> hey mongodb-ians I wanted to implement redis as a mem cache kind of deal to avoid constant same queries on mongo
[14:40:43] <Lonesoldier728> has anyone ever implement the two together or dealt with redis, trying to figure out if it makes sense
[14:41:17] <cheeser> at a previous gig we did that with couch
[14:41:45] <cheeser> but it was a CMS and we did some complex combinations of documents returned for a web request
[14:41:49] <StephenLynx> IMO it doesn't make too much sense
[14:41:55] <StephenLynx> because mongo has its own memcache.
[14:42:03] <cheeser> if you're just caching the untouched docs, it doesn't make much sense.
[14:43:45] <Lonesoldier728> for example this is what I thought I would use it for or does mongo do this already automatically and I just had no clue... lets say a person hits the home page, then mongo grabs the most recent 50 items, and if someone else hits it, same 50 items, figured just put the 50 items on redis and just grab them from there
[14:44:14] <StephenLynx> these 50 items would probably already be on mongo's RAM
[14:44:23] <StephenLynx> but I am not 100% sure on that.
[14:44:39] <cheeser> what if those 50 changed?
[14:45:45] <Lonesoldier728> They cannot change
[14:45:58] <StephenLynx> eventually they do
[14:46:12] <Lonesoldier728> oh you mean like there is a new set
[14:46:18] <Lonesoldier728> well I can have them expire daily
[14:46:20] <Lonesoldier728> or something
[14:46:22] <d-snp> what kind of caching strategy does mongodb use? I can't find any config on customizing it
[14:46:50] <d-snp> the old style just used the kernel's algorithm right?
[14:48:07] <Lonesoldier728> Should I avoid redis for now
[14:48:31] <StephenLynx> I think so.
[14:48:36] <Lonesoldier728> is it something that is more critical for once you need to scale
[14:48:38] <d-snp> Lonesoldier728: no reason to avoid redis, depends on your measured performance
[14:48:43] <StephenLynx> you are adding unnecessary overhead.
[14:48:55] <Lonesoldier728> I guess I can tackle it later
[14:49:02] <StephenLynx> yeah, KIS
[14:49:24] <d-snp> if your entire dataset fits in RAM using mongodb is probably alright
[14:49:34] <StephenLynx> I know at least GothAlice used to use caching software like redis and then ditched it once she implemented mongo in that scenario
[14:50:31] <Lonesoldier728> another question, I am working with android/iphone apps and followers/likes are something that can be done within the app, everytime a user does a follow/unfollow and like/unlike I am currently hitting the servers right away and updating it... is it taxing in the sense that it is better to cache it on the user's client side (sqlite) then after a certain time or something send them all at once... kind of confused if it is similar
[14:50:32] <Lonesoldier728> to being an issue only when it comes to scaling
[14:51:40] <d-snp> Lonesoldier728: doing them in batches is more complex, what happens if your app exits before the changes are sent?
[14:51:45] <d-snp> battery runs out or whatever
[14:51:57] <StephenLynx> I don't think it's that taxing.
[14:52:13] <StephenLynx> social media systems are not write-intensive.
[14:52:21] <StephenLynx> IMO
[14:52:26] <d-snp> true
[14:52:54] <StephenLynx> but anyway
[14:53:00] <Lonesoldier728> ok thanks guys
[14:53:01] <StephenLynx> since you are writing a social media system
[14:53:13] <StephenLynx> I would suggest you consider a relational database for the more
[14:53:16] <StephenLynx> central data
[14:53:18] <GothAlice> StephenLynx: https://github.com/marrow/cache#readme
[14:53:46] <Lonesoldier728> I heard that before but rather use mongodb
[14:53:48] <Lonesoldier728> at least for now
[14:53:54] <StephenLynx> social media systems are very relational.
[14:53:57] <StephenLynx> so keep that in mind.
[14:53:58] <GothAlice> I'm also using MongoDB to replace Celery: https://github.com/marrow/task#readme
[14:55:26] <GothAlice> with MarrowTaskExecutor() as executor: executor.submit(hello, "World") # And it's that simple to use.
[14:56:08] <StephenLynx> so yeah, you don't need a cache on top of mongo, from what it seems, Lonesoldier728
[14:56:41] <GothAlice> Nor do you need another scalable infrastructure for realtime stuff; MongoDB capped collections work great as extremely low-latency push queues.
[14:56:57] <StephenLynx> if you are really concerned with efficiency on reads
[14:57:04] <StephenLynx> you can generate an HTML page with the data
[14:57:09] <StephenLynx> and serve this page
[14:57:18] <StephenLynx> and update the page when the data changes.
[14:57:41] <StephenLynx> or JSON if you are serving to things that are not web browsers.
[14:57:51] <StephenLynx> you can even use gridfs for that.
[14:58:05] <StephenLynx> so you can keep everything on mongo.
[14:58:34] <GothAlice> Well, JSON if you are serving to browsers, anything else (for example, you could serve the raw BSON returned by the mongo client, or MessagePack for efficient sharing with C or other low-level code, etc) for everything else.
[14:59:12] <GothAlice> (I'm a fan of passing BSON around, as it lets me use the client drivers in various languages to process the data.)
[15:00:01] <cheeser> GothAlice: using the node drivers in the browser to handle the bson there?
[15:00:14] <seiyria> so I actually looked around in the nodejs mongo client source and I couldn't find any instance of 'was destroyed', let alone 'connection to host %host was destroyed'
[15:00:29] <GothAlice> cheeser: Nope. In the case of browser clients, JSON is the only way to go due to local optimizations.
[15:00:29] <StephenLynx> the problem with serving json to web browsers is that you are requiring the client to have js.
[15:01:08] <GothAlice> StephenLynx: Even my headless browsers for test automation have JS.
[15:01:16] <seiyria> StephenLynx, it's not unrealistic to assume a client has js
[15:01:24] <GothAlice> Even 'links', the ncurses text-based web browser has JS.
[15:01:26] <StephenLynx> last time I checked, about 15% of people do not have js enabled.
[15:01:40] <StephenLynx> it's not just having it, it's having it enabled.
[15:01:50] <seiyria> then I'm not developing for that audience lol
[15:02:00] <StephenLynx> ok, that is your option.
[15:02:08] <StephenLynx> I just pointed that out.
[15:02:27] <StephenLynx> I never said it was completely unacceptable to require js.
[15:02:56] <GothAlice> StephenLynx: My own sites are built with 100% fallback. I.e. if you click an "action" link in a data table, by default JS will capture the event, perform an XHR, if the result is JSON it'll then parse the JSON in an attempt to figure out what the next step is which is usually displaying a modal. The modal HTML content is loaded as a second mime-multipart section in the returned XHR.
[15:03:20] <GothAlice> In the event JS is disabled, the link clicks through to the actual handler, the server recognizes it's not an XHR, and injects the modal content into the site template as if it were not a modal.
[15:03:26] <GothAlice> Bam: everything works, JS or not.
[15:03:31] <StephenLynx> good
[15:04:07] <StephenLynx> I just build assuming the user does not have JS in the first place and use JS for stuff that can't be done without or optional and more response methods of interaction.
[15:04:20] <StephenLynx> more responsive*
[15:04:30] <StephenLynx> like posting without reloading the whole screen and stuff
[15:04:45] <GothAlice> StephenLynx: http://s.webcore.io/2N1Q0a1s1b0Z < not a bit of JS involved in this, BTW.
[15:04:56] <GothAlice> You can do a lot without JS.
[15:05:11] <StephenLynx> The video could not be loaded
[15:05:29] <GothAlice> http://s.webcore.io/2N1Q0a1s1b0Z/WebCore%20Contentment.mov < link to original, apologies for .mov'ness.
[15:05:41] <StephenLynx> and I know. lynxchan is fully functional without js.
[15:05:44] <StephenLynx> even dynamic captcha.
[15:06:13] <GothAlice> Captchas being ways to make legitimate users' lives more difficult, while not actually stopping automated attackers.
[15:06:38] <StephenLynx> it stops skiddies on a curl in a loop
[15:06:39] <Papierkorb> GothAlice: i guess that for the button -> input, the input has an opacity: 0 which you set to opacity: 1 with a :hover and a transition?
[15:06:51] <Papierkorb> err, I meant :active
[15:06:51] <StephenLynx> and I made it opt-in anyway.
[15:06:56] <MANCHUCK> Kittyauth is way better than captcha ;)
[15:06:58] <GothAlice> Papierkorb: The text before and after the input are :before and :after content sections. :)
[15:07:01] <seiyria> so, er, re:mongo has anyone seen this error message before? connection to host 127.0.0.1:3001 was destroyed
[15:07:06] <GothAlice> (But yeah, that's all CSS animations.)
[15:07:24] <StephenLynx> seiyria I see something similar when I shutdown the db server.
[15:07:33] <Papierkorb> and the rest are a ton of media queries. God I hate it when websites still use JS for layouting.
[15:07:34] <StephenLynx> it says topology was destroyed
[15:07:37] <MANCHUCK> Ive seen that with a TCP timeout
[15:07:37] <GothAlice> (And the "button" is actually the input, straight up. No element swapping going on.)
[15:07:44] <seiyria> StephenLynx, that's strange. my server shouldn't actually be dying.
[15:07:56] <GothAlice> seiyria: Maybe the connection is just timing out due to inactivity?
[15:07:56] <StephenLynx> is it really dying?
[15:08:00] <seiyria> I have no idea.
[15:08:07] <StephenLynx> you have to check it out then
[15:08:31] <seiyria> I have a long running cursor: let appCursor = allDB.Apps.find(query, {storeId: 1}, {sort: {ratingCount: -1, rating: -1}, timeout: false});
[15:08:31] <seiyria> and I have the intention of using this cursor to get entries out one at a time for a few hours at least
[15:08:45] <GothAlice> Cursors don't live that long.
[15:08:51] <seiyria> even if I tell them to not timeout?
[15:08:54] <GothAlice> You're going to need to add retrying behaviour in an outer loop.
[15:09:07] <GothAlice> There's a server-side hard limit on the maximum duration, AFAIK.
[15:09:34] <seiyria> hm. so that's weird, actually. it seems to time out for me after a specific number of records each time. I've gotten to exactly 17k yesterday, 2k today, and also 7k today
[15:09:36] <GothAlice> Setting "no timeout" as an option simply lets the cursor live as long as the maximum. I.e. it won't time out waiting for you to request the next batch of results, but that won't stop the overall cursor from being culled.
[15:09:51] <seiyria> the time duration seems to be wholly inconsistent between attempts
[15:10:09] <Papierkorb> StephenLynx: After googling a bit, it seems like 1.3% of all users turn off JS
[15:10:21] <StephenLynx> ah
[15:10:25] <StephenLynx> so it's much less than I thought.
[15:10:46] <GothAlice> StephenLynx: More of my users attempt to "recover their password" before even signing up than aren't running JS. (For serious.)
[15:11:25] <Papierkorb> it's pretty hard to find recent numbers, the most recent ones are from 2013 http://ux.stackexchange.com/questions/45229/should-i-optimize-my-website-for-non-javascript-users but I honestly don't think that many actually disable JS anymore. Especially not those who use social media sites.
[15:11:36] <GothAlice> Or Netflix.
[15:11:43] <GothAlice> :P
[15:12:03] <Papierkorb> the US isn't the world :)
[15:12:09] <GothAlice> seiyria: I'd instrument the code to identify time spent waiting on the cursor vs. processing what the cursor returns.
[15:12:50] <GothAlice> Papierkorb: And yet the world steals US Netflix. XP (I mention that site because they recently made a big to-do about upgrading the site's JS.)
[15:13:44] <GothAlice> seiyria: However, for any long running cursor, it's important to have a retry mechanism in an outer loop with sensible catching of certain exceptions. I.e. those timeout ones.
[15:14:15] <GothAlice> https://gist.github.com/amcgregor/4207375#file-3-queue-runner-py-L11-L19 being a naive example.
[15:14:30] <seiyria> GothAlice, the tricky part is that I only know the index of my last processed item and I'm not sure I can guarantee that if I $skip that many items that I'll end up in the correct place
[15:14:48] <GothAlice> Sort on _id, track the last _id, $gt retry from there.
[15:14:50] <seiyria> I don't really want to store every id/etc that I've processed because there's over a million records
[15:14:56] <GothAlice> A la: https://gist.github.com/amcgregor/52a684854fb77b6e7395#file-worker-py-L85-L110
[15:15:06] <GothAlice> Don't store all _ids, only store the last one.
[15:15:19] <seiyria> well, I have them sorted right now by some criteria that's sensible for processing order
[15:15:29] <seiyria> although if I have to go through them all anyway, that's arbitrary and pointless
[15:15:48] <GothAlice> (And $gt will be infinitely faster than $skip. $skip requires generating, but throwing out, the skipped results, requiring a walk of the btree index. Slow as heck.)
[15:16:29] <seiyria> so, can I just do something like {$gt: 'myLastId'} ?
[15:16:35] <seiyria> if they're sorted that way of course
[15:16:37] <GothAlice> See the last link I gave.
[15:17:02] <seiyria> sure enough
[15:17:03] <seiyria> neat
[15:17:20] <d-snp> I have 2 config server clusters, both running on the same machines, both got corrupted
[15:17:30] <GothAlice> Note that this example code is across a capped collection; you might not want "tailable" and "await_data" options.
[15:17:41] <d-snp> is there some scenario that reproducably creates an inconsistent config cluster?
[15:17:42] <seiyria> yeah, I was going to ignore that
[15:18:55] <seiyria> alright, I'll give that a try. thanks
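Put together, the resume pattern GothAlice describes might look like this in Node.js (`processDoc`, the retry policy, and the error handling are placeholders; a real version would bound the retries and deal with asynchronous processing):

```javascript
// Walk a big collection in _id order; if the cursor or connection dies,
// reopen the query from the last _id seen with $gt instead of $skip.
function processAll(collection, lastId, done) {
  var query = lastId ? { _id: { $gt: lastId } } : {};
  var cursor = collection.find(query).sort({ _id: 1 });

  cursor.forEach(function (doc) {
    lastId = doc._id;            // only the last _id is kept, not all of them
    processDoc(doc);
  }, function (err) {
    if (err) {
      // e.g. "connection to host was destroyed": resume where we left off
      return processAll(collection, lastId, done);
    }
    done();
  });
}
```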
[15:21:11] <d-snp> GothAlice: do inconsistent config servers happen often?
[15:21:24] <GothAlice> d-snp: Haven't encountered one in my main cluster yet.
[15:21:30] <d-snp> hm
[15:21:45] <d-snp> it happened while creating a bunch of collections in a batch
[15:21:45] <GothAlice> (And that cluster dates back to 2.2 or so.)
[15:21:59] <GothAlice> d-snp: WiredTiger?
[15:22:01] <d-snp> possibly while deleting a bunch of collections
[15:22:01] <d-snp> yes
[15:22:07] <GothAlice> Yeah, don't use WiredTiger yet.
[15:22:16] <d-snp> should I file an issue, it's rather vague
[15:22:26] <GothAlice> (Not unless you also have a commercial support contract to get 10gen to help out with issues like this.)
[15:22:52] <GothAlice> I'd search around for an existing ticket first, but if you can't find one, try to create the smallest reproducible example and submit your own.
[15:24:35] <cheeser> GothAlice: have you tried the latest dev builds?
[15:25:11] <GothAlice> cheeser: I run stable. :P
[15:25:29] <cheeser> i'll be curious to see how 3.2 treats you.
[15:25:39] <GothAlice> I'd love to find out, once it's released. XP
[15:25:47] <cheeser> we've been running WT since before 3.0 and it works quite well for us.
[15:26:02] <cheeser> did those issues you filed get resolved?
[15:26:22] <GothAlice> Yup, seems to vary entirely based on load. My migration scripts, if forced to recalculate our pre-aggregated click stats from real click data, can reliably nuke an entire cluster, if that cluster is running WT.
[15:26:26] <GothAlice> All but two so far.
[15:26:35] <cheeser> *fingers crossed*
[15:26:47] <GothAlice> It's hilarious to watch MMS as the primary rotates across the cluster.
[15:27:08] <cheeser> it's "Cloud Manager" now, iirc. ;)
[15:27:17] <GothAlice> :P
[15:27:25] <GothAlice> "I'm primary!" "No, I'm primary!" "Haha! Now I'm primary!"
[15:29:10] <GothAlice> This is what I imagine a pool of segfaulting processes is thinking: https://youtu.be/5ZARafUuhpY?t=32s
[15:44:43] <d-snp> if mongodb allows stale reads, isn't it unsafe to use mongodb for the config servers?
[15:45:33] <d-snp> they're supposed to be linearizable right?
[15:45:47] <GothAlice> d-snp: Reads from secondaries can be stale by up to the replication lag, but primaries are linearized.
[16:02:21] <aadityatalwai> Anyone experienced weirdness with Mongo Cloud Manager role permissions? I have a role that has 'ClusterMonitor' and 'readAnyDatabase' access, but still can't execute 'serverStatus' for some reason. Any ideas what's happening?
[16:16:25] <d-snp> so, I fixed the config servers some time ago, but I still see "splitChunk failed - .... "could not acquire collection lock" .. does that mean stuff still isn't ok?
[16:16:44] <d-snp> or are these from locks requested a long time ago? should I restart mongos instances?
[16:55:35] <StephenLynx> GothAlice how would you go about storing ipv6 addresses in a numerical format in mongodb?
[16:56:00] <GothAlice> MongoDB (BSON, specifically) doesn't have a 128 bit numerical format.
[16:56:04] <StephenLynx> hm
[16:56:05] <GothAlice> So the short answer is: you can't.
[16:56:24] <StephenLynx> ok, so I will just store the string that represents it.
[16:56:33] <StephenLynx> I can parse and serialize it back and forth.
[16:56:49] <StephenLynx> now unrelated to mongo: you ever hard to handle ranges in ipv6?
[16:57:03] <StephenLynx> had*
[16:57:23] <GothAlice> Not personally, other than setting up network assignment ranges. Those are bit masks.
[16:57:46] <StephenLynx> k, thanks for the info
[17:00:47] <GothAlice> IPv6 is so excessive that my own personal assignment covers way, way, way too many hosts. :| (Last 8 bytes of the address.) It's crazy, but means I can self-assign silly addresses like …::dead:beef:cafe:babe
[17:03:22] <d-snp> mongo-main-10:27017 believes it is primary, but its election id of 55d6053ea13854151cf138c6 is older than the most recent election id for this set, 55d6059a130a3bb1ee7fae35
[17:03:30] <d-snp> this should resolve itself right?
[17:09:14] <d-snp> ah wtf
[17:09:23] <d-snp> this shard just won't believe he's not primary
[17:09:31] <d-snp> I restarted it and it still thinks it's primary
[17:09:46] <GothAlice> d-snpn: If it really is old, nuke it and let it re-sync from scratch.
[17:10:00] <d-snp> you mean nuke the entire dataset?
[17:10:21] <GothAlice> d-snpn: Only as long as a) you have backups, or b) you are entirely confident that the actual active primary right now contains the latest data.
[17:10:29] <GothAlice> d-snp, rather.
[17:10:50] <tubbo> can PostgreSQL's JSONB and MongoDB's BSON be used interchangeably?
[17:10:58] <GothAlice> tubbo: No.
[17:11:02] <d-snp> GothAlice: there's no active primary
[17:11:07] <GothAlice> Ah.
[17:11:10] <GothAlice> That'd be a problem.
[17:11:11] <d-snp> and the data is a bit too large to recreate on a whim
[17:11:22] <tubbo> GothAlice: is this because JSONB isn't actually the BSON standard?
[17:12:28] <GothAlice> tubbo: Considering its newness, could you dig up a link to the JSONB specification for me?
[17:12:47] <GothAlice> (Google isn't being very helpful for me in this regard.)
[17:13:16] <d-snp> oh it stopped saying stuff about shard4 so I think it's fine now
[17:13:25] <d-snp> the other shards also lost primaries..
[17:13:42] <tubbo> GothAlice: does JSONB (the postgres one) even have a spec? i could only find one for BSON, MongoDB's approach to this problem: http://bsonspec.org/
[17:13:54] <GothAlice> tubbo: Indeed, that's BSON, not JSONB.
[17:14:03] <tubbo> right, so JSONB is different?
[17:14:18] <tubbo> that's kinda what i wanted to know, also why is it different
[17:14:44] <tubbo> is this it? http://ubjson.org/
[17:15:05] <GothAlice> It appears to be a proprietary internal format for Postgres. I really can't seem to find any form of specification, and the client driver translates JSONB into ordinary objects/JSON, or appears to, according to their docs.
[17:15:06] <GothAlice> Nope.
[17:15:11] <d-snp> jeez guys: conn2135] Weird shift of primary detected
[17:15:16] <d-snp> what's that :P
[17:49:40] <tubbo> GothAlice: yeah, it seems more like an internal thing. you can't ever get the "raw BSON" out from a JSONB type
[17:49:58] <GothAlice> tubbo: It's not BSON. Don't know what it is, but it won't be BSON. ;P
[17:50:36] <tubbo> that's why it was in quotes
[17:50:38] <tubbo> :D
[18:17:59] <GothAlice> Well this is curious: I can "mongo" into my local dev server, but "mongorestore" returns: "no reachable servers".
[18:18:58] <StephenLynx> maybe your system is preventing the restore executable from connecting?
[18:19:30] <GothAlice> I actually went out my way to custom sign the mongo tools with my own developer cert to ensure the system level protections bypass them. ;P
[18:19:56] <GothAlice> And with --verbose, it worked.
[18:19:57] <GothAlice> :|
[18:20:02] <StephenLynx> welp
[18:20:18] <StephenLynx> bug?
[18:21:27] <deathanchor> I call them gremlins
[18:21:36] <GothAlice> Nah, something else is going on. 3/4 runs with --verbose worked, 1/5 runs without --verbose worked, with no particular pattern to the successes and failures.
[18:21:55] <StephenLynx> yeah
[18:22:06] <deathanchor> that's what happens when you feed them after midnight.
[18:22:32] <GothAlice> deathanchor: During any given day, all times are > 00:00:00, thus it's _never_ a good idea to feed gremlins.
[18:22:33] <GothAlice> ;P
[18:24:50] <samsamsams> hey all i am trying to have a new node join an existing replica set. when i do rs.add(<addr>) i get some weird output on my new machine: https://gist.github.com/samuraisam/c7ff7f14b49e1b710891
[18:24:52] <samsamsams> ultimately it never joins with "cannot find self in new replica set configuration; I must be removed; NodeNotFound No host described in new configuration 11 for replica set gReplSetWest10 maps to this node"
[18:25:34] <deathanchor> samsamsams: hostname mismatch?
[18:25:40] <GothAlice> Yeah, check your DNS.
[18:26:21] <GothAlice> (run 'hostname -f' on the new node; make sure both sides can resolve that name.)
[18:26:39] <samsamsams> deathanchor how can i tell?
[18:26:45] <samsamsams> it's all ec2 servers
[18:26:58] <samsamsams> ok
[18:27:07] <samsamsams> i will run that thank you deathanchor GothAlice
[18:28:06] <deathanchor> samsamsams: look at line 1 and line 40
[18:28:16] <deathanchor> the config is getting changed
[18:28:36] <deathanchor> it was in it on line 1 but then removed on line 40
[18:28:41] <samsamsams> yeah i see
[18:28:48] <samsamsams> the hostnames seem fine
[18:28:58] <samsamsams> do you think MMS (cloud.mongodb.com) is making that change?
[18:29:09] <samsamsams> i am trying to add the node by hand instead of by mms
[18:29:29] <samsamsams> using rs.add() rather than the MMS interface
[18:34:10] <cheeser> you probably shouldn't be manually mucking with managed services like that.
[18:34:29] <samsamsams> trying to migrate off MMS fwiw
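One way to check the hostname-mismatch theory from the shell; the host names below are made up, the point is comparing what the config calls each member with what the new node calls itself:

```javascript
// On the primary: list the hosts in the current replica set config.
rs.conf().members.forEach(function (m) { print(m._id + "  " + m.host); })

// On the new node (OS shell): `hostname -f` must resolve, from every member,
// back to this machine. On EC2 that usually means the internal DNS name.
// If they don't match, remove and re-add the member under the resolvable name:
rs.remove("wrong-name:27017")
rs.add("ip-10-0-0-5.ec2.internal:27017")
```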
[18:34:40] <StephenLynx> ok, so I will store ips as a number array and ranges as the first half of the array. what do you think about that?
[18:34:52] <StephenLynx> I can support both ipv4 and 6
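A sketch of that number-array idea in plain JavaScript, assuming addresses are already in fully expanded form (handling `::` compression and IPv4-mapped addresses is left out):

```javascript
// "2001:0db8:0000:0000:0000:0000:0000:0001" -> [8193, 3512, 0, 0, 0, 0, 0, 1]
function ipv6ToArray(addr) {
  return addr.split(':').map(function (group) {
    return parseInt(group, 16);
  });
}

// A range expressed as the leading groups (e.g. the first half for a /64);
// membership is then a prefix comparison on the stored array.
function inRange(ipArray, prefixArray) {
  return prefixArray.every(function (group, i) {
    return ipArray[i] === group;
  });
}
```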
[18:55:50] <fartface> I'm working on a meteor project (new to both meteor and mongo), and part of what I'm looking to do is to run a "find" on a nested attribute value. I'm guessing the mongo syntax is pretty similar to the meteor (minimongo) syntax, but can anyone point me in the right direction? https://gist.github.com/jonwest/e878507c1844d29e0087
[18:55:59] <fartface> There's what I'm trying to do and the problem I have
[18:57:06] <cheeser> i wouldn't store yearAdded as a string
[18:57:39] <cheeser> but your query would look something like { tags.yearAdded: '2015' }
[18:57:53] <fartface> cheeser: It's just sample data, and only conceptually similar to my actual problem (simplified for asking purposes)
[18:57:59] <fartface> Ah!
[18:58:01] <fartface> Colon syntax.
[18:58:13] <deathanchor> don't forget the not: tags.title : { $ne : "Another Title" }
[18:58:20] <cheeser> http://docs.mongodb.org/manual/tutorial/query-documents/
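Concretely, and note the quotes: a dotted path has to be written as a string key in a JavaScript object literal, which is also what produces the syntax error fartface hits below (the collection name here is illustrative, and the value should match however yearAdded is actually stored):

```javascript
// match on a field nested under "tags"
db.items.find({ "tags.yearAdded": "2015" })

// and the negated form deathanchor mentions
db.items.find({ "tags.title": { $ne: "Another Title" } })
```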
[18:58:26] <atomicb0mb> hello guys, i have some doubts about the mongodb university course. Is this the right place to ask? It's about week 2, importing reddit
[18:58:54] <cheeser> atomicb0mb: reddit is not a support forum. reporting problems there is *extremely* unlikely to get noticed.
[18:59:09] <cheeser> atomicb0mb: you should post such things to the mongodb-users lists
[18:59:11] <cheeser> list
[19:01:02] <atomicb0mb> Im sorry cheeser , i wasn't clear. The problem is when I use request module to import a .json file (from the reddit site, but it could be just another one)
[19:02:07] <fartface> In Meteor, I get a javascript error when I use {tags.title} (unexpected '.')
[19:02:16] <fartface> So I must be going about it the wrong way
[19:02:30] <fartface> I've asked in there but nobody seems to be around, I'll keep trying haha, thanks guys!
[19:03:22] <StephenLynx> I suggest you don't use meteor
[19:03:25] <fartface> OH! I know how I need to do it thanks to the docs
[19:03:28] <fartface> Why not use Meteor
[19:03:52] <atomicb0mb> everything worked ok. I could retrieve the json, parse it, and insert to my database. But when i do a console.dir(data) I got {_bsontype: 'ObjectID', id: 'UÖ!\u001bLÏWåC¾4' }. But in the example that I downloaded, instead of consoling that, i got the actually data that was importing.
[19:04:06] <StephenLynx> because any web framework is a pointless overhead. it will add a number of bugs, vulnerabilities, will eat your performance and will not provide anything good.
[19:04:32] <atomicb0mb> So... i pick up my file, and move into the example folder... and it worked fine... So the problem was with the "node_module" folder of the example... that's maybe because of the versions?
[19:04:41] <fartface> So abstraction is pointless overhead?
[19:05:17] <StephenLynx> depends on what you are abstracting.
[19:05:31] <fartface> I get where you're coming from, but like, the time required to build a prototype in something like Meteor, or using something like jQuery even, the time required to get a prototype together vs scratch building, like the benefits definitely outweigh the shortcomings.
[19:05:45] <StephenLynx> can you put that on a graph?
[19:06:16] <StephenLynx> I can put all the bugs, vulnerabilities, performance issues and complexity points on one.
[19:07:37] <StephenLynx> I at least hope you have a year or two of solid experience with node/io before you started using meteor.
[19:07:48] <StephenLynx> so you can know what you are doing behind the scenes.
[19:08:16] <fartface> I'll put it another way--if you're teaching someone how to read, do you give them The Art of War and tell them the only way to read is to go balls deep, or do you start them off on something short and simple and introduce complex rules as needed
[19:08:30] <fartface> Two different schools of thought really
[19:08:38] <StephenLynx> but giving someone a whole book is exactly what you do with a framework.
[19:09:14] <StephenLynx> introducing the basics would be using the runtime environment vanilla, so the person can understand the base tool.
[19:09:56] <StephenLynx> without that, the whole thing is pretty much black magic.
[19:10:40] <StephenLynx> you will be just mindlessly writing code if you don't understand the consequence of your work.
[19:11:00] <fartface> Totally hear where you're coming from, but if I'm learning about "x" in node, and have nowhere to use it, I'm going to forget it. At least with this approach I'm learning a brief overview and as needed I can delve deeper into things which require more insight
[19:11:09] <fartface> If that makes sense
[19:11:13] <StephenLynx> it doesnt.
[19:11:25] <StephenLynx> you could do the same things you are doing without the framework.
[19:11:37] <StephenLynx> and apply the newly gained knowledge.
[19:11:59] <StephenLynx> that is a moot point.
[19:12:55] <deathanchor> you don't give someone a chainsaw if they don't know what a saw is.
[19:13:02] <StephenLynx> exactly.
[19:13:10] <fartface> OK, so if I'm building a site that'll take a day in Meteor, I should rather spend a year learning node, then spend another few weeks rebuilding the site from scratch in node, because my year of learning node and scratch code will somehow have less bugs than an entire team of experienced developers.
[19:13:29] <StephenLynx> you are exaggerating those time estimates.
[19:13:36] <StephenLynx> it doesn't take a year using node to build a site.
[19:13:45] <deathanchor> the devs who "use mongo" here don't know the basics and are constantly getting things wrong with how they setup the data models and queries.
[19:14:01] <deathanchor> I'm always prodding them to change this code or that
[19:14:07] <fartface> deathanchor: That angle I can understand.
[19:14:12] <fartface> Square peg in a round hole.
[19:14:30] <StephenLynx> you didn't even try using node without a framework
[19:14:40] <StephenLynx> and you say it would take this whole time to build something
[19:14:46] <StephenLynx> that is plain laziness.
[19:14:55] <yopp> Hi. I'm trying to setup x509 authentication on sharded cluster
[19:15:28] <yopp> but I'm getting weird error when trying to set it with .yml config: Unrecognized option: net.ssl.mode
[19:15:39] <fartface> Do you need to know how to raise a cow in order to cook a steak?
[19:16:19] <StephenLynx> do you hurr to durr?
[19:16:33] <StephenLynx> can you herp so you can derp?
[19:16:44] <StephenLynx> just admit you ran out of arguments.
[19:17:00] <StephenLynx> instead of justify what you did.
[19:17:08] <StephenLynx> you are objectively wrong.
[19:17:56] <StephenLynx> I am not telling you to dig silicon and print a CPU
[19:18:03] <fartface> http://media2.giphy.com/media/Fml0fgAxVx1eM/giphy.gif
[19:18:07] <fartface> Gonna get back to work.
[19:18:12] <fartface> Appreciate the advice, thanks.
[19:18:14] <StephenLynx> ebin meme :^)
[19:18:23] <StephenLynx> upboated
[19:21:25] <yopp> uh. bad timing
[19:28:34] <fartface> Node's a framework too--why would you use node to create a server when you could write straight javascript to do the same thing, that's the same argument, node is just introducing its own set of bugs and complexities.
[19:29:39] <fartface> If Meteor abstracts those concepts away and results in a quicker build, even if it's got its own set of problems, it results in a workable application that can be improved upon instead of some conceptual vapourware that never makes it onto a screen
[19:30:31] <fartface> That's the argument. It's not about whether I should or shouldn't learn something--of course knowing more is better, that's a shit argument.
[19:30:55] <fartface> But like I said, I do appreciate the help in figuring out what I needed to figure out--I got it sorted, and now like I said, back to work.
[19:31:52] <ciwolsey> Better tell everyone the most starred full stack framework on github is no good
[19:32:12] <ciwolsey> XD
[19:34:04] <fartface> ciwolsey: I'm not saying it's no good, it's an absolutely amazing piece of software that I'm eternally grateful for--I'm saying that to shit on using frameworks for the sake of shitting on a framework is an extremely narrow-minded thing to do
[19:34:28] <cheeser> frameworks++
[19:36:50] <fartface> What do you mean
[19:37:14] <ciwolsey> I was talking to everyone but you fartface
[19:37:24] <fartface> ciwolsey: Ah, cheers
[19:37:59] <ciwolsey> im a full time meteor dev
[19:39:05] <fartface> I hear where he's coming from, obviously it's ideal to know what's going on in the background for when things go wrong, but it's not necessary to know every little in and out before building an application, you'd never get anything built
[19:40:50] <appleboy> anyone know which windows clients for mongodb support the encryption used in 3.0.5 for authentication? mongovue and robomongo don’t
[19:48:19] <StephenLynx> fartface
[19:48:33] <StephenLynx> good luck listening to http without V8 under
[19:48:43] <StephenLynx> if you think node is mainly based on js
[19:48:46] <StephenLynx> you don't know node.
[19:49:21] <fartface> It was an analogy. I never claimed to 'know node'.
[19:49:32] <StephenLynx> then it was an awful analogy.
[19:49:36] <StephenLynx> and you got no point there.
[19:50:09] <fartface> Do you use jQuery
[19:50:19] <StephenLynx> no.
[19:50:20] <fartface> Or .net?
[19:50:22] <StephenLynx> no
[19:50:27] <StephenLynx> but .net is different
[19:50:31] <fartface> How so
[19:50:38] <fartface> It's a framework, isn't it
[19:50:40] <StephenLynx> no
[19:50:40] <fartface> Conceptually?
[19:50:44] <StephenLynx> .net is a vm
[19:51:07] <fartface> Like Java?
[19:52:22] <StephenLynx> yes.
[19:52:33] <StephenLynx> and even then
[19:52:41] <StephenLynx> that kind of framework serves a purpose
[19:52:46] <fartface> So does Meteor.
[19:52:49] <StephenLynx> it provides an interface to something.
[19:52:52] <StephenLynx> meteor doesn't.
[19:52:54] <fartface> So does Meteor.
[19:53:06] <StephenLynx> it provides an abstraction to an interface.
[19:53:20] <StephenLynx> it's different from .net or an sdk
[19:53:37] <fartface> That's fair--I'll give you that.
[19:54:19] <fartface> But let's say you're doing some shit tier app in VB.net, is it stupid to use VB.net without knowing VB inside and out?
[19:54:50] <fartface> You don't need to use the little draggers and interface crap in order to build the app, so isn't the app just introducing complexity and bugs?
[19:55:08] <StephenLynx> again
[19:55:10] <fartface> UI, not interface
[19:55:28] <fartface> It's making building the app simpler?
[19:55:32] <fartface> Mother of god what a concept!
[19:55:46] <StephenLynx> in that case the .net environment is not meant to be used directly, you are supposed to use the framework they provide to the engine.
[19:55:53] <StephenLynx> you are comparing oranges and apples.
[19:56:10] <fartface> OK, fine, I'll go another route that I'm more familiar with
[19:56:20] <fartface> You are familiar with jQuery, even if you don't use it yourself, yes?
[19:56:33] <StephenLynx> yes.
[19:56:48] <fartface> You feel as though using jQuery is akin to using Meteor
[19:57:19] <fartface> They're both evil frameworks
[19:57:41] <StephenLynx> not evil
[19:57:42] <StephenLynx> just bad.
[19:57:59] <StephenLynx> they are not trying to harm you in any way
[19:58:01] <fartface> So literally everything you code, you code from scratch.
[19:58:05] <StephenLynx> not really.
[19:58:12] <StephenLynx> I use libraries to handle standards
[19:58:27] <StephenLynx> like sending e-mails, cryptography and stuff
[19:58:29] <fartface> And how is using a library that much more distanced from using a framework?
[19:58:43] <fartface> A framework is a collection of libraries and tools, is it not?
[19:58:45] <StephenLynx> because these things are set in stone and do not change my software design.
[19:58:50] <StephenLynx> is not.
[19:58:59] <StephenLynx> a framework puts all your code inside a FRAME
[19:59:41] <fartface> So when I'm using jQuery, as a framework, it's not possible to write native javascript...
[20:00:16] <fartface> Meteor doesn't set things in stone either.
[20:00:17] <StephenLynx> it is, but then you are not using jquery
[20:00:24] <StephenLynx> hold on
[20:00:36] <StephenLynx> the things that are set in stone are the standards that justify using libraries
[20:00:48] <StephenLynx> you don't care how an e-mail is sent, it doesn't affect your program.
[20:00:58] <StephenLynx> you just want the message to reach its destination.
[20:01:07] <StephenLynx> those are the things I use a library
[20:01:30] <fartface> And if I don't care how a page is served, and it doesn't affect my program so long as it gets done, and I use Meteor in order to do that...
[20:01:44] <StephenLynx> the thing is
[20:01:50] <StephenLynx> there is much more to "a page being served"
[20:01:51] <fartface> No I understand what you're getting at
[20:01:53] <StephenLynx> is not a standard
[20:01:56] <StephenLynx> there isn't a documentation for that.
[20:02:09] <StephenLynx> there isn't a document on ISO saying how that is done
[20:03:23] <StephenLynx> there is one for HTTP though
[20:03:47] <StephenLynx> so its justifiable to use a library that handles requests and responses using HTTP
[20:05:01] <fartface> Alright, back to jQuery then, since it's apples and apples
[20:05:09] <fartface> And I feel like we're getting off track
[20:06:01] <fartface> Actually fuck this, why am I arguing over the internet, I've got shit to do.
[20:06:23] <fartface> Agree to disagree.
[20:06:26] <StephenLynx> no
[20:06:37] <fartface> You can hate on frameworks all you want, but they're not bad, period.
[20:06:41] <fartface> They're a means to an end.
[20:06:43] <StephenLynx> you are wrong.
[20:06:59] <fartface> In your mind, sure. I'm wrong.
[20:07:39] <fartface> But there are millions of businesses and developers using those same tools every day, and the world is still turning, so I have to side with their successes over your feigned frustration with them.
[20:07:49] <StephenLynx> yeah, there is also millions of people using PHP
[20:07:54] <StephenLynx> you are using a fallacy there
[20:07:56] <StephenLynx> ad populum
[20:07:58] <StephenLynx> or something
[20:08:03] <cheeser> hopefully not *millions*...
[20:08:07] <StephenLynx> you are saying "X is popular, so X is good"
[20:08:13] <StephenLynx> that is just a fallacy
[20:08:25] <deathanchor> I like frameworks, until they don't do what I want, like django is nice, but doesn't play well with mongo as its primary.
[20:09:47] <StephenLynx> yeah, frameworks put a wall on what you can or can't do
[20:09:47] <fartface> I'm not saying "x is popular so x is good", I'm saying "x is being used by millions and hasn't completely fucked them over, therefore I'm probably safe to use x"
[20:10:50] <fartface> I'm not saying because it's popular it's good, I'm saying "I can see what other people have created with X, and I would like to create something, therefore X is an option"
[20:10:54] <StephenLynx> that depends on your notion of safe.
[20:10:56] <StephenLynx> for example
[20:10:59] <StephenLynx> is it safe to eat pizza?
[20:11:09] <StephenLynx> is it safe to eat pizza every day on every meal?
[20:11:09] <fartface> Man, is everything binary to you?
[20:11:26] <StephenLynx> but my point is exactly non-binary
[20:11:37] <fartface> No, you're making an extremely binary point.
[20:11:45] <fartface> "ALL frameworks are bad"
[20:11:48] <fartface> That's binary.
[20:11:57] <fartface> That's the very definition of binary.
[20:12:04] <fartface> ALL or NONE, binary.
[20:12:17] <fartface> True, or False.
[20:12:32] <StephenLynx> I just said that frameworks that provide an interface to something are good
[20:12:40] <fartface> And that using them is bad
[20:12:40] <StephenLynx> like sdk or .net
[20:12:45] <StephenLynx> I never said that.
[20:13:17] <StephenLynx> I said the opposite, that you should use them in this case, because the tool was designed to be used behind the provided framework
[20:30:57] <tubbo> "All frameworks are bad" - fartface
[20:30:59] <tubbo> lol
[20:31:11] <tubbo> i'm not making a judgment call on the quote, just that it was spoken by a man named fartface
[20:31:13] <tubbo> or woman
[20:31:15] <tubbo> whatever
[20:34:36] <afroradiohead> I semi-agree
[20:36:10] <cheeser> so ... "some frameworks are bad" ?
[20:36:20] <cheeser> because that's a reasonable thing to think.
[20:37:17] <afroradiohead> "some frameworks are good, some frameworks are bad, all frameworks are good, all frameworks are bad"
[20:37:28] <afroradiohead> yeah some sounds more reasonable
[20:37:44] <StephenLynx> frameworks that don't provide an interface to something that is supposed to be interfaced are bad.
[20:37:47] <StephenLynx> that is my point.
[20:37:48] <StephenLynx> cheeser
[20:39:06] <cheeser> "something that is supposed to be interfaced" is a meaningless abstraction
[20:39:14] <StephenLynx> is not.
[20:39:25] <StephenLynx> that comes from the person designing this thing.
[20:39:33] <cheeser> it means nothing to *me* so ... whatever
[20:39:41] <StephenLynx> you can design it with the purpose of having it used directly.
[20:40:03] <StephenLynx> or to have a framework to access its features and the final developer using the framework.
[20:40:16] <StephenLynx> examples:
[20:40:25] <StephenLynx> sdk, unreal engine, android
[20:40:29] <StephenLynx> jdk*
[20:41:35] <afroradiohead> is there ever a final developer.. muahaha
[20:42:03] <StephenLynx> from the person developing the tool's perspective, yes.
[20:42:27] <StephenLynx> the person making an app to publish on google play, the person making a java program, the person making a game with unreal engine
[20:42:55] <StephenLynx> the people developing these tools work with this final developer in mind.
[20:43:30] <afroradiohead> so they provide a framework for these "final developers" to build on?
[20:44:03] <StephenLynx> yes.
[20:44:07] <afroradiohead> mm
[20:44:39] <StephenLynx> this is what I meant when I said about frameworks that provide an interface.
[20:45:22] <StephenLynx> when you write browser javascript, for example, you are supposed to write js directly using the interface provided by browser vendors.
[20:45:34] <StephenLynx> jquery adds a layer of abstraction on top of this interface
[20:45:51] <deathanchor> can you $hint with the aggregation framework?
[20:46:09] <deathanchor> docs don't say anything about using it for aggregation
[20:51:54] <MacWinne_> got a nuanced question on mongo indexes and how they grow.. I have about 5million documents that have an indexed field. these are old documents. All new documents do not contain this field. Do the new documents somehow increase the existing index size?
[20:52:17] <deathanchor> did you use sparse?
[20:52:23] <MacWinne_> ie, can I leave the index in place and not worry about it growing as new documents come in that do not contain the index'd field.. or are indexes somehow storing negative values?
[20:52:40] <MacWinne_> deathanchor, I don't recall specifying that when creating the index.. is that something done at creation time?
[20:53:06] <deathanchor> MacWinne_: http://docs.mongodb.org/manual/core/index-sparse/
[20:53:25] <MacWinne_> awesome, thanks !
[20:53:53] <deathanchor> if it's not sparse, it is storing it for docs with null (or missing) fields
[20:54:01] <deathanchor> that's the gist
[20:54:34] <deathanchor> sparse indexes ignore docs without the field or null values
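In shell terms (the field name is illustrative), the difference deathanchor is describing:

```javascript
// Default index: documents missing "legacyField" still get an entry
// (indexed as null), so the index keeps growing with every insert.
db.docs.createIndex({ legacyField: 1 })

// Sparse index: documents without the field are left out entirely,
// so new documents that omit it never touch the index.
db.docs.createIndex({ legacyField: 1 }, { sparse: true })
```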
[22:11:21] <Doyle> Hey. I'm watching a mongodb server sync atm and am wondering why it stops receiving data from the sync target while it's performing the rsSync Index Build tasks.
[22:56:57] <Xapht> Hello everyone! I have a mongoDB document (https://gist.github.com/hoittr/2292fc2c978db5d95ac5).. And was wondering how I would construct a query to return only the totalCount and totalValue fields from each nested layer of the doc. (Would make sense when looking at the doc)