[00:15:22] <appleboy> has anyone run into an issue with auth enabled where you can authenticate via code or the command line, but not with mongovue or robomongo? both of those give invalid credentials, but the creds are copy/pasted in
[02:20:40] <samsamsams> hey all i am trying to have a new node join an existing replica set. when i do rs.add(<addr>) i get some weird output on my new machine: https://gist.github.com/samuraisam/c7ff7f14b49e1b710891
[02:21:11] <samsamsams> ultimately it never joins with "Cannot find self in new replica set configuration; I must be removed; NodeNotFound: No host described in new configuration 11 for replica set gReplSetWest10 maps to this node"
[03:58:07] <hemmi> Hey there. I'm building an ad platform that has a notion of publishers, advertisers, sites, zones, creatives, campaigns and flights. My app won't actually be serving any ads because I'm just configuring a third-party ad network. This seems like a pretty good fit for a more traditional (rdbms) database. I am however interested in mongodb for its flexible
[03:58:07] <hemmi> schema for quicker development. Is there any reason against using mongo for something like this?
[05:02:37] <oromongo> Hello, simple question from a beginner: Can I add an element at a specific index? In my case I want to insert a large number of documents and keep them in alphabetical order
[05:03:30] <oromongo> without keeping all the database in memory during the insert, of course
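MongoDB doesn't support inserting a document at a specific position; the usual approach is to insert in any order, index the sort field, and sort at read time. A minimal sketch of that idea in Python (the `alphabetical` helper and the pymongo calls in the comments are illustrative, not from the discussion):

```python
def alphabetical(docs, key="name"):
    """Read-time sorting replaces "insert at the right place":
    documents can be inserted in any order."""
    return sorted(docs, key=lambda d: d[key])

# With pymongo (assumed setup, needs a running mongod):
#   coll.create_index("name")             # keeps the sort cheap
#   coll.insert_many(docs)                # insertion order is irrelevant
#   cursor = coll.find().sort("name", 1)  # alphabetical, streamed by the server
```

With an index on the field, the server returns documents in order without the client holding the whole collection in memory.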
[07:44:37] <parallel21> When creating an index in mongodb
[07:44:42] <parallel21> How does one search on that index
[07:45:10] <parallel21> Or does that happen automatically?
[07:45:45] <parallel21> so when searching on a field that is not indexed, it will search the entire collection
[07:45:55] <parallel21> Whereas if it were indexed, it would search the index?
[07:46:10] <rkgarcia> the index is used automatically
[07:46:28] <rkgarcia> you need an index to search faster
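As rkgarcia says, the query planner picks an index on its own; you can confirm it with `explain()`. A sketch, assuming a pymongo-style explain document; `uses_index` is a hypothetical helper, not a driver API:

```python
def uses_index(explain_doc):
    """Walk an explain() winningPlan and report whether any stage is an IXSCAN."""
    def stages(plan):
        yield plan.get("stage")
        if "inputStage" in plan:
            yield from stages(plan["inputStage"])
        for p in plan.get("inputStages", []):
            yield from stages(p)
    plan = explain_doc["queryPlanner"]["winningPlan"]
    return "IXSCAN" in set(stages(plan))

# With pymongo (assumed, needs a server):
#   coll.create_index("email")
#   uses_index(coll.find({"email": "a@b.c"}).explain())
```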
[07:47:12] <MadWasp> Hello guys, I have a collection of contacts and I need to give every single contact a hash. This should be a sha1 of the contact's email and a secret field of another entity referenced with a DBRef. Is there a command I can use for that or will I have to write a migration in some language?
[07:49:13] <rkgarcia> MadWasp, the mongodb shell doesn't have a sha1 function, as far as I know
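So a short migration in a driver language is the usual route. A hedged sketch in Python; the field names (`email`, `ref`, `hash`) and the `secrets` lookup are assumptions for illustration, not taken from MadWasp's actual schema:

```python
import hashlib

def contact_hash(email, secret):
    """sha1 over the contact's email plus the referenced entity's secret field."""
    return hashlib.sha1(email.encode("utf-8") + secret.encode("utf-8")).hexdigest()

def migrate(contacts, secrets):
    # contacts: iterable of contact docs; secrets: maps the DBRef'd id -> secret value.
    for c in contacts:
        c["hash"] = contact_hash(c["email"], secrets[c["ref"]])
    return contacts

# With pymongo you'd iterate db.contacts.find(), resolve each DBRef, and issue
#   db.contacts.update_one({"_id": c["_id"]}, {"$set": {"hash": ...}})
```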
[09:06:06] <Kosch> hey guys. I need to build my own rpms with ssl support. I compiled 2.6.11 using scons on centos6 (scons -j 2 --64 --ssl all) and I'm wondering why the created executables are so big (~350MB). Did I miss something?
[09:53:10] <m4k> How do I store data into redis after fetching it from mongo ? I'm using pymongo
[10:07:58] <Derick> m4k: better to ask in a redis channel
[12:22:53] <ggoodman> I have an existing replica set cluster running behind a firewall and would like to add authentication + authorization so that I can connect from machines not on the same private network.
[12:24:29] <ggoodman> I understand that the first step is to add root / admin users, then add a common keyFile to each server.
[12:24:49] <ggoodman> Is there a procedure where this can be done without downtime?
[12:27:17] <deathanchor> ggoodman: you have to have downtime because you have to turn on the auth option on the mongod and add the authentication to your client apps
[12:28:53] <ggoodman> deathanchor: thanks. Can you clarify if adding root and admin users will automatically propagate across the cluster and whether this will 'turn on' the requirement for client authentication?
[12:29:09] <deathanchor> ggoodman: I don't know if this would work, but add auth to your client apps, add users to db, restart secondary with auth, stepdown primary, restart primary with auth.
[12:29:42] <deathanchor> ggoodman: no, auth isn't enforced until you turn on the auth option via the command line/conf file
[12:30:16] <ggoodman> It occurs to me that perhaps adding the keyFile will require down-time, what do you think?
[12:30:23] <deathanchor> I would test it out on a simple replicaset before doing anything in production
[12:31:24] <deathanchor> ggoodman: I'm not sure about the SSL options
[12:32:05] <ggoodman> Haha, you suggesting no cowboy ops?
[12:32:49] <deathanchor> ggoodman: I'm only cowboy about something I'm absolutely sure about.
[12:32:59] <deathanchor> hence the dry run before cowboying around prod.
[12:33:26] <ggoodman> Wise words. Thanks for the tips.
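The rolling sequence deathanchor describes can be sketched as follows. This is an untested ops sketch, not a verified procedure: the keyFile path and the `<name>` placeholder are illustrative, and you should dry-run it on a test replica set first, as suggested above.

```shell
# Generate a keyFile once and copy it to every member (contents must match everywhere)
openssl rand -base64 741 > ./mongodb-keyfile
chmod 600 ./mongodb-keyfile

# Then, one member at a time:
#   1. create the root/admin users on the primary while auth is still off
#   2. restart each secondary with: mongod --replSet <name> --keyFile ./mongodb-keyfile
#      (a keyFile both authenticates members to each other and enables client auth)
#   3. rs.stepDown() on the primary, then restart it the same way
```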
[12:55:32] <seiyria> hey all, I'm working with nodejs and I have a really long running operation. I'm lazyloading my data from mongo via a cursor, but the cursor gets exhausted after processing about 17k items (after about an hour or so of being open)
[12:55:47] <seiyria> there's about a million items so that's kind of a problem
[14:04:44] <d-snp> servers mongo-main-config-1:27019 and mongo-main-config-2:27019 differ
[14:04:48] <seiyria> on a scale of one to screwed.. better get the screwdriver
[14:05:24] <d-snp> could not verify that config servers are in sync :: caused by :: config servers mongo-main-config-1:27019 and mongo-main-config-2:27019 differ: { chunks: "f7cb87489701a0d48d7937a3fc81346e", collections: "56d343775451318c30204f93012b94a8", databases: "ad4d1c6e39fc63c2c8cfdfd4b28d3f50", shards: "7e5d93853282128782a8c0d2aaf2436b", version: "7f88f40e752a34474bd18e3ac6db8371" } vs { chunks: "b052fdc0049db74960b6c475bde18879", collections: "56d343775451318c30204f93012b94
[14:40:24] <Lonesoldier728> hey mongodb-ians I wanted to implement redis as a mem cache kind of deal to avoid constant same queries on mongo
[14:40:43] <Lonesoldier728> has anyone ever implement the two together or dealt with redis, trying to figure out if it makes sense
[14:41:17] <cheeser> at a previous gig we did that with couch
[14:41:45] <cheeser> but it was a CMS and we did some complex combinations of documents returned for a web request
[14:41:49] <StephenLynx> IMO it doesn't make much sense
[14:41:55] <StephenLynx> because mongo has its own memory cache.
[14:42:03] <cheeser> if you're just caching the untouched docs, it doesn't make much sense.
[14:43:45] <Lonesoldier728> for example, this is what I thought I'd use it for (or does mongo already do this automatically and I just had no clue)... let's say a person hits the home page and mongo grabs the most recent 50 items; someone else hits it, same 50 items. I figured I'd just put the 50 items in redis and grab them from there
[14:44:14] <StephenLynx> these 50 items would probably already be on mongo's RAM
[14:44:23] <StephenLynx> but I am not 100% sure on that.
[14:49:24] <d-snp> if your entire dataset fits in RAM using mongodb is probably alright
[14:49:34] <StephenLynx> I know at least GothAlice used to use cache software like redis and then ditched it once she moved that scenario onto mongo
[14:50:31] <Lonesoldier728> another question: I'm working with android/iphone apps where follows/likes happen in the app. every time a user follows/unfollows or likes/unlikes I currently hit the servers right away and update... is that taxing enough that it's better to cache it on the user's client side (sqlite) and send the changes in a batch after a while? kind of confused whether it's only an issue when it comes to scaling
[14:51:40] <d-snp> Lonesoldier728: doing them in batches is more complex, what happens if your app exits before the changes are sent?
[14:53:58] <GothAlice> I'm also using MongoDB to replace Celery: https://github.com/marrow/task#readme
[14:55:26] <GothAlice> with MarrowTaskExecutor() as executor: executor.submit(hello, "World") # And it's that simple to use.
[14:56:08] <StephenLynx> so yeah, you don't need a cache on top of mongo, from what it seems, Lonesoldier728
[14:56:41] <GothAlice> Nor do you need another scalable infrastructure for realtime stuff; MongoDB capped collections work great as extremely low-latency push queues.
[14:56:57] <StephenLynx> if you are really concerned with efficiency on reads
[14:57:04] <StephenLynx> you can generate an HTML page with the data
[14:57:18] <StephenLynx> and update the page when the data changes.
[14:57:41] <StephenLynx> or a json if you are serving to stuff that isn't a web browser.
[14:57:51] <StephenLynx> you can even use gridfs for that.
[14:58:05] <StephenLynx> so you can keep everything on mongo.
[14:58:34] <GothAlice> Well, JSON if you are serving to browsers; for everything else, anything goes (for example, you could serve the raw BSON returned by the mongo client, or MessagePack for efficient sharing with C or other low-level code).
[14:59:12] <GothAlice> (I'm a fan of passing BSON around, as it lets me use the client drivers in various languages to process the data.)
[15:00:01] <cheeser> GothAlice: using the node drivers in the browser to handle the bson there?
[15:00:14] <seiyria> so I actually looked around in the nodejs mongo client source and I couldn't find any instance of 'was destroyed', let alone 'connection to host %host was destroyed'
[15:00:29] <GothAlice> cheeser: Nope. In the case of browser clients, JSON is the only way to go due to local optimizations.
[15:00:29] <StephenLynx> the problem with serving json to web browsers is that you are requiring the client to have js.
[15:01:08] <GothAlice> StephenLynx: Even my headless browsers for test automation have JS.
[15:01:16] <seiyria> StephenLynx, it's not unrealistic to assume a client has js
[15:01:24] <GothAlice> Even 'links', the ncurses text-based web browser has JS.
[15:01:26] <StephenLynx> last time I checked, about 15% of people do not have js enabled.
[15:01:40] <StephenLynx> it's not just having it, it's having it enabled.
[15:01:50] <seiyria> then I'm not developing for that audience lol
[15:02:27] <StephenLynx> I never said it was completely unacceptable to require js.
[15:02:56] <GothAlice> StephenLynx: My own sites are built with 100% fallback. I.e. if you click an "action" link in a data table, by default JS will capture the event, perform an XHR, if the result is JSON it'll then parse the JSON in an attempt to figure out what the next step is which is usually displaying a modal. The modal HTML content is loaded as a second mime-multipart section in the returned XHR.
[15:03:20] <GothAlice> In the event JS is disabled, the link clicks through to the actual handler, the server recognizes it's not an XHR, and injects the modal content into the site template as if it were not a modal.
[15:03:26] <GothAlice> Bam: everything works, JS or not.
[15:04:07] <StephenLynx> I just build assuming the user does not have JS in the first place, and use JS for stuff that can't be done without it, or for optional, more responsive methods of interaction.
[15:06:13] <GothAlice> Captchas being ways to make legitimate users' lives more difficult, while not actually stopping automated attackers.
[15:06:38] <StephenLynx> it stops skiddies on a curl in a loop
[15:06:39] <Papierkorb> GothAlice: i guess that for the button -> input, the input has an opacity: 0 which you set to opacity: 1 with a :hover and a transition?
[15:08:07] <StephenLynx> you have to check it out then
[15:08:31] <seiyria> I have a long running cursor: let appCursor = allDB.Apps.find(query, {storeId: 1}, {sort: {ratingCount: -1, rating: -1}, timeout: false});
[15:08:31] <seiyria> and I have the intention of using this cursor to get entries out one at a time for a few hours at least
[15:08:45] <GothAlice> Cursors don't live that long.
[15:08:51] <seiyria> even if I tell them to not timeout?
[15:08:54] <GothAlice> You're going to need to add retrying behaviour in an outer loop.
[15:09:07] <GothAlice> There's a server-side hard limit on the maximum duration, AFAIK.
[15:09:34] <seiyria> hm. so that's weird, actually. it seems to time out for me after a specific number of records each time. I've gotten to exactly 17k yesterday, 2k today, and also 7k today
[15:09:36] <GothAlice> Setting "no timeout" as an option simply lets the cursor live as long as the maximum. I.e. it won't time out waiting for you to request the next batch of results, but that won't stop the overall cursor from being culled.
[15:09:51] <seiyria> the time duration seems to be wholly inconsistent between attempts
[15:10:09] <Papierkorb> StephenLynx: After googling a bit, it seems like 1.3% of all users turn off JS
[15:10:25] <StephenLynx> so it's much less than I thought.
[15:10:46] <GothAlice> StephenLynx: More of my users attempt to "recover their password" before even signing up than aren't running JS. (For serious.)
[15:11:25] <Papierkorb> it's pretty hard to find recent numbers, the most recent ones are from 2013 http://ux.stackexchange.com/questions/45229/should-i-optimize-my-website-for-non-javascript-users but I honestly don't think that many actually disable JS anymore. Especially not those who use social media sites.
[15:12:09] <GothAlice> seiyria: I'd instrument the code to identify time spent waiting on the cursor vs. processing what the cursor returns.
[15:12:50] <GothAlice> Papierkorb: And yet the world steals US Netflix. XP (I mention that site because they recently had a big to-do about upgrading the site's JS.)
[15:13:44] <GothAlice> seiyria: However, for any long running cursor, it's important to have a retry mechanism in an outer loop with sensible catching of certain exceptions. I.e. those timeout ones.
[15:14:15] <GothAlice> https://gist.github.com/amcgregor/4207375#file-3-queue-runner-py-L11-L19 being a naive example.
[15:14:30] <seiyria> GothAlice, the tricky part is that I only know the index of my last processed item, and I'm not sure I can guarantee that if I $skip that many items I'll end up in the correct place
[15:14:48] <GothAlice> Sort on _id, track the last _id, $gt retry from there.
[15:14:50] <seiyria> I don't really want to store every id/etc that I've processed because there's over a million records
[15:14:56] <GothAlice> A la: https://gist.github.com/amcgregor/52a684854fb77b6e7395#file-worker-py-L85-L110
[15:15:06] <GothAlice> Don't store all _ids, only store the last one.
[15:15:19] <seiyria> well, I have them sorted right now by some criteria that's sensible for processing order
[15:15:29] <seiyria> although if I have to go through them all anyway, that's arbitrary and pointless
[15:15:48] <GothAlice> (And $gt will be infinitely faster than $skip. $skip requires generating, but throwing out, the skipped results, requiring a walk of the btree index. Slow as heck.)
[15:16:29] <seiyria> so, can I just do something like {$gt: 'myLastId'} ?
[15:16:35] <seiyria> if they're sorted that way of course
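The retry-from-last-_id pattern GothAlice describes, sketched in Python (assumes a pymongo-style collection; `resume_filter` and `process_all` are hypothetical names, not driver APIs):

```python
def resume_filter(base_query, last_id):
    """Rebuild the query so a fresh cursor starts just past the last processed _id."""
    q = dict(base_query)
    if last_id is not None:
        q["_id"] = {"$gt": last_id}
    return q

def process_all(coll, base_query, handle):
    # Sort on _id so that $gt resumption is correct across retries.
    last_id = None
    while True:
        try:
            for doc in coll.find(resume_filter(base_query, last_id)).sort("_id", 1):
                handle(doc)
                last_id = doc["_id"]
            return
        except Exception:   # in practice, catch only the driver's cursor-timeout errors
            continue        # reopen from last_id; no $skip, only one _id stored
```

Note this sorts on _id rather than ratingCount; since every document gets processed anyway, the original sort order was arbitrary.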
[15:22:07] <GothAlice> Yeah, don't use WiredTiger yet.
[15:22:16] <d-snp> should I file an issue, it's rather vague
[15:22:26] <GothAlice> (Not unless you also have a commercial support contract to get 10gen to help out with issues like this.)
[15:22:52] <GothAlice> I'd search around for an existing ticket first, but if you can't find one, try to create the smallest reproducible example and submit your own.
[15:24:35] <cheeser> GothAlice: have you tried the latest dev builds?
[15:25:29] <cheeser> i'll be curious to see how 3.2 treats you.
[15:25:39] <GothAlice> I'd love to find out, once it's released. XP
[15:25:47] <cheeser> we've been running WT since before 3.0 and it works quite well for us.
[15:26:02] <cheeser> did those issues you filed get resolved?
[15:26:22] <GothAlice> Yup, seems to vary entirely based on load. My migration scripts, if forced to recalculate our pre-aggregated click stats from real click data, can reliably nuke an entire cluster, if that cluster is running WT.
[15:29:10] <GothAlice> This is what I imagine a pool of segfaulting processes is thinking: https://youtu.be/5ZARafUuhpY?t=32s
[15:44:43] <d-snp> if mongodb allows stale reads, isn't it unsafe to use mongodb for the config servers?
[15:45:33] <d-snp> they're supposed to be linearizable right?
[15:45:47] <GothAlice> d-snp: Reads from secondaries can be stale by up to the replication lag, but primaries are linearized.
[16:02:21] <aadityatalwai> Anyone experienced weirdness with Mongo Cloud Manager role permissions? I have a role that has 'ClusterMonitor' and 'readAnyDatabase' access, but still can't execute 'serverStatus' for some reason. Any ideas what's happening?
[16:16:25] <d-snp> so, I fixed the config servers some time ago, but I still see "splitChunk failed - .... "could not acquire collection lock" .. does that mean stuff still isn't ok?
[16:16:44] <d-snp> or are these from locks requested a long time ago? should I restart mongos instances?
[16:55:35] <StephenLynx> GothAlice how would you go about storing ipv6 addresses in a numerical format in mongodb?
[16:56:00] <GothAlice> MongoDB (BSON, specifically) doesn't have a 128 bit numerical format.
[17:00:47] <GothAlice> IPv6 is so excessive that my own personal assignment covers way, way, way too many hosts. :| (Last 8 bytes of the address.) It's crazy, but means I can self-assign silly addresses like …::dead:beef:cafe:babe
[17:03:22] <d-snp> mongo-main-10:27017 believes it is primary, but its election id of 55d6053ea13854151cf138c6 is older than the most recent election id for this set, 55d6059a130a3bb1ee7fae35
[17:03:30] <d-snp> this should resolve itself right?
[17:09:23] <d-snp> this shard just won't believe it's not primary
[17:09:31] <d-snp> I restarted it and it still thinks it's primary
[17:09:46] <GothAlice> d-snp: If it really is old, nuke it and let it re-sync from scratch.
[17:10:00] <d-snp> you mean nuke the entire dataset?
[17:10:21] <GothAlice> d-snp: Only as long as a) you have backups, or b) you are entirely confident that the current active primary contains the latest data.
[17:11:11] <d-snp> and the data is a bit too large to recreate on a whim
[17:11:22] <tubbo> GothAlice: is this because JSONB isn't actually the BSON standard?
[17:12:28] <GothAlice> tubbo: Considering its newness, could you dig up a link to the JSONB specification for me?
[17:12:47] <GothAlice> (Google isn't being very helpful for me in this regard.)
[17:13:16] <d-snp> oh it stopped saying stuff about shard4 so I think it's fine now
[17:13:25] <d-snp> the other shards also lost primaries..
[17:13:42] <tubbo> GothAlice: does JSONB (the postgres one) even have a spec? i could only find one for BSON, MongoDB's approach to this problem: http://bsonspec.org/
[17:13:54] <GothAlice> tubbo: Indeed, that's BSON, not JSONB.
[17:15:05] <GothAlice> It appears to be a proprietary internal format for Postgres. I really can't seem to find any form of specification, and the client driver translates JSONB into ordinary objects/JSON, or so it looks according to their docs.
[18:17:59] <GothAlice> Well this is curious: I can "mongo" into my local dev server, but "mongorestore" returns: "no reachable servers".
[18:18:58] <StephenLynx> maybe your system is preventing the restore executable from connecting?
[18:19:30] <GothAlice> I actually went out of my way to custom-sign the mongo tools with my own developer cert to ensure the system-level protections don't block them. ;P
[18:19:56] <GothAlice> And with --verbose, it worked.
[18:21:36] <GothAlice> Nah, something else is going on. 3/4 runs with --verbose worked, 1/5 runs without --verbose worked, with no particular pattern to the successes and failures.
[18:24:50] <samsamsams> hey all i am trying to have a new node join an existing replica set. when i do rs.add(<addr>) i get some weird output on my new machine: https://gist.github.com/samuraisam/c7ff7f14b49e1b710891
[18:24:52] <samsamsams> ultimately it never joins with "Cannot find self in new replica set configuration; I must be removed; NodeNotFound: No host described in new configuration 11 for replica set gReplSetWest10 maps to this node"
[18:28:58] <samsamsams> do you think MMS (cloud.mongodb.com) is making that change?
[18:29:09] <samsamsams> i am trying to add the node by hand instead of by mms
[18:29:29] <samsamsams> using rs.add() rather than the MMS interface
[18:34:10] <cheeser> you probably shouldn't be manually mucking with managed services like that.
[18:34:29] <samsamsams> trying to migrate off MMS fwiw
[18:34:40] <StephenLynx> ok, so I will store ips as a number array and ranges as the first half of the array. what do you think about that?
[18:34:52] <StephenLynx> I can support both ipv4 and 6
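StephenLynx's number-array scheme can be sketched with Python's stdlib `ipaddress` module; `ip_to_array` is a hypothetical helper, since BSON has no 128-bit integer type as GothAlice noted:

```python
import ipaddress

def ip_to_array(ip_str):
    """One array element per byte: 4 entries for IPv4, 16 for IPv6.
    A v6 prefix/range can then be matched on a leading slice of the array."""
    return list(ipaddress.ip_address(ip_str).packed)
```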
[18:55:50] <fartface> I'm working on a meteor project (new to both meteor and mongo), and part of what I'm looking to do is to run a "find" on a nested attribute value. I'm guessing the mongo syntax is pretty similar to the meteor (minimongo) syntax, but can anyone point me in the right direction? https://gist.github.com/jonwest/e878507c1844d29e0087
[18:55:59] <fartface> There's what I'm trying to do and the problem I have
[18:57:06] <cheeser> i wouldn't store yearAdded as a string
[18:57:39] <cheeser> but your query would look something like { 'tags.yearAdded': '2015' }
[18:57:53] <fartface> cheeser: It's just sample data, and only conceptually similar to my actual problem (simplified for asking purposes)
[18:58:26] <atomicb0mb> hello guys, i have some doubts about the mongodb university course. Is this the right place to ask? It's about week 2, importing reddit
[18:58:54] <cheeser> atomicb0mb: reddit is not a support forum. reporting problems there is *extremely* unlikely to get noticed.
[18:59:09] <cheeser> atomicb0mb: you should post such things to the mongodb-users lists
[19:01:02] <atomicb0mb> I'm sorry cheeser, I wasn't clear. The problem is when I use the request module to import a .json file (from the reddit site, but it could be any other one)
[19:02:07] <fartface> In Meteor, I get a javascript error when I use {tags.title} (unexpected '.')
[19:02:16] <fartface> So I must be going about it the wrong way
[19:02:30] <fartface> I've asked in there but nobody seems to be around, I'll keep trying haha, thanks guys!
[19:03:22] <StephenLynx> I suggest you don't use meteor
[19:03:25] <fartface> OH! I know how I need to do it thanks to the docs
[19:03:22] <atomicb0mb> everything worked ok. I could retrieve the json, parse it, and insert it into my database. But when I do a console.dir(data) I get {_bsontype: 'ObjectID', id: 'UÖ!\u001bLÏWåC¾4' }. But in the example that I downloaded, instead of that, I got the actual data that was being imported.
[19:04:06] <StephenLynx> because any web framework is a pointless overhead. it will add a number of bugs and vulnerabilities, will eat your performance, and will not provide anything good.
[19:04:32] <atomicb0mb> So... i picked up my file and moved it into the example folder... and it worked fine... So the problem was with the example's "node_modules" folder... maybe because of the versions?
[19:04:41] <fartface> So abstraction is pointless overhead?
[19:05:17] <StephenLynx> depends on what you are abstracting.
[19:05:31] <fartface> I get where you're coming from, but comparing the time required to build a prototype in something like Meteor (or even jQuery) vs building from scratch, the benefits definitely outweigh the shortcomings.
[19:05:45] <StephenLynx> can you put that on a graph?
[19:06:16] <StephenLynx> I can put all the bugs, vulnerabilities, performance issues and complexity points on one.
[19:07:37] <StephenLynx> I at least hope you had a year or two of solid experience with node/io.js before you started using meteor.
[19:07:48] <StephenLynx> so you can know what you are doing behind the scenes.
[19:08:16] <fartface> I'll put it another way--if you're teaching someone how to read, do you give them The Art of War and tell them the only way to read is to go balls deep, or do you start them off on something short and simple and introduce complex rules as needed
[19:08:30] <fartface> Two different schools of thought really
[19:08:38] <StephenLynx> but giving someone a whole book is exactly what you do with a framework.
[19:09:14] <StephenLynx> introducing the basics would be using the runtime environment vanilla, so the person can understand the base tool.
[19:09:56] <StephenLynx> without that, the whole thing is pretty much black magic.
[19:10:40] <StephenLynx> you will be just mindlessly writing code if you don't understand the consequence of your work.
[19:11:00] <fartface> Totally hear where you're coming from, but if I'm learning about "x" in node, and have nowhere to use it, I'm going to forget it. At least with this approach I'm learning a brief overview and as needed I can delve deeper into things which require more insight
[19:13:10] <fartface> OK, so if I'm building a site that'll take a day in Meteor, I should rather spend a year learning node, then spend another few weeks rebuilding the site from scratch in node, because my year of learning node and scratch code will somehow have less bugs than an entire team of experienced developers.
[19:13:29] <StephenLynx> you are exaggerating those time estimates.
[19:13:36] <StephenLynx> it doesn't take a year using node to build a site.
[19:13:45] <deathanchor> the devs who "use mongo" here don't know the basics and are constantly getting things wrong with how they setup the data models and queries.
[19:14:01] <deathanchor> I'm always prodding them to change this code or that
[19:14:07] <fartface> deathanchor: That angle I can understand.
[19:28:34] <fartface> Node's a framework too--why would you use node to create a server when you could write straight javascript to do the same thing? that's the same argument, node is just introducing its own set of bugs and complexities.
[19:29:39] <fartface> If Meteor abstracts those concepts away and results in a quicker build, even if it's got its own set of problems, it results in a workable application that can be improved upon instead of some conceptual vapourware that never makes it onto a screen
[19:30:31] <fartface> That's the argument. It's not about whether I should or shouldn't learn something--of course knowing more is better, that's a shit argument.
[19:30:55] <fartface> But like I said, I do appreciate the help in figuring out what I needed to figure out--I got it sorted, and now like I said, back to work.
[19:31:52] <ciwolsey> Better tell everyone the most starred full stack framework on github is no good
[19:34:04] <fartface> ciwolsey: I'm not saying it's no good, it's an absolutely amazing piece of software that I'm eternally grateful for--I'm saying that to shit on using frameworks for the sake of shitting on a framework is an extremely narrow-minded thing to do
[19:39:05] <fartface> I hear where he's coming from, obviously it's ideal to know what's going on in the background for when things go wrong, but it's not necessary to know every little in and out before building an application, you'd never get anything built
[19:40:50] <appleboy> anyone know which windows clients for mongodb support the encryption used in 3.0.5 for authentication? mongovue and robomongo don’t
[19:53:06] <StephenLynx> it provides an abstraction to an interface.
[19:53:20] <StephenLynx> its different than .net or sdk
[19:53:37] <fartface> That's fair--I'll give you that.
[19:54:19] <fartface> But let's say you're doing some shit tier app in VB.net, is it stupid to use VB.net without knowing VB inside and out?
[19:54:50] <fartface> You don't need to use the little draggers and interface crap in order to build the app, so isn't the app just introducing complexity and bugs?
[19:55:28] <fartface> It's making building the app simpler?
[19:55:32] <fartface> Mother of god what a concept!
[19:55:46] <StephenLynx> in that case the .net environment is not meant to be used directly; you are supposed to use the framework they provide on top of the runtime.
[19:55:53] <StephenLynx> you are comparing oranges and apples.
[19:56:10] <fartface> OK, fine, I'll go another route that I'm more familiar with
[19:56:20] <fartface> You are familiar with jQuery, even if you don't use it yourself, yes?
[20:00:36] <StephenLynx> the things that are set in stone are the standards that justify using libraries
[20:00:48] <StephenLynx> you don't care how an e-mail is sent, it doesn't affect your program.
[20:00:58] <StephenLynx> you just want the message to reach its destination.
[20:01:07] <StephenLynx> those are the things I use a library for
[20:01:30] <fartface> And if I don't care how a page is served, and it doesn't affect my program so long as it gets done, and I use Meteor in order to do that...
[20:06:59] <fartface> In your mind, sure. I'm wrong.
[20:07:39] <fartface> But there are millions of businesses and developers using those same tools every day, and the world is still turning, so I have to side with their successes over your feigned frustration with them.
[20:07:49] <StephenLynx> yeah, there are also millions of people using PHP
[20:07:54] <StephenLynx> you are using a fallacy there
[20:08:25] <deathanchor> I like frameworks, until they don't do what I want, like django is nice, but doesn't play well with mongo as it's primary.
[20:09:47] <StephenLynx> yeah, frameworks put a wall around what you can and can't do
[20:09:47] <fartface> I'm not saying "x is popular so x is good", I'm saying "x is being used by millions and hasn't completely fucked them over, therefore I'm probably safe to use x"
[20:10:50] <fartface> I'm not saying because it's popular it's good, I'm saying "I can see what other people have created with X, and I would like to create something, therefore X is an option"
[20:10:54] <StephenLynx> that depends on your notion of safe.
[20:13:17] <StephenLynx> I said the opposite, that you should use them in this case, because the tool was designed to be used behind the provided framework
[20:30:57] <tubbo> "All frameworks are bad" - fartface
[20:41:35] <afroradiohead> is there ever a final developer.. muahaha
[20:42:03] <StephenLynx> from the person developing the tool's perspective, yes.
[20:42:27] <StephenLynx> the person making an app to publish on google play, the person making a java program, the person making a game with unreal engine
[20:42:55] <StephenLynx> the people developing these tools work with this final developer in mind.
[20:43:30] <afroradiohead> so they provide a framework for these "final developers" to build on?
[20:44:39] <StephenLynx> this is what I meant when I said about frameworks that provide an interface.
[20:45:22] <StephenLynx> when you write browser javascript, for example, you are supposed to write js directly using the interface provided by browser vendors.
[20:45:34] <StephenLynx> jquery adds a layer of abstraction on top of this interface
[20:45:51] <deathanchor> can you $hint with the aggregation framework?
[20:46:09] <deathanchor> docs don't say anything about using it for aggregation
[20:51:54] <MacWinne_> got a nuance question about mongo indexes and how they grow.. I have about 5 million old documents that have an indexed field. All new documents do not contain this field. Do the new documents somehow increase the existing index size?
[20:52:23] <MacWinne_> ie, can I leave the index in place and not worry about it growing as new documents come in that don't contain the indexed field.. or do indexes somehow store negative values?
[20:52:40] <MacWinne_> deathanchor, I don't recall specifying that when creating the index.. is that something done at creation time?
[20:54:34] <deathanchor> sparse indexes skip docs that don't have the field at all (docs where the field is explicitly null still get an index entry)
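That rule, which documents a sparse index keeps an entry for, can be mirrored with a tiny predicate; the pymongo call and the `legacyField` name in the comments are assumptions about MacWinne_'s schema, not from the discussion:

```python
def in_sparse_index(doc, field):
    """A sparse index has an entry for every doc that *has* the field,
    even when its value is null; only docs missing the field are skipped."""
    return field in doc

# With pymongo (assumed): coll.create_index("legacyField", sparse=True)
# New docs that omit legacyField add nothing to that index, so it stops growing.
```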
[22:11:21] <Doyle> Hey. I'm watching a mongodb server sync atm and am wondering why it stops receiving data from the sync target while it's performing the rsSync Index Build tasks.
[22:56:57] <Xapht> Hello everyone! I have a mongoDB document (https://gist.github.com/hoittr/2292fc2c978db5d95ac5).. And was wondering how I would construct a query to return only the totalCount and totalValue fields from each nested layer of the doc. (Would make sense when looking at the doc)