[01:39:36] <tejas-manohar> http://pastebin.com/enmn0Dye - can you look at line no. 111 and after with async.waterfall things? im trying to remove the "invitation" associated with a user after he/she registers from the link (which is built by invitationId and jobId), here's the invitation model https://bpaste.net/show/660545642c6b
[08:42:15] <Zelest> Indexes are cheating! Real men use full collection scans for every query
[08:43:14] <crocket> Zelest, Real men should be very slow...
[08:50:12] <Zelest> crocket, No woman likes a man who finishes faster, eh? ;)
[08:50:27] <crocket> Zelest, Your analogy doesn't work.
[08:50:46] <Zelest> Sorry, I'll stfu.. too tired to be serious :P
[10:28:33] <orw> Running Ubuntu. I tried to remove MongoDB and reinstall it by apt-get purging all the relevant packages and removing /var/lib/mongodb. Now, after reinstalling, trying to connect with mongo gives me connection refused. Any ideas?
[10:29:50] <Bodenhaltung> What shows "netstat -tulpen |grep mongod"?
[10:31:21] <orw> when running `mongo` i get this: 2014-11-10T12:24:10.739+0200 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
[10:33:18] <Bodenhaltung> Ok, and "mongo --verbose --host 127.0.0.1"?
[10:33:51] <orw> works now, no idea why it didn't work a minute ago
[11:09:17] <Mmike> Hi, lads. What would be the best way to determine if I'm connected to a primary or secondary server in a replica set, via python?
[11:09:34] <Mmike> I was thinking of issuing db.isMaster() and then checking for the 'secondary' key value
[11:21:19] <Bodenhaltung> Mmike: Does this help? http://api.mongodb.org/python/current/api/pymongo/mongo_replica_set_client.html#module-pymongo.mongo_replica_set_client
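[Editor's note: a minimal sketch of the isMaster approach Mmike describes. The helper function and sample documents are illustrative, not from the channel; field names ("ismaster", "secondary") follow the MongoDB isMaster command documentation. With pymongo, the response document would come from something like `client.admin.command("ismaster")`.]

```python
def member_role(is_master_doc):
    """Classify a replica-set member from an isMaster response document."""
    if is_master_doc.get("ismaster"):
        return "primary"
    if is_master_doc.get("secondary"):
        return "secondary"
    return "other"  # e.g. arbiter, recovering, or still starting up

# Illustrative response documents, as isMaster would return them:
print(member_role({"ismaster": True, "secondary": False}))
print(member_role({"ismaster": False, "secondary": True}))
```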
[14:04:12] <tscanausa> how difficult is it to change a replica sets name?
[15:25:40] <nawadanp> Hi! I'm affected by this bug : https://jira.mongodb.org/browse/SERVER-15369 , and I just want to know how to recover a database with a bad ns file on a RS member. The only way I found is to dump and restore the data from the master, but that can take a very long time on huge databases :(
[15:29:44] <GothAlice> Wow, VMware, you just keep giving me more reasons to not use you. Indeed, a dump can take a long time, but it's the only way to be sure, nawadanp.
[15:30:14] <GothAlice> Notably because indexes (the most numerous of the namespaces) get rebuilt when you import the data back.
[15:31:36] <nawadanp> Thx GothAlice for this answer. So I will wait during the dump/restore procedure
[15:32:38] <nawadanp> For information, I don't use VMWare, but the symptoms are the same
[15:35:09] <nawadanp> And, because i am a lucky boy, the master node is affected by another issue (SERVER-15920), which causes random segfaults in the mongod process. I hope it won't happen during the dump/restore...
[15:37:53] <GothAlice> nawadanp: You can actually bring the server offline and have mongodump operate across the on-disk files, to avoid mongod segfault issues.
[15:40:34] <nawadanp> GothAlice, Yes, I know, and that's what I do. But the current master - and the last online server of the RS - is also affected by the segfault issue... So I hope it won't segfault while the secondary is offline... ;)
[15:42:56] <GothAlice> nawadanp: Intriguing that I haven't run into SERVER-15920 as I use a rather insane amount of Gentoo, then again, I also avoid map/reduce in favour of aggregate queries.
[15:49:57] <nawadanp> GothAlice, we have a 5 * 2 sharded cluster, but only 2 shards are affected by SERVER-15920... I've found some differences between the kernel configs of the nodes. After recompiling with the correct configuration, the issue seems resolved. I'm currently doing some tests before updating the Jira issue
[15:50:29] <GothAlice> Indeed; it'd be awesome if you could upload the diff to Gentoo's bugzilla, too. (Link to the Gentoo ticket on the JIRA ticket.)
[15:52:21] <nawadanp> ofc, i just want to find the bad params in the kernel conf before posting it
[19:13:16] <huleo> is mongo:// encrypted or not, after all?
[19:13:28] <huleo> (not data, just connection to db)
[19:13:49] <GothAlice> huleo: Depends on if you enable encryption or not. http://docs.mongodb.org/manual/tutorial/configure-ssl/
[19:17:21] <GothAlice> (Note that the Linux distro I use automatically compiles everything from source, by default. SSL is not an option enabled in the FOSS binary releases.)
[19:17:30] <huleo> it's about database on compose.io, mongohq before that - it's actually hard to find whether they support it
[19:18:21] <GothAlice> huleo: Indeed. I can compile a kernel in 54 seconds from depclean… which is faster than it takes Ubuntu to download the binary image, so I feel the performance (and feature, in this case) improvements are well worth it. ;)
[19:18:46] <huleo> GothAlice: but but but...you got to download source first
[19:19:58] <GothAlice> huleo: Git fetch / checkout is pretty quick, yo. Faster than downloading *then* extracting a tarball.
[19:21:23] <GothAlice> (The XML package data and patches are themselves synced over rsync… also highly efficient.)
[19:22:43] <huleo> I'm out of the game, just now got myself a machine that doesn't need several hours to compile /anything/ ;)
[19:23:10] <huleo> I guess the bottom line is, hmm...how do I check whether connection to database is encrypted?
[19:23:57] <GothAlice> MongoDB is also somewhat painful to compile; it can't be parallelized safely (unlike the kernel compile example, which ran across 64 cores) and takes a huge amount of RAM during the process. (Pypy is worse, though.)
[19:25:15] <GothAlice> http://docs.mongodb.org/manual/reference/configuration-options/#net.ssl.mode — I only ever set my clusters to either disabled or requireSSL. That way when I need/want encryption, it can't *not* be used.
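[Editor's note: a sketch of what the "requireSSL" setup GothAlice describes might look like in a mongod YAML config file. Key names follow the net.ssl.* options linked above; the port and the PEM file path are placeholders.]

```yaml
# mongod.conf fragment: refuse any non-SSL client connection.
net:
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem   # placeholder path
```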
[19:28:05] <huleo> in my app? that's mongoose node.js lib, mongoose.connect('mongodb://user:password@server/db')
[19:28:51] <huleo> this one's interesting, "mongo" shell on my system doesn't have --ssl option ;)
[19:29:11] <GothAlice> huleo: a) It's not MongoDB Enterprise, and b) you didn't compile it yourself. So you don't get that option.
[19:29:22] <GothAlice> (This is mentioned rather explicitly in the tutorial.)
[19:30:03] <GothAlice> https://blog.compose.io/openssl-heartbleed-vulnerability/ would seem to indicate that compose.io supports SSL operation, but you'd have to contact their technical support to identify if it's a valid option for your account.
[19:33:17] <huleo> so the question would be: is connection between compose.io db and heroku app using it encrypted? hmm, got more interesting than I wanted
[19:38:57] <GothAlice> huleo: The safe, and most likely correct answer is "no".
[19:51:22] <AlexZan> Hey guys, I am adding a user with geo coordinates in toronto, and another in florida, then I am using a geoNear call to get users near toronto with maxDistance of 1, but I am getting florida in the results, any ideas?
[19:53:09] <GothAlice> AlexZan: geoNear ($near) AFAIK orders the results by distance from that point. It doesn't filter.
[19:53:34] <GothAlice> To do that you'd need to $within filter the geo point.
[19:53:39] <AlexZan> GothAlice, even when i have the optional maxDistance parameter filled to 1?
[19:53:46] <AlexZan> whats the point of that parameter then
[19:53:59] <GothAlice> Hmm; how did you set up your index?
[19:54:37] <AlexZan> GothAlice, sorry im not a db guy, would u like to see a pastebin of one of my documents?
[19:55:41] <GothAlice> If you aren't sure how you created the index (or if you even did) it may be worthwhile to follow a lighter-weight tutorial until you are familiar with the concepts. http://myadventuresincoding.wordpress.com/2011/10/02/mongodb-geospatial-queries/ is a good example.
[19:55:53] <GothAlice> (Note that for real-world locations by lat/long, you should use 2dsphere, not 2d as the index type.)
[19:56:38] <GothAlice> Also http://docs.mongodb.org/manual/tutorial/build-a-2dsphere-index/ and http://docs.mongodb.org/manual/tutorial/query-a-2dsphere-index/
[19:59:10] <AlexZan> GothAlice, i am using an ORM so i dont think i created it, or ever will
[20:04:48] <Bodenhaltung> I have a simple query on 2 fields, nscanned and nreturned are 1, but it sometimes takes 150 ms, up to 1300
[20:04:50] <huleo> not sure if that solves your problem
[20:05:45] <GothAlice> (GeoJSON points contain a representation of position in x, y[, optionally z] directions. That means (easting, northing[, altitude]).)
[20:05:55] <Bodenhaltung> Could it be the hardware? ReplSet over the internet, auth, ssl and iptables... replset member has 2 CPUs / 4GB RAM, collection size is ~400MB
[20:07:34] <GothAlice> Bodenhaltung: When diagnosing issues like that, have you tried running the query on the DB primary itself?
[20:07:46] <GothAlice> That'll isolate network latency from the equation.
[20:08:30] <Bodenhaltung> GothAlice: ah, no, i will check
[20:09:17] <GothAlice> (Such large variation might simply be network congestion. I never see my own queries vary so wildly… except in development when the HDD spins down. ;)
[20:13:19] <Bodenhaltung> Hmm, directly on the primary via "--ssl --host 127.0.0..." = Fetched 1 record(s) in 112ms
[20:14:33] <AlexZan> oh i think i solved my problem
[20:14:53] <AlexZan> i am supposed to divide max distance by the distance multiplier.. which is really odd since the documentation says it's in meters
[20:16:27] <AlexZan> huleo, i didnt think so either, so Im pretty confused as to why its working now
[20:17:37] <huleo> make sure it's not only working, but actually working as it should :-)
[20:17:52] <GothAlice> AlexZan: Were you, in fact, correctly setting the index before? Without the index, the query might misbehave in the way you described as well.
[20:18:49] <GothAlice> Without a geospatial index, geospatial queries might not be able to function (thus filter results).
[20:19:07] <GothAlice> Which would also exhibit your original symptom: a record was returned that shouldn't have been.
[20:19:53] <AlexZan> GothAlice, but i dont think i made any changes to my index, its always been 2dsphere
[20:20:03] <GothAlice> AlexZan: Good. That's what I was asking.
[20:20:39] <AlexZan> the only change i made was multiplying my maxDistance by the distanceMultiplier, which is confusing
[20:22:11] <GothAlice> AlexZan: Also why I try to debug issues without interference from ODM (object document mapper) wrappers. Which wrapper were you using again?
[20:22:38] <AlexZan> GothAlice, waterline, part of sails/node
[20:23:41] <AlexZan> so i just verified 3 test cases, and its expected behaviour, so the maxDistance must be multiplied by the distanceMultiplier, which seems redundant, and its undocumented :s
[20:24:36] <huleo> distanceMultiplier - why is it there at all?
[20:25:25] <GothAlice> huleo: How large is the planet you are modelling? What's the finest scale you wish to record (inches, feet)? distanceMultiplier is a scaling value that lets you adjust these.
[20:25:40] <AlexZan> huleo, good point, i think because originally i was working in legacy, now I am in GeoJSON, i think.. right? so i should just remove it
[20:26:05] <huleo> GothAlice: rather, is it applicable/relevant at all for 2dsphere?
[20:26:32] <GothAlice> Don't remove it. Use it and understand what it represents. How large is a radian of distance? (2π radians in a full circle…)
[20:26:36] <huleo> AlexZan: I'm just a rookie, but that's the idea - in 2dsphere everything is in meters
[20:28:21] <AlexZan> GothAlice, but the documentation says if you are using GeoJSON, it is already in meters, so should i really just multiply maxDistance by 1000, to go from meter to km, which will then match up with my distanceMultiplier radian to km conversion?
[20:30:00] <AlexZan> huleo, thats what im confused about, seems like the maxDistance is in meters when using 2dsphere/GeoJSON, but i am not sure about the result?
[20:30:26] <GothAlice> If you store points as GeoJSON Points, you're storing decimal degrees of rotation (easting/northing) calculated against the WGS-84 standard.
[20:34:56] <GothAlice> What's the result of the difference (subtraction) between two points (described in long/lat decimal degrees)? A difference in decimal degrees. Not kilometers.
[20:35:36] <GothAlice> Thus to get the distance back in kilometres, you need to multiply it by the number of KM in one degree (or radian, since that's what's used internally).
[20:35:55] <huleo> GothAlice: but that's exactly what 2dsphere is there for, to make querying by meters - not degrees - possible?
[20:37:17] <GothAlice> A "2d" index assumes a flat plane, thus distance calculations can be in whatever unit you want. (Scaled however you want.) When working with a "2dsphere", however, you're working in long/lat coords. You don't specify a location as a number of meters from the equator and GMT…
[20:37:26] <GothAlice> long/lat being measurements of angle.
[20:38:18] <huleo> locations in lon/lat, but distances - meters
[20:52:17] <GothAlice> Hmm. Having some fun trying to investigate this.
[20:53:19] <GothAlice> $geoNear and $maxDistance are not friends when using GeoJSON, from what I can tell, making the entire discussion moot. (On both legacy and GeoJSON stored data points against a 2dsphere index, independently tested.)
[20:56:21] <GothAlice> $maxDistance does not apply to GeoJSON points, according to the error.
[20:56:35] <GothAlice> ("Can't canonicalize query: BadValue geo near accepts just one argument when querying for a GeoJSON point. Extra field found: $maxDistance: 10")
[20:57:36] <GothAlice> Though I was really using the lat/long data provided by http://distancebetween.info/amsterdam/adelaide for testing. (Real world, with sample distance in KM.)
[20:58:41] <AlexZan> GothAlice, hmm so what is your conclusion? :P
[20:59:32] <GothAlice> AlexZan: That I'm in the right business; one where I don't need to deal with all that bumph. ;P
[21:00:16] <GothAlice> Previously I did, and dealing with different coordinate systems was an insane PITA. (USGS, military, local civilian, engineering…)
[21:01:02] <huleo> GothAlice: when querying by GeoJSON field, sure, maxDistance works and works perfectly fine
[21:06:27] <huleo> btw is that Comic Sans MS in your console?
[21:07:20] <GothAlice> Comic Sans MS isn't fixed-width, so no. It's Monofur. (Designed for dyslexics as each character has a unique shape. I also use it in my IRC windows.)
[21:11:09] <drags> teaching me how to fish! exactly what I was hoping for :)
[21:11:19] <GothAlice> Most of the builtins are actually tiny wrappers like that.
[21:11:36] <GothAlice> (MongoDB "eats its own dogfood" by using itself to configure itself; meaning you can query indexes just like any other collection.)
[21:14:02] <drags> one more question GothAlice: when a built-in uses "this.", what should I replace that with in a script I want to run as "mongo test my_stats_script.js"?
[21:19:54] <AlexZan> I really dont know what to do, i guess ill just keep multiplying the maxDistance by 6371, not confident with that as i dont understand how it works, but ive spent 2 hours on this now and im not any closer to understanding what is going on
[21:26:05] <GothAlice> drags: On a method call like "foo.bar.baz()" "this" refers to the containing object ("bar"). For most commands this will be the collection or database. (db.foo.bar() vs. db.bar())
[21:27:23] <GothAlice> drags: However use the code of the commands as a hint, not gospel. "getIndexes" querying the "system.indexes" collection should be the hint you need to formulate your own queries.
[21:36:54] <xxtjaxx> Hi! I'm using the mongodb driver for nodejs to connect to mongodb and insert items. According to my written logs everything is fine and no error was raised, however I see 2 things: 1) the mongod log constantly writes lines similar to this: http://paste.debian.net/131214/ and 2) no document has been saved.
[21:38:10] <joannac> xxtjaxx: that just means you're opening and closing connections
[21:38:21] <xxtjaxx> I actually try to keep one persistent connection during the runtime of my application which is an express app. You can see the source code here: https://github.com/andreas-marschke/boomerang-express/tree/master/lib/backends/mongodb/index.js
[21:38:54] <xxtjaxx> joannac: which is strange. I can't seem to recall having set that anywhere :/
[21:55:17] <xxtjaxx> I can see a multitude of these now: http://paste.debian.net/131220/ which is strange too, since it should have at least one open connection and not connect/disconnect constantly
[21:56:10] <xxtjaxx> http://paste.debian.net/131221/ < this'd be my current connection
[22:19:28] <GothAlice> cheeser: "w:0 is verboten" Ha; love it. Also not exactly true... ref: my request/response logging, which would hose performance if I set it to w:1 ;^)
[22:20:00] <GothAlice> w:0 is definitely not good for diagnosing issues, OFC.
[22:21:41] <joannac> well, w:0 means you never know if the write succeeded or not
[22:22:08] <joannac> so if you don't care whether your data is actually in the database... then I guess it's okay?
[22:22:11] <GothAlice> :nods: w:0 is occasionally acceptable in logging situations where throughput matters more than the occasional lost record.
[22:22:56] <GothAlice> (Or the data can be otherwise rebuilt.)
[23:41:33] <cheeser> GothAlice: no. not *exactly* true. but if you know enough to know that, you know enough to know when it's appropriate. ;)
[23:58:41] <GothAlice> .i ju'o.uo di'u ¬_¬ Can't figure out how to really say what I mean.