PMXBOT Log file Viewer


#mongodb logs for Monday the 10th of November, 2014

[00:27:45] <Logicgate> hey guys
[00:27:49] <Logicgate> here's my query: http://pastie.org/9708541
[00:27:49] <joannac> hi
[00:28:14] <Logicgate> the data structure goes as follows: {hash: <string>, clicks: <array>}
[00:28:37] <Logicgate> I would like to group by hashes and provide the length of the clicks array in the results
[00:29:07] <Logicgate> Is there a way to access the length property of the array in either the reduce function or the key function?
[00:30:58] <Logicgate> Anybody?
[00:31:42] <toothrot> yea, you can from reduce
[00:31:59] <Logicgate> cursor.item.clicks.length?
[00:32:20] <toothrot> cursor ?
[00:32:39] <Logicgate> reduce: function (cursor, result) {...
[00:33:40] <Logicgate> toothrot, I'm not sure how to access that property in reduce.
[00:34:10] <toothrot> i was thinking of: http://docs.mongodb.org/manual/reference/method/db.collection.group/#db.collection.group
[00:34:39] <Logicgate> toothrot, I've been reading that page all day
[00:34:39] <toothrot> where the reduce function takes (current_doc, agg_results)
[00:34:52] <Logicgate> I do current_doc.clicks.length
[00:35:06] <Logicgate> "errmsg" : "exception: TypeError: Cannot read property 'length' of undefined at $group reduce setup",
[00:35:09] <Logicgate> that's what I get
[00:36:13] <toothrot> well, current_obj.clicks shouldn't be undefined.
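[Editor's note: toothrot's description of group()'s reduce signature can be illustrated without a server. Below is a minimal pure-Python stand-in for db.collection.group() over hypothetical sample documents; it shows where the clicks length is read in reduce, and why a document missing the field would trigger a "Cannot read property 'length' of undefined" error in the shell's JS equivalent.]

```python
# Pure-Python sketch of db.collection.group() semantics -- no MongoDB server
# involved; sample docs are hypothetical.
sample_docs = [
    {"hash": "a", "clicks": [1, 2, 3]},
    {"hash": "a", "clicks": [4]},
    {"hash": "b", "clicks": [5, 6]},
    {"hash": "c"},  # missing "clicks" -- the source of the TypeError
]

def group(docs, key, initial, reduce_fn):
    """Minimal stand-in for group(): one aggregate doc per distinct key."""
    results = {}
    for doc in docs:
        k = tuple(doc.get(f) for f in key)
        agg = results.setdefault(k, dict(initial, **{f: doc.get(f) for f in key}))
        reduce_fn(doc, agg)
    return list(results.values())

def reduce_fn(current_doc, agg):
    # The equivalent of current_doc.clicks.length in a JS reduce function,
    # guarded so a doc without a "clicks" field contributes zero.
    agg["count"] += len(current_doc.get("clicks", []))

out = group(sample_docs, ["hash"], {"count": 0}, reduce_fn)
print(out)
```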
[00:36:30] <Logicgate> hmm
[00:36:36] <Logicgate> is there a way to console log that out?
[00:36:37] <joannac> why don't you printjson(current_doc) and make sure it looks like how you expect?
[00:36:41] <Logicgate> see what properties are available
[00:36:45] <Logicgate> ok
[00:37:24] <toothrot> what are you passing for key ?
[00:38:24] <Logicgate> toothrot: {hash: 1}
[00:38:54] <Logicgate> do I need to pass clicks:1 too?
[00:39:11] <toothrot> no
[00:39:47] <toothrot> i'll try it here
[00:41:32] <Logicgate> toothrot, I'll use keyf instead.
[00:41:36] <toothrot> why?
[00:41:46] <Logicgate> because it works that way
[00:48:10] <toothrot> well, i still don't get the clicks field here either way
[00:50:58] <toothrot> oh, yes i do, i made a mistake
[00:51:31] <Logicgate> I have no problem using keyf, I don't want all of the array to show up in the results, I just wanted its count.
[00:52:06] <toothrot> Logicgate, https://bpaste.net/show/9f04f2c21e8a
[00:53:41] <Logicgate> interesting toothrot, I couldn't get that to work on my end
[00:54:08] <toothrot> (this was on 2.6.5)
[00:54:39] <Logicgate> I'm running 2.6.5
[00:54:45] <Logicgate> "errmsg" : "exception: group() can't handle more than 20000 unique keys",
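[Editor's note: that error is group()'s documented cap of 20,000 unique keys. The aggregation framework's $group stage has no such cap, and $size (available in 2.6) yields an array's length directly, so the result never has to carry the clicks array itself. A sketch, showing the pipeline as it would be passed to aggregate() plus a pure-Python reference of what it computes over hypothetical documents:]

```python
# The aggregation pipeline that replaces the group() call (field names taken
# from the discussion; $group/$size syntax is standard as of MongoDB 2.6):
pipeline = [
    {"$group": {"_id": "$hash", "clicks": {"$sum": {"$size": "$clicks"}}}},
]

# Pure-Python reference of what that pipeline computes -- no server involved,
# sample docs are hypothetical.
sample_docs = [
    {"hash": "a", "clicks": [1, 2, 3]},
    {"hash": "a", "clicks": [4]},
    {"hash": "b", "clicks": [5, 6]},
]

counts = {}
for doc in sample_docs:
    counts[doc["hash"]] = counts.get(doc["hash"], 0) + len(doc["clicks"])

print(counts)
```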
[01:12:03] <tejas-manohar> anyone here use mongoose orm/odm
[01:12:05] <tejas-manohar> ?
[01:39:36] <tejas-manohar> http://pastebin.com/enmn0Dye - can you look at line no. 111 and after, with the async.waterfall things? I'm trying to remove the "invitation" associated with a user after he/she registers from the link (which is built from invitationId and jobId); here's the invitation model https://bpaste.net/show/660545642c6b
[02:11:15] <crocket> Mongo
[02:23:02] <crocket> Is MongoDB tied to javascript?
[02:23:09] <crocket> Can't I use purescript with mongo?
[02:27:16] <Boomtime> http://docs.mongodb.org/ecosystem/drivers/
[02:27:52] <crocket> Boomtime, no purescript
[02:28:00] <crocket> PureScript is compiled to javascript.
[02:28:31] <lqez> not even here http://docs.mongodb.org/ecosystem/drivers/community-supported-drivers/
[02:29:26] <Boomtime> it "compiles" to javascript?
[02:29:52] <Boomtime> then you can use the Node.js driver, or the shell right?
[02:30:31] <crocket> Boomtime, probably, but I haven't tried.
[02:30:50] <crocket> PureScript is supposed to be compiled to only javascript.
[02:30:56] <crocket> It is an altjs language.
[02:31:41] <edrocks> does mongo show things in json or bson?
[02:33:57] <cheeser> the shell displays in "json"
[04:52:30] <osirisx11> hi all
[04:52:51] <osirisx11> when I do $addToSet it pushes my array of items as a single child array instead of pushing each of the array items into the array
[04:53:04] <osirisx11> "6b9c6e60-6894-11e4-bf37-5ffd60c52a1e",
[04:53:04] <osirisx11> [
[04:53:05] <osirisx11> "6c53fe90-6894-11e4-bf37-5ffd60c52a1e",
[04:53:54] <joannac> what?
[04:54:32] <Boomtime> because you are pushing an array
[04:55:43] <Boomtime> $addToSet treats whatever you give it as a scalar and pushes that to an array (creating the array if necessary)
[04:55:44] <joannac> http://docs.mongodb.org/manual/reference/operator/update/addToSet/#each-modifier
[05:00:20] <osirisx11> oh
[05:00:28] <osirisx11> is there a "push all of these array items" command?
[05:00:32] <osirisx11> so i don't have to do each
[05:00:48] <osirisx11> $each i see
[05:00:54] <osirisx11> thanks joannac
[05:01:23] <osirisx11> you guys rock take it easy :D
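[Editor's note: Boomtime's point -- $addToSet treats its argument as a single member unless $each is used -- can be sketched in pure Python. No server involved; the id values below are hypothetical.]

```python
# Sketch of $addToSet semantics: without $each the whole argument is one set
# member (even if it's a list); with $each every element becomes a member.

def add_to_set(target, value):
    """$addToSet: {field: value} -- value is a single member."""
    if value not in target:
        target.append(value)

def add_to_set_each(target, values):
    """$addToSet: {field: {$each: values}} -- each element is a member."""
    for v in values:
        if v not in target:
            target.append(v)

ids = ["6b9c6e60"]
add_to_set(ids, ["6c53fe90", "6d0aa111"])       # nests the list, as osirisx11 saw
print(ids)   # ['6b9c6e60', ['6c53fe90', '6d0aa111']]

ids2 = ["6b9c6e60"]
add_to_set_each(ids2, ["6c53fe90", "6d0aa111"])
print(ids2)  # ['6b9c6e60', '6c53fe90', '6d0aa111']
```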
[05:55:14] <crocket> It seems mapreduce consumes a lot of memory in mongodb.
[06:11:09] <crocket> After I read Index Concepts in mongoDB manual, can I skip Index Tutorials in the manual?
[06:26:44] <joannac> crocket: for what purpose?
[06:27:07] <crocket> joannac, Making a simple chat application that stores previous chat data.
[06:27:18] <crocket> Chat data is stored in mongodb.
[06:27:40] <crocket> The chat data should be searchable quickly.
[06:29:31] <crocket> joannac, It's your turn.
[06:30:04] <joannac> okay, well you need indexes
[06:30:42] <joannac> what stage are you at?
[06:30:56] <joannac> if you're still in concept stage, i guess you can skip reading them for now
[06:31:10] <joannac> but you'll want to read it and get your indexes set up before deployment
[06:31:30] <crocket> joannac, I'm still reading Index Concepts in mongoDB manual.
[07:19:07] <crocket> Yo
[07:19:31] <joannac> yes?
[07:19:48] <crocket> Do I not need to read Index Tutorials to work with mongodb?
[07:19:58] <crocket> I'm almost finished with Index Concepts.
[07:24:57] <joannac> crocket: no, you definitely need to read that section
[07:25:05] <joannac> It will teach you how to actually create indexes
[07:25:11] <crocket> Doh
[07:34:08] <joannac> crocket: are you just reading the whole manual start to finish or something?
[07:34:27] <crocket> joannac, no
[07:34:34] <crocket> I skipped irrelevant sections.
[07:34:48] <crocket> For instance, I don't need replication and sharding.
[07:34:57] <crocket> I don't need Administration and Security right now.
[07:35:17] <crocket> I don't need to read references from start to end right now.
[07:35:25] <joannac> okay
[07:35:37] <crocket> However, it took days.
[07:35:45] <joannac> I suggest you actually try and build an application
[07:35:52] <joannac> there's only so far reading docs can take you
[07:37:25] <crocket> joannac, Your last comment is not grammatically correct as far as I know.
[07:39:34] <joannac> crocket: there is only so much progress you can make by reading docs, without actually getting your hands dirty
[07:41:13] <crocket> joannac, I just want to make sure I know core concepts before I start.
[08:06:28] <crocket> BooBoo
[08:41:15] <crocket> I've entered Indexing Tutorials!!!
[08:42:15] <Zelest> Indexes are cheating! Real men use full collection scans for every query
[08:43:14] <crocket> Zelest, Real men should be very slow...
[08:50:12] <Zelest> crocket, No woman likes a man who finishes faster, eh? ;)
[08:50:27] <crocket> Zelest, Your analogy doesn't work.
[08:50:46] <Zelest> Sorry, I'll stfu.. too tired to be serious :P
[10:28:33] <orw> Running Ubuntu, tried to remove MongoDB and reinstalling it by doing apt-get purge to all relevant packages and removed /var/lib/mongodb, now after reinstall when trying to connect to mongo I'm getting connection refused, any ideas?
[10:29:50] <Bodenhaltung> What shows "netstat -tulpen |grep mongod"?
[10:30:45] <orw> tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 118 236457 17354/mongod
[10:31:21] <orw> when running `mongo` i get this: 2014-11-10T12:24:10.739+0200 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
[10:33:18] <Bodenhaltung> Ok, and "mongo --verbose --host 127.0.0.1"?
[10:33:51] <orw> works now, no idea why it didn't work a minute ago
[10:33:58] <orw> thanks!
[11:09:17] <Mmike> Hi, lads. What would be the best way to determine if I'm connected to a primary or secondaray server in replicaset, via python?
[11:09:34] <Mmike> I was thinking into issuing db.isMaster() and then checking for 'secondary' key value
[11:21:19] <Bodenhaltung> Mmike: Does this help? http://api.mongodb.org/python/current/api/pymongo/mongo_replica_set_client.html#module-pymongo.mongo_replica_set_client
[11:23:24] <Mmike> Bodenhaltung, yup, thnx
[11:23:38] <Mmike> Bodenhaltung, although, I'm using MongoClient, and i'm just issuing isMaster against db
[11:23:49] <Mmike> and then checking for the 'secondary' key and its value
[11:23:51] <Mmike> does what I need
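[Editor's note: Mmike's approach -- issue isMaster and inspect the 'secondary' field -- can be sketched against the shape of the reply. The reply documents below are hypothetical but use the documented ismaster/secondary booleans; with pymongo the real reply would come from client.admin.command('ismaster').]

```python
# Decide primary vs. secondary from an isMaster reply document.

def is_secondary(is_master_reply):
    """True if the server that answered is a replica-set secondary."""
    return bool(is_master_reply.get("secondary", False))

# Hypothetical replies, shaped like the documented isMaster output:
primary_reply = {"ismaster": True, "secondary": False, "setName": "rs0"}
secondary_reply = {"ismaster": False, "secondary": True, "setName": "rs0"}

print(is_secondary(primary_reply))    # False
print(is_secondary(secondary_reply))  # True
```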
[14:04:12] <tscanausa> how difficult is it to change a replica sets name?
[15:25:40] <nawadanp> Hi! I'm affected by this bug: https://jira.mongodb.org/browse/SERVER-15369 , and I just want to know how to recover a database with a bad ns file on a RS member. The only way I found is to dump and restore the data from the master, but that can take a very long time on huge databases :(
[15:29:44] <GothAlice> Wow, VMware, you just keep giving me more reasons to not use you. Indeed, a dump can take a long time, but it's the only way to be sure, nawadanp.
[15:30:14] <GothAlice> Notably because indexes (the largest share of the namespaces by number) get rebuilt when you import the data back.
[15:31:36] <nawadanp> Thx GothAlice for this answer. So I will wait during the dump/restore procedure
[15:32:38] <nawadanp> For information, I don't use VMWare, but the symptoms are the same
[15:35:09] <nawadanp> And, because I am a lucky boy, the master node is affected by another issue (SERVER-15920), which causes random segfaults on the mongod process. I hope it will not happen during the dump/restore...
[15:37:53] <GothAlice> nawadanp: You can actually bring the server offline and have mongodump operate across the on-disk files, to avoid mongod segfault issues.
[15:38:34] <GothAlice> nawadanp: See: http://docs.mongodb.org/manual/reference/program/mongodump/#cmdoption--dbpath
[15:40:34] <nawadanp> GothAlice, Yes, I know, and that's what I do. But the current master - and the last online server of the RS - is also affected by the segfault issue... So, I hope it will not segfault while the secondary is offline... ;)
[15:42:56] <GothAlice> nawadanp: Intriguing that I haven't run into SERVER-15920 as I use a rather insane amount of Gentoo, then again, I also avoid map/reduce in favour of aggregate queries.
[15:49:57] <nawadanp> GothAlice, we have a 5 * 2 sharded cluster, but only 2 shards are affected by SERVER-15920... I've found some differences between the kernel conf of each node. After recompiling with the right configuration, the issue seems resolved. I'm currently doing some tests before updating the Jira issue
[15:50:29] <GothAlice> Indeed; it'd be awesome if you could upload the diff to Gentoo's bugzilla, too. (Link to the Gentoo ticket on the JIRA ticket.)
[15:52:21] <nawadanp> ofc, i just want to find the bad params in the kernel conf before posting it
[19:12:59] <huleo> hi
[19:13:16] <huleo> is mongo:// encrypted or not, after all?
[19:13:28] <huleo> (not data, just connection to db)
[19:13:49] <GothAlice> huleo: Depends on if you enable encryption or not. http://docs.mongodb.org/manual/tutorial/configure-ssl/
[19:17:21] <GothAlice> (Note that the Linux distro I use automatically compiles everything from source, by default. SSL is not an option enabled in the FOSS binary releases.)
[19:17:30] <huleo> it's about database on compose.io, mongohq before that - it's actually hard to find whether they support it
[19:17:38] <huleo> GothAlice: gentoo? :-)
[19:18:21] <GothAlice> huleo: Indeed. I can compile a kernel in 54 seconds from depclean… which is faster than it takes Ubuntu to download the binary image, so I feel the performance (and feature, in this case) improvements are well worth it. ;)
[19:18:46] <huleo> GothAlice: but but but...you got to download source first
[19:19:58] <GothAlice> huleo: Git fetch / checkout is pretty quick, yo. Faster than downloading *then* extracting a tarball.
[19:21:23] <GothAlice> (The XML package data and patches are themselves synced over rsync… also highly efficient.)
[19:22:43] <huleo> I'm out of the game, just now got myself a machine that doesn't need several hours to compile /anything/ ;)
[19:23:10] <huleo> I guess the bottom line is, hmm...how do I check whether connection to database is encrypted?
[19:23:57] <GothAlice> MongoDB is also somewhat painful to compile; it can't be parallelized safely (unlike the kernel compile example, which ran across 64 cores) and takes a huge amount of RAM during the process. (Pypy is worse, though.)
[19:25:15] <GothAlice> http://docs.mongodb.org/manual/reference/configuration-options/#net.ssl.mode — I only ever set my clusters to either disabled or requireSSL. That way when I need/want encryption, it can't *not* be used.
[19:25:19] <GothAlice> huleo: ^
[19:25:48] <huleo> that's server-side; but the server is not mine
[19:26:02] <huleo> how do I check whether the connection me/my application makes is encrypted?
[19:26:30] <GothAlice> What's the line you're using to connect?
[19:27:32] <GothAlice> (Or, more simply, which language/driver?)
[19:27:45] <GothAlice> huleo: http://docs.mongodb.org/manual/tutorial/configure-ssl-clients/
[19:28:05] <huleo> in my app? that's mongoose node.js lib, mongoose.connect('mongodb://user:password@server/db')
[19:28:51] <huleo> this one's interesting, "mongo" shell on my system doesn't have --ssl option ;)
[19:29:11] <GothAlice> huleo: a) It's not MongoDB Enterprise, and b) you didn't compile it yourself. So you don't get that option.
[19:29:22] <GothAlice> (This is mentioned rather explicitly in the tutorial.)
[19:30:03] <GothAlice> https://blog.compose.io/openssl-heartbleed-vulnerability/ would seem to indicate that compose.io support SSL operation, but you'd have to contact their technical support to identify if it's a valid option for your account.
[19:30:19] <huleo> yup, you're right
[19:33:17] <huleo> so the question would be: is connection between compose.io db and heroku app using it encrypted? hmm, got more interesting than I wanted
[19:38:57] <GothAlice> huleo: The safe, and most likely correct answer is "no".
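[Editor's note: one client-side way to settle the question is to demand encryption in the connection string via the standard ssl=true URI option -- if the server doesn't support SSL, the connection fails instead of silently going plaintext. A small sketch; the URI and helper function are illustrative.]

```python
# Append the standard ssl=true option to a MongoDB connection string,
# respecting any existing query string. Credentials/host are placeholders.

def with_ssl(uri):
    """Return uri with ssl=true added to its query string (idempotent)."""
    if "ssl=true" in uri:
        return uri
    sep = "&" if "?" in uri else "?"
    return uri + sep + "ssl=true"

print(with_ssl("mongodb://user:password@server/db"))
# mongodb://user:password@server/db?ssl=true
```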
[19:51:22] <AlexZan> Hey guys, I am adding a user with geo coordinates in Toronto, and another in Florida, then I am using a geoNear call to get users near Toronto with maxDistance of 1, but I am getting Florida in the results, any ideas?
[19:53:09] <GothAlice> AlexZan: geoNear ($near) AFAIK orders the result by distance from that point. It doesn't filter.
[19:53:34] <GothAlice> To do that you'd need to $within filter the geo point.
[19:53:39] <AlexZan> GothAlice, even when i have the optional maxDistance parameter filled to 1?
[19:53:46] <AlexZan> whats the point of that parameter then
[19:53:59] <GothAlice> Hmm; how did you set up your index?
[19:54:37] <AlexZan> GothAlice, sorry im not a db guy, would u like to see a pastebin of one of my documents?
[19:54:47] <AlexZan> user docs i mean
[19:55:41] <GothAlice> If you aren't sure how you created the index (or if you even did) it may be worthwhile to follow a lighter-weight tutorial until you are familiar with the concepts. http://myadventuresincoding.wordpress.com/2011/10/02/mongodb-geospatial-queries/ is a good example.
[19:55:53] <GothAlice> (Note that for real-world locations by lat/long, you should use 2dsphere, not 2d as the index type.)
[19:56:38] <GothAlice> Also http://docs.mongodb.org/manual/tutorial/build-a-2dsphere-index/ and http://docs.mongodb.org/manual/tutorial/query-a-2dsphere-index/
[19:59:10] <AlexZan> GothAlice, i am using an ORM so i dont think i created it, or ever will
[19:59:24] <AlexZan> oh that thing
[19:59:25] <AlexZan> yes one sec
[20:01:41] <AlexZan> GothAlice, http://pastebin.com/1Xy0sD7E
[20:04:03] <huleo> AlexZan: first of all AFAIK it's "lon, lat", not "lat, lon" for 2dsphere
[20:04:06] <huleo> (GeoJSON)
[20:04:19] <AlexZan> oh shoot
[20:04:22] <AlexZan> okay thanks lol
[20:04:48] <Bodenhaltung> I have a simple query on 2 fields, nscanned and nreturned is 1, but it takes sometimes 150 millis, up to 1300
[20:04:50] <huleo> not sure if that solves your problem
[20:05:45] <GothAlice> (GeoJSON points contain a representation of position in x, y[, optionally z] directions. That means (easting, northing[, altitude]).)
[20:05:55] <Bodenhaltung> I guess is it the hardware? ReplSet over internet, auth, ssl and iptables...replset member 2 CPUs 4GB Ram, collection size is ~400MB
[20:06:09] <GothAlice> (Thus yes, longitude *then* latitude.)
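[Editor's note: the easting-then-northing rule is easy to encode once and forget. A tiny helper that always emits GeoJSON's [longitude, latitude] order, so lat/lon pairs from other APIs don't get swapped; the Toronto coordinates are approximate.]

```python
# Build a GeoJSON Point, making the longitude-first ordering explicit.

def geojson_point(longitude, latitude):
    """GeoJSON positions are [x, y] = [longitude, latitude]."""
    return {"type": "Point", "coordinates": [longitude, latitude]}

toronto = geojson_point(-79.38, 43.65)  # approx. downtown Toronto
print(toronto)
```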
[20:07:34] <GothAlice> Bodenhaltung: When diagnosing issues like that, have you tried running the query on the DB primary itself?
[20:07:46] <GothAlice> That'll isolate network latency from the equation.
[20:08:30] <Bodenhaltung> GothAlice: ah, no, i will check
[20:09:17] <GothAlice> (Such large variation might simply be network congestion. I never see my own queries vary so wildly… except in development when the HDD spins down. ;)
[20:13:19] <Bodenhaltung> Hmm, directly on the primary via "--ssl --host 127.0.0..." = Fetched 1 record(s) in 112ms
[20:14:33] <AlexZan> oh i think i solved my problem
[20:14:53] <AlexZan> i am supposed to divide max distance by the distance multiplier.. which is really odd since the documentation says its in meters
[20:14:53] <huleo> AlexZan: :-)
[20:14:55] <AlexZan> but that fixed it
[20:15:55] <huleo> AlexZan: unless I've missed something, any conversion between radians/meters shouldn't be your trouble if you're using 2dsphere
[20:16:27] <AlexZan> huleo, i didnt think so either, so Im pretty confused as to why its working now
[20:17:37] <huleo> make sure it's not only working, but actually working as it should :-)
[20:17:52] <GothAlice> AlexZan: Were you, in fact, correctly setting the index before? Without the index, the query might misbehave in the way you described as well.
[20:18:30] <AlexZan> GothAlice, not sure i follow
[20:18:49] <GothAlice> Without a geospatial index, geospatial queries might not be able to function (thus filter results).
[20:19:07] <GothAlice> Which would also exhibit your original symptom: a record was returned that shouldn't have been.
[20:19:53] <AlexZan> GothAlice, but i dont think i made any changes to my index, its always been 2dsphere
[20:20:03] <GothAlice> AlexZan: Good. That's what I was asking.
[20:20:39] <AlexZan> the only change i made was multiplying my maxDistance by the distanceMultiplier, which is confusing
[20:22:11] <GothAlice> AlexZan: Also why I try to debug issues without interference from ODM (object document mapper) wrappers. Which wrapper were you using again?
[20:22:38] <AlexZan> GothAlice, waterline, part of sails/node
[20:23:41] <AlexZan> so i just verified 3 test cases, and its expected behaviour, so the maxDistance must be multiplied by the distanceMultiplier, which seems redundant, and its undocumented :s
[20:24:36] <huleo> distanceMultiplier - why is it there at all?
[20:25:25] <GothAlice> huleo: How large is the planet you are modelling? What's the finest scale you wish to record (inches, feet)? distanceMultiplier is a scaling value that lets you adjust these.
[20:25:40] <AlexZan> huleo, good point, i think because originally i was working in legacy, now I am in GeoJSON, i think.. right? so i should just remove it
[20:26:05] <huleo> GothAlice: rather, is it applicable/relevant at all for 2dsphere?
[20:26:32] <GothAlice> Don't remove it. Use it and understand what it represents. How large is a radian of distance? (2π radians in a full circle…)
[20:26:36] <huleo> AlexZan: I'm just a rookie, but that's the idea - in 2dsphere everything is in meters
[20:26:59] <huleo> (or it isn't...?)
[20:28:21] <AlexZan> GothAlice, but the documentation says if you are using GeoJSON, it is already in meters, so should i really just multiply maxDistance by 1000, to go from meter to km, which will then match up with my distanceMultiplier radian to km conversion?
[20:28:21] <huleo> I mean queries
[20:28:26] <huleo> not lon lat themselves
[20:29:15] <GothAlice> It's both less complicated and more complicated than that.
[20:29:16] <GothAlice> http://docs.mongodb.org/manual/reference/glossary/#term-wgs84
[20:29:48] <mike_edmr> sounds complicated
[20:30:00] <AlexZan> huleo, thats what im confused about, seems like the maxDistance is in meters when using 2dsphere/GeoJSON, but i am not sure about the result?
[20:30:26] <GothAlice> If you store points as GeoJSON Points, you're storing decimal degrees of rotation (easting/northing) calculated against the WGS-84 standard.
[20:34:18] <AlexZan> hmm im pretty confused
[20:34:56] <GothAlice> What's the result of the difference (subtraction) between two points (described in long/lat decimal degrees)? A decimal degrees of difference. Not kilometers.
[20:35:36] <GothAlice> Thus to get the distance back in kilometres, you need to multiply it by the number of KM in one degree (or radian, since that's what's used internally).
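[Editor's note: this radians-to-kilometres relationship is where AlexZan's 6371 factor comes from: a spherical distance expressed as a central angle in radians, multiplied by the Earth's mean radius (~6371 km), gives kilometres. A self-contained sketch using the haversine formula, checked against a quarter of a great circle:]

```python
import math

EARTH_RADIUS_KM = 6371.0  # Earth's mean radius

def haversine_radians(lon1, lat1, lon2, lat2):
    """Central angle in radians between two lon/lat points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * math.asin(math.sqrt(a))

# Quarter of a great circle: (lon 0, lat 0) to (lon 90, lat 0) along the equator.
angle = haversine_radians(0, 0, 90, 0)
print(angle)                    # ~pi/2 radians
print(angle * EARTH_RADIUS_KM)  # ~10008 km
```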
[20:35:55] <huleo> GothAlice: but that's exactly what 2dsphere is there for, to make querying by meters - not degrees - possible?
[20:36:17] <GothAlice> huleo: The opposite.
[20:37:17] <GothAlice> A "2d" index assumes a flat plane, thus distance calculations can be in whatever unit you want. (Scaled however you want.) When working with a "2dsphere", however, you're working in long/lat coords. You don't specify a location as a number of meters from the equator and GMT…
[20:37:26] <GothAlice> long/lat being measurements of angle.
[20:37:32] <huleo> nope
[20:37:37] <huleo> but
[20:37:43] <huleo> maxDistance goes in meters
[20:38:18] <huleo> locations in lon/lat, but distances - meters
[20:52:17] <GothAlice> Hmm. Having some fun trying to investigate this.
[20:53:19] <GothAlice> $geoNear and $maxDistance are not friends when using GeoJSON, from what I can tell, making the entire discussion moot. (On both legacy and GeoJSON stored data points against a 2dsphere index, independently tested.)
[20:54:29] <huleo> "not friends"?
[20:55:30] <GothAlice> points = pymongo.MongoClient().test.points; points.ensure_index([('loc', pymongo.GEOSPHERE)]); points.insert({'loc': {'type': 'Point', 'coordinates': [40, 5]}})
[20:55:47] <GothAlice> list(points.find({"loc" : SON([("$near", { "$geometry" : SON([("type", "Point"), ("coordinates", [40, 5])])}), ("$maxDistance", 10)])})) -> *boom*
[20:56:21] <GothAlice> $maxDistance does not apply to GeoJSON points, according to the error.
[20:56:35] <GothAlice> ("Can't canonicalize query: BadValue geo near accepts just one argument when querying for a GeoJSON point. Extra field found: $maxDistance: 10")
[20:57:36] <GothAlice> Though I was really using the lat/long data provided by http://distancebetween.info/amsterdam/adelaide for testing. (Real world, with sample distance in KM.)
[20:58:41] <AlexZan> GothAlice, hmm so what is your conclusion? :P
[20:59:32] <GothAlice> AlexZan: That I'm in the right business; one where I don't need to deal with all that bumph. ;P
[20:59:39] <AlexZan> :D
[21:00:16] <GothAlice> Previously I did, and dealing with different coordinate systems was an insane PITA. (USGS, military, local civilian, engineering…)
[21:01:02] <huleo> GothAlice: when querying by GeoJSON field, sure, maxDistance works and works perfectly fine
[21:01:12] <huleo> :p
[21:02:22] <GothAlice> huleo: Double-check the count of negatives in your statement. http://cl.ly/image/0s2D2v3Q293n != "works perfectly fine"
[21:03:18] <huleo> my query looks quite different
[21:03:32] <GothAlice> Pastebin it?
[21:05:28] <huleo> http://pastebin.com/P4q6gqzs
[21:05:41] <huleo> node.js mongoose here
[21:06:27] <huleo> btw is that Comic Sans MS in your console?
[21:07:20] <GothAlice> Comic Sans MS isn't fixed-width, so no. It's Monofur. (Designed for dyslexics as each character has a unique shape. I also use it in my IRC windows.)
[21:09:29] <huleo> it's pretty cool
[21:10:04] <drags> does anyone know the database command that corresponds to the mongo shell command "db.<collection>.getIndexes()" ?
[21:10:12] <drags> (or getIndexKeys() if possible)
[21:10:31] <GothAlice> drags: Run this in a mongo shell: db.foo.getIndexes
[21:10:33] <drags> not seeing anything in the "database commands" section of the manual's reference section
[20:10:44] <GothAlice> (Note my lack of parentheses.)
[21:11:01] <drags> GothAlice: niiiice
[21:11:04] <GothAlice> Yeeeeah.
[21:11:05] <GothAlice> :3
[21:11:09] <drags> teaching me how to fish! exactly what I was hoping for :)
[21:11:19] <GothAlice> Most of the builtins are actually tiny wrappers like that.
[21:11:36] <GothAlice> (MongoDB "eats its own dogfood" by using itself to configure itself; meaning you can query indexes just like any other collection.)
[21:14:02] <drags> one more question GothAlice: when a built-in uses "this.", what should I replace that with in a script I want to run as "mongo test my_stats_script.js"?
[21:18:39] <AlexZan> sigh
[21:19:54] <AlexZan> I really dont know what to do, i guess ill just keep dividing the maxDistance by 6371; not confident with that as i dont understand how it works, but ive spent 2 hours on this now, and not any closer to understanding what is going on
[21:26:05] <GothAlice> drags: On a method call like "foo.bar.baz()" "this" refers to the containing object ("bar"). For most commands this will be the collection or database. (db.foo.bar() vs. db.bar())
[21:27:23] <GothAlice> drags: However use the code of the commands as a hint, not gospel. "getIndexes" querying the "system.indexes" collection should be the hint you need to formulate your own queries.
[21:27:28] <GothAlice> (As an example.)
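[Editor's note: following that hint, a script can consume system.indexes documents directly instead of calling the shell helper. A sketch; the sample index documents below are hypothetical but follow the system.indexes shape ("name", "key", "ns").]

```python
# Turn system.indexes-style documents into a getIndexKeys()-style mapping.

def index_keys(index_docs):
    """Map index name -> key spec from a list of index documents."""
    return {d["name"]: d["key"] for d in index_docs}

# Hypothetical documents, shaped like rows of the system.indexes collection:
sample = [
    {"v": 1, "name": "_id_", "key": {"_id": 1}, "ns": "test.foo"},
    {"v": 1, "name": "loc_2dsphere", "key": {"loc": "2dsphere"}, "ns": "test.foo"},
]

print(index_keys(sample))
```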
[21:36:54] <xxtjaxx> Hi! I'm using the mongodb driver for nodejs to connect to mongodb and insert items. According to my written logs everything is fine and no error was raised, however I see 2 things: 1) the mongod log constantly writes lines similar to this: http://paste.debian.net/131214/ and 2) no document has been saved.
[21:38:10] <joannac> xxtjaxx: that just means you're opening and closing connections
[21:38:21] <xxtjaxx> I actually try to keep one persistent connection during the runtime of my application which is an express app. You can see the source code here: https://github.com/andreas-marschke/boomerang-express/tree/master/lib/backends/mongodb/index.js
[21:38:54] <xxtjaxx> joannac: which is strange. I can't seem to recall having set that anywhere :/
[21:42:11] <joannac> um
[21:42:16] <joannac> at the bottom you have w:0
[21:43:44] <cheeser> tsk tsk
[21:44:01] <xxtjaxx> cheeser: hm?
[21:44:22] <cheeser> w:0 is verboten
[21:44:50] <joannac> w:0 means "send my write and I don't care what happens"
[21:45:12] <joannac> so it's not surprising that you don't find out what happened to your writes
[21:45:17] <joannac> change that to w:1
[21:45:22] <xxtjaxx> done.
[21:46:27] <xxtjaxx> huh still nothing...
[21:48:10] <joannac> okay, well debug your code
[21:48:17] <joannac> are you actually connecting?
[21:48:22] <joannac> can you do a query?
[21:55:17] <xxtjaxx> I can see a multitude of these now: http://paste.debian.net/131220/ which is strange too since it should have at least one open connection and not connect/disconnect constantly
[21:56:10] <xxtjaxx> http://paste.debian.net/131221/ < this'd be my current connection
[21:56:23] <xxtjaxx> uh serverOptions
[22:19:28] <GothAlice> cheeser: "w:0 is verboten" Ha; love it. Also not exactly true… ref: my request/response logging, whose performance would be hosed if I set it to w:1 ;^)
[22:20:00] <GothAlice> w:0 is definitely not good for diagnosing issues, OFC.
[22:21:41] <joannac> well, w:0 means you never know if the write succeeded or not
[22:22:08] <joannac> so i guess if you don't care if your data is actually in the database... then I guess it's okay?
[22:22:11] <GothAlice> :nods: w:0 is occasionally acceptable in logging situations where throughput matters more than the occasional lost record.
[22:22:56] <GothAlice> (Or the data can be otherwise rebuilt.)
[23:41:33] <cheeser> GothAlice: no. not *exactly* true. but if you know enough to know that you know enough to know when it's appropriate. ;)
[23:58:41] <GothAlice> .i ju'o.uo di'u ¬_¬ Can't figure out how to really say what I mean.