[01:13:58] <joannac> the syntax cheeser gave you looks fine
[01:14:09] <ParkerJohnston> it is not updating the DB
[01:14:21] <ParkerJohnston> caseInformation.updateOne(caseInformation.find(eq("history._id", id)).first(), new BasicDBObject("$set", new BasicDBObject("history.$.active", true)));
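For reference, a plain shell spelling of that update, passing the "history._id" query straight into the call so the positional $ has an array condition to bind to (a sketch; names are taken from the snippet above, and `id` is assumed to be in scope):

    // match on the embedded _id and flip the matched element's flag in one call
    db.caseInformation.update(
        { "history._id": id },
        { $set: { "history.$.active": true } }
    )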
[04:40:55] <Jonno_FTW> does the geospatial support anything but wgs84?
[05:18:44] <keeger> if i have a document that holds a reference to another document, can i reference the collection of the reference like b[objectId] ?
[05:19:11] <georgij> Hi, I am using mongoose. I want to be able to do something like this with level being a ref. Elem.where('level').equals(5/*id ref of 5*/).where('level.parent').equals(10).exec(foo) /*it populates level when querying an inside property of level*/
[05:26:23] <keeger> let's say i want to grab the document for the city at cities[0], can i do: cities[cityObjectId] to get it, or do I need to do cities.findOne(cityObjectId)
[05:26:47] <joannac> keeger: erm, what language / ODM?
[05:27:12] <keeger> i was thinking mongo shell atm, i'll be using golang driver for this
[05:27:26] <joannac> in the shell, no. you have to do a findOne()
[05:27:55] <joannac> in general, even if there was a shortcut, it would still be a findOne() underneath
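In shell terms, following a reference is simply a second query. A minimal sketch with made-up collection names:

    // fetch the parent document, then resolve the first city reference by hand
    var doc = db.places.findOne({ name: "somewhere" });
    var city = db.cities.findOne({ _id: doc.cities[0] });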
[05:28:49] <joannac> georgij: how mongoose populates stuff, I have no idea
[05:28:54] <keeger> i am trying to lay out my schema, but i have some arrays of common data. i am considering the pros/cons of putting those in their own collections
[05:29:11] <joannac> you could try #mongoosejs, or you could wait to see if someone who knows mongoose can help
[05:29:29] <georgij> joannac: Thanks, I always manage to get into the wrong channel :)
[05:29:31] <joannac> keeger: pros - can't hit document limit. cons: more queries
[05:30:17] <keeger> joannac, i don't think i'll hit the doc limit really, but worried about doing 2 phase commits all over the place
[05:32:00] <keeger> and also doing a lot of queries hehe
[05:36:59] <keeger> does mongo support a concept of a view? like where I could query a document, and follow references ?
[06:10:32] <arussel> what do I put as _id in aggregate $group if I want to sum all documents ?
[06:11:08] <arussel> I don't really want to group, I just want a sum of a field of all matched documents
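(The usual answer here is _id: null, which collapses every matched document into a single group. A sketch with made-up names:)

    db.orders.aggregate([
        { $match: { status: "shipped" } },                     // whatever the match criteria are
        { $group: { _id: null, total: { $sum: "$amount" } } }  // one group, one running sum
    ])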
[10:25:27] <pamp> I need to rename a field, this field is inside an array, for example: P[i].V[i]._t, but in most cases the field V is not an array, example P[i].V
[10:25:48] <pamp> I need to rename the field "V", only when this is an array
[10:26:10] <pamp> http://dpaste.com/3415GT4 I created this method
[10:27:01] <pamp> but when i check whether the field V is an array and it is not, I get an error: "TypeError: d.P.v has no properties (shell):8"
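A guarded version of that loop, assuming the goal is to rename V (here to "W") only where it is actually an array; Array.isArray skips the scalar case that was throwing. The collection name and the target field name are placeholders:

    db.coll.find({ "P.V": { $exists: true } }).forEach(function(d) {
        var changed = false;
        d.P.forEach(function(p) {
            if (Array.isArray(p.V)) {   // only rename when V is an array
                p.W = p.V;
                delete p.V;
                changed = true;
            }
        });
        if (changed) { db.coll.save(d); }
    });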
[11:43:23] <d0x> hi, i need to process a few GB with a map reduce within seconds. on a single node it's too slow, so when i shard it over two nodes and choose a shard key that splits it equally 50:50, its runtime should be almost halved, right?
[12:14:30] <boolman> Will there be any downtime if I add an arbiter node ( rs.addArb(HOST) ) to a two-node PRIMARY-SECONDARY cluster? e.g. a re-election
[13:16:22] <Cygn> Hey everyone, i am trying to count the number of values in a subarray of my documents, but right now i can only figure out how to count the documents, does someone have a hint for me?
[13:16:31] <Cygn> http://pastie.org/9995819 < Code and Data Example
[13:17:32] <Cygn> Right now it returns 2, but should return 3 (since the criteria matches 3 items in the sales subarrays of the data)
[13:20:01] <StephenLynx> haven't used it enough to be sure.
[13:22:27] <Cygn> StephenLynx: thx :) i will try that right now (and also find out what happens if empty)
[13:26:21] <Tomasso> I'm running mongo from the binary tgz. I attempt to connect with robomongo. The server says connection accepted, and robomongo says unable to connect... what could be wrong?
[13:26:40] <Cygn> StephenLynx: This works beautifully!
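(StephenLynx's suggestion itself isn't captured above, but the usual trick is to $unwind the subarray so the $match counts elements rather than documents. A sketch with a made-up criterion:)

    db.coll.aggregate([
        { $unwind: "$sales" },                          // one document per array element
        { $match: { "sales.origin": "web" } },          // the per-element criterion
        { $group: { _id: null, count: { $sum: 1 } } }   // count matching elements
    ])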
[13:27:46] <StephenLynx> Tomasso can you connect with the CLI command?
[13:27:56] <StephenLynx> could it be authentication issues?
[13:28:36] <Tomasso> StephenLynx: mmm i never setup authentication on server, at least yet..
[13:29:01] <StephenLynx> first, is the connection remote or local?
[13:29:16] <StephenLynx> because by default mongo is bound to 127.0.0.1 and accepts local connections only
[13:29:36] <Tomasso> remote.. I also tried ./mongod --bind_ip 0.0.0.0
[13:29:58] <StephenLynx> if you are going to do that, make sure you set up authentication
[13:30:07] <StephenLynx> otherwise anyone will be able to connect to your db.
[13:30:19] <StephenLynx> and have you restarted the server after that?
[13:31:45] <Tomasso> mm yes.. also tried to check authentication in robomongo, without user or password.. and same result..
[13:32:29] <Tomasso> and on server side I dont get any errors.. just connection accepted
[13:34:06] <StephenLynx> if you have unbound it from localhost and restarted, then I don't know.
[14:11:32] <d0x> hi, i need to process a few GB with a map reduce within a "short" amount of time. On a single node it's too slow, so when i shard it over n nodes and choose a shard key that splits it equally, its runtime should be divided by almost "n", right?
[14:20:15] <cheeser> d0x: if you "need" mapreduce you should use hadoop. alternately, aggregation might work for (dunno your processing needs) and would be faster.
[14:36:31] <StephenLynx> i know for a fact that sharding has some limitations, like when you do a group operation
[14:36:40] <g-hennux> is there a way to run mongod so that it will only initialize the database and then exit?
[14:37:10] <Rickky> I'm trying to pipe the output of a mongodump command directly into mongorestore in this format " mongorestore --username user -ppass --db DB_B --collection collection <(mongodump --username user2 -ppass --db DB_A --collection collection --out - 2>/dev/null | tail -n+2)" as suggested in https://jira.mongodb.org/browse/SERVER-4345
[14:37:23] <Rickky> the output I'm getting though: "connected to: 127.0.0.1 don't know what to do with file [/dev/fd/63]"
[14:39:01] <g-hennux> like, i need one command (for use in a Dockerfile) that will initialize the data in /var/lib/mongodb and then exit with exit code 0
[14:49:51] <pamp> is it possible to open the mongo shell when mongod is already running as a service?
[15:13:54] <pamp> I want to see in the shell what's happening on the server, instead of watching the log file, but I can't stop the instance (mongod)
[15:15:05] <GothAlice> pamp: Tail the oplog. Ref: http://docs.mongodb.org/manual/core/replica-set-oplog/ and https://github.com/cayasso/mongo-oplog as a simple way to interrogate the data stream.
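(A quick way to peek at it from the shell, assuming a replica set, since the oplog only exists there:)

    // the oplog is a capped collection in the "local" database;
    // $natural: -1 lists the most recent operations first
    db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(5)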
[15:15:30] <d0x> cheeser: With hadoop I'd need to maintain a 2nd infrastructure that loads all the data and processes it. And the Aggregation framework doesn't have enough features (no user-defined functions, not enough string methods) and our application (sadly) doesn't precalculate them (like extracting the domain out of a URL)
[15:16:36] <d0x> But is my initial assumption right that the MR speed over all documents will be divided by N when using N shards?
[15:16:50] <d0x> Also, will it scale almost linearly?
[15:17:12] <GothAlice> d0x: Pro tip: (de)normalize your data properly and things become much easier. Pre-aggregation is freakishly awesome, and lets me produce a dashboard (like http://cl.ly/image/2W0a2D3I370F) running over 200 individual queries that still generates in < 100ms.
[15:17:19] <GothAlice> d0x: Let me dig up a link for you on parallelization of map/reduce vs. aggregates.
[15:17:31] <GothAlice> d0x: http://pauldone.blogspot.ca/2014/03/mongoparallelaggregation.html here you go
[15:19:13] <d0x> GothAlice: I use the MR job to preaggregate the data (like i said, extracting the domain out of a string url, which can't be done by the Aggregation framework)
[15:20:20] <d0x> You say the "application" should always write the data in a proper format. But that is not possible here... It has its schema, which is optimised for daily production
[15:20:52] <d0x> and now i need to transform this data (pre-aggregation etc.) to make aggregation queries run
[15:21:40] <d0x> And going to the boss saying i need another hadoop cluster for this is not that nice :(. Because of that I thought i could utilize our mongodb infrastructure
[15:24:00] <GothAlice> d0x: The article I linked goes into some detail on the why of map/reduce execution not really being parallel out-of-the-box. Even when sharded, each shard needs to run the map/reduce itself, then the query router takes the results from each shard and further reduces. This means you're still effectively waiting for all data to get processed. You can gain better efficiency by chopping up the workload beforehand and running multiple "jobs"
[15:25:35] <GothAlice> (This goes for both map/reduce _and_ aggregates.)
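(A sketch of the chopping-up idea, with made-up names: find a split point, then let each client run its own ranged pipeline concurrently.)

    // pick a split point near the middle of the collection
    var mid = db.events.find({}, { _id: 1 })
                       .sort({ _id: 1 })
                       .skip(Math.floor(db.events.count() / 2))
                       .limit(1).next()._id;
    // job 1, run from one client:
    db.events.aggregate([ { $match: { _id: { $lt: mid } } } /* ...rest of pipeline... */ ]);
    // job 2, run from another client at the same time:
    db.events.aggregate([ { $match: { _id: { $gte: mid } } } /* ...rest of pipeline... */ ]);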
[15:34:45] <d0x> When the map is distributed across all shards containing the data, then we have it (i think). Because currently I do all the magic in the "map" and the reduce returns the first value only: http://pastebin.com/FsSQxer4
[15:35:02] <d0x> But I better read and understand now all the links you gave me
[15:41:58] <NET||abuse> hey guys, just doing a quick mongo primer, looking to do a restore of backed-up data. I don't need to shut down the mongodb server to do restores from mongodump, right?
[15:42:27] <StephenLynx> any word on the ubuntu repository? I'm using mongo's version, I guess, not the default one.
[15:42:29] <GothAlice> NET||abuse: There are two approaches: offline restore, and online restore. In an offline restore mongod must not be running, and mongorestore directly writes to the on-disk stripes.
[15:42:45] <GothAlice> NET||abuse: In online mode, it simply connects to a running mongod and dumps the data back in using standard wire protocol commands.
[15:43:58] <GothAlice> In both you can choose to write the data to a database other than the source database. With online mode this allows you to easily restore isolated snapshots which are separate from the general production database.
[15:46:10] <StephenLynx> nvm, I had to setup my repository information
[15:46:48] <NET||abuse> if i have 3 servers, and do a restore on the master (presumably that's where you always want to do your restore) it gets sync'd to the secondaries?
[15:47:13] <NET||abuse> i guess if you're doing it using standard wire protocol then of course it will.
[15:47:36] <NET||abuse> is there a page in the docs for doing an offline mode restore?
[15:47:49] <GothAlice> NET||abuse: Correct. I do not believe mongorestore writes an oplog when restoring in offline mode, so the former secondaries will suddenly need to re-stream the entire dataset on next start.
[15:48:07] <NET||abuse> yeh, i figured that's true,,
[15:48:53] <NET||abuse> if i stop the primary, wipe out all the /data/mongodb/* files, and copy the backup files straight into place, restart that master, will i have to reconfigure some things? replica set nodes or things?
[15:50:00] <GothAlice> NET||abuse: Huh, I can't seem to find it in the online manual for mongorestore, but the option you're looking for is --dbpath
[15:50:10] <GothAlice> (That switches to offline mode, writing to stripes specified by the given path.)
[15:50:37] <NET||abuse> GothAlice: so i should stop the mongodb server first then do that?
[15:50:46] <GothAlice> NET||abuse: Also, mongodump dumps can't be restored by simply swapping them into place.
[15:51:32] <GothAlice> NET||abuse: Offline mode restore will be faster than online. Downside: each replica secondary will need to completely re-sync rather than trying to follow along during the online restore.
[16:11:55] <cheeser> d0x: you can use hadoop directly against mongo if you want.
[16:12:12] <cheeser> you'd still have that second infra but you wouldn't have to shuttle data around at least
[16:15:55] <d0x> cheeser: You mean instead of HDFS it uses mongodb? And on all mongo shards i could install a tasktracker?
[16:47:42] <GothAlice> Can't wait until work is over so I can play with the 3.0.0 release. :3
[17:05:14] <dcuadrado> the best way to upgrade to wiredtiger is to upgrade the secondary instances first, then step down and upgrade the master, right?
[17:05:34] <dcuadrado> congrats for the new release btw
[17:06:54] <Derick> dcuadrado: yes, that's the best procedure
[17:07:02] <Derick> dcuadrado: do test it in dev/staging first though!
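(Roughly, per node: stop it, clear its dbpath, restart with --storageEngine wiredTiger, and let it initial-sync; then hand the primary off and repeat there. The only shell step in that dance is the handoff:)

    rs.stepDown(60)   // the old primary stays ineligible for re-election for 60 seconds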
[17:10:53] <Cygn> Can i address attributes of a subarray in a projection? e.g. array.attribute or array['attribute'], if i want only attribute to be returned?
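(Yes; dot notation in a projection does this. A sketch reusing the names from the question, with a made-up collection:)

    // returns only "attribute" inside each element of the "array" field
    db.coll.find({}, { "array.attribute": 1, _id: 0 })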
[17:11:09] <dcuadrado> Derick: are you gonna make wiredtiger the default engine for 3.1?
[17:15:28] <Derick> dcuadrado: for 3.2, yes, I think that's the plan
[17:29:49] <SpartanWarrior> hello guys, I tried converting my standalone mongod to a replica set, added the oplog size and replset name to mongod.conf and when restarted, I lost all my data! the /data directory however has some files with my db name, any hints?
[17:44:18] <kbiddyBoise> So, just updated from 2.6.5 to 2.6.8, and now mongo isn't accepting connections, server log just spits out "pthread_create failed: errno:11 Resource temporarily unavailable"
[17:44:44] <kbiddyBoise> any ideas would be much appreciated
[17:45:57] <Cygn> Is there also a possibility to select an attribute of a subarray when using distinct? My documents have an attribute sale, which contains an array that has multiple entries. I need to get every distinct attribute origin of all subarrays "sales" of all documents… but collection.distinct('sales.origin') seems not to be the right way to handle this.
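(distinct('sales.origin') should traverse the array, but the aggregation spelling of the same question can be easier to debug and extend. Names follow the question:)

    db.coll.aggregate([
        { $unwind: "$sales" },
        { $group: { _id: "$sales.origin" } }   // one result document per distinct origin
    ])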
[17:50:48] <JamesHarrison> okay, so here's a fun question - I'm seeing exceptionally slow responses on a newly migrated 3.0 WiredTiger install where the skip value is high (>90,000, for instance) in an otherwise simple query (single field equivalence query, single field order by)
[17:51:42] <JamesHarrison> is this a known issue, or am I doing something wrong? (Assuming that the skip field is there to be used)
[17:55:12] <GothAlice> JamesHarrison: Skip operations are O(log n) at best, requiring the database to traverse a B-Tree index. Worst-case skip is an O(n) problem. In general it is far better to re-query with a natural offset (i.e. _id > previous_page[-1]._id) than to use a real skip.
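(A sketch of the difference, with made-up names; lastId is the _id of the final document on the previous page:)

    // skip-based paging: cost grows with the offset
    db.posts.find({ topic: t }).sort({ _id: 1 }).skip(90000).limit(20)
    // range-based paging: stays an index seek at any depth
    db.posts.find({ topic: t, _id: { $gt: lastId } }).sort({ _id: 1 }).limit(20)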
[17:55:39] <JamesHarrison> GothAlice: yeah, reading https://jira.mongodb.org/browse/SERVER-13946 it looks like I'm going to have to do that for now at least
[17:55:59] <JamesHarrison> this seems to have gotten _much_ worse though, queries in MMAPv1/2.6.4 were <1s, are now >50s
[17:56:28] <GothAlice> You do seem to be using an unusually large offset.
[17:56:43] <JamesHarrison> The application is pagination on a forum - some topics have _lots_ of replies
[17:57:21] <GothAlice> JamesHarrison: My forum software does one of those "infinite scrolling" things, it uses _id $gt range querying to fetch batches of results.
[17:58:19] <JamesHarrison> I'll have to refactor to do something similar, aye - the pagination is done at present by a 3rd-party library (kaminari, in Ruby) so that's where that query is coming from
[17:59:09] <StephenLynx> one of these days I'm going to study ruby enough to bash it like I do with python.
[17:59:36] <JamesHarrison> all I'm actually concerned about is the fact that MMAPv1/2.6.4 appears to be 50 times faster than WT/3.0 on this particular use-case, and if that's considered acceptable or if I should raise a ticket
[17:59:42] <JamesHarrison> I know I can engineer around it
[18:00:26] <GothAlice> Certainly raise the warning flags by opening a ticket. 50x reduction in speed should be classified "unusual".
[18:00:38] <StephenLynx> I would raise a ticket. if the change was intended, someone will expose the reason.
[18:00:47] <JamesHarrison> sorry, 500 times faster, I misread my stats, was managing 100ms responses prior to that
[18:07:41] <StephenLynx> there are 3 documentation pages that I keep for offline reference:
[18:08:03] <StephenLynx> query and projection operators, and aggregation operators
[18:09:37] <StephenLynx> I have the manual pdf too, but its pretty useless
[18:09:44] <Cygn> StephenLynx: Could be a good idea. Anyway, day after day using mongo i kinda get it now, just from time to time something isn't clear to me.
[18:10:20] <StephenLynx> it will be a while until you memorize all the operators and how they work, more or less. especially aggregation, it has lots of operators.
[18:55:35] <Cygn> StephenLynx: You're right, i just printed a cheatsheet :)
[19:02:53] <cobra-the-joker> Hey there every one , how can i make select * from <table> in mongoDB ? db.collection.find ( {} ) .... where do i write the collection name ?
[19:03:34] <rkgarcia> cobra-the-joker, the collection is created on the fly
[19:05:11] <keeger> i'm looking at cluster strategies with mongo. if i start off with a replicate set, how hard is it to convert to a sharded cluster?
[19:05:35] <cobra-the-joker> rkgarcia: i created a collection in a .js file that was run on the server and now i want to make "select * from <that collection> "
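(In the shell, the collection name goes right after "db.". With a made-up name:)

    db.mytable.find({})                      // the equivalent of SELECT * FROM mytable
    db.getCollection("mytable").find({})     // same thing, handy for awkward names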
[19:06:28] <StephenLynx> cobra-the-joker node.js or io.js?
[19:09:25] <StephenLynx> ok, then you need a pointer to the collection: var collection = db.collection('yourCollectionName');
[19:10:43] <StephenLynx> notice that this step will work whether the collection exists or not
[19:11:12] <StephenLynx> the collection will be created if it doesn't exist and deleted if it remains empty after you work with it
[19:11:36] <StephenLynx> after that you call functions on the collection variable, such as find or aggregate
[19:11:55] <StephenLynx> for find you will need to pass two object parameters and a function to be used as callback
[19:12:08] <StephenLynx> the first object parameter is the query one, the second the projection one
[19:12:44] <StephenLynx> I think it handles missing parameters intelligently, but I am not sure to what extent.
[19:13:07] <cobra-the-joker> StephenLynx: ok checking now
[19:13:40] <StephenLynx> after the operation is completed, the callback will be executed, your function must have two parameters for find: the first one will be the error and the second a cursor to the found results.
[19:14:02] <StephenLynx> so you can check if the error exists, if it doesn't, you know the operation succeeded.
[19:14:47] <StephenLynx> I don't know how to work with cursors because I always use aggregate.
[19:16:03] <StephenLynx> the main difference in the results is that aggregate always returns a list with all results and find returns the cursor. so if you need to iterate through many objects, aggregate will eat your RAM.
[19:16:27] <StephenLynx> but if you need to output this data anyway, you can use aggregate without this issue.
[19:16:43] <StephenLynx> on the other hand, aggregate makes it easier to perform secondary operations, such as sort and limit.
[19:17:04] <StephenLynx> with find you need to call additional functions in your code that require callbacks.
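(Pulling that together, a minimal sketch with the Node.js driver of that era; the connection string and names are made up, and toArray drains the cursor into a plain list:)

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/mydb', function(err, db) {
        if (err) { throw err; }
        var collection = db.collection('yourCollectionName');
        // query object first, projection object second
        collection.find({ active: true }, { name: 1 }).toArray(function(err, docs) {
            if (err) { throw err; }   // the operation failed
            console.log(docs);        // every matched document
            db.close();
        });
    });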
[19:17:23] <StephenLynx> 2 pieces of advice since you are using node:
[19:17:34] <StephenLynx> 1- change to io.js. node.js is obsolete.
[19:17:54] <GothAlice> StephenLynx: Your definition of obsolete is suspect.
[19:17:56] <StephenLynx> 2- use these coding standards: https://github.com/felixge/node-style-guide
[19:18:29] <StephenLynx> python is obsolete for web.
[19:19:44] <GothAlice> Yup, totally why Facebook is using it to power their realtime stuff. ("Tornado" framework.) Amongst others. ;)
[19:21:13] <keeger> i was going to use node for my project
[19:22:30] <keeger> GothAlice, is changing a cluster from replica set to sharded difficult in mongo?
[19:22:54] <StephenLynx> GothAlice where in that it says facebook uses it?
[19:23:02] <StephenLynx> all information I get points to them using C++
[19:23:19] <GothAlice> StephenLynx: … search for "Facebook" on the page.
[19:23:37] <StephenLynx> "Tornado is one of Facebook's open source technologies. "
[19:23:47] <StephenLynx> which leads to fb which I don't have an account
[19:23:50] <GothAlice> keeger: Generally one keeps the replica set, and adds another replica set as a second shard. This way the data is still redundantly stored.
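(The shell side of that, with made-up hosts: the existing replica set becomes the first shard and a new set becomes the second, all via mongos:)

    sh.addShard("rs0/host1:27017,host2:27017")
    sh.addShard("rs1/host3:27017,host4:27017")
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.things", { someKey: 1 })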
[19:27:33] <keeger> GothAlice, i ask because i think a replica set will work for what i want, but if the write performance bogs down, i want to be able to convert that set to shards. preferably without adding more servers
[19:28:08] <GothAlice> keeger: Sharding != automatic parallelization and improvement in performance.
[19:28:28] <keeger> GothAlice, i thought one big benefit to sharding was increasing write throughput capability
[19:29:12] <GothAlice> There are things you can do to your data (bad sharding indexes, for example) that remove any benefit of sharding. I.e. if your key determines that the next 1000 records inserted need to go to shard A instead of being balanced among shards.
[19:29:40] <StephenLynx> from a facebook engineer: "Facebook doesn't use Tornado for anything other than FriendFeed (who built the technology) at the moment."
[19:33:19] <StephenLynx> now they use this http://en.wikipedia.org/wiki/HipHop_Virtual_Machine
[19:33:50] <GothAlice> Then some other developers came by and used Python to implement PHP and doubled HHVM's performance. ¬_¬ HippyVM FTW.
[19:34:58] <StephenLynx> I can't find any reference to python on the wikipedia page, do you have any source for that?
[19:35:36] <GothAlice> StephenLynx: http://hippyvm.com/ — That's 'cause it's a separate project.
[19:36:10] <GothAlice> (It's PHP written in Python.)
[19:36:43] <StephenLynx> first: facebook still doesn't use python. that was what you claimed. second: if it's twice as fast, why didn't facebook migrate to it?
[19:37:05] <GothAlice> StephenLynx: If ever we meet at a conference, I'll introduce you to Yannick, my Python buddy who works at Facebook doing Python. ;)
[19:37:44] <StephenLynx> facebook develops a lot of stuff, mostly because they merge different companies. it doesn't mean that they use it at facebook.
[19:37:46] <GothAlice> StephenLynx: You'd have to ask them. Likely there's a QA process for that multi-gigabyte executable of theirs that's rather strict.
[19:38:10] <StephenLynx> and it's pretty clear by now that they don't use python.
[19:38:57] <StephenLynx> " At present it does not include a reasonable web server integration, so it's not usable for use in production in the current form. " :^)
[19:45:24] <GothAlice> StephenLynx: Final bit dug out of my open tabs: http://www.slideshare.net/pallotron/python-at-facebook-40192297 and https://www.quora.com/Why-did-Quora-choose-Python-for-its-development (there's irony in one of the links you used itself being powered by Python) Further Google-fu is up to you.
[19:45:54] <GothAlice> Alternatively, get the dictionaries to alter the definition of "obsolete" to better match your usage. ;)
[19:46:17] <StephenLynx> what is the relation of quora using python to the discussion?
[19:48:02] <GothAlice> T'was re: you supplying a "how does facebook use tornado" link… from quora… while having a discussion about Python being obsolete in your eyes.
[19:48:52] <StephenLynx> I don't get it. I never said that quora didn't use python, I said that facebook doesn't.
[19:49:14] <StephenLynx> and I got it from a facebook engineer.
[19:49:37] <StephenLynx> those slides don't have context because I don't know what the presenter said along with them.
[19:49:59] <StephenLynx> they indicate facebook does have some relation with python, but they don't show it being used in the facebook application.
[19:52:40] <StephenLynx> keep in mind my central point is that python is obsolete >for web<
[19:53:11] <GothAlice> Disproved by one of the links you provided. Really, what's your definition of "obsolete"?
[19:53:26] <StephenLynx> which link that I provided disproves that?
[19:54:00] <StephenLynx> I understand something obsolete as being inferior in every way to one or more newer technologies.
[19:56:26] <StephenLynx> inferior is relative, because it may be inferior in some aspects but not in all aspects.
[19:56:40] <medmr> python isnt obsolete for web or for any particular domain
[19:57:14] <keeger> well i don't know how you were going to argue it was obsolete, and you just said above that you considered it to be inferior to newer tech
[19:57:46] <GothAlice> Indeed. Also, sorta by definition, a general purpose programming language can do anything another general purpose programming language can do, so then it comes down entirely to needs analysis for any given problem. StephenLynx: Quora built their site on Python, and the link I provided has one of the developers describe the rationale behind that decision. (So still in general use, and not out-of-date.)
[19:58:07] <medmr> and convention isnt superiority either
[20:00:15] <medmr> StephenLynx: way too shallow a look to be drawing the conclusions you are
[20:00:28] <medmr> python with gevent is fast, comparable to node for concurrency
[20:00:31] <GothAlice> StephenLynx: Weird that Python is "as slow as PHP" yet a PHP interpreter written in PHP is 10x faster than the default runtime. I have to agree with medmr, here.
[20:00:41] <GothAlice> s/written in PHP/written in Python/
[20:01:19] <GothAlice> StephenLynx: You simply do not have the knowledgebase about Python needed to make the broad generalizations you are. (Concurrency is a solved problem in my haus, so, yeah.)
[20:01:54] <GothAlice> Yeesh, and that last SO link is about Django. Django isn't exactly a shining beacon of good software. :/
[20:02:22] <GothAlice> Django + Celery = a good way to kill your project part-way into development.
[20:03:04] <keeger> i have a friend that codes in python, and he says he likes it because it runs everywhere, apparently even on android and iOS
[20:03:25] <StephenLynx> that is true for any interpreted language.
[20:03:38] <GothAlice> keeger: Indeed. Statically linked VMs are a common thing for game scripting, most popular being Lua.
[20:03:43] <medmr> django is what i would call a good attempt at streamlining the development of CMS type apps... but it doesn't address the grittier problems of scalability
[21:24:50] <daidoji> medmr: here's an easier way if your collection doesn't change very quickly
[21:25:17] <daidoji> use the agg framework to get the count you want, $out into a new collection, drop the old collection, rename the new collection to the old name, voilà!
[21:25:24] <daidoji> but they'll use different obj_ids
[21:25:39] <daidoji> you could use mapReduce to do something similar as well
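(Spelled out with made-up names, mirroring the recipe above:)

    db.events.aggregate([
        { $group: { _id: "$type", count: { $sum: 1 } } },
        { $out: "counts_new" }                      // results land in a new collection
    ]);
    db.counts.drop();                               // drop the old collection
    db.counts_new.renameCollection("counts");       // swap the new one into place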
[23:04:31] <GothAlice> Alas, http://stackoverflow.com/questions/41207/javascript-interactive-shell-with-completion < doesn't seem to be much in the way of "terminal-native" solutions to your problem.
[23:05:08] <agenteo> how do you guys fiddle with long queries? copy paste from vim/another editor?
[23:05:14] <GothAlice> Rhino IDE's shell seems to support it, as of: http://blog.norrisboyd.com/2009/03/rhino-17-r2-released.html (search for "shell" on this page)
[23:05:25] <agenteo> @GothAlice thanks I’ll check it out, I tried mongo-hacker but no highlight on typing
[23:06:27] <GothAlice> agenteo: I use the ipython enhanced interactive Python shell, which has highlighting, integrated clipboard support, on-paste reformatting, workbooks, parallelism up to and including cloud distribution of tasks/function calls, and integrated performance testing and scientific tools, with pymongo and the MongoEngine ODM.
[23:07:22] <GothAlice> (It has so much it's a 10MB+ package…)
[23:10:44] <GothAlice> Object-literal syntax is JSON-compatible, including (with appropriate imports) extended types like ObjectId, too, which makes copying and pasting chunks of examples really easy.
[23:15:06] <GothAlice> Topic is up-to-date and everything this time. :D
[23:15:14] <agenteo> @GothAlice look “Have you tried what we have in current versions – 2.0.3 and 2.1.0? It highlights the matching brace as you move the cursor left and right. It highlights only the matching one when the cursor is over a brace/bracket/parenthesis “
[23:17:31] <agenteo> are you on OSX or Linux? I am on OSX 10.10.1, in tmux and I see no matching brackets
[23:17:31] <GothAlice> fewknow: I'm a "show me the numbers" kinda person. :3 I'd love to know if/when/where you will publish your results.
[23:17:38] <keeger> but my stupid cat is on the mouse
[23:17:46] <GothAlice> keeger: Well, I'll get to finally deprecate my own compression implementation. ¬_¬
[23:17:51] <fewknow> GothAlice: will let you know....not my team...but I want to see the numbers too
[23:18:38] <GothAlice> agenteo: I'm on OSX using brew-installed MongoDB 2.6.7, SSL enabled using stock Terminal.app, no screen multiplexer.
[23:19:07] <GothAlice> (Under a heavily customized zsh configuration: https://github.com/amcgregor/snippits/tree/zsh#readme that also includes syntax highlighting and friends.)
[23:21:13] <agenteo> strange… same brew installed here, tested on stock terminal and no matching. We’re talking about a subtle bolded font when you type or hover a } right? Can anybody else confirm this matching parenthesis?
[23:21:22] <agenteo> that’s exactly what I am after
[23:35:56] <keeger> i notice that Go is left off the driver compatibility charts
[23:36:03] <keeger> yet the tools for 3.0 were written in Go?
[23:38:14] <GothAlice> keeger: Alas, I lack knowledge on that particular subject. Go isn't a focus of mine at the moment.
[23:38:43] <GothAlice> I'm sure driver compatibility deficiencies will be addressed rapidly now that 3.0.0 has been released.
[23:39:23] <GothAlice> (Votes on JIRA tickets do help prioritize where the effort goes… I hope. ;)
[23:39:25] <keeger> i think it's a doc thing heh, pretty sure if ops manager was written in Go and updated for 3.0...the driver works
[23:41:27] <keeger> thinking it's just a doc issue. anyways, it's not clear to me, but is wiredtiger supposed to be faster than the mmap engine for high writes?
[23:42:14] <GothAlice> In cases where the write lock was a concern formerly, yet document-level locking is acceptable, certainly.
[23:42:40] <keeger> hmm, write lock as a concern..
[23:42:46] <GothAlice> Certain other patterns of use would seem to indicate degraded performance. I'll dig through the chat log for today to dig up the relevant section for you.
[23:46:44] <GothAlice> It's a good nick. A solid nick. A full set of personal pronouns, even.
[23:47:06] <keeger> always good to see Steelers fans on the net
[23:47:22] <JamesHarrison> keeger: ah, right. yes, that confusion is very helpful :)
[23:47:34] <GothAlice> JamesHarrison: Happens a lot? XP
[23:47:44] <JamesHarrison> every time there's a bloody game on my twitter melts
[23:48:18] <JamesHarrison> (I am also @JamesHarrison there, as opposed to the footballer slumming it at @jharrison9292 or something)
[23:48:30] <JamesHarrison> this is lost on many american football fans.
[23:49:14] <GothAlice> Aaaah. Sounds like the poor fellow with the @rogers handle… http://www.citynews.ca/2013/10/10/man-with-rogers-twitter-handle-bombarded-during-wireless-outage/