#mongodb logs for Tuesday the 3rd of March, 2015

[01:01:47] <ParkerJohnston> have another issue…trying to update an embedded document
[01:01:47] <ParkerJohnston> caseInformation.findOneAndUpdate(caseInformation.find(eq("history._id", id)).projection(new Document("history", 1).append("_id", 0)).first(), new BasicDBObject("$set", new BasicDBObject("history.phone", "5559874321")));
[01:02:02] <ParkerJohnston> caseInformation.findOneAndUpdate(caseInformation.find(eq("history._id", id)).projection(new Document("history", 1).append("_id", 0)).first(), new BasicDBObject("$set", new BasicDBObject("history.active", false)));
[01:02:22] <cheeser> history is an array of documents not a document itself
[01:02:34] <cheeser> http://docs.mongodb.org/manual/reference/operator/update-array/
[01:03:18] <ParkerJohnston> i am not adding to it..i am finding one and trying to update it..not in an array
[01:04:43] <cheeser> this is the same document you showed earlier? because the history field then was an array of documents.
[01:04:59] <ParkerJohnston> it is
[01:06:36] <ParkerJohnston> sorry for the newbie questions..just want to figure it out and finding a single object in the array should be all i need
[01:06:39] <ParkerJohnston> and then update it
[01:06:49] <ParkerJohnston> i come from a very deep SQL background
[01:07:50] <cheeser> read the link...
[01:08:02] <ParkerJohnston> i am
[01:09:52] <ParkerJohnston> i have it updating (replacing everything) not maintaining the value
[01:10:35] <cheeser> yes. you need something like 'history.$.active'
[01:11:32] <ParkerJohnston> caseInformation.findOneAndUpdate(caseInformation.find(eq("history._id", id)).projection(new Document("history", 1).append("_id", 0)).first(), new BasicDBObject("$set", new BasicDBObject("history.$.active", true)));
[01:11:39] <ParkerJohnston> have that…and it is failing
[01:12:18] <cheeser> i don't know the 3.0 API well enough yet to make sense of all that.
[01:12:39] <ParkerJohnston> so it finds the item i need and then goes to update it
[01:12:49] <cheeser> and it's 1am here. so i'm gonna shuffle off to bed.
[01:12:54] <ParkerJohnston> hahaha
[01:12:59] <ParkerJohnston> night thanks!
[01:13:03] <cheeser> good luck
[01:13:09] <ParkerJohnston> we will see what happens
[01:13:49] <joannac> ParkerJohnston: define "failing"
[01:13:58] <joannac> the syntax cheeser gave you looks fine
[01:14:09] <ParkerJohnston> it is not updating the DB
[01:14:21] <ParkerJohnston> caseInformation.updateOne(caseInformation.find(eq("history._id", id)).first(), new BasicDBObject("$set", new BasicDBObject("history.$.active", true)));
[01:14:26] <joannac> does it match a document?
[01:14:46] <ParkerJohnston> yes..it finds the proper id of the embedded doc
[01:15:04] <ParkerJohnston> and then goes to update and fails..should do two of them, but stops running after 1
[01:15:06] <joannac> um, what?
[01:15:22] <joannac> why do you have a find inside an update?
[01:15:40] <joannac> what language is this?
[01:15:40] <ParkerJohnston> not sure
[01:15:42] <ParkerJohnston> java
[01:15:55] <joannac> also, what ... what does "should do two of them, but stops running after 1" mean?
[01:16:04] <joannac> should do 2 documents, or 2 array entries
[01:16:12] <ParkerJohnston> two array entries
[01:16:16] <joannac> nope
[01:16:17] <ParkerJohnston> i am calling this method twice
[01:16:25] <joannac> oh, then maybe
[01:16:30] <ParkerJohnston> not looking to do a bulk
[01:16:38] <joannac> anyway, you're doing it wrong
[01:17:02] <joannac> what's updateOne() ?
[01:17:16] <joannac> nvm
[01:17:30] <ParkerJohnston> trying that or findOneandUpdate
[01:20:19] <joannac> caseInformation.update(new BasicDBObject("history._id", id), new BasicDBObject("$set", new BasicDBObject("history.$.active")
[01:20:27] <joannac> and close the brackets
[01:21:49] <joannac> the way the $ operator works is from the initial query part
[01:22:08] <joannac> if you do your own query and return the result, how will mongodb know which array entry you mean?
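A minimal mongo-shell sketch of the update joannac is describing, using the collection and field names from the discussion above; `id` stands in for the embedded document's ObjectId:

```
// The positional $ operator resolves against whatever array element the
// query part of the *same* update matched, which is why running a
// separate find() beforehand cannot work.
db.caseInformation.update(
  { "history._id": id },                    // query: match the array element here
  { $set: { "history.$.active": true } }    // $ points at the matched element
);
```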
[01:22:27] <ParkerJohnston> ok
[01:23:09] <ParkerJohnston> cant get that to work
[01:23:46] <joannac> caseInformation.update(new BasicDBObject("history._id", id), new BasicDBObject("$set", new BasicDBObject("history.$.active", true)))
[01:24:23] <ParkerJohnston> what version mongo?
[01:24:25] <ParkerJohnston> driver
[01:26:30] <joannac> ParkerJohnston: ?
[01:26:38] <ParkerJohnston> 3.0?
[01:26:52] <joannac> I have no idea what version of the java driver you're running
[01:26:57] <ParkerJohnston> still not working
[01:27:04] <ParkerJohnston> i am running 3.0 beta 3
[01:27:13] <joannac> why are you running a beta?
[01:27:18] <joannac> also, what does "not working" mean?
[01:27:33] <ParkerJohnston> it is not updating the value
[01:27:34] <joannac> it doesn't compile? it doesn't update? your computer melted?
[01:27:49] <ParkerJohnston> running beta as that is what is in RC
[01:28:03] <ParkerJohnston> and we are just starting this app, to save rewrite time
[01:28:27] <joannac> have you confirmed it matches something?
[01:28:58] <ParkerJohnston> yes..i am getting a matching history id of a value in the array
[01:29:19] <joannac> change the update to a find and pastebin the output
[01:29:32] <joannac> (and remove the actual update part as well)
[01:31:58] <ParkerJohnston> http://pastebin.com/FhtYwVun
[01:32:59] <joannac> how can you tell that's not updated?
[01:33:07] <joannac> both documents have active: true
[01:34:36] <ParkerJohnston> :FACE PALM:
[01:34:49] <ParkerJohnston> been at this for 28 hours…time to quit
[02:25:58] <xissburg> woo
[04:40:55] <Jonno_FTW> does the geospatial support anything but wgs84?
[05:18:44] <keeger> if i have a document that holds a reference to another document, can i reference the collection of the reference like b[objectId] ?
[05:18:48] <keeger> or do i have to use findOne
[05:19:11] <georgij> Hi, I am using mongoose. I want to be able to do something like this with level being a ref. Elem.where('level').equals(5/*id ref of 5*/).where('level.parent').equals(10).exec(foo) /*it populates level when querying an inside property of level*/
[05:19:42] <georgij> is this possible?
[05:21:32] <georgij> or do I have to make two separate queries? bwaaah, I don't wanna.
[05:23:29] <joannac> keeger: huh?
[05:25:30] <keeger> lets say i have a collection of states, and a collection of cities. if I do state: { cities: [ cityObjectId, cityObjectId ] }
[05:26:12] <georgij> sorry to clarify this is what I am trying to do:
[05:26:16] <georgij> Element.where('level').gte(req.query.range[0]).lte(req.query.range[1]).populate('level').where('level.chapter').equals(req.query.chapter).exec(foo);
[05:26:23] <keeger> let's say i want to grab the document for the city at cities[0], can i do: cities[cityObjectId] to get it, or do I need to do cities.findOne(cityObjectId)
[05:26:47] <joannac> keeger: erm, what language / ODM?
[05:27:12] <keeger> i was thinking mongo shell atm, i'll be using golang driver for this
[05:27:26] <joannac> in the shell, no. you have to do a findOne()
[05:27:55] <joannac> in general, even if there was a shortcut, it would still be a findOne() underneath
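keeger's states/cities example sketched in the shell; the collection names and the "Ohio" value are made up for illustration:

```
// A manual reference is just a stored ObjectId; dereferencing it takes
// a second query, as joannac says.
var state = db.states.findOne({ name: "Ohio" });
var firstCity = db.cities.findOne({ _id: state.cities[0] });
```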
[05:28:14] <keeger> ok that's the answer then :)
[05:28:49] <joannac> georgij: how mongoose populates stuff, I have no idea
[05:28:54] <keeger> i am trying to layout my schema, but i have some arrays of common data. i am considering the pros/cons of putting those in their own collections
[05:29:11] <joannac> you could try #mongoosejs, or you could wait to see if someone who knows mongoose can help
[05:29:29] <georgij> joannac: Thanks, I always manage to get into the wrong channel :)
[05:29:31] <joannac> keeger: pros - can't hit document limit. cons: more queries
[05:30:17] <keeger> joannac, i don't think i'll hit the doc limit really, but worried about doing 2 phase commits all over the place
[05:32:00] <keeger> and also doing a lot of queries hehe
[05:36:59] <keeger> does mongo support a concept of a view? like where I could query a document, and follow references ?
[06:10:32] <arussel> what do I put as _id in aggregate $group if I want to sum all documents ?
[06:11:08] <arussel> I don't really want to group, I just want a sum of a field of all matched documents
[06:12:38] <joannac> $group: {_id :1}
[06:12:59] <joannac> or anything that's a constant
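joannac's answer as a runnable sketch; "amount" is a hypothetical field name for the value being summed:

```
// A constant _id puts every matched document into a single group, so
// $sum runs across all of them.
db.collection.aggregate([
  { $match: { /* whatever filter selects the documents */ } },
  { $group: { _id: 1, total: { $sum: "$amount" } } }
]);
```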
[06:15:36] <arussel> joannac: thanks
[07:34:13] <Waheedi> attitude :)
[10:20:56] <pamp> hi
[10:25:27] <pamp> I need to rename a field. This field is inside an array, for example: P[i].V[i]._t, but in most cases the field V is not an array, for example: P[i].V
[10:25:48] <pamp> I need to rename the field "V" only when it is an array
[10:26:10] <pamp> http://dpaste.com/3415GT4 I created this method
[10:27:01] <pamp> but when I verify whether the field V is an array and it is not, I get the error "TypeError: d.P.v has no properties (shell):8"
[10:27:07] <pamp> how can I do that?
[10:27:12] <pamp> thanks in advance
[10:30:48] <pamp> Is it possible to verify with an "if" whether a field is an array, an object, or neither?
[10:35:15] <ams__> Is it OK for me to query for $lt "2012-11-10" on a string? i.e. look for date range without using ISODate
[10:44:19] <Sticky> ams__: I highly doubt it
[10:44:34] <Sticky> in fact I will just go with no
[10:46:03] <ams__> Sticky: It seems to work fine
[10:52:56] <Sticky> ams__: do your dates do "2012-11-01" or "2012-11-1" ?
[11:39:31] <ams__> Sticky: 01 over 1
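Why this appears to work: zero-padded "YYYY-MM-DD" strings happen to sort lexicographically in chronological order, so a plain string comparison lines up with the date range. A sketch ("date" is a hypothetical field name):

```
db.events.find({ date: { $lt: "2012-11-10" } });
// The zero-padding is what makes this safe: with unpadded days,
// "2012-11-9" would sort *after* "2012-11-10" and the range breaks,
// which is the concern Sticky raises above.
```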
[11:43:23] <d0x> hi, i need to process a few GB with a map reduce within seconds. on a single node it's too slow, so when i shard it over two nodes and choose a shard key that splits it equally 50:50, its runtime should be almost halved, right?
[12:14:30] <boolman> Will there be any downtime if I add an arbiter node ( rs.addArb(HOST) ) to a two-node PRIMARY-SECONDARY "kluster"? e.g. a re-election
[13:16:22] <Cygn> Hey everyone, i am trying to count the number of values in a subarray of my documents, but right now i can only figure out how to count the documents. does someone have a hint for me?
[13:16:31] <Cygn> http://pastie.org/9995819 < Code and Data Example
[13:17:32] <Cygn> Right now it returns 2, but should return 3 (since the criteria matches 3 items in the sales subarrays of the data)
[13:17:45] <StephenLynx> unwind.
[13:17:54] <StephenLynx> so you will have one document per array element.
[13:18:01] <StephenLynx> then you do the same as a regular document.
[13:18:57] <Cygn> StephenLynx: So i basically to 1.match 2. unwind 3. group/count ?
[13:19:03] <Cygn> *to=do
[13:19:25] <StephenLynx> yes, that would be faster than first unwinding and then matching.
[13:19:43] <StephenLynx> just keep in mind that it will give you zero documents if the array doesn't exist.
[13:19:49] <StephenLynx> if I'm not mistaken.
[13:20:01] <StephenLynx> haven't used it enough to be sure.
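StephenLynx's pipeline as a sketch against Cygn's paste; the match criteria are placeholders, since the actual fields live in the pastie:

```
db.collection.aggregate([
  { $match: { /* document-level criteria first, for speed */ } },
  { $unwind: "$sales" },                         // one document per sales element
  { $match: { /* element-level criteria, e.g. "sales.origin": ... */ } },
  { $group: { _id: null, count: { $sum: 1 } } }  // counts elements, not documents
]);
```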
[13:22:27] <Cygn> StephenLynx: thx :) i will try that right now (and also find out what happens if empty)
[13:26:21] <Tomasso> im running mongo from the binary tgz , I attempt to connect with robomongo. The server says connection accepted, and robomongo says unable to connect... what could be wrong?
[13:26:40] <Cygn> StephenLynx: This works beautifully!
[13:27:46] <StephenLynx> Tomasso can you connect with the CLI command?
[13:27:56] <StephenLynx> it could be authentication issues?
[13:28:36] <Tomasso> StephenLynx: mmm i never setup authentication on server, at least yet..
[13:29:01] <StephenLynx> first, is the connection remote or local?
[13:29:16] <StephenLynx> because by default mongo is bound to local connections only, on 127.0.0.1
[13:29:36] <Tomasso> remote.. I also tried ./mongod --bind_ip 0.0.0.0
[13:29:58] <StephenLynx> if you are going to do that, make sure you set up authentication
[13:30:07] <StephenLynx> otherwise anyone will be able to connect to your db.
[13:30:19] <StephenLynx> and have you restarted the server after that?
[13:31:45] <Tomasso> mm yes.. also tried to check authentication in robomongo, without user or password.. and same result..
[13:32:29] <Tomasso> and on server side I dont get any errors.. just connection accepted
[13:34:06] <StephenLynx> if you have unbound it from localhost and restarted, then I don't know.
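For reference, a rough sketch of the setup StephenLynx is warning about: create an admin user first, then bind to all interfaces with auth enabled. Credentials and paths are placeholders:

```
// From the mongo shell, before exposing the server:
use admin
db.createUser({
  user: "admin",          // placeholder credentials
  pwd: "secret",
  roles: [{ role: "root", db: "admin" }]
});
// Then restart bound to all interfaces *with* auth:
//   mongod --bind_ip 0.0.0.0 --auth --dbpath /data/db
```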
[14:11:32] <d0x> hi, i need to process a few GB with a map reduce within a "short" amount of time. On a single node it's too slow, so when i shard it over n nodes and choose a shard key that splits it equally, its runtime should be divided by almost "n", right?
[14:19:34] <cheeser> https://www.mongodb.com/press/mongodb-30-and-mongodb-ops-manager-now-generally-available
[14:20:15] <cheeser> d0x: if you "need" mapreduce you should use hadoop. alternately, aggregation might work for (dunno your processing needs) and would be faster.
[14:35:08] <Rickky> Morning
[14:36:16] <g-hennux> hi!
[14:36:31] <StephenLynx> i know for a fact that sharding has some limitations. like when you do a group operation
[14:36:40] <g-hennux> is there a way to run mongod so that it will only initialize the database and then exits?
[14:37:10] <Rickky> I'm trying to pipe the output of a mongodump command directly into mongorestore in this format " mongorestore --username user -ppass --db DB_B --collection collection <(mongodump --username user2 -ppass --db DB_A --collection collection --out - 2>/dev/null | tail -n+2)" as suggested in https://jira.mongodb.org/browse/SERVER-4345
[14:37:23] <Rickky> the output I'm getting though: "connected to: 127.0.0.1 don't know what to do with file [/dev/fd/63]"
[14:39:01] <g-hennux> like, i need one command (for use in a Dockerfile) that will initialize the data in /var/lib/mongodb and then exit with exit code 0
[14:49:51] <pamp> is it possible to open mongod from the shell when it is already running as a service?
[14:51:42] <cheeser> huh?
[14:52:03] <cheeser> you want to start the shell? or the database itself?
[14:57:53] <StephenLynx> this is a wild guess, but I suppose it would lock its files to prevent that.
[14:58:14] <StephenLynx> since two instances working with the same files would probably damage it.
[14:58:24] <StephenLynx> pamp
[15:00:55] <StephenLynx> assuming you are not talking about the CLI client.
[15:00:59] <StephenLynx> and the server itself.
[15:13:54] <pamp> I want to see in the shell what's happening on the server, instead of watching the log file, but I can't stop the instance (mongod)
[15:15:05] <GothAlice> pamp: Tail the oplog. Ref: http://docs.mongodb.org/manual/core/replica-set-oplog/ and https://github.com/cayasso/mongo-oplog as a simple way to interrogate the data stream.
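What "tail the oplog" looks like from the shell; this only works on a replica-set member, since a standalone mongod keeps no oplog:

```
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(5);  // five most recent operations
```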
[15:15:30] <d0x> cheeser: Using hadoop i'd need to maintain a 2nd infrastructure that loads all the data and processes it. And the Aggregation framework doesn't have enough features (no user-defined functions, not enough String methods) and our application (sadly) doesn't precalculate them (like extracting the domain out of a URL)
[15:16:36] <d0x> But is my initial assumption right that the MR speed over all documents will be divided by N when using N shards?
[15:16:50] <d0x> Will it scale almost linearly?
[15:17:12] <GothAlice> d0x: Pro tip: (de)normalize your data properly and things become much easier. Pre-aggregation is freakishly awesome, and lets me produce a dashboard (like http://cl.ly/image/2W0a2D3I370F) running over 200 individual queries that still generates in < 100ms.
[15:17:19] <GothAlice> d0x: Let me dig up a link for you on parallelization of map/reduce vs. aggregates.
[15:17:31] <GothAlice> d0x: http://pauldone.blogspot.ca/2014/03/mongoparallelaggregation.html here you go
[15:19:13] <d0x> GothAlice: I use the MR job to preaggregate the data (like i said, extracting the domain out of a string url, which can't be done by the Aggregation framework)
[15:20:20] <d0x> You say every time that the "application" should write the data in a proper format. But that is not possible here... It has its schema, which is optimised for daily production
[15:20:52] <d0x> and now i need to transform this data (preaggregation etc) to make aggregation queries run
[15:21:40] <d0x> And going to the boss saying i need another hadoop cluster for this is not that nice :(. Because of that I thought i could utilise our mongodb infrastructure
[15:24:00] <GothAlice> d0x: The article I linked goes into some detail on the why of map/reduce execution not really being parallel out-of-the-box. Even when sharded, each shard needs to run the map/reduce itself, then the query router takes the results from each shard and further reduces. This means you're still effectively waiting for all data to get processed. You can gain better efficiency by chopping up the workload beforehand and running multiple "jobs"
[15:24:00] <GothAlice> in parallel.
[15:25:35] <GothAlice> (This goes for both map/reduce _and_ aggregates.)
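The "chopping up the workload" idea, sketched: several mapReduce jobs run in parallel, each restricted by the `query` option to one slice of the data, merging into a common output collection. `mapFn`, `reduceFn`, and the id bounds are placeholders:

```
db.urls.mapReduce(mapFn, reduceFn, {
  query: { _id: { $gte: lowId, $lt: highId } },  // this job's slice of the keyspace
  out: { merge: "preaggregated" }                // all jobs merge into one collection
});
```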
[15:34:45] <d0x> When the map is distributed across all shards containing the data, then we have it (i think). Because currently I do all the magic in the "map" and the reduce returns the first value only: http://pastebin.com/FsSQxer4
[15:35:02] <d0x> But I better read and understand now all the links you gave me
[15:35:07] <d0x> Before i continue.
[15:35:20] <GothAlice> They include benchmark comparisons.
[15:35:22] <ehershey> 3.0!
[15:35:29] <GothAlice> 3.0 indeed.
[15:35:55] <GothAlice> :)
[15:36:02] <ehershey> hehe
[15:36:11] <GothAlice> T_T Now to spend two months migrating my 24 TiB dataset…
[15:36:33] <GothAlice> 26, rather.
[15:38:53] <StephenLynx> HORY SHEEEET 3.0 :v
[15:39:07] <GothAlice> Indeed. There was much rejoicing in the streets.
[15:39:29] <GothAlice> Also, I finally get to throw out my custom compression scheme. Yaaay!
[15:39:29] <StephenLynx> gonna bring my laptop to work to update it tomorrow
[15:39:43] <StephenLynx> still no internet at home :|
[15:39:49] <GothAlice> That's no good.
[15:39:57] <StephenLynx> its inconvenient, indeed.
[15:40:15] <StephenLynx> but is much less of an issue than I thought it would be.
[15:40:23] <GothAlice> ehershey: I take it MMS is updated to support 3.0 deployments? Eh? Eh?
[15:41:05] <ehershey> it definitely should be
[15:41:27] <ehershey> I didn't even want to migrate 1tb
[15:41:48] <ehershey> but I'm a wimp
[15:41:58] <NET||abuse> hey guys, just doing a quick mongo primer, looking to do a restore of backed-up data. so i dont need to shut down the mongodb server to do restores from mongodump?
[15:42:27] <StephenLynx> any word on the ubuntu repository? I'm using mongo's version, I guess, not the default one.
[15:42:29] <GothAlice> NET||abuse: There are two approaches: offline restore, and online restore. In an offline restore mongod must not be running, and mongorestore directly writes to the on-disk stripes.
[15:42:45] <GothAlice> NET||abuse: In online mode, it simply connects to a running mongod and dumps the data back in using standard wire protocol commands.
[15:43:58] <GothAlice> In both you can choose to write the data to a database other than the source database. With online mode this allows you to easily restore isolated snapshots which are separate from the general production database.
[15:46:09] <NET||abuse> hmm, that's somewhat useful.. thank you.
[15:46:10] <StephenLynx> nvm, I had to setup my repository information
[15:46:48] <NET||abuse> if i have 3 servers, and do a restore on the master (presumably that's where you always want to do your restore) it gets sync'd to the secondaries?
[15:47:13] <NET||abuse> i guess if you're doing it using standard wire protocol then of course it will.
[15:47:36] <NET||abuse> is there a page in the docs for doing an offline mode restore?
[15:47:49] <GothAlice> NET||abuse: Correct. I do not believe mongorestore writes an oplog when restoring in offline mode, so the former secondaries will suddenly need to re-stream the entire dataset on next start.
[15:48:07] <NET||abuse> yeh, i figured that's true,,
[15:48:53] <NET||abuse> if i stop the primary, wipe out all the /data/mongodb/* files, and copy the backup files straight into place, restart that master, will i have to reconfigure some things? replica set nodes or things?
[15:50:00] <GothAlice> NET||abuse: Huh, I can't seem to find it in the online manual for mongorestore, but the option you're looking for is --dbpath
[15:50:10] <GothAlice> (That switches to offline mode, writing to stripes specified by the given path.)
[15:50:19] <NET||abuse> yeh, that's why i'm asking, couldn't find offline mode options
[15:50:37] <NET||abuse> GothAlice: so i should stop the mongodb server first then do that?
[15:50:46] <GothAlice> NET||abuse: Also, mongodump dumps can't be restored by simply swapping them into place.
[15:51:32] <GothAlice> NET||abuse: Offline mode restore will be faster than online. Downside: each replica secondary will need to completely re-sync rather than trying to follow along during the online restore.
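The two modes as rough example commands (2.6-era tooling; the `--dbpath` switch GothAlice mentions above was later dropped from the 3.0 tools). Paths and names are placeholders:

```
# Online: connect to a running mongod over the wire protocol
mongorestore --host localhost --port 27017 dump/

# Offline: write directly to the data files, with mongod stopped
mongorestore --dbpath /data/mongodb dump/

# Restore into a database other than the source
mongorestore --db snapshot_restore dump/sourcedb
```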
[16:11:55] <cheeser> d0x: you can use hadoop directly against mongo if you want.
[16:12:12] <cheeser> you'd still have that second infra but you wouldn't have to shuttle data around at least
[16:15:55] <d0x> cheeser: You mean instead of HDFS it uses mongodb? And on all Mongo shards i could install a tasktracker?
[16:16:13] <d0x> aah, ok
[16:16:21] <dirtyh> MongoDB 3.0, the excrements of Eliot Horowitz?
[16:16:28] <d0x> you just said "second infra"
[16:16:46] <dirtyh> time to put mongodb 3 back into eliot's vagina
[16:16:51] <GothAlice> dirtyh: Oh, it's you again.
[16:16:57] <dirtyh> mongoldb, from pussies for pussies
[16:17:04] <cheeser> d0x: yep
[16:17:09] <cheeser> Derick: ping
[16:17:25] <dirtyh> little boys calling for mum?
[16:17:44] <dirtyh> mum is liking the balls of uncle eliot
[16:20:17] <dirtyh> oh yeah, the mongodb scum in one place
[16:20:22] <dirtyh> time to flush this sink
[16:21:00] <dirtyh> will be back soon, until them: watch your moron Derick licking the smelly balls of Eliot
[16:22:00] <GothAlice> It boggles the mind how much time people are willing to waste on negativity.
[16:22:40] <d0x> cheeser: do you know whether the same applies for spark (sql)?
[16:24:26] <bwman1> /join #docker
[16:24:32] <Derick> Bugstah: bug *!*@46.19.136.140
[16:24:36] <Derick> what? :-)
[16:29:01] <GothAlice> Oh my.
[16:34:34] <jeho3> hi mongodb scum, Derick finished giving Eliot as blowjob?
[16:35:53] <Derick> GothAlice: it works :)
[16:36:08] <wqieuwq> hi GothAlice, you slut, I love your balls, wanna suck my cock?
[16:36:57] <oiuzio> GothAlice, suck my cock, slut
[16:37:14] <Derick> sigh
[16:37:28] <GothAlice> It's like a game of wack-a-mole.
[16:37:58] <mongoid> GothAlice, Derick: wanna suck my cock? Let's have a 3way,
[16:41:43] <EliotH> Bonjour the France, I am your master Eliot, suck my cock please
[16:41:46] <EliotH> LOL
[16:41:51] <EliotH> scum
[16:41:55] <EliotH> clueless admins
[16:42:26] <GothAlice> Derick: You are faster updating your macros than I am. XD
[16:42:36] <Derick> i type fast, haven't upgraded the macro yet ;-)
[16:42:52] <MongoSux> MongoSux
[16:42:54] <MongoSux> MongoDB Sucks
[16:42:59] <MongoSux> eliot H sucks
[16:43:03] <MongoSux> lick my balls
[16:44:07] <qwwweq> mongodb sux
[16:44:09] <qwwweq> admins sux
[16:44:36] <Derick> GothAlice: perhaps use .* at the end of IP too?
[16:45:15] <StephenLynx> mongo has probably killed that guy's dog, hes been shitting around for weeks
[16:45:16] <GothAlice> Yeah. That gets a bit indiscriminate, though. :/ I once banned all of Italy for 15 minutes. XD
[16:45:22] <StephenLynx> lolol
[16:46:04] <GothAlice> StephenLynx: You're going for the kicked dog hypothesis, I'll go for the spurned lover approach. ;)
[16:46:33] <StephenLynx> lol
[16:47:42] <GothAlice> Can't wait until work is over so I can play with the 3.0.0 release. :3
[17:05:14] <dcuadrado> the best way to upgrade to wiredtiger is to upgrade the secondary instances first, then step down and upgrade the master, right?
[17:05:34] <dcuadrado> congrats for the new release btw
[17:06:54] <Derick> dcuadrado: yes, that's the best procedure
[17:07:02] <Derick> dcuadrado: do test it in dev/staging first though!
[17:07:20] <dcuadrado> yeah
[17:07:53] <Derick> wiredTiger will have different performance characteristics
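The rolling-upgrade order dcuadrado and Derick agree on, sketched as steps (exact commands depend on the deployment):

```
// 1. Stop one secondary, point it at a fresh dbpath, restart it with
//    --storageEngine wiredTiger, and let it initial-sync from the set.
// 2. Repeat for each remaining secondary.
// 3. Last, hand off the primary from the shell and upgrade it the same way:
rs.stepDown();
```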
[17:08:33] <fhainb> weirdTiger
[17:10:53] <Cygn> Can i address attributes of a subarray in a projection? e.g. array.attribute or array['attribute'] - if i want only attribute to be returned?
[17:11:09] <dcuadrado> Derick: are you gonna make wiredtiger the default engine for 3.1?
[17:15:28] <Derick> dcuadrado: for 3.2, yes, I think that's the plan
[17:15:44] <Derick> Cygn: yes, with the dot
[17:20:02] <Cygn> Derick: if i try that i get "unexpected token .", sec i give you a paste
[17:20:51] <Cygn> Derick: http://pastie.org/9996257
[17:20:52] <Derick> Cygn: I think you need '' around the key in that case
[17:21:05] <Derick> yes, 'sales.origin' : 1
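Derick's fix in one line: quote the dotted path, since a bare `sales.origin` key is a JavaScript syntax error in the shell:

```
db.collection.find({}, { "sales.origin": 1 });
```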
[17:26:15] <Cygn> Derick: Thanks !
[17:29:49] <SpartanWarrior> hello guys, I tried converting my standalone mongod to a replica set, added the oplog size and replset name to mongod.conf and when restarted, I lost all my data! the /data directory however has some files with my db name, any hints?
[17:40:20] <fhainb> restore from backup
[17:44:18] <kbiddyBoise> So, just updated from 2.6.5 to 2.6.8, and now mongo isn
[17:44:33] <kbiddyBoise> 't accepting connections, server log just spits out "pthread_create failed: errno:11 Resource temporarily unavailable"
[17:44:44] <kbiddyBoise> any ideas would be much appreciated
[17:45:57] <Cygn> Is there also a possibility to select an attribute of a subarray when using distinct? My documents have an attribute sales, which contains an array that has multiple entries. I need to get every distinct attribute origin of all subarrays "sales" of all documents… but collection.distinct('sales.origin') seems not to be the right way to handle this.
[17:50:48] <JamesHarrison> okay, so here's a fun question - I'm seeing exceptionally slow responses on a newly migrated 3.0 WiredTiger install where the skip value is high (>90,000, for instance) in an otherwise simple query (single field equivalence query, single field order by)
[17:51:42] <JamesHarrison> is this a known issue, or am I doing something wrong? (Assuming that the skip field is there to be used)
[17:55:12] <GothAlice> JamesHarrison: Skip operations are log(n) operations, requiring (best case) the database to traverse a B-Tree index. Worst-case skip is a O(n) problem. In general it is far better to re-query with a natural offset (i.e. _id > previous_page[-1]._id) than to use a real skip.
[17:55:39] <JamesHarrison> GothAlice: yeah, reading https://jira.mongodb.org/browse/SERVER-13946 it looks like I'm going to have to do that for now at least
[17:55:59] <JamesHarrison> this seems to have gotten _much_ worse though, queries in MMAPv1/2.6.4 were <1s, are now >50s
[17:56:10] <JamesHarrison> (on 3.0/WT)
[17:56:28] <GothAlice> You do seem to be using an unusually large offset.
[17:56:43] <JamesHarrison> The application is pagination on a forum - some topics have _lots_ of replies
[17:57:21] <GothAlice> JamesHarrison: My forum software does one of those "infinite scrolling" things, it uses _id $gt range querying to fetch batches of results.
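The range-based alternative to skip(), sketched in the shell; the `topic` field and page size are made up, and `lastId` would be the final `_id` from the previous page:

```
db.posts.find({ topic: topicId, _id: { $gt: lastId } })
        .sort({ _id: 1 })
        .limit(20);   // each page is an index seek, not an O(n) skip
```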
[17:58:19] <JamesHarrison> I'll have to refactor to do something similar, aye - the pagination is done at present by a 3rd-party library (kaminari, in Ruby) so that's where that query is coming from
[17:58:31] <GothAlice> Oh, Ruby.
[17:58:37] <StephenLynx> lol ruby
[17:59:09] <StephenLynx> one of these days I'm going to study ruby enough to bash it like I do with python.
[17:59:36] <JamesHarrison> all I'm actually concerned about is the fact that MMAPv1/2.6.4 appears to be 50 times faster than WT/3.0 on this particular use-case, and if that's considered acceptable or if I should raise a ticket
[17:59:42] <JamesHarrison> I know I can engineer around it
[18:00:26] <GothAlice> Certainly raise the warning flags by opening a ticket. 50x reduction in speed should be classified "unusual".
[18:00:38] <StephenLynx> I would raise a ticket. if the change was intended, someone will expose the reason.
[18:00:47] <JamesHarrison> sorry, 500 times faster, I misread my stats, was managing 100ms responses prior to that
[18:01:03] <JamesHarrison> I'll do that
[18:01:25] <StephenLynx> but being a different engine and all, I wouldn't be surprised it it's a bug.
[18:01:47] <StephenLynx> but one thing I can tell you is that your design is not optimal.
[18:02:32] <JamesHarrison> StephenLynx: yeah, I'm aware, if I had infinite time I'd have fixed it but it wasn't an issue till this change :)
[18:03:52] <xissburg> I want a computer with infinite processing power
[18:04:22] <Cygn> Is there a way to get distinct values of an attribute of subarrays of documents?
[18:05:02] <Boomtime> aggregation
[18:05:03] <GothAlice> Cygn: $unwind and $group in an aggregate query should do it.
[18:05:08] <Boomtime> ^
[18:05:15] <Cygn> Okay, could have thought of it myself ;)
[18:05:24] <Cygn> Thanks for the hint
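The $unwind/$group version of distinct, sketched with the field names from Cygn's question above:

```
db.collection.aggregate([
  { $unwind: "$sales" },
  { $group: { _id: "$sales.origin" } }   // one result per distinct origin
]);
```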
[18:07:41] <StephenLynx> there are 3 documentation pages that I keep for offline reference:
[18:08:03] <StephenLynx> query and projection operators, and aggregation operators
[18:09:37] <StephenLynx> I have the manual pdf too, but it's pretty useless
[18:09:44] <Cygn> StephenLynx: Could be a good idea. Anyway, day after day using mongo i kinda get it now, just from time to time something isn't clear to me.
[18:10:20] <StephenLynx> it will be a while until you memorize all operators and how they work, more or less. especially aggregation, it has lots of operators.
[18:55:35] <Cygn> StephenLynx: You're right, i just printed a cheatsheet :)
[19:02:53] <cobra-the-joker> Hey there every one , how can i make select * from <table> in mongoDB ? db.collection.find ( {} ) .... where do i write the collection name ?
[19:03:34] <rkgarcia> cobra-the-joker, the collection is created on the fly
[19:04:10] <rkgarcia> when you write to it
[19:05:11] <keeger> i'm looking at cluster strategies with mongo. if i start off with a replicate set, how hard is it to convert to a sharded cluster?
[19:05:35] <cobra-the-joker> rkgarcia: i created a collection in a .js file that was run on the server and now i want to make "select * from <that collection> "
[19:06:28] <StephenLynx> cobra-the-joker node.js or io.js?
[19:06:50] <cobra-the-joker> StephenLynx: Node
[19:07:19] <StephenLynx> first you need a connection to the db, since you created the collection I assume you are already over that step, right?
[19:07:39] <cobra-the-joker> StephenLynx: yes
[19:09:25] <StephenLynx> ok, then you need a pointer to the collection: var collection = db.collection('yourCollectionName');
[19:10:43] <StephenLynx> notice that this step will work either if the collection exists or not
[19:11:12] <StephenLynx> the collection will be created if it doesn't exist and deleted if it remains empty after you work with it
[19:11:36] <StephenLynx> after that you call functions on the collection variable, such as find or aggregate
[19:11:55] <StephenLynx> for find you will need to pass two object parameters and a function to be used as callback
[19:12:08] <StephenLynx> the first object parameter is the query one, the second the projection one
[19:12:44] <StephenLynx> I think it handles omitted parameters intelligently, but I am not sure to what extent.
[19:13:07] <cobra-the-joker> StephenLynx: ok checking now
[19:13:40] <StephenLynx> after the operation is completed, the callback will be executed, your function must have two parameters for find: the first one will be the error and the second a cursor to the found results.
[19:14:02] <StephenLynx> so you can check if the error exists, if it doesn't, you know the operation succeeded.
[19:14:47] <StephenLynx> I don't know how to work with cursors because I always use aggregate.
[19:16:03] <StephenLynx> the main difference in the results is that aggregate always returns a list with all results and find returns the cursor. so if you need to iterate through many objects, aggregate will eat your RAM.
[19:16:27] <StephenLynx> but if you need to output this data anyway, you can use aggregate without this issue.
[19:16:43] <StephenLynx> on the other hand, aggregate makes it easier to perform secondary operations, such as sort and limit.
[19:17:04] <StephenLynx> with find you need to call additional functions in your code that require callbacks.
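StephenLynx's walkthrough condensed into a sketch, callback style as in the Node.js driver of that era, assuming `db` is the open connection from earlier in the conversation:

```
var collection = db.collection('yourCollectionName');
// Empty query object = match everything; empty projection = all fields.
collection.find({}, {}).toArray(function (err, docs) {
  if (err) {
    console.log(err);   // operation failed
    return;
  }
  console.log(docs);    // the "select * from <table>" equivalent
});
```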
[19:17:23] <StephenLynx> 2 pieces of advice since you are using node:
[19:17:34] <StephenLynx> 1- change to io.js. node.js is obsolete.
[19:17:54] <GothAlice> StephenLynx: Your definition of obsolete is suspect.
[19:17:56] <StephenLynx> 2- use these coding standards: https://github.com/felixge/node-style-guide
[19:18:29] <StephenLynx> python is obsolete for web.
[19:19:44] <GothAlice> Yup, totally why Facebook is using it to power their realtime stuff. ("Tornado" framework.) Amongst others. ;)
[19:21:13] <keeger> i was going to use node for my project
[19:21:13] <StephenLynx> fb uses C++
[19:21:25] <keeger> but i decided to try golang. i am pretty happy with it so far
[19:21:36] <StephenLynx> yeah, go is good too.
[19:21:41] <GothAlice> StephenLynx: They certainly do. Also Python. (https://github.com/tornadoweb/tornado ;)
[19:21:48] <StephenLynx> better for CPU intensive projects
[19:22:08] <keeger> all my projects are .. intense
[19:22:11] <keeger> lol
[19:22:30] <keeger> GothAlice, is changing a cluster from replica set to sharded difficult in mongo?
[19:22:54] <StephenLynx> GothAlice where in that it says facebook uses it?
[19:23:02] <StephenLynx> all information I get points to them using C++
[19:23:19] <GothAlice> StephenLynx: … search for "Facebook" on the page.
[19:23:37] <StephenLynx> "Tornado is one of Facebook's open source technologies. "
[19:23:47] <StephenLynx> which leads to fb which I don't have an account
[19:23:50] <GothAlice> keeger: Generally one keeps the replica set, and adds another replica set as a second shard. This way the data is still redundantly stored.
[19:25:06] <StephenLynx> http://www.quora.com/Where-does-Facebook-use-C++
[19:25:12] <GothAlice> StephenLynx: Have a blog post: https://developers.facebook.com/blog/post/301/
[19:26:43] <keeger> GothAlice, interesting
[19:27:33] <keeger> GothAlice, i ask because i think a replica set will work for what i want, but if the write performance bogs down, i want to be able to convert that set to shards. preferably without adding more servers
[19:28:08] <GothAlice> keeger: Sharding != automatic parallelization and improvement in performance.
[19:28:28] <keeger> GothAlice, i thought one big benefit to sharding was increasing write throughput capability
[19:29:12] <GothAlice> There are things you can do to your data (bad sharding indexes, for example) that remove any benefit of sharding. I.e. if your key determines that the next 1000 records inserted need to go to shard A instead of being balanced among shards.
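One common way to avoid the hot-shard pattern GothAlice describes is a hashed shard key, sketched here with placeholder names; it spreads monotonically increasing _ids across shards at the cost of ranged queries on that key:

```
sh.enableSharding("mydb");                            // assumes a sharded cluster
sh.shardCollection("mydb.mycoll", { _id: "hashed" }); // writes balance across shards
```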
[19:29:28] <StephenLynx> http://www.quora.com/How-does-Facebook-use-Tornado
[19:29:40] <StephenLynx> from a facebook engineer: "Facebook doesn't use Tornado for anything other than FriendFeed (who built the technology) at the moment."
[19:29:50] <StephenLynx> so yeah, nah
[19:29:55] <StephenLynx> they don't use python
[19:29:57] <StephenLynx> they useC++
[19:30:47] <StephenLynx> as the sources I provided confirm.
[19:31:19] <keeger> i heard that facebook uses magic
[19:33:08] <StephenLynx> first they used hiphop
[19:33:19] <StephenLynx> now they use this http://en.wikipedia.org/wiki/HipHop_Virtual_Machine
[19:33:50] <GothAlice> Then some other developers came along and reimplemented PHP in Python and doubled HHVM's performance. ¬_¬ HippyVM FTW.
[19:34:58] <StephenLynx> I can't find any reference to python on the wikipedia page, do you have any source for that?
[19:35:36] <GothAlice> StephenLynx: http://hippyvm.com/ — That's 'cause it's a separate project.
[19:36:10] <GothAlice> (It's PHP written in Python.)
[19:36:43] <StephenLynx> first: facebook still doesn't use python. that was what you claimed. second: if it's twice as fast, why didn't facebook migrate to it?
[19:37:05] <GothAlice> StephenLynx: If ever we meet at a conference, I'll introduce you to Yannick, my Python buddy who works at Facebook doing Python. ;)
[19:37:44] <StephenLynx> facebook develops a lot of stuff, mostly because they merge different companies. it doesn't mean that they use it at facebook.
[19:37:46] <GothAlice> StephenLynx: You'd have to ask them. Likely there's a QA process for that multi-gigabyte executable of theirs that's rather strict.
[19:38:10] <StephenLynx> and its pretty clear by now that they don't use python.
[19:38:57] <StephenLynx> " At present it does not include a reasonable web server integration, so it's not usable for use in production in the current form. " :^)
[19:39:05] <StephenLynx> about hippyvm
[19:45:24] <GothAlice> StephenLynx: Final bit dug out of my open tabs: http://www.slideshare.net/pallotron/python-at-facebook-40192297 and https://www.quora.com/Why-did-Quora-choose-Python-for-its-development (there's irony in one of the links you used itself being powered by Python) Further Google-fu is up to you.
[19:45:54] <GothAlice> Alternatively, get the dictionaries to alter the definition of "obsolete" to better match your usage. ;)
[19:46:17] <StephenLynx> what is the relation of quora using python to the discussion?
[19:48:02] <GothAlice> T'was re: you supplying a "how does facebook use tornado" link… from quora… while having a discussion about Python being obsolete in your eyes.
[19:48:52] <StephenLynx> I don't get it. I never said that quora didn't use python, I said that facebook doesn't.
[19:49:14] <StephenLynx> and I got it from a facebook engineer.
[19:49:37] <StephenLynx> those slides lack context because I don't know what the presenter said along with them
[19:49:59] <StephenLynx> it indicates facebook does have some relation with python, but it doesn't show they use it in the facebook application.
[19:52:40] <StephenLynx> keep in mind my central point is that python is obsolete >for web<
[19:53:11] <GothAlice> Disproved by one of the links you provided. Really, what's your definition of "obsolete"?
[19:53:26] <StephenLynx> which link that I provided disproves that?
[19:54:00] <StephenLynx> I understand something obsolete as being inferior in every way to one or more newer technologies.
[19:54:23] <GothAlice> StephenLynx: http://dictionary.reference.com/browse/obsolete
[19:55:32] <StephenLynx> ok, I concede the point that my understanding of what the term actually meant was incorrect.
[19:55:50] <keeger> i think you meant to use...inferior!
[19:56:07] <keeger> which you just did use :)
[19:56:26] <StephenLynx> inferior is relative, because it may be inferior in some aspects but not in all aspects.
[19:56:40] <medmr> python isnt obsolete for web or for any particular domain
[19:57:14] <keeger> well i don't know how you were going to argue it was obsolete, and you just said above that you considered it to be inferior to newer tech
[19:57:42] <medmr> prevalence != superiority
[19:57:46] <GothAlice> Indeed. Also, sorta by definition, a general purpose programming language can do anything another general purpose programming language can do, so then it comes down entirely to needs analysis for any given problem. StephenLynx: Quora built their site on Python, and the link I provided has one of the developers describe the rationale behind that decision. (So still in general use, and not out-of-date.)
[19:58:07] <medmr> and convention isnt superiority either
[19:58:40] <keeger> superiority = nebulous
[19:58:48] <medmr> ya
[19:59:49] <StephenLynx> not only do benchmarks show that python is as slow as php, but python also bottlenecks on IO operations.
[19:59:52] <StephenLynx> http://programmers.stackexchange.com/questions/102552/should-i-stick-with-or-abandon-python-to-deal-with-concurrency
[20:00:15] <medmr> StephenLynx: way way too shallow of a look to be drawing the conclusions you are
[20:00:28] <medmr> python with gevents is fast, compares to node for concurrency
[20:00:31] <GothAlice> StephenLynx: Weird that Python is "as slow as PHP" yet a PHP interpreter written in Python is 10x faster than the default runtime. I have to agree with medmr, here.
[20:01:19] <GothAlice> StephenLynx: You simply do not have the knowledgebase about Python needed to make the broad generalizations you are. (Concurrency is a solved problem in my haus, so, yeah.)
[20:01:54] <GothAlice> Yeesh, and that last SO link is about Django. Django isn't exactly a shining beacon of good software. :/
[20:02:22] <GothAlice> Django + Celery = a good way to kill your project part-way into development.
[20:03:04] <keeger> i have a friend that codes in python, and he says he likes it because it runs everywhere, apparently even on android and iOS
[20:03:25] <StephenLynx> that is true for any interpreted language.
[20:03:36] <StephenLynx> or hybrid
[20:03:38] <GothAlice> keeger: Indeed. Statically linked VMs are a common thing for game scripting, most popular being Lua.
[20:03:43] <medmr> django is what i would call a good attempt at streamlining the development of CMS type apps... but doesn't address the grittier problems of scalability
[20:04:38] <keeger> i hate cms
[20:04:44] <keeger> so...boring
[20:05:18] <medmr> would you say... it's a bunch of CRUD? :)
[20:05:29] <keeger> is it sad i chuckled? :)
[20:05:31] <GothAlice> CRUD pays my bills. XP
[20:06:31] <keeger> hah
[20:52:45] <medmr> Hey, I have some objects { _id: ObjectId('bababababababa'), things: [] }
[20:52:56] <medmr> about 7 million of these, some have things in the things array
[20:53:10] <medmr> i need to add a things_count field to each document
[20:53:16] <medmr> so that i can index on it
[20:53:27] <medmr> and do quick lookups by the number of things
[20:54:19] <medmr> I'm wondering if anyone has suggestions for a relatively non-invasive way to do that
[20:54:48] <rkgarcia> You need to select all records
[20:54:59] <medmr> i could .each through the whole collection and do an update for each
[20:55:00] <rkgarcia> and update one by one
[20:55:07] <medmr> phooey
[20:55:48] <medmr> i was hoping maybe i could use a mapreduce
[20:55:52] <medmr> to update them all in place
[21:24:50] <daidoji> medmr: here's an easier way if your collection doesn't change very quickly
[21:25:17] <daidoji> use agg framework to get the count you want, $out into a new collection, drop the old collection, rename the new collection to the old name, voila!
[21:25:24] <daidoji> but they'll use different obj_ids
[21:25:39] <daidoji> you could use mapReduce to do something similar as well
[21:25:51] <medmr> good point
[23:02:20] <agenteo> hey there, do you guys know of any mongo client with syntax highlighting at least for matching open closed { } (like vim has)?
[23:02:38] <agenteo> a terminal client
[23:02:41] <agenteo> no GUI
[23:03:14] <GothAlice> Funnily enough, http://try.mongodb.org < the web "sample shell" has highlighting…
[23:03:33] <agenteo> GothAlice: :)
[23:04:31] <GothAlice> Alas, http://stackoverflow.com/questions/41207/javascript-interactive-shell-with-completion < doesn't seem to be much in the way of "terminal-native" solutions to your problem.
[23:05:08] <agenteo> how do you guys fiddle with long queries? copy paste from vim/another editor?
[23:05:14] <GothAlice> Rhino IDE's shell seems to support it, as of: http://blog.norrisboyd.com/2009/03/rhino-17-r2-released.html (search for "shell" on this page)
[23:05:25] <agenteo> @GothAlice thanks I’ll check it out, I tried mongo-hacker but no highlight on typing
[23:06:27] <GothAlice> agenteo: I use the ipython enhanced interactive Python shell, which has highlighting, integrated clipboard support, on-paste reformatting, workbooks, parallelism up to and including cloud distribution of tasks/function calls, and integrated performance testing and scientific tools, with pymongo and the MongoEngine ODM.
[23:07:22] <GothAlice> (It has so much it's a 10MB+ package…)
[23:10:44] <GothAlice> Object-literal syntax is JSON-compatible, including (with appropriate imports) extended types like ObjectId, too, which makes copying and pasting chunks of examples really easy.
[23:12:00] <fewknow> blamo
[23:12:09] <keeger> but i thought python was obsolete? :P
[23:12:17] <GothAlice> >:P
[23:12:26] <fewknow> python obsolete? who is this kid
[23:12:52] <fewknow> BTW...we are benchmarking mongodb 3.0 vs aerospike......
[23:12:59] <keeger> just a little somthing from earlier
[23:12:59] <GothAlice> Heh, fewknow, it's a reference to an earlier discussion.
[23:13:09] <GothAlice> fewknow: Ooh, how goes it?
[23:14:30] <keeger> and work is done. time for a beer
[23:14:48] <keeger> is 3.0 out, or is it like in RC status
[23:14:57] <GothAlice> keeger: That's not a half-bad idea. 3.0 is totally out.
[23:15:06] <fewknow> out today
[23:15:06] <GothAlice> Topic is up-to-date and everything this time. :D
[23:15:14] <agenteo> @GothAlice look “Have you tried what we have in current versions – 2.0.3 and 2.1.0? It highlights the matching brace as you move the cursor left and right. It highlights only the matching one when the cursor is over a brace/bracket/parenthesis “
[23:15:16] <agenteo> https://jira.mongodb.org/browse/SERVER-2767
[23:15:29] <fewknow> GothAlice: it looks like it could beat out Aero
[23:16:39] <GothAlice> agenteo: The native mongo shell does do matching bracket highlighting, yes.
[23:17:28] <keeger> GothAlice, then i'm going to go see the feature set 3.0 has
[23:17:29] <keeger> :)
[23:17:31] <agenteo> are you on OSX or Linux? I am on OSX 10.10.1, in tmux, and I see no matching brackets
[23:17:31] <GothAlice> fewknow: I'm a "show me the numbers" kinda person. :3 I'd love to know if/when/where you will publish your results.
[23:17:38] <keeger> but my stupid cat is on the mouse
[23:17:46] <GothAlice> keeger: Well, I'll get to finally deprecate my own compression implementation. ¬_¬
[23:17:51] <fewknow> GothAlice: will let you know....not my team...but I want to see the numbers too
[23:18:38] <GothAlice> agenteo: I'm on OSX using brew-installed MongoDB 2.6.7, SSL enabled using stock Terminal.app, no screen multiplexer.
[23:19:07] <GothAlice> (Under a heavily customized zsh configuration: https://github.com/amcgregor/snippits/tree/zsh#readme that also includes syntax highlighting and friends.)
[23:21:13] <agenteo> strange… same brew installed here, tested on stock terminal and no matching. We’re talking about a subtle bolded font when you type or hover a } right? Can anybody else confirm this matching parenthesis?
[23:21:22] <agenteo> that’s exactly what I am after
[23:23:14] <GothAlice> agenteo: http://cl.ly/image/043c1l36391S
[23:24:04] <agenteo> :\
[23:24:20] <GothAlice> Boost is also selected as an option on my brew.
[23:28:35] <GothAlice> Terminal advertising as xterm-256color…
[23:28:44] <GothAlice> UTF-8 encoding…
[23:29:29] <keeger> hmm, wiredtiger is interesting
[23:29:37] <GothAlice> Indeed.
[23:30:12] <agenteo> that
[23:30:17] <agenteo> is my config too…
[23:31:01] <agenteo> I’ll check on a VM
[23:35:56] <keeger> i notice that Go is left off the driver compatibility charts
[23:36:03] <keeger> yet the tools for 3.0 were written in Go?
[23:38:14] <GothAlice> keeger: Alas, I lack knowledge on that particular subject. Go isn't a focus of mine at the moment.
[23:38:43] <GothAlice> I'm sure driver compatibility deficiencies will be addressed rapidly now that 3.0.0 has been released.
[23:39:23] <GothAlice> (Votes on JIRA tickets do help prioritize where the effort goes… I hope. ;)
[23:39:25] <keeger> i think it's a doc thing heh, pretty sure if ops manager was written in Go and updated for 3.0...the driver works
[23:41:27] <keeger> thinking it's just a doc issue. anyways, it's not clear to me, but is wiredtiger supposed to be faster than the mmap engine for high writes?
[23:42:14] <GothAlice> In cases where the write lock was a concern formerly, yet document-level locking is acceptable, certainly.
[23:42:40] <keeger> hmm, write lock as a concern..
[23:42:46] <GothAlice> Certain other patterns of use would seem to indicate degraded performance. I'll dig through the chat log for today to dig up the relevant section for you.
[23:43:29] <keeger> thx
[23:43:54] <GothAlice> keeger: http://irclogger.com/.mongodb/2015-03-03#1425404881 starts here
[23:44:47] <GothAlice> Not write-related, but in general a performance difference.
[23:45:52] <keeger> hah, jamesharrison, love his nick
[23:46:20] <JamesHarrison> hm?
[23:46:44] <GothAlice> It's a good nick. A solid nick. A full set of personal pronouns, even.
[23:47:06] <keeger> always good to see Steelers fans on the net
[23:47:22] <JamesHarrison> keeger: ah, right. yes, that confusion is very helpful :)
[23:47:34] <GothAlice> JamesHarrison: Happens a lot? XP
[23:47:44] <JamesHarrison> every time there's a bloody game on my twitter melts
[23:48:18] <JamesHarrison> (I am also @JamesHarrison there, as opposed to the footballer slumming it at @jharrison9292 or something)
[23:48:30] <JamesHarrison> this is lost on many american football fans.
[23:49:14] <GothAlice> Aaaah. Sounds like the poor fellow with the @rogers handle… http://www.citynews.ca/2013/10/10/man-with-rogers-twitter-handle-bombarded-during-wireless-outage/
[23:49:40] <JamesHarrison> oof, that's gotta suck
[23:50:45] <keeger> heh
[23:50:54] <keeger> well now i feel sad
[23:50:59] <keeger> a perfect handle..and not even a fan
[23:51:05] <JamesHarrison> :)
[23:51:42] <JamesHarrison> if I want to see people running around a field chucking a ball about, I'd rather watch rugby I'm afraid :)
[23:51:47] <keeger> GothAlice, i dont see a lot of stuff in there, basically he just had a non-optimal setup and the engine showed a giant wart
[23:52:23] <keeger> JamesHarrison, i was watching rugby the other night. looked like a lot of hugging to me
[23:52:23] <GothAlice> Aye. Something that one should not have done before demonstrated dramatically worse performance in a newer version.
[23:52:55] <keeger> GothAlice, well for my purposes, i have a high write operation setup
[23:53:12] <keeger> i can start with WT and if it sucks, switch to MMAP
[23:53:31] <keeger> that's the beauty of engines right?
[23:53:46] <GothAlice> Indeed. Though some datasets are… rather more painful to migrate than others.
[23:54:24] <keeger> well i have to build my app with enough features that i can do load testing
[23:57:40] <keeger> found a blog that shows the release of the go driver for mongo (mgo) on 2015-01-24 added support for 3.0
[23:57:52] <GothAlice> Great success.
[23:58:33] <keeger> i expect an update to be coming soon, releases are every 3 months from what i can see
[23:58:40] <keeger> which is good.
[23:59:31] <keeger> now i just need to finish designing my schema to be more mongo smart
[23:59:37] <keeger> it's kinda meh atm