PMXBOT Log file Viewer


#mongodb logs for Friday the 18th of January, 2013

[00:02:24] <hadees> sirious: yeah but I don't think that is all there is. I think actually if limit it by everything less than created_at it would work most of the time. The problem is if multiple comments are created at the same time.
[00:12:10] <sweatn> hello
[02:12:59] <sam___> keep getting this - assertion: 9997 auth failed: { errmsg: "auth fails", ok: 0.0 } - when i try mongodump even with the right credential, any idea?
[02:25:13] <aep> hm if i query for arbitrary garbage like {'abcdef' : 1234' } which is not contained in any document, i get all documents. is that supposed to work this way?
[05:21:42] <jwilliams_> does mongodb's mapreduce perform a full table (collection) scan? or, if an indexed field is used within the map function, will that let map perform more effectively?
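The usual answer to jwilliams_'s question is that map runs over whatever documents are fed to it, and the optional query parameter to mapReduce (which can use an index) is how you keep it from touching the whole collection. A minimal shell sketch, with the collection and field names as placeholders:

    // restrict map-reduce input with an indexed query (collection/field names assumed)
    db.orders.mapReduce(
        function () { emit(this.customerId, 1); },             // map
        function (key, values) { return Array.sum(values); },  // reduce
        {
            query: { status: "shipped" },   // indexed filter limits what map ever sees
            out: { inline: 1 }              // return results inline instead of writing a collection
        }
    );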
[05:47:06] <timah> hey guys… looking for some opinions regarding the best (reliable, efficient, fast, etc.) method for mass transforming about 30 million documents from a single collection to many (about 40) collections… i wrote a very slick mongodb shell script that serves in such capacity, however i can only seem to get between 2-5k documents read/transformed/written every few seconds or so.
[05:51:05] <vr_> it appears that GridFS doesn't support TTL.
[05:51:25] <vr_> is that correct?
[07:22:48] <samurai2> hi there, anyone use mongo java driver and get this error : Jan 18, 2013 3:19:34 PM com.mongodb.DBPortPool gotError
[07:22:48] <samurai2> WARNING: emptying DBPortPool to /127.0.0.1:27017 b/c of error
[07:22:48] <samurai2> java.lang.NullPointerException
[07:22:48] <samurai2> at com.mongodb.DBRef.<init>(DBRef.java:37)
[07:22:48] <samurai2> at com.mongodb.DefaultDBCallback.objectDone(DefaultDBCallback.java:72)
[07:22:49] <samurai2> at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:207)
[07:22:51] <samurai2> at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:196)
[07:22:53] <samurai2> at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:206)
[07:22:55] <samurai2> at org.bson.BasicBSONDecoder._decode(BasicBSONDecoder.java:79)
[07:22:57] <samurai2> at org.bson.BasicBSONDecoder.decode(BasicBSONDecoder.java:57)
[07:22:59] <samurai2> at com.mongodb.DefaultDBDecoder.decode(DefaultDBDecoder.java:56)
[07:23:01] <samurai2> at com.mongodb.Response.<init>(Response.java:83)
[07:23:03] <samurai2> at com.mongodb.DBPort.go(DBPort.java:124)
[07:23:05] <samurai2> at com.mongodb.DBPort.call(DBPort.java:74)
[07:23:07] <samurai2> at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:275)
[07:23:09] <samurai2> at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:289)
[07:23:11] <samurai2> at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:274)
[07:23:13] <samurai2> at com.mongodb.DBCursor._check(DBCursor.java:368)
[07:23:17] <samurai2> at com.mongodb.DBCursor._hasNext(DBCursor.java:459)
[07:23:19] <samurai2> at com.mongodb.DBCursor._fill(DBCursor.java:518)
[07:23:21] <samurai2> at com.mongodb.DBCursor.toArray(DBCursor.java:553)
[07:23:23] <samurai2> at com.mongodb.DBCursor.toArray(DBCursor.java:542)?
[07:23:25] <samurai2> thanks :)
[07:58:49] <gdoteof> if i do db.gaming_session.find({end:null}); i get some things listed
[07:59:12] <gdoteof> i want to make all of those end:null instead be end:ISODATE(...)
[07:59:17] <gdoteof> where ISODATE is right now
[07:59:24] <gdoteof> a manual rake
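What gdoteof describes is typically done with a multi-document update; a minimal sketch, assuming the gaming_session collection from the question and stamping matches with the current time:

    // set every open session's end to now (multi:true so all matches are updated, not just the first)
    db.gaming_session.update(
        { end: null },
        { $set: { end: new ISODate() } },
        { multi: true }
    );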
[08:40:28] <[AD]Turbo> hola
[09:07:52] <Guest_1448> what would be the best way to go about creating a graph out of thousands of entries in a collection with a timestamp value?
[09:07:59] <Guest_1448> # of items per day/hour or something
[09:15:59] <NodeX> use aggregation framework
[09:19:24] <Guest_1448> well
[09:19:26] <Guest_1448> how?
[09:19:47] <Guest_1448> all the timestamp values are stored as millisecond values
[09:20:03] <Guest_1448> how would I group them by hour/day or something?
[09:20:25] <Guest_1448> using reduce functions?
[09:43:01] <remonvv> Ahoy fellow sailors on the seas of MongoDB
[09:44:12] <drjeats> arrr
[09:53:15] <Guest_1448> is there a better/faster way to do this: http://stackoverflow.com/a/9388189 with the new aggregation framework?
[09:56:01] <kali> Guest_1448: i strongly advise you to store dates as dates instead of strings
[09:56:33] <kali> Guest_1448: it will be much more compact, and many problems will find much simpler solutions
[09:56:35] <Guest_1448> they're stored as millisecond values as integer
[09:56:39] <Guest_1448> s
[09:57:37] <kali> Guest_1448: the aggregation framework can make simple operations on dates, so my guess is you can achieve this kind of grouping
[09:57:44] <kali> Guest_1448: but you need to use the proper type
[10:00:06] <Guest_1448> I can switch
[10:00:32] <Guest_1448> I can batch convert those millisecond values to native mongo dates
[10:00:46] <Guest_1448> but then how would I group like that?
[10:00:53] <Guest_1448> like in the stackoverflow q/a
[10:14:59] <Guest_1448> ?
[10:18:17] <NodeX> ?
[10:19:27] <Guest_1448> "is there a better/faster way to do this: http://stackoverflow.com/a/9388189 with the new aggregation framework?"
[10:35:18] <kali> Guest_1448: well, it would be pretty standard stuff with the AF: you can extract components of a datetime with http://docs.mongodb.org/manual/reference/aggregation/#date-operators, then group
[10:38:43] <Guest_1448> oh
[10:39:00] <Guest_1448> so I would use those on $project then group by the projected fields?
[10:42:16] <xcat> Derick: Are there PHP source files that can be loaded into Eclipse to provide intellisense for PHP Mongo projects?
[10:45:27] <kali> Guest_1448: yeah, i think that should work... it's quite easy to try
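A sketch of the $project-then-$group approach kali is pointing at, assuming a collection called events with a proper Date field named ts (both names are assumptions):

    // events per hour, using the aggregation framework's date operators
    db.events.aggregate([
        { $project: {
            y: { $year: "$ts" },
            m: { $month: "$ts" },
            d: { $dayOfMonth: "$ts" },
            h: { $hour: "$ts" }
        } },
        { $group: { _id: { y: "$y", m: "$m", d: "$d", h: "$h" }, count: { $sum: 1 } } },
        { $sort: { _id: 1 } }
    ]);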
[10:47:22] <Guest_1448> yeah, gotta switch the millisecond field values to date first. thanks
[10:48:20] <Guest_1448> um can I do a batch update with .update({}) to convert all those values to native dates?
[10:49:28] <Guest_1448> they were stored using javascript's Date.now()
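Nobody answered this in the log; since an update here can't reference the field's existing value, the common workaround is a forEach loop over a cursor. A sketch assuming the millisecond integers live in a field called ts on an events collection:

    // convert numeric millisecond timestamps to real BSON dates, one document at a time
    db.events.find({ ts: { $exists: true } }).forEach(function (doc) {
        if (typeof doc.ts === "number") {                  // skip anything already converted
            db.events.update(
                { _id: doc._id },
                { $set: { ts: new Date(doc.ts) } }         // new Date(ms) builds a Date from epoch milliseconds
            );
        }
    });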
[10:52:03] <Derick> xcat: not that I'm aware of
[10:52:34] <NodeX> Dreamweaver ftw
[10:59:18] <NodeX> UK Grinds to a halt because 4 inches of snow arrives, how I love this country
[11:17:12] <Gargoyle> NodeX: pic.twitter.com/d4EtECls
[11:38:38] <xcat> If I have a logging collection that expires via a TTL date field and a type field which I want to filter messages by, how many indexes would I need and on which fields?
[12:12:05] <NodeX> Gargoyle : nice!!
[12:12:30] <NodeX> xcat : one index for the TTL and one for the "type" field
[12:15:28] <xcat> NodeX: if I also want to sort by insert order (either ID or date) do I need to create a compound index on one of those fields?
[12:16:35] <NodeX> you get a free index on insert order ... order with _id
[12:16:55] <NodeX> _id -1 for desc (last first) or no sort for asc
[12:17:25] <NodeX> there is no need to order on a date in my opinion
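A sketch of the indexes NodeX describes, with the collection and field names (logs, createdAt, type) assumed:

    // TTL index: documents expire an hour after `createdAt`
    db.logs.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });
    // separate single-field index for the `type` filter
    db.logs.ensureIndex({ type: 1 });
    // newest-first by insert order (_id descending)
    db.logs.find({ type: "error" }).sort({ _id: -1 });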
[13:36:23] <NodeX> quick question regarding the _id index.... if I do something like this ... {_id:ObjectId("1234"),uid:"5678"} with an index on UID is this as efficient as just looking up on _id ?
[13:37:34] <NodeX> or do I need a secondary compound index on _id,uid
[13:38:33] <vlad-paiu> Hello
[13:38:38] <jmar777> NodeX: you shouldn't need a compound index, assuming your query is only for the `uid`
[13:38:44] <kali> NodeX: it's as efficient when it find something, marginally less efficient when it does not match
[13:38:46] <jmar777> or against the uid, that is
[13:38:55] <vlad-paiu> Quick question : any way I can run a find and modify query with the C driver API ?
[13:39:02] <kali> NodeX: because it will fetch the document and scan it for nothing
[13:39:30] <jmar777> kali: even if the index is on `uid`?
[13:39:46] <vlad-paiu> can't see any explicit method.. like mongo_find_modify or something like that :)
[13:39:53] <kali> jmar777: the index on _id is there anyway
[13:39:57] <NodeX> it's not just for the "uid"
[13:40:20] <jmar777> kali: right, but his question indicated there would be one on uid as well
[13:40:33] <NodeX> thanks kali, it was not something I have thought about before, just wondered if anyone else had
[13:40:43] <NodeX> jmar777 : I think you dont understand what I am after
[13:40:53] <jmar777> NodeX: i'm starting to think so as well :)
[13:40:57] <NodeX> ;)
[13:41:13] <kali> good, because i was starting to feel confused :)
[13:41:17] <NodeX> I'll do the check in my application code just in case
[13:41:27] <jmar777> NodeX: ooooh... that object was your query, not an actual document, huh....
[13:41:28] <NodeX> it's only a findOne() so no biggy
[13:41:34] <NodeX> err no lol
[13:41:57] <NodeX> my query was this .. findOne({_id:ObjectId("123"),'uid':456});
[13:42:13] <NodeX> and whether it was as efficient as findOne({_id:ObjectId("123")});
[13:42:45] <jmar777> NodeX: right... i think i'm tracking you now. and ya... what kali said :)
[13:42:51] <NodeX> I just wondered if the query planner would see my index on uid and merge it with the _id
[13:43:26] <NodeX> but it doesn't matter now anyway lol, easier to make the comparison in my code
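For anyone curious later, explain() shows which index the planner actually settled on for the combined filter; a sketch with a placeholder ObjectId and collection name:

    // check which index serves the combined _id + uid lookup (names/values are placeholders)
    db.things.find({ _id: ObjectId("50f8a1b2c3d4e5f601234567"), uid: 456 }).explain();
    // a "cursor" value of "BtreeCursor _id_" in the output means the lookup rode the _id index alone,
    // with the uid condition checked against the fetched document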
[14:34:27] <NodeX> tumbleweeds
[15:12:23] <JoeyJoeJo> I need to performance test how long it takes to insert my data (tens of millions of documents). What's a good way of doing that? I was thinking about writing a python script and just doing finish time - start time. Does mongo have anything built in to handle that?
[15:14:29] <NodeX> you need to script it
[15:15:16] <JoeyJoeJo> So then comparing the time before and after my insert loop runs is a good idea?
[15:16:10] <NodeX> Personally I would do it in chunks but yes
[15:17:06] <JoeyJoeJo> How would you do it with chunks?
[15:20:26] <NodeX> err split your data into chunks ...
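JoeyJoeJo mentioned a Python script; to stay with the shell used in the rest of this log, here is a hedged timing sketch in mongo shell JavaScript instead, with the collection name, chunk size, and document shape all assumed. The same finish-time-minus-start-time idea translates directly to pymongo:

    // insert in chunks and time each chunk
    var chunkSize = 10000, total = 1000000;
    for (var done = 0; done < total; done += chunkSize) {
        var start = new Date();
        for (var i = 0; i < chunkSize; i++) {
            db.loadtest.insert({ n: done + i, payload: "x" });
        }
        print((done + chunkSize) + " docs in, last chunk took " + (new Date() - start) + " ms");
    }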
[15:20:37] <NodeX> http://www.itproportal.com/2013/01/18/google-ceo-larry-page-taunts-apple-hows-that-thermonuclear-war-against-android-going/ :D
[15:25:39] <xcat> What difference would opening your mouth make on IRC?
[15:25:52] <xcat> >that feel when scrolled up
[15:26:27] <NodeX> eh~?
[15:31:36] <ehershey> wow
[15:44:02] <timah> what's the fastest method for transforming data.
[15:44:03] <timah> ?
[15:44:34] <timah> lots of data.
[15:45:24] <NodeX> define "Transforming"
[16:21:08] <timah> NodeX: sorry for the delay… here's a brief example.
[16:22:56] <timah> the guy i just replaced had 1 big 30+ million document collection.
[16:27:32] <NodeX> ok great
[16:27:41] <NodeX> and then what?
[16:28:20] <timah> that looked something like this: { _id: ObjectId('…'), event: 'someLongEventKey', properties: { publisher_hash: <MD5>, merchant_hash: <MD5>, region_hash: <MD5>, longitude: '-45.93939', latitude: 'anotherStringEek', user_hash: <MD5>, time: <SECONDS_TIMESTAMP> } }
[16:28:30] <timah> so… super heavy.
[16:29:12] <timah> even worse… publisher to region is 1:1.
[16:29:20] <timah> so region is unnecessary.
[16:29:52] <timah> publisher to merchant is 1:m.
[16:30:21] <timah> longitude and latitude not geo indexable in their current state, not to mention strings.
[16:30:48] <timah> time? really? and only to the second? really? ugh...
[16:31:21] <NodeX> lat/long should just store as an array
[16:31:27] <timah> exactly. lol.
[16:31:34] <timah> and should be the first field in the doc.
[16:31:38] <timah> so here is my format.
[16:31:41] <timah> get ready.
[16:31:56] <timah> super simple.
[16:32:31] <timah> every md5 gets replaced with its corresponding mysql pkid.
[16:33:47] <timah> so publisher gets moved out of the schema altogether and its hash's mysql pkid value becomes part of the collection name… so… 1 collection per publisher… e.g. p.25.e
[16:34:07] <timah> p = publisher, 25 = id, e = event.
[16:34:21] <timah> the docs now look like this.
[16:35:15] <NodeX> why are you splitting the collection?
[16:35:47] <NodeX> save yourself the hassle and shard it
[16:37:19] <timah> { _id: { m: <int>, o: <oid> }, l: [ <int>, <int> ], p: <int>, k: <int>, u: <int> }
[16:38:09] <timah> so… i've wrapped the merchant id up into the _id.
[16:38:26] <timah> and aside from the geo index nothing else needs indexed.
[16:38:57] <timah> we aren't ready to shard yet. and we only have 30 million docs.
[16:39:28] <timah> and it's completely logical to split based on publisher.
[16:39:59] <NodeX> not sure you can use an object as an _id
[16:40:31] <NodeX> never tried myself but I am sure it can't be anything but a string, oid or int
[16:40:49] <timah> http://mongotips.com/b/another-objectid-trick/
[16:41:45] <NodeX> I stand corrected
[16:42:20] <timah> which is tricking awesome considering i'm not eating storage for a separate index.
[16:42:50] <timah> and it wraps my logical seek by merchant > time into the autoindexid.
[16:43:09] <timah> so i just format it upon creation and mongo does the rest.
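A sketch of what building one of those documents at insert time might look like, with every value a placeholder and the collection name taken from timah's p.25.e example:

    // composite _id: merchant id plus a freshly generated (creation-ordered) ObjectId
    db.getCollection("p.25.e").insert({
        _id: { m: 42, o: ObjectId() },
        l: [ -45.93939, 23.41421 ],        // [ longitude, latitude ] as numbers, geo-indexable
        p: 17,                             // remaining integer fields from the transformed schema
        k: 3,
        u: 9001
    });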
[16:44:04] <timah> then each publisher event collection (p.25.e) has a corresponding count collection (p.25.c).
[16:45:30] <timah> { _id: { m: <int>, d: <timestamp> }, e1: 214, e2: 2343, e3: 223, etc: 234 }
[16:45:32] <NodeX> sounds like you have the transformation figured out
[16:45:44] <timah> i sure do… and it all works great.
[16:45:46] <timah> it just takes forever.
[16:46:01] <timah> should i paste the mongo script?
[16:46:06] <timah> oops.
[16:46:09] <timah> pastie
[16:46:11] <timah> lol.
[16:46:27] <NodeX> Not sure what you're asking for help with now tbh
[16:46:46] <NodeX> looks more like you wanted to show us how smart you are ;)
[16:46:53] <timah> well… i'm looking for a performance gain in the transformation process.
[16:47:04] <timah> haha… no, not at all.
[16:47:09] <NodeX> you're doing this transform with javascript?
[16:47:13] <timah> yes.
[16:47:26] <NodeX> then perhaps dont use that LOL
[16:47:52] <timah> yeah… and that's exactly why i'm here… figured i'd drop a line before converting it to pymongo.
[16:48:17] <NodeX> probably way faster in python or something
[16:48:23] <timah> it's taking right around 30 minutes for every 3.5 million documents.
[16:48:29] <NodeX> yuck
[16:48:43] <NodeX> even in PHP that would only take around 15 mins for all 30M
[16:48:44] <timah> yeah… ok… glad to know i'm not going crazy.
[16:48:53] <NodeX> if even that long
[16:49:12] <timah> yeah… ok… mind taking a quick peek at what i'm doing just to see if i'm missing something logically?
[16:49:16] <timah> cursor handling, etc...
[16:49:39] <NodeX> you should try in batches too
[16:49:54] <NodeX> sometimes performance degrades after a lot of inserts
[16:50:32] <timah> i am… you'll see my logic… it's setup so i can try different batch sizes.
[16:50:48] <timah> http://pastie.org/5720238
[16:55:39] <timah> i mean… i was even going to go as far as using mongoexport with --query (per publisher) and --fields (only the fields i need) and dumping each publisher in its own file during that export… then using something super simple like php fgets for buffer read on large files and have php just swap all the values out and basically just transform the exports into imports.
[16:55:48] <timah> then just mongoimport those.
[16:56:58] <timah> but if you think that having php (or anything else, really) do the work will speed it up a bit… it surely wouldn't take too long to port it over.
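Without seeing the pastie, a generic shape of the batched read-transform-write loop; `transform` stands in for whatever mapping the real script applies, and the target collection is fixed here just to keep the sketch short:

    var batch = [], batchSize = 1000;
    db.events.find().forEach(function (doc) {
        batch.push(transform(doc));          // `transform` is a placeholder for the real mapping
        if (batch.length === batchSize) {
            db.events_new.insert(batch);     // inserting an array is one bulk round trip (2.2+ shell)
            batch = [];
        }
    });
    if (batch.length > 0) db.events_new.insert(batch);

Cutting the per-document insert round trips, plus reading only the fields needed (a projection on the find), is usually where the time savings come from.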
[16:58:32] <rcmachad_> clear
[16:59:10] <timah> rcmachad_: haha! sorry… i need more coffee too.
[17:00:06] <rcmachad_> timah: no problem, just to mark when I stop reading :)
[17:00:33] <rcmachad_> timah: (it was a command mispelled)
[17:05:33] <timah> NodeX: did i lose you? or has my lack of daily caffeine intake misled me? :P
[17:08:51] <timah> alright… gotta actually leave my house and go to a stupid office. bbiab.
[17:12:08] <JoeyJoeJo> I've inserted a few million documents to my new DB. How can I make sure that sharding is working properly?
[17:12:39] <timah> exit
[17:12:42] <timah> meh...
[17:21:42] <webber_> Dear experts, where to find notes on MongoDB disk layout and storage?
[17:25:32] <ehershey> http://docs.mongodb.org/manual/administration/production-notes/#disk-and-storage-systems
[17:25:35] <ehershey> some goodies there
[17:26:31] <webber_> :) Ok, cool ... anything else beyond this. Any dev blogs to follow... must read wiki... or the like?
[17:27:18] <webber_> Oh , wait. That's on using and the like. I was wondering how MongoDB stores its data on disk.
[17:34:55] <ehershey> hmm
[17:36:02] <ehershey> http://docs.mongodb.org/manual/faq/fundamentals/#are-writes-written-to-disk-immediately-or-lazily
[17:36:08] <ehershey> http://docs.mongodb.org/manual/faq/developers/#why-are-mongodb-s-data-files-so-large
[17:37:29] <ehershey> There's a lot of other good info in the faq's
[17:37:54] <ehershey> http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database
[17:38:18] <ehershey> and it's all bson on disk
[17:38:18] <ehershey> http://bsonspec.org/
[17:40:54] <webber_> BSON , yes, that I know... presentation *looking*
[17:45:01] <ehershey> http://docs.mongodb.org/manual/faq/storage/#faq-disk-size
[17:45:03] <ehershey> here's some more info
[17:45:23] <ehershey> oh in fact that whole page
[17:45:29] <ehershey> is the motherlode
[17:47:15] <webber_> :) So, basically: read the fine manual
[17:47:21] <webber_> Thanks
[17:50:42] <JoeyJoeJo> Based on this output from the sh.status() command, can anyone tell if my database is using all four of my servers? It looks to me like it's only using one server. http://pastebin.com/zKK4Rvy2
[17:51:44] <kali> for now everything is in one single shard, there is not enough data
[17:52:48] <JoeyJoeJo> How much is enough? I'm running my insertion script now and I'm up to 1.5 million documents out of 180+ million
[17:55:02] <JoeyJoeJo> Is there any way to force mongo to always write to all shards? Right now I seem to be inserting about 5000 documents per second on the one server. Couldn't I quadruple that if all servers were being used?
[17:56:27] <kali> JoeyJoeJo: you know, this is documented extensively
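The documentation kali is pointing at covers both checking how data is actually spread and why inserts pile onto one shard. A sketch, with the database and collection names assumed:

    // per-shard data and chunk breakdown for one collection (run from the relevant database)
    db.mycollection.getShardDistribution()
    // cluster-wide view, including chunk counts per shard
    sh.status()

With a monotonically increasing shard key (an ObjectId _id, a timestamp), new inserts all target the chunk with the highest range, so a single shard takes the write load until the balancer moves chunks around; a less monotonic shard key or pre-splitting chunks is the usual remedy.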
[18:52:14] <MatheusOl> (because _id is unique, only)
[19:10:33] <sweatn__> hello
[19:10:58] <sweatn__> does anyone use mongoose or the node native driver?
[19:46:15] <sweatn__> hello
[20:27:17] <owen1> is there a way to see logs of all my replica set?
[20:31:20] <w3pm> why are there different functions to connect - connect() and rs_connect() in the Erlang driver for example.. i thought replica sets were supposed to be transparent to the app?
[21:26:02] <SEJeff_work> I've got a record that has a key named "roles" with a value like ["super_user", "secret-sauce"]
[21:26:24] <SEJeff_work> How can I do a query to return all users with a "super_user" role?
[21:26:56] <SEJeff_work> I thought something like this might work, but it doesn't: db.users.find({"roles": {"$in": ["super_users"]}})
[21:33:19] <SEJeff_work> This is what I was after: db.users.find({"roles": {"$in": ["super-users"]}}, {login: 1, _id: 0})
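Worth noting: the first attempt failed on the value, not the operator. The role stored in the example document is "super_user", and equality against an array field already matches any element, so $in isn't needed for a single value:

    // matches documents whose roles array contains "super_user"
    db.users.find({ roles: "super_user" }, { login: 1, _id: 0 });
    // $in is equivalent here, provided the value matches what's stored exactly
    db.users.find({ roles: { $in: ["super_user"] } }, { login: 1, _id: 0 });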
[22:24:42] <tjadc> Shoh!
[22:24:57] <tjadc> It's been a while since I last used MongoDB
[22:25:05] <tjadc> Hope not too much has changed :P
[22:25:18] <tjadc> Except improvements!
[22:25:25] <tjadc> Improvements are always welcome
[22:25:31] <tjadc> :P
[22:25:59] <tjadc> Hmmm, been about 1.5 years now
[22:45:38] <Ironhand> hello; is anyone able to give me a rough idea of the current status of MongoDB on ARM CPUs? the information I've found suggests some people have done some serious work on it, but with very few recent signs of activity
[22:47:13] <tjadc> Ironhand: I am unable to give you personal experience info
[22:47:21] <tjadc> but if you'd like I can do a cross search for you
[22:47:30] <tjadc> - do the search and see what I find
[22:47:41] <tjadc> Don't know how useful it'd be :P
[22:49:29] <Ironhand> well, any information would be useful, just hope you won't waste too much time on whatever a cross search would actually amount to
[22:52:00] <Ironhand> after a lot of consideration I decided I wanted to migrate a personal Python-based project from SQL to MongoDB, read up on various libraries, got a taste for the whole thing, then when I finally decided to actually install stuff it turned out running MongoDB on my QNAP NAS running Debian wasn't quite that easy...
[22:52:43] <Ironhand> I'm kind of hoping the popularity of the Raspberry Pi might have sparked some new interest in MongoDB on non-x86
[23:02:37] <owen1> is there a way to see logs of all my replica set?
[23:08:20] <tjadc> Ironhand: Ah ok
[23:08:34] <tjadc> Yeah I am somewhat bored and am looking for something interesting to do :P
[23:08:45] <ehershey> lucky you!
[23:08:51] <ehershey> ;)
[23:08:51] <tjadc> I have ordered my RPi so it would be worth looking into :)
[23:08:57] <tjadc> :P
[23:10:21] <ehershey> how long do they take to ship nowadays?
[23:10:37] <midinerd> Hello
[23:10:46] <tjadc> Ironhand: and by cross search I just mean searching it up as a second person, I probably search differently and may stumble upon something you haven't
[23:11:02] <tjadc> ehershey: so far have waited 1.5 weeks
[23:11:06] <tjadc> I am in South Africa
[23:11:13] <midinerd> I'm looking at http://mongoosejs.com/docs/index.html <--- where it starts with //getting-started.js (where exactly is this taking place, a javascript file located <somewhere>, within npm, in mongoose?)
[23:11:14] <tjadc> They said 3 weeks
[23:13:33] <ehershey> midinerd: that's in a node.js script
[23:13:40] <ehershey> called something like getting-started.js
[23:13:45] <ehershey> it can be located anywhere
[23:13:56] <midinerd> ehershey: Thanks! So I include node.js from somewhere and then continue on with javascript documentation as they've got shown there?
[23:14:05] <ehershey> I wish I'd ordered a raspberry pi a year ago
[23:14:59] <tjadc> Ironhand: You've obviously taken a look @ skrabban/mongo-nonx86 - what are your thoughts around it :?
[23:15:03] <ehershey> midinerd: you might want to step through some basic node.js stuff before diving into mongoose imho
[23:15:26] <tjadc> ehershey: well, if you ordered recently you get the newly upgraded pi :)
[23:15:58] <ehershey> oooh
[23:16:05] <midinerd> I don't want to... but I will.
[23:16:30] <midinerd> ehershey: btw you never said "yes" or "no"
[23:18:27] <w3pm> hey guys, why is there rs_connect and connect? my understanding was replica sets would be invisible to the app
[23:18:40] <w3pm> but it seems like i need to know whether i have a replica set or not
[23:20:35] <midinerd> mongoosejs answered my question foiist
[23:20:36] <midinerd> [17:18:30] <aheckmann> midinerd: once you have node installed you can execute javascript files as shown in getting-started: node someScript.js
[23:20:36] <midinerd> [17:18:50] <aheckmann> midinerd: make sure mongo is running as well
[23:28:03] <ehershey> midinerd: yes or no to which?
[23:29:00] <ehershey> w3pm: I think that the replica set layout can be somewhat transparent to the app but the app needs to know that it's connecting to a replica set vs a single instance
[23:29:57] <w3pm> :\ okay..
[23:30:20] <ehershey> for example what if you were building an administrative tool that needed to connect to an individual instance that is also a member of a replica set
[23:30:32] <ehershey> you would need a way to specify a single instance and know for sure you are connecting to that instance
[23:30:41] <ehershey> vs the rs
[23:31:06] <ehershey> can I ask what's making you worry about the two methods?
[23:31:19] <w3pm> well i have to change my app code if i change the db deployment
[23:31:24] <w3pm> which somewhat defeats the point of it being 'transparent'
[23:31:26] <ehershey> is it that you don't have any secondaries yet and you don't want to have to change your app code when you get some?
[23:31:29] <ehershey> ah
[23:31:31] <w3pm> right
[23:31:54] <ehershey> I see
[23:32:22] <ehershey> one thing I can think of would be to use a connection string that is easy to change without modifying your code
[23:33:20] <ehershey> I don't know the erlang driver very well
[23:33:41] <w3pm> yeah i could put something as a part of the app config i guess
[23:33:54] <w3pm> well your example of the specific instance makes sense, thanks
[23:34:49] <ehershey> no problem
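A hedged illustration of ehershey's connection-string suggestion, using the standard mongodb:// URI format with placeholder hosts and set name; whether the Erlang driver of the time parses this form is a separate question, and the same idea can be kept as a host list in app config instead:

    # single instance
    mongodb://db1.example.com:27017/mydb
    # replica set: seed several members and name the set
    mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017/mydb?replicaSet=rs0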
[23:53:21] <ehershey> I keep thinking
[23:53:41] <ehershey> I should write a utility to pull data from X into mongodb
[23:53:54] <ehershey> but mongoimport takes everything I've thrown at it so far
[23:54:09] <ehershey> like gmail header json
[23:54:14] <ehershey> csv google latitude data
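For the record, the kind of mongoimport invocations that cover those cases, with file and namespace names as placeholders:

    # JSON, one document per line
    mongoimport --db mail --collection headers --file headers.json
    # CSV with a header row naming the fields
    mongoimport --db geo --collection latitude --type csv --headerline --file latitude.csv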