[00:04:15] <fxmulder> joannac: they're all running 3.0.4 now
[00:04:18] <joannac> fxmulder: oh, yeah. google for "mongodb powerof2sizes"
[00:05:49] <fxmulder> oh yeah I don't want that, these documents will never change
[01:27:25] <pyCasso> I'm having a discussion at work regarding using mongodb vs a rest api
[01:28:16] <pyCasso> my understanding of mongo is limited, so I need to know: what are the gains from using mongodb as opposed to an api call that returns json objects?
[01:29:23] <pyCasso> folks at work argue that it runs on a different server environment as opposed to our native server environment
[01:30:31] <cheeser> none of that made any sense to me.
[03:30:22] <Pak> (the latency you see on the webpage is caused by the db)
[03:31:01] <Pak> you can compare it with http://quote.machinu.net/1950 - you can see that this one is noticeably faster (it only queries with .find({year: 1950}))
[03:31:16] <Pak> whereas the first one is .find({})
[03:31:26] <Pak> So my question is - what am I doing wrong here?
[03:37:19] <cheeser> "get random document" isn't something that mongodb is optimized for
[03:37:44] <Jonno_FTW> is wired tiger the default engine in mongo 3.0?
[12:13:43] <talbott> AutoReconnect: [Errno 110] Connection timed out
[12:13:43] <talbott> traceback: flask/app.py:1817 wsgi_app -> flask/app.py:1477 full_dispatch_request -> newrelic/hooks/framework_flask.py:98 _nr_wrapper_Flask_handle_exception_ -> flask/app.py:1381 handle_user_exception -> flask/app.py:1475 full_dispatch_request -> flask/app.py:1461 dispatch_request -> newrelic/hooks/framework_flask.py:40 _nr_wrapper_handler_ -> /location-service/app.py:49 geocode -> interactors/location_interactor.py:9 geocode -> services/location_caching_service.py:18 get_location
[12:13:44] <talbott> -> mongoengine/queryset/base.py:309 first -> mongoengine/queryset/base.py:160 __getitem__ -> pymongo/cursor.py:595 __getitem__ -> pymongo/cursor.py:1076 next -> pymongo/cursor.py:1020 _refresh -> pymongo/cursor.py:933 __send_message -> pymongo/mongo_client.py:1217 _send_message_with_response
[12:13:53] <talbott> (whoops sorry was meant to be a snippet) :(
[12:16:19] <cheeser> d-snp: there isn't a native operation that says "give me a random value"
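[Editor's note: there is no server-side random pick in MongoDB 3.0, the era of this log (the $sample aggregation stage only arrives in 3.2), so the usual workaround is count + random skip. A minimal sketch of that idea, shown against a plain list standing in for the collection so it runs without a server:]

```python
import random

def pick_random(docs):
    """Pick one document uniformly at random.

    With a real pymongo collection the same idea is:
        n = collection.count()
        doc = collection.find().skip(random.randrange(n)).limit(1)[0]
    (skip is O(skip) server-side, so this is slow on huge collections,
    but far cheaper than shipping every document to the client.)
    """
    if not docs:
        return None
    return docs[random.randrange(len(docs))]
```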
[12:19:54] <talbott> is it normal to have connection timeout issues after just a few minutes?
[12:20:40] <deathanchor> talbott: an open cursor to mongodb defaults to a 10-minute idle timeout, I believe
[12:26:12] <deathanchor> if you turn off the timeout, you need to ensure your program closes/exhausts the cursor before exiting/dying
[12:26:38] <deathanchor> depends what your program does
[12:27:17] <deathanchor> perhaps you are reusing the same cursor? in pymongo you can iterate it to completion (e.g. list(cursor)) to exhaust it, or call cursor.close() to kill it off.
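[Editor's note: the advice above boils down to "always close or exhaust your cursors, especially with no_cursor_timeout". A small sketch of that pattern; `handle` is a stand-in callback, and the test drives it with a stub cursor rather than a live server:]

```python
def drain(cursor, handle):
    """Iterate a cursor to exhaustion, always closing it afterwards.

    If a program dies mid-iteration without closing a no-timeout
    cursor, the server keeps that cursor open indefinitely; the
    finally block guarantees close() runs even when handle() raises.
    """
    count = 0
    try:
        for doc in cursor:
            handle(doc)
            count += 1
    finally:
        cursor.close()
    return count
```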
[12:28:04] <triplefoxxy> Hunting down an issue where all the RAM is allocated and the system grinds to a halt. I see WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. could this be connected in any way?
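[Editor's note: that warning is likely connected. MongoDB's production notes recommend disabling transparent huge pages, which with 'always' can contribute to exactly this kind of memory pressure. A host-config fragment (run as root; on most distros you would also wire this into an init script so it survives reboots):]

```shell
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```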
[14:05:29] <cheeser> and http://docs.mongodb.org/manual/reference/method/db.currentOp/#db.currentOp
[14:05:44] <StephenLynx> from what I heard, the first developed driver was the java one. I wouldn't be surprised if it's the most complete out there.
[14:06:35] <cheeser> it was. the c# driver i know is pretty complete and offers some features the java driver doesn't. (though i'm working on one of those gaps)
[14:07:34] <deathanchor> cheeser: yeah, I use those manually, but I was wondering if anyone wrote code for checking currentOp and killing it if it met certain conditions
[14:11:16] <greyTEO> I haven't found much the java driver doesn't support.
[14:24:33] <fxmulder> so I had been running cleanupOrphaned on one of my replica sets, the server it was running on crashed and now when I try to run cleanupOrphaned again I'm getting "server is not part of a sharded cluster or the sharding metadata is not yet initialized."
[15:03:14] <hashpuppy> i need to take down one of my mongodb servers for maintenance. i have m1, m2, and arbiter. i'm taking down m1 (master). what's the best way to do this? just stop the instance?
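[Editor's note: the usual sequence for taking a primary down is to step it down first so the failover happens on your schedule, and only then stop the process. An ops sketch using the mongo shell, assuming a 120-second step-down window:]

```shell
# on m1 (the current primary): hand the primary role to m2 before stopping
mongo --eval 'rs.stepDown(120)'
# once rs.status() on m2 reports PRIMARY, stop m1 cleanly
mongo admin --eval 'db.shutdownServer()'
```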
[15:20:57] <synergy_> Are object ids unique over a whole database, or only in a document's collection?
[15:21:52] <cheeser> well, ObjectId is unique globally. but _id values are only required to be unique per collection
[15:23:34] <fxmulder> so what does "server is not part of a sharded cluster or the sharding metadata is not yet initialized." mean if this replica set has a sharded collection?
[15:24:48] <synergy_> I'm confused. ObjectId is unique globally, but _id values aren't necessarily?
[15:25:18] <cheeser> well, _id can be anything so long as it's unique inside that collection
[15:29:45] <synergy_> But that changes if the _id used is an instance of ObjectId?
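[Editor's note: the global uniqueness cheeser describes comes from how an ObjectId is constructed. A toy re-implementation of the classic 12-byte layout (4-byte timestamp, 3-byte machine id, 2-byte pid, 3-byte counter); real code should use bson.ObjectId from pymongo, this only illustrates why collisions are effectively impossible across machines:]

```python
import os
import random
import threading
import time

_counter = random.randrange(0xFFFFFF)
_machine = random.getrandbits(24)  # stand-in for a hash of the hostname
_lock = threading.Lock()

def toy_object_id():
    """Generate a 24-hex-char id mimicking the classic ObjectId layout."""
    global _counter
    with _lock:
        _counter = (_counter + 1) % 0xFFFFFF
        count = _counter
    ts = int(time.time()) & 0xFFFFFFFF
    pid = os.getpid() & 0xFFFF
    raw = (ts.to_bytes(4, "big") + _machine.to_bytes(3, "big")
           + pid.to_bytes(2, "big") + count.to_bytes(3, "big"))
    return raw.hex()
```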
[15:38:09] <fxmulder> so db.runCommand( {"shardingState": 1} ) on this replica set says that it is not a shard member while the other replica set still shows that it is a shard member
[15:38:36] <fxmulder> is it safe to enable sharding on the database again in the replica set that doesn't think it is a shard member?
[15:43:48] <deathanchor> fxmulder: did you start it with shardsvr option?
[15:49:20] <fxmulder> sh.collection.stats() still shows both replica sets as shard members
[16:35:38] <d-snp> hi, in what circumstance could I get a duplicate key error in an update query?
[16:35:58] <d-snp> I do something like collection.update(key, {.. update..}, { .. upsert etc ..})
[16:39:03] <d-snp> http://pastie.org/10258708 this is what it looks like
[16:47:26] <cheeser> if you have an _id field in your update document
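[Editor's note: besides an _id in the update document, another common cause is two concurrent upserts racing: both find no match, both insert, and one loses to the unique index. The usual remedy is to retry, since on the second attempt the other writer's document exists and the upsert takes the update path. A sketch with the retry logic factored out so it can be exercised with a stub; DuplicateKeyError here is a local stand-in for pymongo.errors.DuplicateKeyError:]

```python
class DuplicateKeyError(Exception):
    """Local stand-in for pymongo.errors.DuplicateKeyError."""

def upsert_with_retry(do_upsert, retries=3):
    """Run an upsert callable, retrying on duplicate-key races.

    On retry, the earlier writer's document now exists, so the upsert
    matches it and updates instead of inserting.
    """
    for attempt in range(retries):
        try:
            return do_upsert()
        except DuplicateKeyError:
            if attempt == retries - 1:
                raise
```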
[17:37:19] <leandroa> hi. what happens to existing data on a collection when I create a TTL index? I mean, with documents that match the expiration rule
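[Editor's note: the TTL monitor, which runs roughly once a minute, also deletes documents that already existed before the index was created, provided their indexed date field is past the expireAfterSeconds cutoff. The test it applies is just a date comparison, sketched here; `field` names are illustrative:]

```python
from datetime import datetime, timedelta

def is_expired(doc, field, expire_after_seconds, now=None):
    """Mirror the TTL monitor's check: indexed date older than the cutoff.

    Documents whose field is missing or not a date are never expired.
    """
    now = now or datetime.utcnow()
    value = doc.get(field)
    if not isinstance(value, datetime):
        return False
    return value <= now - timedelta(seconds=expire_after_seconds)
```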
[21:19:25] <akoustik> i'm trying to set up a replica set with members in multiple AWS regions. i'm trying to make sure all our traffic between regions is encrypted. i'm assuming it's necessary and sufficient to enable SSL and make sure each host has the keypair. is that about right?
[21:25:29] <GothAlice> akoustik: Aye, and also really, really keep on top of OpenSSL updates.
[21:25:45] <GothAlice> akoustik: Things like heartbleed and weakdh are reality, these days.
[21:27:00] <GothAlice> akoustik: Within the datacenter I additionally use a private VLAN w/ IPsec. No actual firewall rules needed because literally only hosts on the authorized VLAN can communicate, and mongod only listens to that VLAN.
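[Editor's note: for the SSL piece on MongoDB 3.0, the relevant mongod.conf block looks like the fragment below. The file paths are assumptions; requireSSL refuses any plaintext connection, which is what you want for cross-region replica traffic:]

```yaml
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem   # server certificate + private key
    CAFile: /etc/ssl/ca.pem            # CA used to validate peer certificates
```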
[21:28:53] <akoustik> GothAlice: cool. curious then, what benefit do you get from doing that in addition to just using restricted VPCs in AWS? just a little more confidence, or is it actually a stronger defense?
[21:29:30] <akoustik> (ie confidence in the case that amazon's security isn't as good as advertised, or something like that?)
[21:29:40] <GothAlice> I'm not technically on AWS, since the last time I was they ate my data despite cross-zone failures being against their SLA.
[21:29:47] <GothAlice> So for me it's VLAN or bust.
[21:30:25] <GothAlice> Notably their reliability. (It took me the 36 hours prior to getting my wisdom teeth extracted to reverse engineer the corrupted on-disk files… thanks EBS.)
[21:32:12] <akoustik> interesting. well, technically our data source is salesforce, so if/when AWS crashes/burns we at least don't actually lose data and have to worry about that while switching providers.
[21:32:53] <akoustik> main concern is just doing what we can to make transactions reasonably secure.
[21:35:16] <GothAlice> You say "transactions" in #mongodb, and I take a long quaff of my coffee. Using MongoDB for financial information?
[22:50:37] <rusty78> If I have a list of users, and I want to make sure no user can register a capitalization change like uSer1 instead of user1
[22:50:47] <rusty78> Anyone know a way to do this without actually grabbing all the users from my db?
[22:51:06] <GothAlice> Store usernames in their canonical form.
[22:51:40] <GothAlice> rusty78: This is much harder than you might expect. Ref: https://en.wikipedia.org/wiki/Unicode_normalization and http://www.unicode.org/reports/tr15/
[22:51:41] <rusty78> Sorry I am a noob, canonical?
[22:51:49] <GothAlice> "The one true version of the username."
[22:52:08] <GothAlice> I.e. if you want to be case-insensitive, lowercase usernames anywhere they are entered and store them lower-case.
[22:52:36] <rusty78> Sorry here I worded the question poorly
[22:53:01] <rusty78> I do want to allow users to register names with capitalizations
[22:53:13] <rusty78> I just do not want a user to be able to register the same name with only a capitalization change
[22:53:38] <GothAlice> If user1 and uSer1 are the "same", lower-case the input (uSer1 -> user1) and then test.
[22:53:58] <GothAlice> Anywhere a username is entered by a user, lower-case it before doing anything else to it. Problem solved.
[22:54:27] <rusty78> Well what if I want to be able to return it
[22:54:47] <GothAlice> Then you will need to store both, but the mixed-case one is for display purposes only. You would continue to query the lower-case version.
[22:54:49] <rusty78> if someone registers uSER, I want to be able to return uSER as their name - I also want to check if the name is unique
[22:58:32] <GothAlice> Depending on your normalization, that can either be the single precomposed letter "è" (e with grave accent), or it can be two code points: an ASCII "e" followed by a combining grave accent.
[22:58:46] <GothAlice> They look the same, but compare differently.
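[Editor's note: GothAlice's two points, store a canonical form for uniqueness alongside the display form, and normalize Unicode so the two ways of writing "è" compare equal, fit in a few lines. A sketch; the document field names are made up:]

```python
import unicodedata

def canonical(username):
    """Canonical form for uniqueness checks: NFC-normalize, then casefold.

    NFC collapses "e + combining grave accent" and the precomposed "è"
    into the same code points; casefold() is a more aggressive lower().
    """
    return unicodedata.normalize("NFC", username).casefold()

def user_document(username):
    """Store both: display form for rendering, canonical form for a
    unique index and for lookups."""
    return {"display_name": username, "name_canonical": canonical(username)}
```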
[22:58:51] <rusty78> right now I have my validator restricting it to alphanumeric only
[22:59:24] <rusty78> That produces a barrier I guess if you want to have cross-language websites
[23:37:38] <Kharahki> How would you add default data to a collection when it is created? I'm using mongoose and I have a json file that I want to 'import' into the collection when it's first created. Is there a good way of doing this?
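[Editor's note: one idempotent pattern is to insert the defaults only when the collection is empty, so re-running the app never duplicates them; in mongoose that means checking a count before insertMany. Sketched here in Python against a duck-typed collection (needs only count() and insert_many()), so the test can use a stub:]

```python
import json

def seed_if_empty(collection, json_path):
    """Insert default documents from a JSON file, but only once.

    Safe to call at every application start: if the collection already
    has documents, it does nothing and returns 0.
    """
    if collection.count() > 0:
        return 0
    with open(json_path) as f:
        docs = json.load(f)
    collection.insert_many(docs)
    return len(docs)
```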