[09:29:07] <kurushiyama> I have just pushed a new version of https://github.com/mwmahlberg/incomplete-returns
[09:29:07] <kurushiyama> A little background, first: David Glasser wrote a blog post (https://engineering.meteor.com/mongodb-queries-dont-always-return-all-matching-documents-654b6594a827#.8rgfxd1u3), asserting that MongoDB would return incomplete result sets.
[09:29:08] <kurushiyama> However, he did not disclose the technical means by which he came to that conclusion, and actively resists disclosing them.
[09:29:08] <kurushiyama> Hence, I wrote incomplete-returns to prove him wrong.
[09:29:10] <kurushiyama> It is a well-known problem in the MongoDB world that, as per eventual consistency, the .count method is not reliable, since it counts index entries instead of running the actual query and iterating over the result set.
[09:29:10] <kurushiyama> Which is basically what incomplete-returns does: it inserts some data into a collection (*doInit), then heavily modifies that data (10 goroutines, each on its own socket connection, concurrently). 10 other goroutines read the data, and in case of a mismatch between what count returns and the expected result, it freezes all operations (I used an RWMutex for that) and counts the actual result set of the cursor. incomplete-returns
[09:29:12] <kurushiyama> actively tries to prove David's point,
[09:29:12] <kurushiyama> in the sense that if the result set differs at all from what can be expected, incomplete-returns stops, acknowledging the mismatch.
[09:29:14] <kurushiyama> If the number of documents in the result set matches the expected number, the goroutines are unfrozen.
[09:29:14] <kurushiyama> Caveat: As soon as the first mismatch is found, the 10 writing goroutines are frozen. On unfreeze, they all write pretty much at the same time, making the next count likely to mismatch as well.
[09:30:31] <kurushiyama> @Zelest May I offer to compile incomplete-returns for you so that you can run it against test machines, if you happen to have some?
[09:31:18] <Zelest> don't get me wrong, i'm bored at work, but i do have tons of stuff to do, sadly. :(
[09:33:14] <kurushiyama> I have tried to prove his point. Really. I gave myself a hard time. I could not find a _single_ instance where the result set was off.
[10:40:28] <Zelest> ugh, the toDateTime() should convert it to the current timezone :S
[10:42:33] <Zelest> Derick, if I run $date = new DateTime(), it defaults to the timezone of date.timezone.. shouldn't MongoDB\BSON\UTCDateTime::toDateTime() do the same?
[10:54:15] <Derick> UTCDateTime doesn't do timezones
[10:54:27] <Derick> and, I don't want to make the assumption, so it's UTC for now
[10:54:40] <Derick> and as it's part of the public API, I can't change that either ;-)
[10:59:50] <ceegee> we would like to upgrade from version 2.6 to 3.2 on debian jessie. the upgrade guide tells me to go via version 3.0, but I can not find 3.0 packages for jessie. should we use the wheezy packages instead? what's the best practice for doing this?
[11:01:04] <ceegee> installed version comes from this repo "deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen"
[11:33:04] <Zelest> Derick, well, i find it quite good to have it "normalized" serverside.. but the serialize/unserialize part of it should convert it from/to the current/local timezone imho
[14:06:01] <idioglossia> So if I'm supposed to treat documents as rows to a table, I should treat a database as the table (if I am understanding the performance best practices guide correctly)
[14:06:08] <idioglossia> i.e. a database for Users
[14:06:14] <idioglossia> and each user has a document.
[14:06:35] <diegoaguilar> ermm ... a database is a database, a table is a collection
[14:06:42] <diegoaguilar> a row in the table is a document
[14:10:14] <Slashman> hello, the mongodb repo for ubuntu xenial seems to exist at "repo.mongodb.org/apt/ubuntu/dists/xenial/" but is not referenced in the documentation nor on the download page, is the package ready or is this a pre-release?
[14:10:15] <idioglossia> It's incredible to think that it was for children.
[14:10:23] <diegoaguilar> idioglossia, maybe u should read this https://www.mongodb.com/nosql-explained
[14:10:53] <idioglossia> diegoaguilar, I just finished this video series on mongo. just forgot about collections since I've been working in the same one this whole time D:
[14:10:54] <diegoaguilar> well most of cartoons nowadays are like "adult -ish" content
[14:11:17] <diegoaguilar> I worked with Mongo a lot last ... 3 years
[14:11:31] <diegoaguilar> now working with Postgres
[14:12:53] <idioglossia> Okay so if I am storing user data over time, would it be ideal to roll over to a new document every 6-12 months?
[14:13:04] <idioglossia> like you keep a record of ALL user actions, for stats purposes
[14:13:35] <idioglossia> assume the user is as active as possible, so every six months they generate more than a dozen mb worth of event records
[14:30:18] <idioglossia> Do documents have names, or are they identified by ID only?
[14:31:09] <idioglossia> ah I think I figured it out. I roll over events at the collection level.
[14:31:41] <idioglossia> db.events.* documents get rolled over to db.events-some-date.* and db.events stays the working set
[15:29:22] <jayjo_> I have a directory of json files that I've downloaded and want to import with mongoimport... can I recursively import a directory of json files, where each file is json separated by newline?
[17:14:28] <Ben_1> I'm using the async MongoDB driver for Java and try to query data from a collection which does not exist yet, using .find().forEach(block, listener). I thought the "block" callback would be called at least one time, but in my case it isn't. Does someone have an idea how I can be informed that there is no result?
[17:28:31] <diegoaguilar> can u share more code? Ben_1
[17:28:42] <diegoaguilar> honestly this is a MongoDB-only channel but I might suggest something
[17:58:33] <Ben_1> diegoaguilar: this is a mongoDB only channel and it is a mongoDB problem :P
[17:59:00] <Ben_1> the Block Callback is not called
[18:51:56] <jayjo_> I can't seem to connect to a mongodb I just created on an aws ec2 instance, with port 27017 opened on the instance itself. Do I need to do something at the db level to allow external access?
[18:55:19] <cheeser> are you bound to the IP address or localhost?
[18:57:18] <jayjo_> cheeser: when i run mongod it shows the internal ip. do I specify it to localhost?
[19:04:38] <jayjo_> I've set the bindIp to 0.0.0.0 and still can't connect
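For reference, the bind setting lives in mongod's YAML config file; a minimal sketch of the relevant fragment (binding to 0.0.0.0 listens on all interfaces and exposes the server publicly, so pair it with firewall rules and auth):

```yaml
# /etc/mongod.conf -- listen on all interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
```

mongod only rereads this on restart, which is why cheeser asks about a restart below.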
[19:05:25] <Ben_1> I'm using the async MongoDB driver for Java and try to query data from a collection which does not exist yet, using .find().forEach(block, listener). I thought the "block" callback would be called at least one time, but in my case it isn't. Does someone have an idea how I can be informed that there is no result?
[19:05:37] <Ben_1> here is the code http://pastebin.com/pcUAA3rz
[19:05:57] <Ben_1> but I have a small paste mistake: recordCollection.find instead of collection.find
[19:07:54] <cheeser> Ben_1: you should post to mongodb-users so Ross can see it
[19:08:00] <cheeser> jayjo_: you restarted mongod?
[19:20:01] <jayjo_> cheeser: i did... i have the logs here. it might be something with txn-recover? https://bpaste.net/show/fae78852da04
[19:27:14] <Ben_1> cheeser: is it normal that my post is not visible? does somebody have to activate my post?
[19:46:15] <jayjo_> Also... trying to authenticate on the command line is now showing 2016-06-09T19:37:01.283+0000 I ACCESS [conn5] SCRAM-SHA-1 authentication failed for jared on test from client 127.0.0.1 ; UserNotFound: Could not find user jared@test
[19:46:37] <jayjo_> But I created the user and got a success message
[19:56:39] <jayjo_> running sudo netstat -tulpn | grep 27017 shows it is listening on 0.0.0.0
[20:03:27] <jayjo_> I just think that something wasn't flushing. I didn't make a change and it works. Maybe it was amazon iptables or something
[20:50:31] <jayjo_> Yea so it is slowly growing. Currently at 30% but it will crash eventually before the whole upload is complete. Is this a standard problem? What's the workaround?
[21:22:19] <Ben_1> cheeser: as I thought, someone had to activate my post, now I can see it :)