[02:32:08] <kjp> this seems to return only one object
[02:52:27] <dougb> is it common for mongodb to start flushing connections once a certain threshold of connections are open? i'm using golang (mgo) and when it hits around 1k connections, the app halts and mongo starts to flush those connections. The app isn't outputting any errors when it crashes
[03:26:32] <ranman> each socket takes a file in unix land
[03:26:41] <ranman> so open files translates to open things
[03:26:47] <dougb> i read that it doesn't matter what I set the mongodb conf to if that limit is low, which it is. We're running this on a super beefy server (32 GB ram, 6+ processors, etc.)
[03:54:29] <dougb> I have not tested yet, it hasn't 'crashed', it's hovering around 1029 connections. I'm going to pause the app and wean off users then restart mongodb
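[Editor's note: as ranman says, each socket costs a file descriptor, and an app stalling near ~1k connections usually means it hit the default per-process open-file limit (commonly 1024). Below is a minimal Python sketch of checking, and where possible raising, that limit from inside a process; the lasting fix is raising `ulimit -n` (or `limits.conf`) for both the app and mongod. This is an illustration added by the editor, not code from the channel.]

```python
# Each MongoDB connection consumes a file descriptor; check the limits
# the current process is actually running under.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit;
# raising the hard limit itself needs root / limits.conf changes.
if 0 < soft < hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"raised soft limit to {soft}")
```

The `resource` module is Unix-only, which matches the "sockets are files in unix land" point above.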
[04:43:15] <zumba_addict> evening folks. I would like to use mongo as my storage. My app is bodybuilding which is going to be displaying exercises per day. The exercises are different for each weekday. The fields I'm planning to use are username, password, email, phone, exercises, time started, date started, time completed, date completed. The exercises will be around 40-50 different exercises. They are static but can be grouped into 3: beginner,
[04:43:16] <zumba_addict> medium and advanced. Do I need 2 collections? What are your recommendations?
[12:18:10] <trbs> Kaim, it's reproducible in the sense that it happens again when I start mongo again. It recovers the journal, connects to replicaset, then decides to build the index and crashes.
[12:19:39] <kali> well, it's obviously a bug... if you want to investigate it, i can't really help. if you want to solve it, i would start by scratching the replica
[12:23:40] <trbs> I'm rebuilding the secondary now, since this is a production system. Hoping it will work again once it's rebuilt.
[12:24:06] <trbs> Also working on a ticket in Mongo's Jira
[17:13:20] <kjp> are there circumstances where transactions are not recorded to the oplog?
[17:14:22] <kjp> I set up a new mongo server and initialized it as the primary for a replica set, but nothing is being added to the oplog when I do normal writes..
[17:15:06] <kjp> my speculation is that it doesn't write to the oplog unless there's at least two replicas in the set. Is this the case?
[17:18:59] <cheeser> the oplog is what is used to replicate, so that can't be true.
[17:32:39] <kjp> is the format of the oplog documented anywhere? If I were to make use of the oplog for my own purposes (read-only of course) would that be playing with fire?
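[Editor's note: the oplog format is internal and version-dependent rather than formally documented, so reading `local.oplog.rs` directly is indeed "playing with fire" across upgrades. That said, entries conventionally carry `ts` (timestamp), `op` (`i` insert, `u` update, `d` delete, `n` no-op), `ns` (namespace), and `o` (the document, or update spec for `u`). The sketch below, added by the editor, decodes that shape against a hand-written sample entry so it runs without a mongod.]

```python
def summarize(entry):
    """Render a conventional oplog entry as (operation, namespace, payload)."""
    ops = {"i": "insert", "u": "update", "d": "delete", "n": "no-op"}
    return (ops.get(entry["op"], entry["op"]), entry["ns"], entry.get("o"))

# Hand-written sample mimicking a local.oplog.rs document.
sample = {
    "ts": (1400000000, 1),        # stands in for a BSON Timestamp
    "op": "i",                    # i=insert, u=update, d=delete, n=no-op
    "ns": "mydb.mycoll",          # namespace the write applied to
    "o": {"_id": 1, "x": 42},     # the inserted document
}

print(summarize(sample))  # → ('insert', 'mydb.mycoll', {'_id': 1, 'x': 42})
```

For read-only consumption one would typically open a tailable cursor on `local.oplog.rs`, which is exactly how secondaries replicate.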
[20:51:58] <kali> unbelievable how bad the shit the kids listens today can be
[21:20:55] <wildstrangething> Hey guys, if I have a mongodb being written to constantly (every second), will I be better off with Digital Ocean or Linode? Currently its CPU bound on DO 1CPU 512MB RAM. With DO 2CPU 2GB RAM, the load average is 1.2 which is more acceptable. Should I use Linode 8CPU 1GB RAM which costs the same as DO 2CPU? It doesn't have SSD though
[21:33:22] <kali> wildstrangething: i have no idea what the spec for these vms are, but target for more RAM and faster IO. unless you're using the aggregation pipeline intensively, cpu makes no difference whatsoever
[21:37:39] <wildstrangething> kali: Do I happen to be CPU bound rather than IO/MEM bound?
[21:38:34] <wildstrangething> My mongostat looks like this: https://www.dropbox.com/s/73cvgc0ktug0aip/mongostat.png
[21:41:44] <kali> i doubt 2 cpus are gonna help, writes are serialized in one single thread anyway
[21:44:36] <wildstrangething> kali: Here's the (Python) code that runs continuously in an infinite loop: http://paste.laravel.com/1iXO
[21:45:16] <wildstrangething> Basically it searches for docs, if they exist do an update, else do an insert
[21:46:02] <wildstrangething> In the 2 CPU server, I notice that each core is no longer at 100%
[21:46:15] <wildstrangething> The load average is 1.60 1.51 1.36
[21:47:14] <wildstrangething> I am thinking, if I were to run multiple instances of the Python code that writes to mongodb, will I be CPU bound again with 2 CPUs?
[21:47:49] <wildstrangething> The different instances of the python code will be doing similar operations, just with different data
[21:48:17] <wildstrangething> The python processes use very little CPU, its mainly mongodb
[21:57:05] <kali> wildstrangething: the writes are mutually exclusive, only one cpu is writing at any given time.
[21:57:52] <kali> wildstrangething: do you have an index on b.timestamp ?
[21:58:10] <kali> wildstrangething: and on c.price ?
[21:59:29] <wildstrangething> kali: oh no I don't, let me try adding the indexes
[22:04:53] <kali> a.timestamp too, actually. you can make that one unique, so you can insert to a without the if-exists check (which has a race condition)
[22:06:17] <kali> i think the b update or insert can be simplified too.
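[Editor's note: kali's suggestion amounts to replacing the racy "find, then insert or update" pattern with a unique index plus a single atomic upsert. The sketch below shows the shape of that fix; `record_price` and its field names are the editor's assumptions, and the `FakeColl` class is only an in-memory stand-in mimicking pymongo's `update_one(..., upsert=True)` so the sketch runs without a mongod.]

```python
def record_price(coll, ts, price):
    # One round trip, no race: insert the doc if missing, update it otherwise.
    coll.update_one({"timestamp": ts}, {"$set": {"price": price}}, upsert=True)

class FakeColl:
    """Minimal in-memory stand-in for a pymongo collection's update_one."""
    def __init__(self):
        self.docs = {}  # keyed by timestamp, i.e. the unique index
    def update_one(self, flt, update, upsert=False):
        ts = flt["timestamp"]
        if ts in self.docs or upsert:
            doc = self.docs.setdefault(ts, {"timestamp": ts})
            doc.update(update["$set"])

coll = FakeColl()
record_price(coll, 1, 99.5)    # insert path
record_price(coll, 1, 100.0)   # update path, same timestamp
print(coll.docs)  # → {1: {'timestamp': 1, 'price': 100.0}}
```

Against a real collection one would also run `coll.create_index("timestamp", unique=True)` first, so two concurrent upserts can never create duplicate timestamps.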