#mongodb logs for Tuesday the 31st of December, 2013

[01:49:34] <kjp> hello: is it possible to read the oplog?
[02:29:36] <joannac> kjp: of course, local.oplog.rs
[02:29:51] <kjp> hmm, that's what I thought
[02:30:12] <kjp> but I tried it and it doesn't seem to do what I (naively) expect
[02:31:36] <kjp> or, perhaps I'm querying it wrong - I'm using the shell, and doing this:
[02:31:38] <kjp> use local
[02:31:46] <kjp> db.oplog.rs.find()
[02:32:08] <kjp> this seems to return only one object
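
For anyone following along in a driver rather than the shell, the same query looks roughly like this in pymongo; this is only a sketch, and the host, port, and limit are illustrative:

    # Sketch: read recent entries from the replica set oplog with pymongo.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    oplog = client.local["oplog.rs"]

    # Newest entries first; a freshly initiated replica set may only contain a handful.
    for entry in oplog.find().sort("$natural", -1).limit(5):
        print(entry)
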
[02:52:27] <dougb> is it common for mongodb to start flushing connections once a certain threshold of connections are open? i'm using golang (mgo) and when it hits around 1k connections, the app halts and mongo starts to flush those connections. The app isn't outputting any errors when it crashes
[03:16:16] <joannac> dougb: ulimits?
[03:19:43] <dougb> joannac: I saw a mention of that, but wasn't sure what param was relevant? This is my output: http://pastebin.com/0P3UCDyP
[03:25:08] <ranman> dougb: ulimit -n is normally what you care about
[03:25:24] <ranman> 1024 is reasonable but you can safely raise that to 20k
[03:25:35] <ranman> (mongodb's built in limit)
[03:25:58] <dougb> ok so if I increase that then it should increase the max connections as well?
[03:26:20] <ranman> yes
[03:26:32] <ranman> each socket takes a file in unix land
[03:26:41] <ranman> so open files translates to open things
[03:26:47] <dougb> i read that it doesn't matter what I set the mongodb conf to if that is low, which it is. We're running this on a super beefy server (32 GB ram, 6+ processors, etc.)
[03:27:04] <dougb> ok, that makes sense
[03:27:27] <dougb> after I set that to 20k do I just need to reboot mongodb?
[03:28:12] <ranman> yup, depending on the OS those settings may not persist
[03:28:23] <dougb> i'm on ubuntu 12.04
[03:28:28] <dougb> i just set ulimit -n 20000
[03:29:33] <ranman> dougb you might also want to edit limits.conf
[03:31:17] <ranman> http://docs.mongodb.org/manual/reference/ulimit/
[03:52:57] <dougb> thanks ranman, I set ulimit -n 64000 and in the limits.conf file I set: * hard nofile 64000 * soft nofile 64000
[03:53:15] <ranman> dougb: and it's fixed now?
[03:54:29] <dougb> I have not tested yet, it hasn't 'crashed', it's hovering around 1029 connections. I'm going to pause the app and wean off users then restart mongodb
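
Laid out as a file excerpt, the settings described above would look roughly like this (the wildcard user and the 64000 value come from the conversation; the exact path and whether a wildcard entry applies to the user running mongod depend on the system):

    # /etc/security/limits.conf (excerpt)
    *    hard    nofile    64000
    *    soft    nofile    64000

The new limits only apply to sessions opened after the change, so mongod has to be restarted under the new limits for them to take effect; ulimit -n in the shell that starts mongod shows the active value.
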
[04:43:15] <zumba_addict> evening folks. I would like to use mongo as my storage. My app is a bodybuilding app which is going to display exercises per day. The exercises are different for each weekday. The fields I'm planning to use are username, password, email, phone, exercises, time started, date started, time completed, date completed. The exercises will be around 40-50 different exercises. They are static but can be grouped into 3: beginner,
[04:43:16] <zumba_addict> medium and advanced. Do I need 2 collections? What are your recommendations?
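
One possible way to model this, sketched only from the fields listed above (the split into a users collection and a static exercises collection is one option, not a channel recommendation, and all names and values are illustrative):

    # Illustrative document shapes, in Python/pymongo terms.
    user = {
        "username": "alice",                  # example value
        "password": "<hashed password>",      # store a hash, not the raw password
        "email": "alice@example.com",
        "phone": "555-0100",
        "workouts": [                         # one entry per exercise session
            {
                "exercise_id": "pushups",
                "date_started": "2013-12-31",
                "time_started": "18:00",
                "date_completed": "2013-12-31",
                "time_completed": "18:45",
            }
        ],
    }

    exercise = {                              # ~40-50 of these, mostly static
        "_id": "pushups",
        "name": "Push-ups",
        "level": "beginner",                  # beginner / medium / advanced
        "weekday": "Tuesday",
    }
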
[07:21:16] <roger_rabbit> greets
[07:21:21] <retran> hot eats
[07:22:16] <roger_rabbit> how are you, sir?
[07:22:55] <roger_rabbit> might you by chance have expertise in "big-data"?
[11:33:05] <trbs> mongodb dies for me with an "out of memory" error on tcmalloc while building an index on a secondary :(
[11:34:13] <trbs> error is: "tcmalloc: large alloc 18446744073336291328 bytes == (nil) @ " then followed by the stack trace
[12:14:17] <kali> trbs: which version are you using ? is it reproducible ?
[12:16:00] <trbs> kali, db version v2.4.8
[12:18:10] <trbs> kali, it's reproducible in the sense that it happens again when I start mongo again. It recovers the journal, connects to the replica set, then decides to build the index and crashes.
[12:19:39] <kali> well, it's obviously a bug... if you want to investigate it, i can't really help. if you want to solve it, i would start by scratching the replica
[12:23:40] <trbs> I'm rebuilding the secondary now, since this is a production system. Hoping that it will work again after rebuilding it.
[12:24:06] <trbs> Also working on a ticket in Mongo's Jira
[17:13:20] <kjp> are there circumstances where transactions are not recorded to the oplog?
[17:14:22] <kjp> I set up a new mongo server and initialized it as the primary for a replica set, but nothing is being added to the oplog when I do normal writes..
[17:15:06] <kjp> my speculation is that it doesn't write to the oplog unless there's at least two replicas in the set. Is this the case?
[17:18:59] <cheeser> the oplog is what is used to replicate, so that can't be true.
[17:21:06] <kjp> hm
[17:21:35] <kjp> yeah the only entries that are appearing in the oplog are transactions pertaining to the replica set itself
[17:21:49] <kjp> ("initializing set", "Reconfig set")
[17:21:52] <cheeser> do you have any replicas configured?
[17:21:58] <kjp> I do now
[17:22:27] <ron> "I do now" is very close to "I don't know"
[17:22:45] <kjp> heh
[17:22:50] <kjp> I have one replica set up
[17:23:03] <kjp> just created it to see if that would make a difference, but it didn't
[17:24:09] <kjp> do you have to configure collections and/or dbs to be replicated?
[17:24:13] <cheeser> if all you have is a primary, there's no reason to record an oplog
[17:24:24] <cheeser> you don't
[17:24:36] <kjp> I have a primary and one secondary now
[17:24:45] <kjp> but still nothing new in oplog
[17:24:51] <cheeser> you need at least 3 in a replica set
[17:25:02] <kjp> need, or recommend?
[17:25:07] <cheeser> you can't have a primary with 2 members.
[17:25:38] <kali> cheeser: you can, if they are up
[17:25:44] <kjp> rs.status() seems healthy: one node marked primary; the other secondary
[17:25:56] <cheeser> kali: it's a nonfunctional setup
[17:26:00] <kjp> I get that an even number of replicas is risky in failure scenarios
[17:26:20] <kali> cheeser: it's functional, but dangerous
[17:26:30] <cheeser> how? you can't write to it.
[17:26:34] <kjp> but presumably the replica set can function with an even number of replicas, otherwise how would it work during a failure :)
[17:26:37] <kali> cheeser: yes you can. try it.
[17:26:51] <cheeser> i was pretty sure it'd fail to elect a primary with just 2.
[17:27:03] <cheeser> but perhaps i'm wrong.
[17:27:04] <kali> cheeser: it fails when one of the two is down
[17:27:10] <cheeser> maybe that's it.
[17:27:12] <kali> cheeser: that's why it is dangerous
[17:27:51] <kali> cheeser: tbh, i think it would be saner if it would just fail. many people have shot themselves in the foot with this
[17:28:21] <kjp> huh
[17:28:28] <cheeser> well, it *is* a specialized case of even numbered members, really
[17:28:43] <kjp> oh wait, the local db is not replicated is it
[17:28:54] <cheeser> though an even number of members vs even number of reachable members is very different
[17:29:04] <cheeser> no. local is not replicated.
[17:29:11] <kali> kjp: never use "local", "admin" and "config" dbs, they're kinda weird :)
[17:29:15] <kjp> ok, that's my problem :)
[17:29:16] <cheeser> that's why it's called local ;)
[17:29:22] <kjp> yeah, I just put that together
[17:29:45] <kali> kjp: stay away from anything called "system" too :)
[17:30:05] <cheeser> auth and indexes go under there :)
[17:30:43] <kjp> hm, indexes are replicated, no?
[17:31:46] <kjp> cool, now it's working, thanks kali, cheeser
[17:32:16] <cheeser> sure
[17:32:39] <kjp> is the format of the oplog documented anywhere? If I were to make use of the oplog for my own purposes (read-only of course) would that be playing with fire?
[17:33:24] <cheeser> no, that's a common thing.
[17:33:31] <cheeser> it's how MMS works, i believe.
[17:33:37] <kjp> mms?
[17:33:50] <ron> Multimedia Messaging Service
[17:35:08] <cheeser> http://mongodb.com/mms
[17:35:28] <kjp> ah, cool
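
As discussed above, reading the oplog for monitoring purposes typically means tailing local.oplog.rs with a tailable cursor. A minimal read-only sketch in pymongo (host, port, and the printed fields are illustrative, and the oplog format is internal, so it can change between server versions):

    # Sketch: tail the replica set oplog, read-only.
    import time
    from pymongo import MongoClient
    from pymongo.cursor import CursorType

    client = MongoClient("localhost", 27017)
    oplog = client.local["oplog.rs"]

    # Start from the newest entry currently present.
    last = next(oplog.find().sort("$natural", -1).limit(1))
    ts = last["ts"]

    while True:
        # A tailable cursor stays open and yields new entries as they arrive.
        cursor = oplog.find({"ts": {"$gt": ts}},
                            cursor_type=CursorType.TAILABLE_AWAIT)
        for doc in cursor:
            ts = doc["ts"]
            print(doc["op"], doc["ns"])   # operation type and namespace
        time.sleep(1)                     # cursor closed (oplog rolled over, etc.); retry
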
[20:51:58] <kali> unbelievable how bad the shit the kids listen to today can be
[21:20:55] <wildstrangething> Hey guys, if I have a mongodb being written to constantly (every second), will I be better off with Digital Ocean or Linode? Currently it's CPU bound on DO 1CPU 512MB RAM. With DO 2CPU 2GB RAM, the load average is 1.2, which is more acceptable. Should I use Linode 8CPU 1GB RAM which costs the same as DO 2CPU? It doesn't have SSD though
[21:33:22] <kali> wildstrangething: i have no idea what the specs for these vms are, but aim for more RAM and faster IO. unless you're using the aggregation pipeline intensively, cpu makes no difference whatsoever
[21:37:39] <wildstrangething> kali: Do I happen to be CPU bound rather than IO/MEM bound
[21:38:34] <wildstrangething> My mongostat looks like this: https://www.dropbox.com/s/73cvgc0ktug0aip/mongostat.png
[21:39:39] <kali> what does your update look like ?
[21:39:49] <kali> and the typical document ?
[21:41:44] <kali> i doubt 2 cpus are gonna help, writes are serialized in one single thread anyway
[21:44:36] <wildstrangething> kali: Here's the (Python) code that runs continuously in an infinite loop: http://paste.laravel.com/1iXO
[21:45:16] <wildstrangething> Basically it searches for docs, if they exist do an update, else do an insert
[21:46:02] <wildstrangething> In the 2 CPU server, I notice that each core is no longer at 100%
[21:46:15] <wildstrangething> The load average is 1.60 1.51 1.36
[21:47:14] <wildstrangething> I am thinking that if I were to run multiple instances of the Python code that writes to mongodb, will I be CPU bound again with 2 CPUs?
[21:47:49] <wildstrangething> The different instances of the python code will be doing similar operations, just with different data
[21:48:17] <wildstrangething> The python processes use very little CPU, it's mainly mongodb
[21:57:05] <kali> wildstrangething: the writes are mutually exclusive, only one cpu is writing at any given time.
[21:57:52] <kali> wildstrangething: do you have an index on b.timestamp ?
[21:58:10] <kali> wildstrangething: and on c.price ?
[21:59:29] <wildstrangething> kali: oh no I don't, let me try adding the indexes
[22:04:53] <kali> a.timestamp too, actually. you can make that one unique, so you can insert to a without the if-exists check (which has a race condition)
[22:06:17] <kali> i think the b update or insert can be simplified too.
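
In recent pymongo terms, the suggestions above amount to roughly the following; the collection and field names (a, b, c, timestamp, price) come from the conversation, the database name and payload are placeholders, and the actual code being discussed is in the paste linked above:

    # Sketch: add the suggested indexes and use an upsert instead of the
    # find-then-insert pattern, which removes the if-exists race condition.
    import datetime
    from pymongo import MongoClient, ASCENDING

    db = MongoClient("localhost", 27017)["mydb"]      # database name is a placeholder

    db.b.create_index([("timestamp", ASCENDING)])
    db.c.create_index([("price", ASCENDING)])
    db.a.create_index([("timestamp", ASCENDING)], unique=True)

    # With the unique index, a single upsert either inserts or updates atomically.
    now = datetime.datetime.utcnow()
    db.a.update_one(
        {"timestamp": now},                           # the unique key
        {"$set": {"value": 42}},                      # placeholder payload
        upsert=True,
    )
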
[22:42:52] <Cheekio> Happy New Years all
[22:44:19] <Cheekio> I think I'm working with a DB that has been replicated, and my django app is looking for a 'master' instance.
[22:44:46] <Cheekio> Any simple way to kick this puppy up in stature?
[22:46:44] <Cheekio> I found the offending information:
[22:47:20] <Cheekio> "repl" : { "ismaster" : false, "secondary" : false, ...
[22:47:38] <Cheekio> that was in the output from db.serverStatus();
[22:55:15] <kali> cheeser: run a rs.status() and paste the output somewhere
[22:56:52] <kali> Cheekio: and you too :)
[22:57:28] <Cheekio> http://pastebin.com/mSR8dfc0
[22:57:31] <Cheekio> thanks kali :D
[22:57:55] <Cheekio> I tried running rs.slaveOk(); but really I want it to be a master. As far as I know, there is no replication going on
[22:58:04] <Cheekio> At least I don't know how to check that
[22:58:12] <Cheekio> And in theory we're running one mongodb on one server
[22:58:27] <Cheekio> Wait- in actuality. I know for a fact this server is supposed to be doing its own thing
[22:58:47] <Cheekio> It's a test environment, so I should be able to wipe the thing if I needed to, no strings attached.
[23:00:06] <kali> Cheekio: well, how many servers are running ?
[23:00:27] <kali> if there is only one, i doubt it's configured as a one-node replicaset, but who knows
[23:01:01] <Cheekio> One, I believe
[23:01:11] <Cheekio> We're connecting to it from another server that's running a django app
[23:03:17] <kali> Cheekio: is it possible somebody tried to migrate the setup to a replica set and bailed out in the middle of it ? :)
[23:05:41] <Cheekio> I don't think so
[23:05:51] <Cheekio> Hmm
[23:06:10] <Cheekio> Maybe they cloned the original, non-test server too well, and that was actually a replica
[23:06:23] <kali> ok. check out the command line options your mongod instance is running with, and also the configuration file
[23:06:27] <Cheekio> Shouldn't it be trivial to switch master: false to master: true?
[23:07:10] <kali> that's not how a replica set works
[23:07:40] <Cheekio> Well I'm glad you understand what I'm saying enough to tell me I'm wrong :D
[23:07:52] <Cheekio> What would I be looking for in the configuration?
[23:08:32] <kali> you're looking for "replSet"
[23:08:34] <Cheekio> In the last handful of options (looks like a stock configuration file with minor changes)
[23:08:37] <Cheekio> yeah
[23:08:39] <Cheekio> replSet = rs0
[23:08:51] <Cheekio> #slave = true; #master = true
[23:09:00] <Cheekio> (both commented out)
[23:09:22] <kali> yeah yeah, forget about slave and master, that's more or less obsolete
[23:09:32] <kali> ok, so we have a broken replica set
[23:09:49] <kali> and you think it should be standalone, right ?
[23:10:57] <Cheekio> yes, definitely
[23:11:17] <kali> then comment this line in the config and restart your server
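
For reference, the change being described is just commenting out the replSet line in the configuration file quoted above and restarting mongod; roughly (the path and surrounding options vary by install, old 2.4-style syntax):

    # mongod configuration file (excerpt)
    #replSet = rs0
    #slave = true
    #master = true

After a restart the instance comes back as a plain standalone server.
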
[23:11:46] <Cheekio> I wouldn't have guessed that
[23:11:49] <Cheekio> Thanks
[23:12:16] <Cheekio> Nice!
[23:12:21] <Cheekio> Mongodb is connected like a pro
[23:12:30] <kali> ok. it's a testing environment, right ?
[23:12:36] <Cheekio> Yes
[23:12:40] <kali> good
[23:13:40] <Cheekio> Absolutely legit
[23:36:07] <node123> Hey I'm having trouble storing a dictionary in mongo.. it only seems to work if I serialize it into json first
[23:36:21] <node123> It should just work right? Do I have to do anything special?
[23:48:44] <kali> node123: what exactly do you mean ? dictionary as in oxford ? or as in hashmap ?
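
On node123's question: pymongo accepts plain Python dicts directly, with no JSON step, as long as every key is a string and every value maps to a BSON type; a common cause of the behaviour described is a non-string key. A minimal sketch (connection details and names are illustrative):

    # Sketch: inserting a plain dict with pymongo, no json.dumps needed.
    import datetime
    from pymongo import MongoClient

    coll = MongoClient("localhost", 27017)["test"]["docs"]

    doc = {
        "name": "example",
        "count": 3,
        "created": datetime.datetime.utcnow(),    # datetimes map to BSON dates
        "nested": {"a": 1, "b": [1, 2, 3]},       # nested dicts and lists are fine
    }
    coll.insert_one(doc)

    # This fails with bson.errors.InvalidDocument, because BSON keys must be strings:
    # coll.insert_one({1: "non-string key"})
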