[00:31:46] <_m> Like all other internet mediums, it's wild out hurrrrrr
[00:38:57] <acidjazz> can someone tell me why this small snippet of code fails to produce an image when is_file is false on line 12? http://pastebin.com/WwAQCWBW
[00:59:28] <acidjazz> here it is simplified, http://pastebin.com/KrJvdEhH, like i said when line 12 is false (is_file) and it has to write.. the browser says "invalid image" .. if i refresh.. works fine. 0 php errors/notices/etc 0 apache errors and so on..
[01:29:41] <Skillachie> I am new to using map-reduce and would really appreciate any help I can get
[02:10:36] <Antiarc> Hey folks, bit of an oddity --
[02:11:10] <Antiarc> I just upgraded my replica set from 2.0.4 to 2.2.0. 3 nodes, the 2 slaves upgraded just fine. I stepped down the master, verified that one of the slaves had become primary, and shut down the former primary to do the upgrade.
[02:11:58] <Antiarc> However, I re-checked my secondary and rs.status() was reporting two secondaries, one down, and no master.
[02:12:30] <Antiarc> Once my formerly-primary node finished resyncing and came back to available, it resumed as primary and things seem to be fine. There didn't appear to be any interruption of service, but rs.status() reporting no primary makes me mighty nervous.
[02:12:34] <Antiarc> Any idea what happened there?
[02:14:09] <Antiarc> I guessed tied election, but A) there was already a primary doing fine, and B) I'm not sure what to do about that; the doc advises against having an arbiter in replica sets with an odd number of members, which mine does. Should I bring an arbiter up next time I'm doing upgrades, so there are 3 nodes up and voting at all times?
[02:14:29] <Antiarc> (I was unable to add an arbiter to the set to cast a tiebreaking vote, as there was no primary to update the rs config on!)
[02:17:51] <Antiarc> Nevermind, reporting tools show an interruption of service. So yeah, that's bad.
[02:22:14] <Antiarc> I'm thinking what happened is:
[02:22:21] <Antiarc> node3 has an election weight of 3, the others have a weight of 1
[02:22:27] <Antiarc> I did NOT use rs.freeze on the primary before stepping it down
[02:22:34] <Antiarc> So between the stepdown and shutdown, it was re-elected master
[02:22:46] <Antiarc> Then, it shut down, leaving the RS without a master, and 2 nodes up that deadlocked on election.
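A minimal shell sketch of the sequence Antiarc describes skipping, run on the current primary before taking it down for maintenance (the 120-second window is illustrative):

    // relinquish primary status and trigger an election
    rs.stepDown(120)
    // prevent this node from standing for re-election while we shut it down
    rs.freeze(120)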
[02:23:28] <Antiarc> So my question is, why do the docs say to NOT run an arbiter in a cluster with an odd number of nodes? Arbiters seem like they should only be used for tiebreaking, and when I lose a node in a 3-node cluster, I am down to 2 nodes and need an arbiter, but I don't want one for normal operations since that'd be four votes in the pool.
[03:20:36] <owen1> I use the Mongo DB Native NodeJS Driver - chains.update({x: data.id}, data, {safe: true, upsert: true}, function(err, chain) {}); but it's not upserting and i have documents with the same x. when i try this from the console it works as expected - db.chains.update({'x': 2},{'x': 2, 'y': 5}, true) any idea why?
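For comparison, a minimal sketch with the native Node.js driver. A common cause of duplicate documents despite upsert:true is concurrent upserts with no unique index backing the query field; the $set body and index below are illustrative, and assume data carries no _id field:

    // use a modifier update so a matched document is updated in place
    chains.update(
      { x: data.id },
      { $set: data },
      { safe: true, upsert: true },
      function (err, numAffected) { if (err) console.error(err); }
    );
    // a unique index on x makes concurrent upserts collide instead of duplicating
    chains.ensureIndex({ x: 1 }, { unique: true }, function (err) {});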
[06:03:36] <ranjibd> is there a way to upgrade 2.0.7 replica sets to 2.2 without downtime?
[08:54:11] <gyre007> can I delete local/journal files while mongo is running? Or should I not delete them at all?
[09:09:21] <wereHamster> gyre007: better not delete them
[09:11:34] <nickbr> Hi, I'm having problems with the Mongo PHP driver, connecting to a replica set. I'm intermittently experiencing connection exceptions: couldn't determine master. I've seen posts about it, like this one: https://groups.google.com/forum/?fromgroups=#!topic/mongodb-user/CPhQmpTW4GA
[09:11:46] <nickbr> However, I have tried closing the connection after performing a query, then performing the query again, and it still fails
[09:12:36] <nickbr> Most of the time it works well, but every so often it starts throwing exceptions and only works about 1/20 attempts
[09:16:38] <gyre007> wereHamster, how about the "local" files?
[09:17:08] <gyre007> it's just that we are running out of disk space on a horribly misallocated VM... and most of it is occupied by journal and local mongo files
[09:17:10] <wereHamster> gyre007: better not delete any files. Why are you intentionally trying to fsck up mongo?
[09:17:43] <wereHamster> can you afford any downtime?
[09:52:11] <Derick> nickbr: turn on MongoLog and it will tell you exactly what happens (with: MongoLog::setLevel( MongoLog::ALL ); MongoLog::setModule( MongoLog::ALL );)
[11:00:56] <LoonaTick> Hi everybody. I've run 'db.dropDatabase();' on all databases I have, but only half the disk space was freed. Anybody know what to do?
[11:01:49] <wereHamster> LoonaTick: stop mongodb, delete the data directory
[12:05:01] <iksik> I'm trying to understand how replica sets work... and for now, i have some problems with understanding figure number 4: http://goo.gl/KTHMJ ... how (for example) will a python client know which hostname it can connect to after the first master failure?
[12:07:50] <kali> iksik: the driver grabs the replica layout when it opens the connection, but you can also start it with the names of the three servers
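For reference, the usual way to hand the driver the layout up front is a seed list in the connection URI, so it can discover the set even if the first host is down (hostnames and set name hypothetical):

    mongodb://node1.example.com:27017,node2.example.com:27017,node3.example.com:27017/?replicaSet=rs0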
[12:16:08] <Skillachie> How do I implement this in map-reduce? Complete description of problem here http://pastie.org/4894244
[12:16:57] <ppetermann> why would you want to m/r it if you can aggregate it?
[12:17:11] <kali> Skillachie: first of all, the value emit()ed should be of the same form as the return value of the reducer
[12:17:31] <kali> Skillachie: as data can transit several times through the reducer
[12:18:21] <kali> Skillachie: secondly, you don't have to emit the path:"..." in the mapper
[12:19:01] <Skillachie> ok. So how do i specify that the path in the emit should be aggregated if the path is within another path? (hope this does not sound confusing)
[12:20:33] <kali> Skillachie: you need to call emit for each bit actually: for "a/b/c" you'll call emit("a", { size: sizeof_abc}), emit("a/b", <same value>), emit("a/b/c", <same value again>)
[12:21:29] <kali> Skillachie: that way when the reducer sums, you'll get all the sums
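A minimal sketch of what kali describes, assuming input documents shaped like { path: "a/b/c", size: 123 } (collection and field names hypothetical). The mapper emits the size once per path prefix, and the reducer returns the same { size: ... } shape it receives, since its output can be fed back through the reducer:

    var map = function () {
      // emit this file's size once for every ancestor prefix of its path
      var parts = this.path.split("/");
      var prefix = "";
      for (var i = 0; i < parts.length; i++) {
        prefix = prefix === "" ? parts[i] : prefix + "/" + parts[i];
        emit(prefix, { size: this.size });
      }
    };
    var reduce = function (key, values) {
      // same shape in and out: { size: <number> }
      var total = 0;
      values.forEach(function (v) { total += v.size; });
      return { size: total };
    };
    db.files.mapReduce(map, reduce, { out: "dir_sizes" });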
[13:04:48] <schlitzer|freihe> hey all, can someone give some details on why a replica set needs to be shut down when upgrading from 2.0 -> 2.2 with authentication enabled?
[13:06:52] <schlitzer|freihe> in a sharded setup, can i still switch config & mongos to 2.2 & have 2.0 shards with authentication enabled in the whole cluster?
[15:19:56] <LoonaTick> To free up disk space, it was suggested here that I clear out the data directories. Afterwards, I accidentally created shards instead of sharded replica sets. I now want to remove these shards again, but it complains the slave is not ok and therefore some databases cannot be removed, and therefore not all shards can be removed. What should I do?
[15:20:55] <LoonaTick> Sorry for the most likely stupid question; I took over mongodb administration from a colleague. I have no experience with it but found out the data in there isn't correct. I decided to completely rebuild the data, but after dropping all databases the disk space wasn't cleared.
[15:22:06] <LoonaTick> Ah never mind, I'll remove the data files again and it will be "clean"
[15:26:28] <_m> Let me know if that works. I've never experienced said behaviour *and* dropped the collection.
[15:26:39] <_m> (Or are you saying you completely dropped the db?)
[15:27:07] <LoonaTick> _m: I already completely dropped the data files, setting it up from scratch (also made this decision to understand the inner workings better, by the way)
[15:27:49] <_m> The first few comments are insightful.
[15:29:56] <LoonaTick> _m: Thanks, but I don't think that applies to my previous situation, as I did both drop and repair. Perhaps there were some other databases though...
[15:30:23] <LoonaTick> although mongodb did not list them with 'show dbs;'
[15:32:48] <_m> Was the space free to Mongo, just not the filesystem?
[15:33:05] <LoonaTick> not to the filesystem, I did not check mongo though
[15:33:23] <_m> Then it could have been expected behaviour.
[15:33:59] <LoonaTick> ok, any clue what happened then?
[15:34:33] <_m> Without more information, I would assume you had space free to mongo but not the filesystem
[15:35:02] <_m> That could be wildly incorrect, but there's no way to know at this point.
[15:35:33] <LoonaTick> ok. I will do some tests to make sure, thanks! :)
[16:13:38] <LoonaTick> db.runCommand( { addshard: "rs_b/nosql1.backend.tc3:27022,nosql2.backend.tc3:27022,nosql3.backend.tc3:27022" } ); --> "can't add shard rs_b/nosql1.backend.tc3:27022,nosql2.backend.tc3:27022,nosql3.backend.tc3:27022 because a local database 'gpinternationalAvatars' exists in another rs_a:rs_a/nosql1.backend.tc3:27021,nosql2.backend.tc3:27021,nosql3.backend.tc3:27021"
[16:14:02] <LoonaTick> I have just enabled gpinternationalAvatars to be a sharded database (I have 1 shard) - should I wait for something to be initialized?
[16:22:21] <LoonaTick> Ah never mind, I'll just drop the databases and they can be added again (some script adds these databases)
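For anyone hitting the same error, a sketch of the workaround LoonaTick describes: drop the conflicting database on the replica set being added, then retry addshard (database name taken from the error above):

    // connected directly to the rs_b primary:
    db.getSiblingDB("gpinternationalAvatars").dropDatabase()
    // then, back on a mongos:
    db.runCommand({ addshard: "rs_b/nosql1.backend.tc3:27022,nosql2.backend.tc3:27022,nosql3.backend.tc3:27022" })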
[17:37:41] <dforce> I am using pymongo and django and want to see the ObjectId. Derick pointed me to http://villaroad.com/2010/02/return-mongodb-ids-inside-django/ but I get an error within the python script. Can someone help me solve this problem?
[17:39:34] <Lujeni> dforce, you want to return the id of the document in your template?
[18:35:13] <jmpf> sometimes mongodump will fail http://pastie.org/private/l9tlr2noe1mrv9vled2gha <--- but `df -h` shows plenty of space on both the server and this remote host that we are dumping to; I do note that virt mem is @ 64.5G && total mem is 66G
[18:50:19] <JmZ> installed via the 10gen repo on centos5. why does it feel the need to allocate 3GB+ of space for 'journal' files?
[18:51:02] <JmZ> i can't start it because there isn't 3GB of free space, why would it need that much? It is a clean install, there's no need for using such an amount of storage. am i missing something? maybe a setting?
[18:53:46] <_m> JmZ: You can set the journal size to be smaller by using the command line "--smallfiles"
[18:55:33] <JmZ> _m: so 'they' determined that 3GB is the optimal journal size?
[18:55:52] <_m> Which I got from googling "MongoDB journal" and skimming the first 4 results.
[18:56:23] <_m> And yes, they use said journal size by default.
[18:56:45] <JmZ> _m: the error itself says to use --smallfiles, but not *why*. so less 'look harder next time' implications please. came here for clarification
[19:02:18] <_m> As that's a 44-minute presentation, I haven't filtered it for your answer. However, " we'll give an overview of durability with MongoDB, demo journaling, and discuss journaling internals."
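For context: with journaling enabled, a 2.x mongod preallocates journal files of up to 1GB each (typically three on a fresh data directory, which is where the ~3GB goes); --smallfiles caps journal files at 128MB, e.g.:

    mongod --dbpath /data/db --smallfiles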
[19:15:37] <leehambley> is there such a thing as a blocking findAndModify for MongoDB? I'm trying to re-write this queue implementation backed by MongoDB https://gist.github.com/25258909667d3b5ff75a
[19:38:47] <alaska> If I have a collection indexed by a unique string, say "name", and I want to do concurrency checks on a revision key "_rev", does it help me to index _rev even though all updates will refer to both the primary key and the _rev?
[19:40:47] <alaska> Basically, I'm ensuring that I know if someone has touched an entry since I pulled it from the server. When I go to update, I update based off of both {name: 'name I pulled', _rev: 'rev I pulled'} and I $inc _rev. That way, if the rev has changed in the meantime, it won't update and I know it's stale.
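A minimal shell sketch of the compare-and-swap alaska describes (collection name and field values hypothetical); if someone bumped _rev since the read, the update matches nothing and you know your copy is stale:

    var doc = db.entries.findOne({ name: "foo" });
    // ... work with doc ...
    db.entries.update(
      { name: doc.name, _rev: doc._rev },   // only match the revision we read
      { $set: { body: "new content" }, $inc: { _rev: 1 } }
    );
    if (db.getLastErrorObj().n === 0) {
      // no document matched: _rev moved underneath us; re-read and retry
    }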
[19:58:19] <KyleG> So I've searched and searched and it seems like it may be a new issue limited to 2.2… I'm seeing the following in all my secondary nodes' mongo logs: Tue Oct 2 12:49:03 [rsSync] info DFM::findAll(): extent e:3f51ee00 was empty, skipping ahead. ns:local.replset.minvalid
[19:58:39] <KyleG> and then the following in my cfg server logs: Tue Oct 2 12:56:28 [conn11002813] problem detected during query over config.locks : { $err: "Invalid BSONObj size: 0 (0x00000000) first element: EOO", code: 10334 } which follows a generated stack trace
[20:03:20] <KyleG> Is it something to worry about?
[21:40:55] <leehambley> there's no blocking version of findAndModify, right?
[21:41:09] <leehambley> trying to replicate standard queue behaviour
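MongoDB has no blocking findAndModify; the usual substitutes are a poll-and-backoff loop, or a tailable cursor on a capped collection. A minimal shell sketch of the polling approach (collection and field names hypothetical):

    while (true) {
      // atomically claim the oldest unclaimed job
      var job = db.jobs.findAndModify({
        query: { locked: false },
        sort: { created: 1 },
        update: { $set: { locked: true, lockedAt: new Date() } }
      });
      if (job === null) {
        sleep(100);  // shell helper; back off before polling again
        continue;
      }
      // ... process job ...
    }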
[21:55:43] <systemizer> anyone have an issue when a replica gets into a rollback state, but it doesn't create a rollback directory in the dbpath?
[22:21:41] <timeturner> should I ever be worried about exceeding the document size limit
[22:21:51] <timeturner> I'm embedding comments in a post document
[22:22:40] <timeturner> at first I wasn't embedding the comments at all and I was simply referencing the post's ID in each comment. comments and posts were stored in separate collections
[22:22:50] <timeturner> then I thought about embedding only ObjectIds
[22:23:00] <timeturner> but that could grow out of bounds too
[22:23:26] <timeturner> and finally I thought about embedding the whole comment documents into an array in the post
[22:27:21] <_m> timeturner: You'll probably want to use a separate collection for comments
[22:28:12] <_m> As you stated, you could run out-of-bounds using embedded documents.
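A minimal sketch of the separate-collection layout _m suggests (field names hypothetical): each comment references its post's _id, with an index to keep the lookup cheap:

    // one document per comment, pointing at the parent post
    db.comments.insert({
      post_id: postId,        // _id of the parent post
      author: "someuser",
      text: "...",
      created: new Date()
    });
    db.comments.ensureIndex({ post_id: 1, created: 1 });
    // fetch a post's comments in order
    db.comments.find({ post_id: postId }).sort({ created: 1 });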
[22:39:04] <Skillachie> kali: got it solved thanks for pointing me in the right direction this morning
[23:33:27] <Killswitch> Hey guys, I did mongodump, and I was wondering is it possible to just import a single bson file into an existing collection using mongorestore?
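For the record, mongorestore can restore a single .bson file when you name the target database and collection explicitly (paths and names hypothetical):

    mongorestore --db mydb --collection mycollection dump/mydb/mycollection.bson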
[23:59:18] <scoates> more findandmodify problems. )-: Tue Oct 2 23:57:07 [conn9412980] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: gimmebar.jobqueue