[08:28:21] <robscow> does MongoDB still not support updating a batch of docs with values from their own fields? ie, update all docs, set field1 to the value of field2
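At the time (MongoDB 3.x) there was no server-side way to reference another field's value in an update; the usual workaround was a client-side cursor loop. A minimal mongo-shell sketch, assuming a collection named `items` (collection and field names are illustrative):

```javascript
// Pre-4.2 workaround: iterate and update each document client-side.
db.items.find({}, { field2: 1 }).forEach(function (doc) {
    db.items.update({ _id: doc._id }, { $set: { field1: doc.field2 } });
});

// MongoDB 4.2+ can do this server-side with an aggregation-pipeline update:
// db.items.updateMany({}, [ { $set: { field1: "$field2" } } ]);
```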
[11:26:08] <Rumbles> hi, can anyone advise how long I can continue to use the old config file format? We moved to 3.0 recently and I noticed that YAML is the current default format... but we're still using the old format...
[12:23:51] <Rumbles> I'm reading up on the "correct" way to handle log rotation for mongo (in ubuntu) and I'm worried my config might do something undesired, I have the following config: http://fpaste.org/306903/
[12:24:32] <Rumbles> I am worried that with this config the mongo process will do the rotation after logrotate has rotated the log file (from the SIGUSR1 command) creating an extra empty rotated log file
[12:24:47] <Rumbles> is this the case? can anyone advise the correct way to handle rotation?
[12:30:44] <Rumbles> I was reading round on this and there is some varying advice, most of it seems to be pretty old
[12:31:07] <Rumbles> the manual describes how to use SIGUSR1, but doesn't mention any caveats
[17:13:44] <BurtyB> Rumbles, it seems backwards to me too as it should use a prerotate, otherwise I'd imagine you'll potentially lose entries rather than end up with an empty file
[17:14:36] <BurtyB> tho after the kill it's going to have some crazy name :/
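One pattern that avoids both the extra empty file and the timestamped rename is MongoDB 3.0's `systemLog.logRotate: reopen` option combined with a postrotate SIGUSR1: logrotate does the rename, and mongod simply reopens the original path. A sketch (paths, PID file location, and rotation schedule are assumptions, not taken from Rumbles' paste):

```
# /etc/logrotate.d/mongod -- assumes "systemLog.logRotate: reopen" in mongod.conf
/var/log/mongodb/mongod.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        /bin/kill -USR1 $(cat /var/run/mongodb/mongod.pid 2>/dev/null) 2>/dev/null || true
    endscript
}
```

With the default `logRotate: rename`, the SIGUSR1 itself renames the log to `mongod.log.<timestamp>`, which is what produces the "crazy name" and the empty rotated file when combined with logrotate.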
[17:18:11] <Rumbles> okay, do you have a recommended way of handling rotation?
[17:18:18] <Rumbles> currently we have ever-expanding log files
[17:21:46] <BurtyB> tbh I haven't set that part up yet either on my new db servers and on the old ones I just rotated and deleted it which wasn't ideal either ;)
[17:22:02] <Rumbles> okay, thanks for your input :)
[17:22:09] <Rumbles> I'm going to test my config in dev tomorrow
[17:22:16] <Rumbles> I'll see if it works/what it breaks
[17:29:41] <GothAlice> Because most people when they pick new stripe sizes tend to pick power of two sizes without thinking about the BSON overhead, resulting in stripes that are a few bytes over standard power-of-two allocations, leaving dead zones.
[17:31:47] <StephenLynx> so if a file is smaller than that, it will use 255kb?
[17:32:09] <GothAlice> BurtyB: Ouch, yeah. I already do keyspace compression in the data access layer, so my own data doesn't benefit from that over-much.
[17:33:36] <GothAlice> StephenLynx: Well, no. It'll produce a single fs.chunks record for the file which will be whatever that file size happens to be + a few bytes of BSON overhead. It'll naturally scale to power-of-two sizes, so having lots of separate but exactly 1KB files is "bad".
[17:34:11] <GothAlice> But a large enough random set of files will average out that overhead quite nicely.
[17:34:11] <StephenLynx> so you only benefit if you can make it larger to fit files better?
[17:34:27] <StephenLynx> so you get less overhead?
[17:34:41] <GothAlice> Depends: do you need very fine-grained seeking within the files? Or do you want to minimize round-trips and getMore operations when streaming whole files?
[17:35:00] <StephenLynx> I think storage is the main issue here.
[17:35:09] <StephenLynx> the speed is not much of a concern.
[17:35:18] <StephenLynx> since bandwidth will always be the slower part.
[17:35:33] <GothAlice> There are trade-offs to larger and smaller chunk sizes, those two being the tip of the iceberg. Regardless of the chunk size, the last chunk will be the "remainder" of the file and whatever size (less than the chunk size) is required, hitting power-of-two padding.
[17:35:50] <GothAlice> The smaller the chunks, the more of them there are and thus a greater percentage of the storage and throughput will be overhead.
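The overhead argument above can be made concrete with a little arithmetic. The default GridFS chunk size is 255 KiB (261120 bytes), deliberately under 256 KiB so that each fs.chunks document plus its BSON overhead still fits a power-of-two allocation. A sketch (the ~200-byte per-chunk overhead figure is an assumed round number, not an exact value):

```javascript
// Approximate GridFS storage math for a given file and chunk size.
const CHUNK_SIZE = 255 * 1024;  // default GridFS chunk size: 261120 bytes
const DOC_OVERHEAD = 200;       // rough BSON overhead per fs.chunks doc (assumed)

function gridfsStats(fileSize, chunkSize = CHUNK_SIZE) {
  const chunks = Math.ceil(fileSize / chunkSize);
  // The final chunk is the "remainder" of the file, smaller than chunkSize.
  const lastChunk = fileSize - (chunks - 1) * chunkSize;
  const overhead = chunks * DOC_OVERHEAD;
  return { chunks, lastChunk, overhead };
}

// A 1 MiB file needs 5 chunks: four full ones plus a 4 KiB remainder.
console.log(gridfsStats(1024 * 1024));
// Many tiny files pay the full per-document overhead every time, which is
// why lots of small files average out worse than a few large ones.
console.log(gridfsStats(1024));
```

This also shows why a naive power-of-two chunk size like 256 KiB is "bad": every chunk document would be a few bytes over 256 KiB once the BSON overhead is added, spilling into the next allocation size.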
[17:36:23] <StephenLynx> so I guess that just leaving it be is the best. it doesn't seem like I'd even know how to fine-tune that properly.
[20:50:44] <soummyaah> Hello, I am new to mongodb and am currently dabbling with the problem of creating custom indexes. For example, is there any way possible to define my own indexing structure?
[21:05:51] <cheeser> depends on what that means to you
[21:13:09] <soummyaah> I've read that document and what I understand from it is that there are specific indexes that already exist. And you can create them such as text or geospatial.
[21:13:32] <soummyaah> But none of these work for me; what I want to do is design my own indexing structure which caters to my needs.
[21:13:51] <cheeser> you can index any combination of fields you want...
[21:14:05] <soummyaah> I essentially need a structure which handles spatial and textual domain. That is a set of words and space.
[21:15:19] <soummyaah> Also, I'd like to ask, where can I read up on how efficient these structures are and what they use internally? It's a black box for me right now. :/
[21:18:09] <soummyaah> Can I use compound indexes to give weightage to what I'm querying for? As in can I say that I wish to give 0.4 weightage to space and 0.6 weightage to words while searching for similar documents?
[21:19:25] <cheeser> weight only applies to text indexes, i believe
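For reference, weights on a text index are fixed at index creation time and only apply to the text components; there is no way to weight a geospatial term this way. A minimal mongo-shell sketch, assuming a collection `articles` with `title` and `body` fields (all names illustrative):

```javascript
// Score title matches 3x higher than body matches in $text queries.
db.articles.createIndex(
    { title: "text", body: "text" },
    { weights: { title: 3, body: 1 } }
);

db.articles.find(
    { $text: { $search: "mongodb" } },
    { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } });
```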
[21:20:56] <soummyaah> Okay. So, is there any way to create my own index? Would modifying the codebase make it possible?
[21:21:28] <cheeser> you want to modify mongodb to support your weird hybrid indexing?
[21:21:50] <cheeser> that's ambitious to say the least but try asking the mongodb-dev mailing list
[21:26:03] <brotatochip> hey guys, my prod replica set just shit the bed… had this displayed for my primary: "infoMessage" : "RS101 reached beginning of local oplog [2]",
[21:26:31] <brotatochip> That’s no longer showing up but my primary stateStr is FATAL
[21:27:05] <brotatochip> I have a 3 member set - primary, secondary, and arbiter
[21:27:29] <cheeser> did you recently have an election?
[21:28:06] <brotatochip> Looks like it, but it seems to be triggered by the primary entering the bad state
[21:28:12] <brotatochip> The secondary has been elected as the new primary
[21:28:32] <brotatochip> And it’s staying that way. Looks like the optimes are different as well:
[21:36:18] <jorsto> so i have an existing app on mongo already and want to add simple worker-based processing to it. I see some systems use findAndModify, but this only does 1 document at a time, are there any other ways I can have multiple workers find documents and "check them out" so that no workers select the same result-sets for processing?
[21:36:37] <cheeser> jorsto: yes, you can do that.
[21:37:07] <jorsto> i saw in other places to do a find, then update the found documents by id using $isolated, then do another find on those documents, process them, then unset the "processing" field with $isolated again
[21:37:14] <jorsto> seems like a lot of work for something which should be simple?
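The claim-and-process pattern jorsto describes can be done without `$isolated` by tagging a batch with a unique worker token in a single multi-update, then reading back only the documents that update actually claimed. A sketch, assuming a `jobs` collection with a `status` field (all names illustrative); the multi-update is not atomic across documents, but each individual document update is, so no two workers can claim the same document:

```javascript
// Each worker claims a batch of pending jobs with its own token.
var token = ObjectId();  // unique per worker per pass

db.jobs.update(
    { status: "pending" },
    { $set: { status: "processing", worker: token } },
    { multi: true }
);

// Only documents this worker tagged come back, even if another
// worker ran the same update concurrently.
db.jobs.find({ worker: token }).forEach(function (job) {
    // ... process job ...
    db.jobs.update({ _id: job._id }, { $set: { status: "done" } });
});
```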
[21:37:22] <brotatochip> jorsto: how do I sync the rest after restoring from the backup?
[21:38:42] <joannac> brotatochip: also for the record, if you had a support contract you would almost certainly have someone on the phone walking you through this
[21:39:15] <brotatochip> I’m sure I would, don’t think that’s in our budget
[21:40:54] <brotatochip> Can’t even seem to find the cost of support anywhere
[21:41:40] <jorsto> @cheeser what method would work the best to ensure workers don't cross each other and get in fights and brawl each other out
[22:02:45] <brotatochip> Doesn’t seem to be any other option than a resync
[22:07:42] <cheeser> hopefully your secondary is up to date
[22:08:09] <cheeser> is your backup a nightly mongodump?
[22:56:43] <brotatochip> cheeser nope, it’s a volume snapshot
[22:56:51] <brotatochip> looks like I may have experienced something similar to this http://grokbase.com/t/gg/mongodb-user/11aeq6s30p/clarification-on-rollback-and-opslog-coverage
[22:57:04] <brotatochip> cheeser i had to disable replication for now to get the site back up and running
[22:58:37] <brotatochip> Oh, yeah, looks like the secondary fell too far behind