PMXBOT Log file Viewer


#mongodb logs for Tuesday the 1st of April, 2014

[00:24:41] <GuiPoa> what is the best way to improve IO? Setting w to 0 is gonna help?
[01:17:16] <VooDooNOFX> GuiPoa: Measure, then implement.
[01:21:17] <GuiPoa> VooDooNOFX, yes, i'm gonna measure with w=0, but i'd like to know if there's anything else to do..
[01:21:45] <VooDooNOFX> Well, why do you believe you have poor io now?
[01:27:02] <GuiPoa> I don't. I have a high volume right now. I need to decrease it.
[01:30:19] <VooDooNOFX> In my world, I don't prematurely optimize. I would recommend you don't do it either.
[01:59:58] <GuiPoa> VooDooNOFX, i did not understand. Now I have a high usage of IO. I should decrease it. I'm looking for some strategies.
[02:02:42] <VooDooNOFX> GuiPoa: Ok. So we're back to my original question. What is telling you that your IO is too high?
[02:05:46] <GuiPoa> I have a chart (semon) showing the usage. Besides that, my system went down...
[02:07:00] <GuiPoa> But i really don't know if setting w=0 is gonna improve IO.
[02:10:37] <VooDooNOFX> GuiPoa: Neither do we. That's not enough information to help you. I can offer some basic suggestions, but without knowing why it failed, or what type of queries you do, I cannot offer anything specific.
[02:10:57] <VooDooNOFX> GuiPoa: Most of the time, you need to optimize your queries, not your database.
[02:11:14] <VooDooNOFX> Also, you should track mongoDB performance with MMS (free).
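The w=0 question above is about write concern: w controls how many nodes must acknowledge a write before the driver returns, so w=0 only stops the client from waiting (and from seeing errors); the server still does the same IO. A toy pure-Python model of that semantics (this `insert` is a hypothetical stand-in, not the pymongo API):

```python
# Toy model of MongoDB write concern (no server needed).
# The server attempts the write either way -- same disk IO.
# w only decides whether the client waits to hear about errors.
def insert(collection, doc, w=1):
    error = None
    if any(d.get("_id") == doc.get("_id") for d in collection):
        error = "duplicate key"
    else:
        collection.append(doc)
    return None if w == 0 else error   # w=0: unacknowledged, error is lost

docs = [{"_id": 1}]
assert insert(docs, {"_id": 1}, w=0) is None       # failure silently swallowed
assert insert(docs, {"_id": 1}, w=1) == "duplicate key"
insert(docs, {"_id": 2}, w=0)
assert len(docs) == 2                              # the work still happened
```

This is why measuring with w=0 can make the client look faster without reducing server-side IO at all.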
[02:11:32] <nicken> any suggestions for mongodb web interfaces?
[02:11:51] <VooDooNOFX> nicken, I use rockmongo.
[02:12:22] <nicken> is it simple to get set up?
[02:12:27] <GuiPoa> VooDooNOFX, we do many inserts
[02:12:29] <VooDooNOFX> I thought so, yes.
[02:12:48] <VooDooNOFX> nicken: it requires php5, which was a problem back on RHEL 5 for us, but not anymore on RHEL 6
[02:13:20] <VooDooNOFX> nicken: However, many others exist also: http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[02:14:33] <GuiPoa> 1 insert,1read per user.
[02:14:35] <nicken> RockMongo looks pretty decent.
[02:15:02] <GuiPoa> but this document expires in few min
[02:16:24] <nicken> I'm seeing there's also an OS X client.
[02:18:35] <VooDooNOFX> nicken: There is. Similar to rockmongo, but requires osx.
[02:18:58] <GuiPoa> VooDooNOFX, so every 15min, we have an insert/read per user. but we are simulating many users at same time.
[02:20:32] <nicken> yeah, I have OS X, and I actually got it working already, and I'm seeing all of my collections, so it looks like it's working.
[02:21:11] <retran> i wonder what would OSX system be useful for
[02:21:28] <retran> you gonna do mongo + photoshop
[02:21:35] <nicken> awesome, it's working beautifully.
[02:21:53] <VooDooNOFX> GuiPoa: That shouldn't cause an issue. How many users are you simulating?
[02:22:29] <VooDooNOFX> retran: Welcome back from 1996. OSX is now used for a lot more than photoshop.
[02:22:41] <retran> oh you're right... i forgot final cut
[02:22:47] <retran> itunes
[02:23:10] <retran> i have mac mini servers at macminivault.com just for the itunes
[02:23:17] <VooDooNOFX> retran: don't be silly
[02:23:26] <retran> awww
[02:23:35] <GuiPoa> VooDooNOFX, 100,000 at the same time. We have a shard with 3 replica sets. 1 primary, 2 secondaries
[02:24:28] <retran> in 1996 OS X didnt exist :(
[02:24:41] <VooDooNOFX> haha, I was hoping you wouldn't catch that
[02:25:13] <retran> i cant wait to get my mac mini back, it's sitting in some datacenter right now
[02:25:36] <VooDooNOFX> I use a mac pro at work, and a hackintosh at home.
[02:25:53] <retran> were you able to build a hackintosh for any cheaper than a mac mini?
[02:26:10] <VooDooNOFX> retran: it cost about 1100, and it was faster than the mac pro.
[02:26:13] <retran> all the compatible components lists i saw, compiled the prices together at newegg
[02:26:31] <retran> yeah, that's the total i get, around ~1000
[02:26:45] <VooDooNOFX> There's a big difference between a mac pro capable machine, and something designed to compete with a mini
[02:26:50] <retran> sure
[02:26:55] <retran> mini's are puny
[02:27:02] <VooDooNOFX> and not very good at multitasking.
[02:27:28] <VooDooNOFX> mostly because of their inferior graphics cards and my 2 large displays
[02:28:08] <retran> serious though, i got 2 mac minis at datacenter doing macros running itunes
[02:28:16] <retran> to download videos
[02:28:28] <VooDooNOFX> nice, I think.
[02:28:35] <retran> believe it or not, that's the easiest way for hollywood people to get high quality digital versions of their videos
[02:28:37] <GuiPoa> VooDooNOFX, do you work at 10gen?
[02:28:55] <retran> because it's so difficult to get the master video data (always some weird vault somewhere)
[02:29:19] <VooDooNOFX> retran: I would just access em from my SAN (I work for sony pictures)
[02:29:27] <retran> yeah, see
[02:29:34] <retran> you gotta actually work for the distributor
[02:29:38] <retran> to have ready access
[02:29:41] <retran> it's a big issue
[02:29:53] <retran> i work in LA, maybe you do too
[02:29:59] <VooDooNOFX> I do, in Culver
[02:30:11] <retran> i'm in miracle mile, place called ActivePitch
[02:30:59] <retran> our competitors are Casting Workbooks
[02:31:05] <retran> and maybe some other silly places
[02:31:09] <retran> but that's the biggest one
[02:31:19] <retran> we're SaaS for actors and rep agencies
[02:31:31] <nicken> hmm, facebook is acting funny.
[02:31:40] <nicken> not sure if it's my browser or facebook.
[02:31:51] <VooDooNOFX> retran: looks cool.
[02:32:03] <retran> they store their videos and photos (press kits, resumes) with us, organized by actor, and rep agencies can send out custom "profiles" (mini websites) they create real quick
[02:32:12] <retran> to people in casting
[02:32:14] <VooDooNOFX> nicken: http://downrightnow.com/facebook
[02:32:34] <VooDooNOFX> retran: nice. Much more front-end than my lowly job.
[02:32:39] <retran> actors use it too, depending how lazy their reps are
[02:32:54] <nicken> oh, it's up, I'm just not receiving notifications.
[02:33:10] <nicken> I have to refresh to see them.
[02:33:26] <retran> well you work part of big machine
[02:33:37] <retran> that's kind cool
[02:33:48] <VooDooNOFX> nicken: as I was told last weekend by my much younger cousin "You're still using facebook? That's pretty old bro".
[02:34:34] <nicken> so what are the cool kids using these days?
[02:34:37] <nicken> snapchat?
[02:34:48] <retran> for dick and tit pics?
[02:35:02] <retran> only reason to use snapchat
[02:35:17] <VooDooNOFX> aparently, tumblr
[02:35:31] <nicken> ah, right.
[02:35:38] <retran> snapchat works because men think everyone wants to see their privates
[02:35:40] <nicken> well, none of my real life friends use tumblr.
[02:36:09] <nicken> I like the idea of tumblr more than facebook, but nobody I know uses tumblr.
[02:36:20] <retran> i've heard of tumblr for years
[02:36:23] <retran> what's it do
[02:36:33] <nicken> it's a blog basically.
[02:36:37] <retran> what about reddit
[02:36:39] <nicken> er, well, it provides you with a blog.
[02:36:39] <retran> i know that
[02:36:53] <retran> reddit is simple enough.. you post and comment on shit
[02:37:07] <VooDooNOFX> once the queen got a facebook, I was sure it wasn't popular anymore.
[02:37:12] <retran> twitter has become uncoool like facebook now
[02:37:24] <VooDooNOFX> but it keeps reminding me of people's birthdays, so I keep it around.
[02:37:56] <retran> nobody likes admitting they use facebook
[02:38:08] <retran> i feel embarrassed every time i use it
[02:38:15] <retran> look over my shoulder
[02:39:47] <retran> the project i'm working on now (using mongo as the db) is a SaaS system that lets you cut scenes out of existing videos
[02:40:16] <retran> (it uses ffmpeg workers to do the magic)
[02:40:48] <retran> mongo works great pelting it with 100s of finds/updates... doesn't seem to have a performance penalty
[02:40:58] <retran> like it would, similar to having 100s of sql calls in Mysql
[02:41:22] <retran> my developers are asking me "this is gonna have lots of updates to db, you sure it's ok"
[02:41:32] <retran> but yep, benchmarked it, its ok
[02:44:53] <retran> http://dev.sceneclipper.com/
[02:44:56] <retran> sign up
[02:45:19] <retran> request a movie that i already have in the system... "Raising Arizona"
[02:45:25] <retran> all this crap is mongodb backed
[02:45:55] <retran> i'm about to post a bunch of source code on github public
[02:53:47] <VooDooNOFX> retran: I built a similar-ish tool here at sony that does roughly what that does, plus debayering, reformatting, etc.
[02:54:00] <VooDooNOFX> so seeing something oss come up is nice to see
[02:54:08] <retran> oh cool
[02:54:23] <retran> yeah i have our servers do all the processing (where it goes frame by frame)
[02:54:41] <retran> been field testing it with editors
[02:55:19] <retran> sceneclipper just does really simple thing, just gives you clips from splicing a big video
[02:55:42] <retran> but our server handles all the files and provides highest quality you can generally get
[02:55:44] <VooDooNOFX> retran: if your project can afford it, pick up a Nuke license. It'll do all the heavy lifting for you
[02:56:15] <retran> Nukes for rendering, right?
[02:56:23] <retran> we only have to transcode and splice
[02:56:44] <retran> we take already released/finished videos and splice things
[02:57:18] <retran> if you can think of... what an actor or small-time director (of tv episodes) might want to do... showcase a few scenes they like
[02:57:29] <retran> without having to put it into a non-linear video editor
[02:58:20] <VooDooNOFX> nuke is a compositor, but it's an extremely powerful renderer also, using openImageIO's codebase to read and write frames
[02:58:36] <retran> cool man
[02:58:51] <retran> when i need to do anything more complex than what i have now, will have to investigate
[02:59:00] <retran> i'm basically at the limit of what i can use ffmpeg for practically
[02:59:08] <VooDooNOFX> You could also just use openimageio's image handling libs also.
[02:59:18] <VooDooNOFX> but nuke handles color management pretty well also.
[02:59:25] <retran> cool, good to know
[02:59:32] <retran> and they use GPUs?
[02:59:40] <VooDooNOFX> ffmpeg is ok as a pro-sumer tool, but ffmbc supports more professional codecs
[02:59:44] <retran> i was thinking if this sceneclipper system
[02:59:47] <retran> gets much extra demand
[02:59:59] <retran> i'll need some way to use GPU to transcode more efficiently
[03:00:09] <VooDooNOFX> retran: you're basically starting up a sorensen media with this project.
[03:00:19] <retran> i know i could have routines written in C++ that go straight to nvidia APIs
[03:00:28] <VooDooNOFX> they make millions transcoding movies for blogs and websites.
[03:00:29] <retran> but that's ... you know
[03:00:40] <retran> yeah this is a specialty transcoding
[03:00:49] <retran> just for splicing/editing already finishied works
[03:01:05] <retran> on our production system (ActivePitch) i use Elastic Transcoder
[03:01:12] <retran> for handling videos our customer uploads
[03:01:20] <retran> so they're streamable (and on a CDN)
[03:01:41] <retran> so sorenson media.. is like for professional productions
[03:01:52] <retran> outsourcing rendering of big shit?
[03:02:07] <retran> like masters for movies and what not?
[03:02:07] <VooDooNOFX> Sorensen is hosted on EC2, and takes any input and sends back any output formats. (using any very loosely here).
[03:02:14] <retran> oh
[03:02:30] <retran> a transcoder service
[03:02:40] <retran> no more no less?
[03:02:44] <VooDooNOFX> It's not for the studio, it's for portal sites that get large video source and want h264 in 4 qualities
[03:02:51] <retran> gotcha
[03:03:01] <retran> so it competes with.. Elastic Transcoder, Transcode.com, etc
[03:03:08] <VooDooNOFX> it's based on their in house encoder called Sorensen Squeeze, which Nuke can outperform any day of the week.
[03:03:17] <retran> i gotcha
[03:03:25] <retran> so Nuke... it uses GPUs
[03:03:26] <retran> ?
[03:03:29] <VooDooNOFX> since they're all windows/.NET, and some newer teams are using Ruby to manage the EC2 instances
[03:03:50] <VooDooNOFX> Only for playback.
[03:03:55] <retran> you could for example, have nuke running on a farm of PCs , each PC having a bunch of GPUs?
[03:03:59] <VooDooNOFX> not for rendering
[03:04:00] <retran> oh
[03:04:21] <retran> i was interested in using the newer capabilities i'm seeing in nvidia's APIs for their GPUs
[03:04:27] <retran> transcoding to h264
[03:04:42] <retran> so i can avoid having to use CPU (obviously)
[03:04:52] <retran> that's my main bottleneck
[03:05:01] <retran> aside from uploading to object storage (s3)
[03:05:10] <retran> s3 is slow :(
[03:05:16] <retran> (to create new objects)
[03:06:47] <VooDooNOFX> yeah, s3 is pretty slow
[03:07:45] <retran> so is rackspace cloudfiles
[03:07:50] <retran> i think it's the nature of object storage
[03:08:05] <retran> more or less wrappers for HDFS (or something similar)
[03:10:44] <VooDooNOFX> Well, scalable object storage isn't perfectly taken care of
[03:12:42] <retran> you mean trying to do it yourself?
[03:12:56] <retran> implementing an HDFS?
[03:13:44] <VooDooNOFX> No, existing solutions haven't quite made it perfect
[03:13:55] <VooDooNOFX> They're either slow, or costly.
[03:15:25] <retran> yep
[03:15:50] <retran> we have (had) a huge issue here in house with a raid array
[03:16:06] <retran> they figured out they had to keep the temp below 68F
[03:16:20] <retran> that's so they could get block level storage
[03:16:27] <retran> with all the speed, etc
[03:16:39] <retran> but people in the office couldn't stand the temperature
[03:16:45] <retran> so they kept raising it
[03:16:47] <retran> or blocking vents
[03:16:54] <retran> and hard drives now failed, lol
[03:17:04] <retran> now they're just living with using s3 and glacier
[03:17:16] <retran> and dealing with the wait
[03:35:59] <nicken> I'm having trouble finding the documentation for Document objects...
[03:36:08] <nicken> specifically for use with the node.js driver.
[03:37:10] <nicken> I'm retrieving a document using db.collection.findOne(), and then I'm just wondering how to save the document after I alter it.
[03:37:36] <nicken> http://mongodb.github.io/node-mongodb-native/index.html
[03:37:41] <nicken> that's the documentation I'm looking at
[03:38:10] <nicken> I'm wondering if I should work with collections as opposed to documents
[03:39:26] <nicken> or if document objects share the same persistence methods as collection objects.
[03:40:17] <GuiPoa> If i use pymongo, when i do some insert, does the driver already use a call of getLastError()?
[03:43:22] <retran> dunno did you ask pymongo
[03:43:27] <retran> people?
[03:44:36] <GuiPoa> no, my question could be for any driver.
[03:44:52] <GuiPoa> but #pymongo doesn't exist
[03:45:47] <retran> only used javascript and php interfaces to mongo
[03:55:46] <VooDooNOFX> nicken: you shouldn't be querying for a document, modifying it in your code, then resaving it. Instead, use $set, with an upsert=True
[03:56:17] <nicken> okay
[03:56:36] <VooDooNOFX> nicken: this avoids the roundtrip from db to code back to db
[03:56:40] <nicken> I am using a javascript library for this.
[03:56:43] <VooDooNOFX> and does it all in the DB
[03:56:49] <nicken> ah, nice
[03:57:21] <nicken> the node.js library I'm using has an update function
[03:57:34] <nicken> which I'm using now
[03:57:38] <VooDooNOFX> yes, it should be db.collection_name.update
[03:58:04] <nicken> so it seems that when I call update, the entire document is replaced
[03:58:10] <nicken> with the new attributes
[03:58:13] <VooDooNOFX> Yes, it is.
[03:58:35] <nicken> okay
[03:58:44] <nicken> another issue I'm having is the script doesn't exit
[03:58:54] <VooDooNOFX> So, instead, you use update({some query}, {$set: {$key1: 'value1'}})
[03:59:05] <nicken> ah, I see
[03:59:16] <VooDooNOFX> which will set key1 to be value1 for the first item to match in the query
[03:59:24] <VooDooNOFX> also, you can say upsert: true
[03:59:37] <VooDooNOFX> which will create the item if it doesn't exist, then set the value accordingly
[03:59:57] <VooDooNOFX> and you can also give multi:true, which causes all documents that matched the query to be updated. Not just the first one.
[04:00:04] <VooDooNOFX> nicken: http://docs.mongodb.org/manual/reference/method/db.collection.update/
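The update pattern VooDooNOFX describes can be modeled without a server. This toy Python `update` mimics the shape of `db.collection.update({query}, {$set: ...}, {upsert: ..., multi: ...})` on plain dicts (illustrative only, not a driver API):

```python
def update(coll, query, change, upsert=False, multi=False):
    """Toy model of db.collection.update applying {$set: {...}}."""
    matched = [d for d in coll if all(d.get(k) == v for k, v in query.items())]
    if not matched and upsert:
        new = dict(query)                      # upsert: build the doc from the query...
        new.update(change.get("$set", {}))     # ...plus the $set fields
        coll.append(new)
        return 1
    targets = matched if multi else matched[:1]
    for d in targets:
        d.update(change.get("$set", {}))       # only the named fields change
    return len(targets)

users = [{"name": "a", "age": 1}, {"name": "a", "age": 2}]
update(users, {"name": "a"}, {"$set": {"active": True}})          # first match only
assert users == [{"name": "a", "age": 1, "active": True}, {"name": "a", "age": 2}]
update(users, {"name": "a"}, {"$set": {"active": True}}, multi=True)
update(users, {"name": "b"}, {"$set": {"x": 1}}, upsert=True)     # creates the doc
assert {"name": "b", "x": 1} in users
```

The key point from the conversation: $set touches only the listed fields, while an update whose second argument is a bare document replaces the whole thing.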
[04:00:35] <nicken> the documentation I'm looking at defines update like this: update(selector, document[, options][, callback])
[04:01:11] <nicken> does that matter or can I still use it the way you mentioned?
[04:01:30] <nicken> oh, I see, I think I got a little confused.
[04:01:47] <VooDooNOFX> in your case, callback is a javascript callback which is called if you'd like
[04:02:10] <VooDooNOFX> but options is { upsert: true; multi: true } (or some options like that)
[04:02:22] <retran> not to mention, you could clobber field value updates done in the meantime of your roundtrip
[04:02:28] <retran> that's the bigger sin
[04:02:35] <VooDooNOFX> retran: now you're thinking like a mongo-ian
[04:02:37] <retran> (not using $set)
[04:02:37] <VooDooNOFX> :D
[04:03:12] <VooDooNOFX> documents can change in the short time from query to update. So, do it all in the db, and it'll get a lock, modify it and remove the lock
[04:03:13] <nicken> so, by using set, it doesn't replace the entire document? only sets the specified attributes?
[04:03:17] <nicken> er, $set.
[04:03:23] <VooDooNOFX> nicken: that's correct.
[04:03:31] <nicken> awesome, that will come in handy.
[04:03:39] <VooDooNOFX> http://docs.mongodb.org/manual/reference/operator/update/set/
[04:03:47] <VooDooNOFX> nicken: There's also $push
[04:03:56] <VooDooNOFX> http://docs.mongodb.org/manual/reference/operator/update/push/
[04:04:21] <VooDooNOFX> To add to an existing Array of items (or create it if it didn't already exist if you tell it upsert: true as well)
[04:04:36] <nicken> ah, nice
[04:04:50] <VooDooNOFX> and $inc to just add some number to an existing numeric field of your document
[04:05:04] <VooDooNOFX> like {$inc: {"page_view": 1}}
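The $push and $inc operators above can likewise be sketched on plain dicts; `apply_update` is a hypothetical helper mimicking what the server does in place:

```python
def apply_update(doc, change):
    """Toy model of the $push and $inc update operators on one document."""
    for field, value in change.get("$push", {}).items():
        doc.setdefault(field, []).append(value)   # creates the array if missing
    for field, amount in change.get("$inc", {}).items():
        doc[field] = doc.get(field, 0) + amount   # missing field starts at 0

page = {"title": "home"}
apply_update(page, {"$inc": {"page_view": 1}})
apply_update(page, {"$inc": {"page_view": 1}, "$push": {"tags": "new"}})
assert page == {"title": "home", "page_view": 2, "tags": ["new"]}
```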
[04:05:12] <nicken> I'm also wondering, do I need to close the db connection in order for my script to exit?
[04:05:31] <VooDooNOFX> nicken: Not usually. That's generally handled in the driver, if it's any good at least.
[04:05:56] <nicken> for reasons unknown, my script doesn't exit after I call it.
[04:06:07] <nicken> er, execute it.
[04:06:12] <VooDooNOFX> nicken: here's some quick mongo-ish functions you should know: http://docs.mongodb.org/manual/reference/operator/update-field/
[04:06:36] <nicken> I'll definitely keep those in mind.
[04:06:36] <VooDooNOFX> Check if your driver supports auto-close.
[04:07:25] <nicken> doesn't seem like it, I searched for auto-close and didn't find anything.
[04:07:29] <VooDooNOFX> Anyway, i've spent more time with you guys today than my wife, so i'm going home. Night and GL
[04:07:53] <nicken> see ya
[04:13:57] <retran> bye VooDoo
[04:14:33] <retran> if i see you often here, we should grab beer sometime
[05:23:03] <Garo_> Hello. I'm having issues recovering one replicaset member from a snapshot. After starting mongodb it reports that it starts recovering from journal files, but it will soon stuck with no additional output (check this gist https://gist.github.com/garo/73b16d25da92bc9b184c). stracing the mongodb processes shows two pids stuck in some kind of mutex wait futex(0x1b50264, FUTEX_WAIT_PRIVATE, 1, NULL^C <unfinished ...>, one stuck ...
[05:23:10] <Garo_> ... on an infinitive getcwd("/", 128) = 2 loop and one stuck in this: rt_sigtimedwait([HUP INT USR1 TERM], NULL, NULL, 8^C <unfinished ...>
[05:23:50] <Garo_> any ideas? The snapshot should be consistent as it's done by first freezing an LVM volume, then snapshoting the underlying EBS volumes. Restore is done by lvmerging the snapshot onto the lvm volume.
[05:36:05] <greybrd> hi can I concat two BasicDBObject or append and entire BasicDBObject to an other one?
[05:49:18] <greybrd> can I concat two BasicDBObject or append and entire BasicDBObject to an other one?
[06:09:38] <greybrd> can I concat two BasicDBObject or append and entire BasicDBObject to an other one?
[08:18:19] <AlecTaylor> hi
[08:24:11] <Nodex> Low
[08:27:31] <Zelest> somewhere in between
[08:27:57] <Nodex> Middle
[08:28:17] <Nodex> Launched a new project Zelest :)
[08:28:26] <Zelest> Oh :o
[08:28:37] <Zelest> Does it have webscale? ;)
[08:29:11] <Nodex> haha, yes plenty
[08:29:11] <Nodex> http://www.rentaletting.co.uk/
[08:31:51] <Zelest> I can haz injections!
[08:32:04] <Nodex> PM the page - should be all taken care of?
[09:30:42] <noqqe> hi! i have a sharded collection with shardkey _id: "hashed"
[09:31:04] <noqqe> at first the documents were distributed very properly
[09:31:33] <noqqe> but now it's rs0: 9719057 / rs1: 12193060
[09:31:53] <noqqe> is this normal? or did i do something wrong?
[09:42:30] <kali> noqqe: is your balancer running ?
[09:45:01] <drgnorq> anyone available with experience in sharding where _id is NOT the shardkey?
[09:45:38] <drgnorq> I have some questions about the expected behaviour if you modify the attribute used as shardkey...
[09:52:17] <drgnorq> nobody alive? There must be anyone with a lot of sharding knowledge...
[10:01:36] <noqqe> kali: i have to check, one moment
[10:02:08] <noqqe> kali: yes
[10:15:29] <shangrila> Hi, is it possible to determine mongo version via the mongo DB files?
[10:15:45] <Nodex> no
[10:16:57] <shangrila> thank you
[10:17:40] <Zelest> unless you've saved it inside the db! :D *trololo*
[10:39:00] <shangrila> then how does mongo determine which version the data is in?
[10:40:02] <Zelest> has the data structure changed much between the last versions?
[10:44:29] <Nodex> shangrila : mongo data doesn't really need to know the version
[10:44:36] <Garo_> Well this is new: I resized one of my replicaset member oplog size according to this guide http://docs.mongodb.org/manual/tutorial/change-oplog-size/ and ended up with the following crash: https://gist.github.com/garo/c928b9c4a999ab250e5c
[10:49:12] <pinvok3|2> Good day. I try to compile the cpp mongo driver. I successfully compiled the driver and tried to put it into my Qt package. I was able to get rid of most of the linking problems, but I still get one "undefined reference to `mongo::DBException::traceIfNeeded(mongo::DBException const&)'" What could be the problem?
[10:59:30] <shangrila> Nodex: thanks
[11:22:05] <noqqe> kali: any further ideas?
[11:26:18] <ilhami> hey
[11:26:22] <ilhami> I get an error
[11:26:53] <ilhami> the constructor MongoClientOptions(MongoClientOptions.Builder) is not visible
[11:29:09] <Nodex> 42
[11:29:36] <ilhami> ???eeeh
[11:31:31] <cheeser> you have to call build() on your MongoClientOptions.Builder reference.
[11:37:42] <ilhami> it works now
[11:37:47] <ilhami> but we had to get the newest java driver
[11:38:21] <ilhami> cheeser, when will you remove my ban?
[11:42:01] <cheeser> ilhami: we're not going to discuss that. certainly not here.
[11:42:42] <ilhami> I will PM you later
[11:42:43] <ilhami> bye
[11:43:05] <Nodex> haha
[11:43:12] <cheeser> i'll ignore it later. like i always do.
[11:43:18] <Nodex> ++
[11:43:45] <cheeser> he went on racist, homophobic rants in the java channel. so he's no longer allowed in.
[11:44:04] <cheeser> but it's ok, see, because he's just espousing his religious beliefs. pffffft.
[11:44:06] <cheeser> anyway.
[11:47:30] <Nodex> ah, muslim by any chance?
[11:53:18] <ddssc> complete mongo noob here. how do we know in mongodb if there are write errors? how reliable is it? I see people use mongo for ecommerce these days.
[11:53:40] <ddssc> is it safe to keep any sort of transaction data in mdb?
[11:55:20] <kali> noqqe: can you paste a db.collection.printShardingStatus() somewhere ?
[11:56:16] <cheeser> Nodex: how'd you guess?!?
[11:57:07] <Soothsayer> ddssc: while saving to mongo, you can set a flag that causes it to return only after its been saved successfully on disk
[11:57:11] <Soothsayer> So that makes it completely reliable.
[11:59:17] <Nodex> ddssc : if you want guaranteed transactions then I would keep that part out of Mongodb
[11:59:35] <Nodex> cheeser : had run ins with these kind of people and their views before
[12:08:54] <AlecTaylor> Are there any plans to maintain the Windows Azure worker project? - https://jira.mongodb.org/browse/AZURE-137
[12:12:07] <noqqe> kali: http://pastebin.com/raw.php?i=ZUtE4T44
[12:16:31] <csrgxtu> hi, how can i store datetime in mongodb
[12:23:24] <Nodex> csrgxtu : however you wish
[12:30:07] <csrgxtu> Nodex, ok, i now get it
[12:30:10] <csrgxtu> thanks
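On storing datetimes: BSON has a native Date type (UTC milliseconds since the epoch), and drivers such as pymongo map language datetimes to it directly. A serverless sketch of that round trip (`to_bson_date`/`from_bson_date` are illustrative helpers, not driver functions):

```python
from datetime import datetime, timezone

# BSON Date = UTC milliseconds since the Unix epoch. With pymongo you can
# just put a datetime into the document; here we mimic the round trip.
def to_bson_date(dt):
    return int(dt.timestamp() * 1000)

def from_bson_date(ms):
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

now = datetime(2014, 4, 1, 12, 0, tzinfo=timezone.utc)
doc = {"created_at": to_bson_date(now)}
assert from_bson_date(doc["created_at"]) == now
```

Storing real Date values (rather than strings) keeps range queries and sorting on the field meaningful.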
[12:33:42] <kali> noqqe: you have a jumbo chunk
[12:35:23] <kali> noqqe: i've never had to deal with one, but maybe someone here can help
[12:36:16] <noqqe> kali: okay! that's at least a point where i can start reading! thanks!
[12:36:25] <kali> it's weird, because you're sharding on _id... i'm not sure how it can happen
[12:40:00] <ddssc> Nodex: I dont need guaranteed transactions but I'd like to keep a big key/value table with hundreds of millions of entries for reporting purposes. I understand mdb is quite fast when it comes to searching.
[12:41:41] <noqqe> kali: it was some kind of performance test. just writing the same object every time on my cluster from 16 servers with pymongo scripts in it. there were 8 million writes within 16 minutes
[12:42:29] <noqqe> kali: i only did acknowledged (w=1) on write concern, maybe this is the problem.
[12:50:26] <kali> noqqe: i'm not sure it can make a change there
[12:52:24] <kali> noqqe: just an idea... what version of mongodb are you running ? and you're positive all nodes of your cluster run the same version ?
[12:52:42] <noqqe> kali: 2.4.3
[12:53:02] <noqqe> yes - i just looked into mms :) all the same version
[12:53:39] <noqqe> i have some knowledge gap in jumbo chunks. i have to read some docs at first now :)
[12:54:47] <kali> noqqe: i'm doing the same, because i'm curious how it could happen with a fine-grained key (_id: hashed)
[12:55:38] <kali> noqqe: maybe the high rate of insertion grew a jumbo chunk while the balancer was busy splitting/moving another part of the collection
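kali's puzzle rests on the fact that a hashed shard key normally spreads monotonically increasing _ids evenly across shards. A rough Python sketch of that spreading (MongoDB's real hashed key is an md5-based 64-bit hash of the BSON value; `shard_for` is only an analogue):

```python
import hashlib

def shard_for(_id, n_shards=2):
    """Toy analogue of a hashed shard key: hash the _id, bucket by modulo."""
    h = int(hashlib.md5(str(_id).encode()).hexdigest(), 16)
    return h % n_shards

# Sequential _ids land on the shards roughly evenly.
counts = [0, 0]
for i in range(10000):
    counts[shard_for(i)] += 1
assert abs(counts[0] - counts[1]) < 800
```

Even distribution of chunks, though, doesn't guarantee even document counts if one chunk grows jumbo while the balancer is busy, which is the scenario kali suggests above.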
[12:57:41] <AlecTaylor> Are there any plans to maintain the Windows Azure worker project? - https://jira.mongodb.org/browse/AZURE-137
[13:59:47] <mcr-credil> Is there a way to have .drop(), simply wait rather than saying: "errmsg" : "exception: collection's metadata is undergoing changes. Please try again."
[14:59:38] <tscanausa> is there a preferred option to securing mongos to mongo config and mongod?
[15:24:07] <Soothsayer> I want to track activity stream of a Customer Session on my e-commerce site.. Should I be storing a list of all activities/events performed under One document per Customer Session or one entry per activity/event?
[15:24:29] <srcspider> if you have a tree structure that is very frequently written to and read from and also might branch out for 1000s of nodes in any direction is that just a nightmare to manage using mongodb as opposed to a traditional relational database?
[15:56:05] <tkeith> I have a replica set with 3 members. How can I remove 2 so there's just one server left?
[15:56:23] <tkeith> preferably in a way that I could add them back in later easily
[16:00:57] <skot> Why are you removing them? One option is just to reconfigure to one member (removing both at once).
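skot's suggestion amounts to fetching the current replica-set config, keeping only the member you want, bumping the version, and passing the result to rs.reconfig(). A toy Python sketch of just the config transformation (`shrink_to_one` is hypothetical; the real step is the replSetReconfig command):

```python
def shrink_to_one(config, keep_host):
    """Build a new replica-set config keeping only keep_host (toy model)."""
    new = dict(config)
    new["members"] = [m for m in config["members"] if m["host"] == keep_host]
    new["version"] = config["version"] + 1   # a reconfig needs a bumped version
    return new

cfg = {"_id": "rs0", "version": 3, "members": [
    {"_id": 0, "host": "a:27017"},
    {"_id": 1, "host": "b:27017"},
    {"_id": 2, "host": "c:27017"}]}
new_cfg = shrink_to_one(cfg, "a:27017")
assert len(new_cfg["members"]) == 1 and new_cfg["version"] == 4
# Keeping a copy of the removed member entries lets you re-add them
# later with another reconfig, as tkeith wanted.
```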
[16:21:50] <stefuNz> how can i reduce disk I/O?
[16:34:08] <vparham> Curious if there are any mongo on bsd users.
[16:35:49] <rafaelhbarros> stefuNz: very broad question.
[16:36:07] <stefuNz> rafaelhbarros: how can i refine it? :)
[16:36:22] <rafaelhbarros> is mongo using a lot of IOPS?
[16:36:27] <rafaelhbarros> what is that you're doing?
[16:36:35] <rafaelhbarros> a bunch of inserts?
[16:36:57] <rafaelhbarros> is it bandwidth or IOPS that you're trying to reduce?
[17:01:19] <vparham> More specifically, is there anyone who is running an alternate BSD/rc startup script for mongo in prod than what's provided in the port?
[18:23:05] <visually> hello -- i have been experiencing a continuous rise in background flush time as well as a corresponding rise in io wait time which appears to be negatively impacting performance of my app/causing us to drop data
[18:23:21] <visually> is there anything i should know in particular to diagnose the issue
[19:03:09] <proteneer> do reads lock the DB?
[19:03:24] <proteneer> Beginning with version 2.2, MongoDB implements locks on a per-database basis for most read and write operations
[19:03:28] <proteneer> what is "most"?
[19:04:35] <proteneer> nm
[19:04:58] <proteneer> why do queries have read locks?
[19:08:52] <cheeser> writers needs to know there are readers in the mix before locking things up.
[19:09:07] <cheeser> helps the system balance read vs write requests
[19:17:46] <traplin> how would i perform an update, where it adds an object, to an array of objects, if it doesn't exist? i also want to create the array when this runs, if it doesn't exist already
[19:18:57] <cheeser> $addToSet
[19:20:28] <traplin> cheeser: so if i replaced $push with $addToSet, that would work? this is my code http://pastebin.com/L2YWTCkX
[19:21:10] <cheeser> try it and see, but it should, yes.
[19:22:32] <traplin> i get an error, cannot apply $addToSet modifier to non-array
[19:23:22] <cheeser> on a non-existent field?
[19:24:00] <traplin> well i would like it to create the 'friends' field if it doesn't exist
[19:24:06] <traplin> and then add objects to it
[19:30:45] <cheeser> is friends supposed to be a document or an array of documents?
[19:33:20] <traplin> i would like it to be an array of documents
[19:34:48] <skot> The error is because the field already exists and isn't an array, like if friends = "you" (a string)
[19:35:21] <traplin> so should i delete the field, and then start afresh with $addToSet?
[19:36:17] <skot> yep, or convert the field to an array first.
[19:36:57] <skot> In 2.6 the error will include the _id of the document it error'd on.
[19:39:21] <traplin> ah skot and cheeser you two are geniuses!
[19:39:26] <traplin> works perfectly now :)
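The $addToSet behavior traplin ran into can be modeled in a few lines of Python: the operator creates the array when the field is missing, skips duplicates, and errors when the field exists but isn't an array (`add_to_set` is an illustrative stand-in, not a driver call):

```python
def add_to_set(doc, field, value):
    """Toy model of $addToSet: create the array if absent, append only if new."""
    current = doc.setdefault(field, [])
    if not isinstance(current, list):
        raise TypeError("Cannot apply $addToSet modifier to non-array")
    if value not in current:
        current.append(value)

user = {"name": "traplin"}
add_to_set(user, "friends", {"name": "bob"})
add_to_set(user, "friends", {"name": "bob"})      # duplicate: no-op
assert user["friends"] == [{"name": "bob"}]

broken = {"friends": "you"}                        # field exists, not an array
try:
    add_to_set(broken, "friends", {"name": "bob"})
    raise AssertionError("expected a type error")
except TypeError:
    pass                                           # matches skot's explanation
```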
[20:22:07] <traplin> i have another question. so i have two collections: friends and users. is it possible to do a query, that searches both collections, and finds all documents that appear in both?
[20:22:45] <cheeser> not with one query
[20:22:58] <traplin> so would i have to iterate over both collections then?
[20:24:04] <cheeser> at least two queries, yeah
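With no joins in MongoDB, the two-query approach cheeser describes is: fetch the keys from one collection, then query the other with them (in a real driver, something like find({"_id": {"$in": [...]}})). A serverless sketch on plain lists:

```python
# Two queries, then intersect client-side on a shared key (here: _id).
users   = [{"_id": 1, "name": "a"}, {"_id": 2, "name": "b"}, {"_id": 3, "name": "c"}]
friends = [{"_id": 2}, {"_id": 3}, {"_id": 4}]

friend_ids = {d["_id"] for d in friends}                 # query 1: collect keys
in_both = [d for d in users if d["_id"] in friend_ids]   # query 2: filter by keys
assert [d["_id"] for d in in_both] == [2, 3]
```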
[20:53:44] <NaN> any workaround to do text search?
[20:55:13] <aGuest> anyone have a recommended amount of memory for running mongo on ubuntu server, and that is the only thing that server would be running? Database should not reach more than a gig, at least not for the first 2 years
[21:04:20] <proteneer> wait
[21:04:28] <proteneer> anyone use tokutek?
[21:05:44] <proteneer> does it really just work?
[21:09:03] <RenatoFarias> Hi All. Can you provide me some infos?
[21:09:10] <RenatoFarias> Is MongoDB Java driver blocking I/O?
[21:09:15] <RenatoFarias> MongoDB Java driver supports NIO?
[21:17:58] <cheeser> RenatoFarias: yes, it's blocking.
[21:18:08] <cheeser> we're working on an async API for the 3.0 release.
[21:20:01] <RenatoFarias> thanks @cheeser
[22:02:17] <proteneer> is it feasible to shard or do replications in different AWS regions?
[22:02:25] <proteneer> to improve DB query times
[22:03:47] <skot> people do that all the time; but it requires your app being able to handle stale reads, and/or paying the cost for cross-region operations.
[22:04:26] <proteneer> stale reads?
[22:11:12] <skot> replication is not synchronous so reading from any node but the primary might result in a "stale" read.
[22:11:24] <skot> see the docs: http://docs.mongodb.org/manual/applications/replication/
[22:11:42] <skot> http://docs.mongodb.org/manual/core/read-preference/