#mongodb logs for Tuesday the 23rd of October, 2012

[00:40:42] <TTimo> hello. Are there conditions under which mongodb will write out queries to mongodb.log even though db.getProfilingLevel() says 0 ?
[00:40:48] <TTimo> seems to be what I'm seeing atm
[00:51:03] <TTimo> also .. I should be able to enable profiling on a secondary, correct? db.system.profile.find() says "not master and slaveOk=false"
[00:56:12] <skot> yes, but you must do rs.slaveOk() to query a secondary.
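A minimal mongo-shell sketch of skot's suggestion, assuming a secondary on a hypothetical port 27018; note rs.slaveOk() only applies to the current shell session:

    $ mongo --port 27018         # connect directly to the secondary
    > rs.slaveOk()               // allow reads on this non-primary member
    > db.setProfilingLevel(1)    // enable profiling on this node
    > db.system.profile.find()   // no more "not master and slaveOk=false"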
[01:14:31] <jaha> Anyone got a good way to "clean" an excel or csv to prep it for mongoimport? im getting an exception "CSV file ends while inside quoted field"
[02:18:48] <doxavore> Is there a way to set nofile limits in an upstart .conf file?
[02:18:54] <doxavore> I seem to be running into SERVER-2200
[02:28:15] <samurai2> hi there, anyone know how to dump mongodump --out into remote folder? thanks :)
[02:34:03] <robdodson> has anyone ever tried mongoskin with node? https://github.com/kissjs/node-mongoskin
[02:34:11] <robdodson> curious if it's stable and easy to work with
[02:35:53] <addisonj> its built on mongodb-native, which is 10gen supported
[03:23:38] <speakingcode> hi, got a newb question here. i want to build a database where i have a collection of users, each user will have some attribute, say Company. I want to match users by company so in RD terms i would have a company table and make those serve as foreign keys for user records. in mongo, what would be the right approach?
[03:25:16] <speakingcode> i want to be able to basically query all the users with a given company. b/c of performance i don't want to iterate every user and find every match. would it be best to have a second object of companies, each of those containing the user? is there a way to cross-reference and cascade updates?
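A sketch of the usual answer, with hypothetical users/companies collections: keep a company key on each user document, index it, and query on it (mongo shell):

    db.companies.insert({_id: "acme", name: "Acme Inc"})
    db.users.insert({name: "bob", company: "acme"})   // manual "foreign key"
    db.users.ensureIndex({company: 1})                // avoids scanning every user
    db.users.find({company: "acme"})                  // all users at that company

Mongo does not cascade updates across collections, so referencing a stable key (here the company _id) sidesteps the need for cascading.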
[03:25:40] <opus_> if i want to insert a row and use the Next ID for an internal id
[03:26:13] <opus_> do I do db.collectionname.insert( { "name" : "bob" , { "$inc" : { internal_id : n }} );
[03:26:13] <opus_> ?
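An $inc can't ride along inside an insert like that. The usual server-side pattern is a separate counters collection bumped atomically with findAndModify; a sketch with hypothetical names:

    var next = db.counters.findAndModify({
        query:  {_id: "internal_id"},
        update: {$inc: {seq: 1}},
        new:    true,   // return the incremented document
        upsert: true    // create the counter on first use
    });
    db.collectionname.insert({name: "bob", internal_id: next.seq});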
[04:00:03] <robdodson> can I ask a very newb question? Is it possible to prepopulate a db with a json file of some kind? I'm trying to build a little prototype and I'm not sure how to bootstrap the db with data
[04:22:06] <michaeltwofish> robdodson: mongoimport can import json. http://docs.mongodb.org/manual/administration/import-export/
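For example (names hypothetical; add --jsonArray if the file holds one big JSON array instead of one document per line):

    mongoimport --db mydb --collection users --file seed.json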
[07:17:06] <Zelest> GridFS seriously rocks.. :o
[07:19:58] <NodeX> :)
[07:20:37] <Zelest> no, i really mean it..
[07:20:55] <Zelest> I have twice as much throughput compared to static files
[07:21:39] <NodeX> you're serving site static files out of gridfs?
[07:22:11] <kali> Zelest: with the nginx module ?
[07:25:39] <opus_> in Mongodb, if I have a nested document, but I don't have an ID, and I want to delete it, how do I create an index
[07:25:43] <opus_> and refer to that index id as the one I want to delete?
[07:26:04] <NodeX> index id?
[07:26:26] <opus_> like if I want to ensureIndex on a nested document, then get an ID for it from a find?
[07:26:29] <opus_> is that how it works?
[07:27:06] <NodeX> i dont understand what "then get an ID for it from a find?" means
[07:28:16] <opus_> I have a nested document, but it doesn't have an ID. I want to give it an ID, then find that ID later, and do something to it
[07:28:37] <opus_> does that make sense?
[07:28:39] <NodeX> so give it an id .. I dont see the problem
[07:28:56] <opus_> I guess my question is, how do I do that?
[07:29:24] <NodeX> db.foo.update({criteria},{$set:{"path.to.nested":id}})
[07:29:58] <opus_> ok, so id is a type and it will understand that?
[07:30:06] <NodeX> no
[07:30:15] <NodeX> id is whatever you want it to be
[07:31:05] <opus_> Reference error is not defined
[07:31:22] <NodeX> that's down to your driver
[07:31:23] <opus_> ahh, but I want mongodb to assign a unique one every time I insert
[07:31:47] <NodeX> that's appside code
[07:31:50] <opus_> can I do that?
[07:32:01] <NodeX> in your app yes
[07:32:01] <opus_> so generate something like a UUID and shove it in there?
[07:32:14] <opus_> hmm, i was hoping for something on the server side
[07:32:33] <NodeX> well mongo is schemaless and has no concept of defaults for fields
[07:32:59] <NodeX> from the little i do know about mongoose you setup a schema (still baffles me why they do this in a schemaless datastore)
[07:33:13] <NodeX> in your schema / mapper thing you add it in there
[07:33:48] <NodeX> #mongoosejs <--- maybe they're more equipped to help on all things mongoose
[07:34:07] <opus_> Yeah I can't use mongoose
[07:34:41] <opus_> because I started with a collection and can't define a schema after already having the collection. I guess a previous version of mongoose could 'learn' the schema, but they took that functionality out
[07:34:56] <[AD]Turbo> hola
[07:35:04] <opus_> hey
[07:35:31] <NodeX> surely you can override that opus_ ?
[07:35:42] <NodeX> if not that's some really stupid behaviour on the part of the driver
[07:35:48] <NodeX> mapper*
[07:38:46] <opus_> i'm already too far into the db code.. i tried for a few days,
[07:38:53] <opus_> is there something called mapper?
[07:40:46] <NodeX> from what I gather mongoose is a mapper (ODM)
[07:40:53] <NodeX> Object Document Mapper
[07:42:01] <opus_> everything i've read is that you need to create the map first then use it
[07:42:42] <opus_> anyway, before I insert into a row, should I just create a new column and do newRow.id = getObjectId() ; ?
[07:44:15] <NodeX> I suppose
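In the mongo shell the helper is ObjectId(), not getObjectId(); drivers expose an equivalent. A sketch of the app-side approach NodeX describes (names hypothetical):

    var doc = {name: "bob"};
    doc.internal_id = ObjectId();   // unique id generated client-side
    db.collectionname.insert(doc);

Note that every inserted document already gets an automatically generated unique _id, so a separate field is only needed if the id must live elsewhere in the document.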
[07:46:57] <Zelest> there we go.. damn phone..
[07:47:05] <Zelest> i use nginx and php-fpm
[07:47:16] <Zelest> haven't done any benchmarks on the gridfs module yet
[07:48:01] <Zelest> but how can gridfs be faster than regular files? i mean, fine, mongodb/gridfs is all in memory.. but so are the regular files after being read once..
[07:48:23] <Zelest> i'm curious how "well" gridfs performs once the data isn't in memory though.
[07:48:36] <Zelest> not that I know how to test that..
[07:48:56] <opus_> can I change ObjectID to a Number and have it increment somehow?
[07:55:41] <NodeX> opus_ : that's appside
[07:56:03] <NodeX> Zelest : I think you'll find it's not faster when you get down to it
[07:59:03] <Zelest> NodeX, well, I put file.jpg in the webroot.. put the same file in mongodb/gridfs and had a php script that reads the file and spits it out..
[07:59:27] <Zelest> then I ran siege, a benchmarking tool, requesting with 10 parallel connections for 30 seconds
[07:59:42] <Zelest> and I get almost twice as many requests using gridfs than the regular file.
[07:59:47] <Zelest> which makes no sense
[08:04:03] <NodeX> of course it's faster than php lol
[08:04:13] <NodeX> you implied it was faster than the operating system managing the files
[08:04:35] <Zelest> you read that wrong
[08:04:38] <Zelest> gridfs is faster
[08:04:41] <Zelest> php is faster
[08:04:45] <Zelest> which makes no sense :P
[08:06:05] <NodeX> you're connecting to gridfs with php ?
[08:06:10] <NodeX> or with nginx?
[08:06:13] <Zelest> php
[08:06:21] <Zelest> nginx + php-fpm
[08:06:55] <NodeX> again - of course it's faster than plain php reading the file
[08:07:09] <Zelest> huh
[08:07:17] <NodeX> you're opening the file with PHP yes?
[08:07:32] <Zelest> php uses mongo and gridfs
[08:07:34] <Zelest> vs
[08:07:39] <Zelest> the image.jpg itself.. in the webroot
[08:07:49] <NodeX> " and had a php script that reads the file and spits it out"
[08:07:56] <NodeX> that's what you said
[08:08:06] <Zelest> from gridfs yeah ;)
[08:08:34] <NodeX> and you're saying that php + mongo is faster than Nginx reading the static file?
[08:08:43] <Zelest> yes
[08:08:47] <NodeX> your tests are wrong lol
[08:08:59] <Zelest> hehe
[08:09:20] <NodeX> I have been using mongo / nginx / php for years and I have not once seen php outperform nginx on static file reading
[08:09:52] <NodeX> make sure php / nginx is not giving a 502 / error when running the benchmark
[08:09:54] <Zelest> the file is 1.5MB (a fairly big image) .. but yeah, I do think you're right
[08:09:58] <Zelest> yeah
[08:09:59] <Zelest> it's not
[08:10:09] <Zelest> php-fpm is set to static forking
[08:10:16] <Zelest> all requests reply 200
[08:10:20] <NodeX> it wouldn't make a difference
[08:10:33] <NodeX> nginx can easily do 10k req/s on static files
[08:10:56] <NodeX> you'll be lucky to get php to do 3500 req/s
[08:11:10] <Zelest> gah, once i get my damn ssh working I'll post my setup/scripts. :)
[08:11:23] <Zelest> vmware didn't like jumping between networks :P
[08:13:43] <Zelest> http://pastie.org/private/jzz5ivxogkqlgfhdzpwvna
[08:14:06] <Zelest> the first one is gridfs.. the other one is the static file.
[08:15:26] <Zelest> http://pastie.org/private/i14ifvuhv4t4xlmks8hwg .. that's my nginx setup
[08:15:48] <Zelest> I'm not arguing, I'm just curious where my tests go wrong. :-)
[08:17:26] <Zelest> ah, it seems like it's the size of the file.
[08:17:37] <Zelest> did the same test with a few kb text file
[08:18:03] <Zelest> 13801.50 trans/sec with static files and 2796.79 trans/sec through gridfs
[08:19:40] <NodeX> what specs is that server?
[08:20:34] <NodeX> http://pastie.org/5102443 <---- that's 2 x quad core Xeon with 128gb of ram
[08:20:35] <Zelest> a virtual machine, vmware fusion running on my macbook pro..
[08:20:39] <Zelest> 2.3ghz, 2gb of ram
[08:20:45] <Zelest> (the virtual machine that is)
[08:21:01] <NodeX> so you can see that something is very wrong
[08:21:39] <Zelest> you used the exact same setup?
[08:21:47] <NodeX> it goes in and out of the net card but it has a 10gbE pipe so it should not be a problem
[08:21:55] <NodeX> no I just got the file as a static file
[08:22:49] <Zelest> hell, siege even logs every request
[08:22:52] <Zelest> how can it be wrong?
[08:23:25] <opus_> Hello, when I try to do an Update and the "${unset}" on a nested document, I still get [ null ]
[08:23:27] <NodeX> it can't be right on that server is all I'm saying
[08:23:44] <Zelest> try do the benchmark against localhost instead of the nic
[08:23:46] <opus_> is there a way to compeletely remove it, not set null? Just delete that one nested document
[08:24:07] <Zelest> also, what OS do you run? and what version of mongodb
[08:24:09] <NodeX> I cba to adapt a live server for that sorry
[08:24:20] <Zelest> aah, understandable
[08:24:45] <NodeX> 2.2, debian
[08:24:48] <NodeX> (latest)
[08:24:56] <Zelest> still, 128GB of ram and you get 140 req/sec getting a static file?
[08:25:00] <Zelest> or was that php?
[08:25:06] <opus_> update ( { var : val , subvar : { "$elemMatch" : { oid = objectid } } , { "$unset" : { $subvar.* : 1 }}} )
[08:25:09] <opus_> that doesn't work :(
[08:25:15] <opus_> subvar gets set to "null"
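$unset on an array element genuinely leaves a null placeholder behind. If subvar is an array of subdocuments, $pull removes the matching element outright; a sketch reusing opus_'s field names:

    db.coll.update(
        {var: val},                        // match the parent document
        {$pull: {subvar: {oid: someId}}}   // remove the element whose oid matches
    )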
[08:25:21] <NodeX> static nginx
[08:25:33] <Zelest> how big is the file?
[08:26:01] <NodeX> 8k
[08:26:12] <Zelest> you should be able to get way over 10k reqs/sec fetching a 8k static file using nginx.. o_O
[08:26:18] <NodeX> http://pastie.org/5102456 <--- in apache benchmark
[08:26:21] <Zelest> you used the same flags for siege I assume?
[08:26:25] <NodeX> 44k req/s
[08:26:37] <Jt_-> hi, i need to use --keyFile with a password that contain char like '#' and mongo say invalid char
[08:26:46] <Zelest> yeah, that's way more like it
[08:26:47] <Jt_-> someone can help me?
[08:26:58] <NodeX> I have a feeling that siege is reporting or operating wrong
[08:27:07] <Zelest> what flags did you use?
[08:27:20] <Zelest> make sure you use -c
[08:27:26] <Zelest> otherwise it sleeps between requests
[08:27:28] <NodeX> I used the same you used
[08:27:32] <Zelest> ah
[08:27:37] <Zelest> weird
[08:27:53] <Zelest> but yeah, I'm with you.. with small files.. static is way faster.. even for me.. 13k vs 2.5k..
[08:28:02] <Zelest> but on my 1.5MB image, gridfs was faster
[08:28:05] <Zelest> was/is
[08:28:15] <NodeX> apache benchmark it
[08:28:24] <Zelest> which makes me wonder, where did I screw up my nginx setup
[08:28:44] <Zelest> apache benchmark works shitty in freebsd.. but sure, let me install apache :P
[08:28:50] <NodeX> I dont think it is, I think it's siege that's reporting or operating wrong
[08:29:06] <NodeX> you dont have to install apache for "ab"
[08:29:16] <Zelest> where/how do I get ab?
[08:29:45] <NodeX> apache2-tools iirc
[08:29:50] <NodeX> might be apache2-utils
[08:30:32] <IAD> Zelest: use 100k files and ask random files
[08:30:59] <Zelest> IAD, yeah, gridfs can't possibly be faster in the real world.. i'm well aware of it. :-)
[08:31:04] <NodeX> I dont know about OSX because Macs suck
[08:31:10] <Zelest> NodeX, can't find such port in freebsd. :/
[08:31:14] <NodeX> :/
[08:31:28] <Zelest> no worries, apache might be good to have... :P
[08:31:50] <IAD> Zelest: the question is "Is it in memory?"
[08:34:01] <Zelest> NodeX, what kind of server was that btw? *impressed by the amount of memory*
[08:35:14] <NodeX> our own colocated
[08:35:21] <NodeX> (HP)
[08:35:25] <Zelest> ah
[08:36:39] <Zelest> 17.1k (static) vs 3.2k (gridfs) .. using a 77 byte file.
[08:38:18] <IAD> fs will be very slow after a million files ... =)
[08:38:44] <OpenGG> Guys I met a problem: I had 3 collections, but after dropping one of them, db.stats() still shows "collections" : 3, and "fileSize" won't get reduced;
[08:38:47] <Zelest> 49.98 (static) vs 102 (gridfs) .. using a 1.4MB file.
[08:39:34] <OpenGG> db.repairDatabase() gives me "errmsg" : "exception: can't map file memory - mongo requires 64 bit build for larger datasets", "code" : 10084,
[08:39:49] <OpenGG> "fileSize" : 2197815296
[08:40:03] <Zelest> NodeX, ab gives me the same figures.. nginx seems "slow" at serving bigger files.
[08:41:12] <Zelest> or nevermind, it doesn't gzip those at all.
[08:41:39] <NodeX> OpenGG : you're still on 32bit?
[08:42:59] <IAD> Zelest: you test a single file via "ab"?
[08:43:07] <Zelest> mhm
[08:43:16] <Zelest> with -k (keep-alive) static wins, of course.
[08:43:47] <IAD> Zelest: so this file is in cache ...
[08:43:54] <IAD> in fs cache
[08:44:13] <Zelest> yeah
[08:44:29] <Zelest> meaning, both files are in memory really..
[08:44:48] <Zelest> yet gridfs wins unless I use keep-alive (as I assume the worker caches the file itself)
[08:47:19] <OpenGG> NodeX: so, do I need to switch to 64bit system to do db.repairDatabase()?
[08:47:58] <OpenGG> NodeX: Is there a way to shrink the file size on 32Bit system?
[09:50:36] <dorong> hello
[09:50:43] <ron> shalom doron!
[09:50:48] <dorong> shalom shalom
[09:51:31] <NodeX> happy independence day
[09:51:36] <dorong> Is the counter of flushes and total ms, counted from server start or for the entire lifetime of the installation of mongo ?
[09:51:51] <ron> NodeX: shuttup
[09:51:55] <NodeX> no
[09:52:00] <ron> yes.
[09:52:02] <NodeX> simple ;)
[09:52:10] <ron> bite me.
[09:52:23] <NodeX> I wouldn't want to catch something ;)
[09:52:42] <ron> you won't catch anything worse than what you already have.
[09:52:59] <NodeX> that's a lie and you know it
[09:53:18] <ron> that's a lie and YOU know it
[09:53:29] <ron> now answer dorong
[09:53:44] <NodeX> die()
[09:53:59] <ron> zombie()
[09:55:28] <kkszysiu_work> Hello
[09:56:06] <Zelest> anyone happen to know when MongoDB 2.2 is released for FreeBSD? :/
[09:56:10] <ron> dorong: sorry for that. I don't know the answer.
[09:56:24] <Zelest> the port maintainer seems fairly slow at keeping the ports up-to-date. :/
[09:56:52] <ron> Zelest: there's a simple solution to that. I can even think of two solutions.
[09:57:02] <dorong> surely somebody here knows the answer ?
[09:57:04] <NodeX> here we go
[09:57:12] <NodeX> rons "I know best" answers
[09:57:21] <NodeX> bring back MacYet
[09:57:26] <ron> kali probably knows the answer.
[09:57:40] <ron> NodeX: oh, go play with your php and shut up ;)
[09:57:45] <NodeX> lmao
[09:58:09] <Zelest> ron, Yes, I can port it myself.. or compile from source.. either way defeats the point of a ports system, where I shouldn't have to worry about being outdated. ;-)
[09:58:39] <ron> Zelest: that's one solution. the other solution is to use a more common OS and not have to worry about port maintenance ;)
[09:58:47] <NodeX> I like to take an hour out of my day to annoy you ron
[09:58:53] <NodeX> and it started 3 minutes ago
[09:58:54] <ron> not that there's anything wrong with FreeBSD.
[09:58:55] <kali> dorong: i have no idea... just bounce a server a tell me :)
[09:59:03] <Zelest> ron, What do you suggest? Debian/Ubuntu tends to hate me.
[09:59:09] <kali> dorong: ^ and
[09:59:13] <ron> Zelest: Centos.
[09:59:15] <ron> kali: thanks :)
[09:59:33] <Zelest> Worth a shot.
[09:59:46] <ron> any linux distribution really.
[09:59:47] <NodeX> I've never had troubles with debian + mongo
[09:59:56] <kali> debian is love
[10:00:07] <Zelest> debian just tends to hate me.. :(
[10:00:15] <ron> Zelest: as you can see, linux is fairly favorable around here.
[10:00:27] <Zelest> lol
[10:00:38] <Zelest> ron, yeah, Linux is great, no argue there. :-)
[10:00:47] <ron> FreeBSD has some nice things from what I hear, don't get me wrong, but as you can see, it has these kind of issues.
[10:00:52] <kali> NodeX: already happened yesterday
[10:00:53] <NodeX> tbh as long as you dont do anything stupid like I dunno code in java, you'll be fine
[10:00:55] <NodeX> lmao
[10:01:03] <Zelest> packages just tend to b0rk on me.. i want a certain version (like now) and the package is only available in uber-super-scary-testing branches..
[10:01:09] <NodeX> kali : to you ?
[10:01:32] <kali> NodeX: yep. i had two config servers on the same AZ, on EBS volumes
[10:01:39] <NodeX> dang :/
[10:02:01] <NodeX> I didn't see any ranting so I assume you got it back online okay?
[10:02:07] <Zelest> Might run mongodb in debian and the nginx/php in freebsd. :-D
[10:02:21] <kali> NodeX: 4hours down yesterday evening
[10:02:27] <ron> Zelest: oh sure, and those packages maintained by some who-knows-guy for freebsd.. that's much better ;)
[10:02:28] <NodeX> arf :(
[10:02:42] <kali> NodeX: mostly because i broke the 0th rule: don't do anything when you don't understand what's wrong
[10:02:55] <NodeX> nginx is fine in debian too, just make sure it's a source compile
[10:03:06] <NodeX> kali : I always break that rule
[10:03:11] <Zelest> the biggest drawback with linux really is iptables.. :(
[10:03:13] <Zelest> the horror
[10:03:15] <NodeX> I start restarting stuff LOL
[10:03:23] <NodeX> (when things break)
[10:03:34] <Zelest> NodeX, Windows-style? ;-)
[10:03:53] <kali> NodeX: this is exactly what i did yesterday... and it's the one thing not to do when you're down to one config server
[10:03:59] <NodeX> no, I leave the box restart till last seeing as we have a KVM thing which takes ages to login to
[10:04:09] <NodeX> nasty
[10:04:19] <NodeX> was it the app or amazon?
[10:04:58] <Zelest> Who maintains the documentation for the PHP-driver btw?
[10:05:06] <NodeX> Derick
[10:05:11] <NodeX> and 2 other guys
[10:05:15] <Zelest> Ah
[10:05:36] <Zelest> Then I'll poke Derick the next time he's on, or on twitter. :-)
[10:05:37] <NodeX> but he keeps forgetting things so he's off my xmas card list
[10:05:45] <Zelest> lol
[10:06:14] <Zelest> I just have some feedback regarding the gridfs docs.. they're quite tricky to understand. :-P
[10:06:17] <NodeX> mind you saying that about debian, about every 6 weeks my server decides it doesn't want me to login to SSH for NO reason, the whole thing then needs a full cold restart and fsck, it's very strange
[10:06:35] <Zelest> o_O
[10:06:41] <NodeX> Yer, the php docs in general are ambiguos
[10:06:53] <Zelest> php itself is a bit dodgy.. ;-)
[10:06:56] <NodeX> or however one spells that word
[10:07:13] <NodeX> I found it easier to write a wrapper around the driver
[10:07:24] <Zelest> but yeah, the gridfs docs for example have $grid->storeFile(filename, metadata) ..
[10:07:32] <Zelest> where i thought filename was the name I want to save the file as..
[10:07:40] <NodeX> no, it's the filename on the FS lol
[10:07:41] <Zelest> but no, it's the path to the file on disk to "copy" to db.
[10:07:44] <Zelest> mhm
[10:07:56] <NodeX> metadata is where you store all that sort of stuff
[10:07:57] <Zelest> took me a few turns to figure that out. :)
[10:08:00] <Zelest> yeah
[10:08:29] <Zelest> ugh
[10:08:32] <Zelest> i don't wanna work today
[10:08:36] <Zelest> I want to code on my own stuff :(
[10:08:41] <NodeX> I put all that in my wrapper and a cache time too, so I can grab files out of the cache (redis atm) for a period of time
[10:08:54] <Zelest> ah
[10:09:21] <Zelest> performance is important, no argue.. but the main reason I want to use gridfs is the failover sexyness in mongo
[10:09:38] <Zelest> a few nginx/php/mongo "nodes" ..
[10:09:44] <NodeX> it is pretty sweet for durability
[10:09:48] <Zelest> easy to scale up (seeing I will "only" have reads)
[10:10:02] <Zelest> and it's very failsafe and highly available
[10:10:18] <Zelest> if writes needs scalability, just setup a sharded environment of it all.
[10:18:24] <NodeX> scale up is a misleading term in these scenarios, "scale out" is probably better
[10:10:31] <Zelest> true
[10:10:48] <NodeX> someone once said in here, I think it was kali, if you're going to shard then do it early on
[10:10:59] <Zelest> ah
[10:11:02] <NodeX> they said it was the biggest mistake they didn't make or something
[10:11:18] <Zelest> well, i've been asked to code a webshop for a local store here.
[10:11:35] <Zelest> so my plan is to code a platform for it instead..
[10:11:38] <NodeX> ecommerce type app?
[10:11:39] <Zelest> so I easily can add another store later on
[10:11:40] <Zelest> mhm
[10:11:50] <NodeX> careful with transactions
[10:11:52] <NodeX> or lack of them
[10:11:59] <Zelest> mhm
[10:12:20] <Zelest> well, all important writes will have fsync and such enforced
[10:12:47] <NodeX> I wrote my payment platform with mongo and it went okay but I dont need to say subtract from one and give to another etc
[10:13:00] <Zelest> mhm
[10:13:44] <Zelest> the only thing (apart from transactions then, which i very rarely use) i lack in mongodb is a good fulltext search feature
[10:13:54] <NodeX> I also realised early on that all payment providers have a callback of some sort and http isn't stateful or transactional and if providers can do it with callbacks why can't my app - which worked out well
[10:13:58] <Zelest> i sort of fell in love with postgres tsearch2.. it's insanely powerful
[10:14:14] <Zelest> yeah
[10:14:14] <NodeX> check out SOLR / Elastic Search
[10:14:25] <Zelest> mhm
[10:14:37] <NodeX> www.jobbasket.co.uk <--- mongo backed, SOLR powered search
[10:14:55] <Zelest> aah
[10:14:56] <Zelest> cool
[10:15:16] <Zelest> sket = "took a shit" in swedish
[10:15:16] <NodeX> 10gen did say that FTM would arrive in mongo at some point
[10:15:18] <Zelest> jobba = work
[10:15:19] <Zelest> :D
[10:15:25] <NodeX> hahahahah
[10:15:33] <NodeX> that's awesome
[10:15:47] <Zelest> but once I saw the colors, job basket made more sense. :)
[10:16:05] <NodeX> I hope so
[10:16:11] <NodeX> we dont support 2girls1cup
[10:17:22] <Zelest> don't ask me how i know.. but that movie is actually called "hungry bitches" .. :-D
[10:17:26] <Zelest> mfx media productions :D
[10:17:29] <NodeX> yuck
[10:17:31] <Zelest> haha
[10:17:47] <Zelest> ugh
[10:17:52] <Zelest> why oh why do we use mysql at work :(
[10:18:03] <NodeX> because it's easy
[10:18:10] <Zelest> so is postgresql
[10:18:33] <Zelest> and postgresql is a lot more functional and takes more care about my data
[10:18:47] <Zelest> mysql throws it in a basket and hopes for the best :/
[10:19:06] <NodeX> postgres has the advantage of not being the number 1 SQL platform so it can learn from mysql mistakes
[10:19:16] <Zelest> i guess
[10:19:42] <NodeX> that said, I am sure at one point mysql was a good database, before it got hacked to death for ease of use
[10:19:43] <Zelest> i do like the mysql replication though.. again, because it's easy..
[10:19:53] <Zelest> mhm
[10:19:59] <NodeX> which I hope does not happen to Mongo as it becomes more popular
[10:20:13] <Zelest> i really need to learn more about the aggregation framework
[10:20:25] <Zelest> since that's where i think mongo "lacks" the most.. or well, my skills with mongo lacks the most.
[10:20:38] <Zelest> group, distinct, sum/count/avg, etc
[10:20:44] <NodeX> I wrote replication in mysql about 7 years or so ago using the mongos model of a config server and other servers asking it for data
[10:21:02] <Zelest> yeah, that's how we use it atm
[10:21:19] <NodeX> I had 3 masters synched around the country with (near) realtime data to each of those
[10:21:22] <ron> NodeX: omg, you're so kool!!!!!1111
[10:21:36] <NodeX> thanks ron, that means alot :)
[10:21:41] <ron> ;)
[10:22:37] <NodeX> I even mimicked 24's "send it to my screen" for contact record management and opening across a WAN in a web based CRM which was pretty cool
[10:23:27] <NodeX> jealousy will get you nowhere
[10:34:40] <Paizo> \o hi
[10:36:11] <Paizo> Do you know when the version 1.3.0 of the php driver is going to be released? I can't find this info in the group or in jira
[10:36:24] <NodeX> full version?
[10:37:56] <NodeX> https://github.com/mongodb/mongo-php-driver
[10:38:02] <NodeX> I assume watch the commits and see
[10:41:36] <IAD> NodeX: pear install mongo
[10:42:48] <IAD> *pecl
[10:44:49] <NodeX> IAD?
[10:44:58] <IAD> it's 2 Paizo
[10:52:43] <Paizo> kk, that's a development version, i can't use it on a production environment. I would like to schedule the mongodb 2.2 update together with the php driver when it's officially out
[10:54:43] <NodeX> I use it fine in production tbh
[11:47:02] <remonvv> Which means the real question is, do we trust NodeX as an engineer?
[11:47:10] <remonvv> And I think we all know the answer to that question, don't we.
[11:48:32] <NodeX> is this attack me day or something
[11:48:35] <NodeX> perhaps I missed the memo
[11:48:44] <remonvv> No no no, we don't have specific days for that.
[11:48:52] <NodeX> this is a bonus day?
[11:49:03] <remonvv> Well see it more as if it's a permanent thing.
[11:49:11] <remonvv> Much like time passing and me being awesome.
[11:49:23] <remonvv> And modest, let's not forget modest.
[11:50:12] <NodeX> modesty is the best policy
[11:51:15] <remonvv> I kid kind sir.
[11:51:31] <remonvv> Ha, someone endorsed me on LinkedIn for the skill "Receiving Endorsements"
[11:51:35] <remonvv> I know that would take off.
[11:51:38] <remonvv> knew*
[11:52:27] <NodeX> hahah
[11:52:38] <NodeX> I'll join linkd in one of these days
[11:52:44] <remonvv> It's pretty cool.
[11:52:56] <remonvv> I don't have FB but I do have LI and Twitter
[11:52:58] <NodeX> I keep getting spam from them or rather someone pretending to be them
[11:53:13] <remonvv> Why wouldn't you be on LI anyway?
[11:53:23] <remonvv> It's actually the most decent social network
[11:53:32] <NodeX> that's a really good point
[11:54:00] <remonvv> Well, let me know when you join
[11:54:05] <NodeX> I like google plus, I use facebook for keeping in contact with my old friends
[11:54:20] <NodeX> I seldom use twitter except for posting jobs
[11:54:26] <NodeX> I'll join this afternoon
[11:54:46] <NodeX> then we can bash ron and get skill points
[11:55:45] <remonvv> Okay!
[11:54:46] <NodeX> tbh it's pretty silly that I'm not on there seeing that I own a job board and most of the world's HR people are on there
[11:56:50] <remonvv> Indeed, indeed it is.
[11:59:35] <remonvv> I have ron on LI actually
[11:59:56] <remonvv> He has yet to endorse me for my skills though. I feel wronged somehow.
[12:02:11] <NodeX> Ok I joined
[12:12:12] <remonvv> http://www.linkedin.com/in/remonvanvliet
[12:13:28] <remonvv> ppetermann, I see you..
[12:13:48] <ppetermann> remonvv: =)
[12:14:02] <ppetermann> interesting, ex machina did a project for one of my ex employers
[12:14:11] <remonvv> That actually sounds like an interesting product, by the way
[12:14:12] <remonvv> Oh?
[12:14:13] <ppetermann> but that project was long after i left
[12:14:33] <remonvv> If it was a long time ago it was probably before our reboot
[12:14:38] <ppetermann> remonvv: you guys did something for viva.tv
[12:15:10] <ppetermann> nah its on your current webpage, assuming exmachinagames is yours ;)
[12:15:12] <ron> remonvv: oh, I was supposed to endorse you for something?
[12:15:14] <ron> :)
[12:15:28] <NodeX> connect thing sent
[12:15:32] <ppetermann> remonvv: http://www.exmachinagames.com/2012/05/viva-tv-clipquiz/
[12:15:35] <remonvv> Haha, look at Nodex!
[12:15:43] <remonvv> "1 Connection"
[12:15:46] <ron> oh sure, send him an invite but don't send me.
[12:15:47] <ppetermann> i used to work for viva aloooooot of years ago, before they where bought by viacom/mtv
[12:15:49] <NodeX> I'm the male lol
[12:16:05] <remonvv> ppetermann, oh right, that's still going. That's pretty recent.
[12:16:11] <NodeX> that's my very drunk girlfriend last year
[12:16:29] <remonvv> Oh I was laughing at the 1 connection bit, not your picture.
[12:16:31] <ppetermann> remonvv: also, you being dutch means you might remember thebox?
[12:16:33] <remonvv> You're handsome
[12:16:40] <NodeX> thanks
[12:16:43] <remonvv> ppetermann, I do.
[12:16:47] <NodeX> I'm gonna stalk ron now
[12:16:58] <ron> \o/
[12:17:04] <NodeX> hmmz, "ron" is not a lot to go on when searching for someone
[12:17:05] <ppetermann> remonvv: i worked on that as well, thebox was viva's daughter company back then
[12:17:06] <remonvv> I'm straight, I just like to make people feel uncomfortable.
[12:17:28] <remonvv> Aha, well, before TheBox even aired here I made the chatroom for TMF
[12:17:29] <ron> NodeX: no kidding. but how many "ron"s does remonvv have in his connections?
[12:17:30] <NodeX> I dont feel uncomfortable lol, I know I am good looking
[12:17:38] <remonvv> The working one, not the free piece of shit they replaced it with.
[12:17:40] <NodeX> ah good point
[12:18:20] <remonvv> I hope I wont get fired for accepting 3 PHP people to my LI.
[12:18:23] <NodeX> he doesnt let users browse connections so that's a bust
[12:18:23] <remonvv> You know, it's a risk.
[12:18:40] <ppetermann> haha tmf headquarters were a stone's throw away from the box.. i was there for an integration meeting when viacom bought viva
[12:18:41] <remonvv> Company policy sir. Competitive business ;)
[12:18:58] <remonvv> I was on TV once, on TMF Club
[12:19:07] <remonvv> Well, the back of my head was.
[12:19:14] <ppetermann> hehe, i've been on tv so many times i lost count =)
[12:19:15] <NodeX> haha
[12:19:23] <remonvv> Can you all please endorse me for my skill "Receiving Endorsements". I'm making it a thing.
[12:19:28] <ppetermann> lol
[12:19:36] <NodeX> how do i do that ?
[12:19:55] <remonvv> Go to my profile, go to skills, click on endorse
[12:20:21] <ppetermann> i need people to endorse me for my other skills.. i hate that people always only recognize me for php
[12:20:22] <remonvv> you have to type it if it doesn't auto suggest
[12:20:29] <remonvv> "Receiving Endorsements"
[12:20:31] <remonvv> like that
[12:21:05] <remonvv> It's my social comment on the most bullshitty feature of LI
[12:21:19] <ron> remonvv: what would you like endorsed? I can endorse you for being a sex idiol if you want.
[12:21:23] <ron> idol
[12:21:36] <ppetermann> doesnt let me add it
[12:21:39] <NodeX> nor me
[12:21:55] <NodeX> when you click on it, it goes to another page
[12:22:17] <ppetermann> ah now it works =)
[12:22:21] <remonvv> dont click, it says "Or type other skills here"
[12:22:25] <ppetermann> its in his list now
[12:22:33] <ppetermann> you can simply click on it in the list
[12:22:35] <remonvv> ron, I don't think my boss would much appreciate that one really
[12:22:40] <remonvv> Oh or that, I don't know.
[12:22:48] <remonvv> If you need something endorsed let me know
[12:23:14] <ppetermann> prefer having people to endorse me when they know what i can do.. for you i know you can get endorsements =)
[12:24:04] <remonvv> That's what it is supposed to do but I'm getting endorsements for "Database Engineering" from a marketing person at a big media company and stuff like that.
[12:24:14] <remonvv> It's sort of a popular vote sort of thing because it gets autosuggested.
[12:24:26] <remonvv> Hence my "Receiving Endorsements" skill.
[12:24:31] <remonvv> Which, worryingly, is climbing to the top.
[12:24:55] <ppetermann> well the positive thing about endorsements is that you can see who endorses what
[12:25:34] <ron> NodeX: glad you managed.
[12:25:43] <NodeX> finally found you
[12:25:43] <remonvv> true, but i don't have the time to remove endorsements that don't make sense
[12:26:10] <ron> should I endorse you for begging for endorsements?
[12:26:12] <ppetermann> haha, i should start getting people to endorse my skills in online games =)
[12:26:21] <remonvv> "Wow Questing"
[12:26:24] <NodeX> lol
[12:26:39] <NodeX> trolling
[12:26:45] <ppetermann> trolling!
[12:26:47] <ppetermann> i'd take that one
[12:26:55] <ron> okay, time to go do some work.
[12:28:07] <NodeX> dude you start work very late
[12:28:26] <ppetermann> depends on where hes from
[12:29:28] <NodeX> israel
[12:29:36] <NodeX> utc+1 iirc
[12:30:32] <ppetermann> then yeah, its late
[12:31:10] <NodeX> lazy people!
[12:33:08] <remonvv> Who's Igor Dolgov here?
[12:35:37] <ppetermann> as it seems no one =)
[12:36:05] <remonvv> Someone with that name just connected in LI. Good to have the handle<-> name combo ;)
[12:37:50] <ppetermann> the reason why i use my rlname here
[12:39:11] <doxavore> Are there some normal causes of this message in 2.0.7? warning: virtual size (2195998MB) - mapped size (2190772MB) is large (4255MB). could indicate a memory leak
[12:39:59] <remonvv> same
[12:40:38] <remonvv> doxavore, what environment? that's 2 TB of virtual memory
[12:43:17] <NodeX> holy cow
[12:44:58] <doxavore> remonvv: ubuntu 12.04 x64, all updates applied
[12:45:14] <doxavore> NUMA disabled (interleaved=all)
[12:45:19] <remonvv> and mongostat also reports a high vsize?
[12:45:38] <doxavore> yep, 2TB
[12:45:54] <doxavore> 1TB mapped, 2TB vsize
[12:46:02] <remonvv> Hm, well I suppose it's technically possible. How long has that node been up?
[12:46:13] <doxavore> 30 mins :)
[12:46:32] <doxavore> though this has been happening for a couple days AFAICS
[12:46:47] <remonvv> Weird, any degraded performance?
[12:47:00] <doxavore> quite
[12:47:15] <remonvv> How large is the dataset for that node?
[12:47:16] <remonvv> db.stats()
[12:48:38] <doxavore> about 650GB
[13:05:43] <doxavore> Anyone know much about the ruby driver? In addition to the above MongoDB messages, I'm starting to get pool.ping_time==nil at: https://github.com/mongodb/mongo-ruby-driver/blob/master/lib/mongo/util/pool_manager.rb#L259
[13:12:31] <mnaumann> hi, i've setup my first abiter yesterday, now i realise it doesn't use authentication (the other nodes do). how can i add authentication to the mongo shell of an arbiter? do i need to?
[13:16:09] <remonvv> if you can't remove auth altogether (you shouldn't really need it) you can add it to the arbiter just like any other node afaik.
[13:16:28] <remonvv> doxavore, not sure what's happening but it sounds like your node is having to deal with more data than the hardware can handle.
[13:17:05] <doxavore> it was running fine on a smaller machine - this one has faster disks and 2x the RAM :-/
[13:17:47] <remonvv> Ah, that does make it a little weird. It's hard to diagnose from a distance.
[13:18:15] <remonvv> Is it hot? As in did you give it time to swap the warmer data into memory?
[13:18:29] <remonvv> faults in mongostat should be an indication
[13:18:50] <doxavore> yeah, it's hanging around 0-2 faults/sec, trending toward 0
[13:21:50] <remonvv> replica set?
[13:36:41] <doxavore> remonvv: Yes, all servers more powerful than what we were running on a few days ago. All reads go to master.
[13:44:51] <remonvv> Strange.
[13:45:55] <remonvv> Ruby isn't a language I'm very comfortable with but your nil exception seems to be something that should never happen
[13:47:38] <remonvv> your application is actually configured to connect to the repset as a repset I assume?
[13:55:02] <doxavore> remonvv: Yes, the config is much the same as we've been using for about 18 months now, save for adding pool_size after moving to JRuby recently
[13:55:54] <doxavore> I'm at a loss as to cause/effect, but we also see periods where the DB seems to go into a complete lock, with locked % reported by mongostat of 500+ and queries dropping to 0
[14:04:05] <remonvv> hm, and you have results for db.currentOp() during that time?
[14:14:42] <[AD]Turbo> I'm getting this error (updating a capped collection): MongoError: failing update: objects in a capped ns cannot grow
[14:14:47] <[AD]Turbo> what does it mean?
[14:16:00] <[AD]Turbo> mongodb 2.0.4 64bit
[14:18:04] <remonvv> [AD]Turbo, exactly that. You cannot update documents in a capped collection that cause the document to grow in size. Only in-place operations are allowed.
[14:18:37] <[AD]Turbo> ah
[14:18:40] <[AD]Turbo> I see
[14:42:16] <remonvv> The reason is that mongodb reserves a specific amount of bytes for that documents and it counts on that not changing to be able to do some of the optimizations that come with capped collections
[14:56:46] <[AD]Turbo> remonvv, many thanks!
[14:58:36] <kali> remonvv: you can workaround this by inserting padding when you create the documents in the cap collection (add a field: padding: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX") and $unset it when you add your fields
[14:58:50] <kali> [AD]Turbo: not remonvv
[14:58:55] <kali> i know it's ugly
[15:00:41] <NodeX> we love hackish things
[15:00:47] <NodeX> they're what make the world go round
[15:04:21] <[AD]Turbo> kali, thanks for the tip
[15:09:48] <remonvv> kali, true ;) But when you find yourself updating things in capped collections that require that hack a lot you're probably using them wrong
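A sketch of kali's padding trick (field names hypothetical). The padding must be at least as large as whatever the later update adds, since documents in a capped collection may never grow in place:

    // reserve space at insert time
    db.capped.insert({job: 42, status: "pending",
                      padding: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"})
    // later: swap the padding for the real fields in one in-place update
    db.capped.update({job: 42},
                     {$set: {status: "done", result: "ok"},
                      $unset: {padding: 1}})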
[15:15:26] <aminal> hey guys, i'm getting errors like this: Assertion failure: _unindex failed: assertion db/pdfile.h:474 all throughout my logs for two of my replica sets. i'm running a 3 shard x 3 repl set member setup on version 2.0.4. googling this kind of sends me all over the place. any ideas what could be going on here?
[15:16:57] <TTimo> is there a way I can use the mongo shell on a secondary to run read only operations? I keep getting a "not master and slaveOk=false" and I don't understand how to work around it
[15:17:19] <TTimo> in this particular case, I enabled profiling on this secondary, and I can't even look at the results
[15:18:26] <MongoDBIdiot> connect to the slave
[15:20:34] <TTimo> I don't have slaves, unless you mean secondaries in the replica set, which is more or less the same, and that's what I'm connected to ?
[15:22:25] <kali> remonvv: definitely :) but worth mentioning if it can help somebody
[15:23:05] <kali> TTimo: yep, slave == secondary. run rs.slaveOk() in the shell
[15:23:25] <kali> TTimo: slave refers to the obsolete "master/slave" replication system
[15:23:48] <TTimo> yup that was my understanding as well
[15:23:53] <TTimo> how are you btw :)
[15:24:17] <kali> TTimo: quite good, octplane says hi
[15:25:07] <TTimo> ah :)
[15:25:41] <TTimo> is that on #francaise? which network is this on again
[15:26:41] <kali> TTimo: freenode now, but it's quiet :)
[15:28:41] <coldwind> I'm developing an C# app with MongoDB where I store per-user tags for each document; I have tried two of the possible dictionary representations that the C# driver provides by default but both have serious problems for querying ( http://pastie.org/5104033 ). Anyone got a suggestion about how to approach this?
[15:33:27] <coldwind> The main problem with the Document representation is that keys (usernames) would need to be strings with a limited set of characters or escaped; another problem is that .distinct() does not work on usernames. The problem with ArrayOfDocuments representation is that I cannot "$addToSet" to a tag to a user that does not exist yet.
[15:39:38] <Derick> NodeX: I forget things because there is no Jira ticket :P
[15:50:21] <Exitium> Hey, since upgrading to 2.2, we're getting a lot of cursor timeouts in php, any suggestions?
[15:51:17] <MongoDBIdiot> well, you could use a database...
[15:51:31] <Exitium> Erm?
[15:55:12] <Exitium> I can provide much more detailed information if someone wants a bash at helping to resolve the issue?
[15:56:13] <MongoDBIdiot> kick mongodb, use postgres
[15:57:39] <Exitium> I see where you get your nick.
[15:59:56] <NodeX> 1 sec Exitium :
[16:00:11] <Exitium> kk
[16:01:04] <NodeX> can you show me the connection string?
[16:01:12] <NodeX> I'll help you debug
[16:01:19] <Exitium> Sure
[16:01:50] <Exitium> $con = new Mongo("mongodb://db1:27017");
[16:02:02] <MongoDBIdiot> perfect, db1 :->
[16:02:25] <NodeX> are the timeouts in queries or connections?
[16:02:32] <Exitium> Queries
[16:02:55] <Exitium> Actually, let me ask the dev to double check, 2 secs
[16:03:07] <NodeX> ok, the default timeout is 30 seconds, can you paste the explain() of a query
[16:03:33] <Exitium> Yep, can confirm it's on query
[16:03:44] <Exitium> Let me get an explain real quick
[16:04:20] <NodeX> @Derick : it was a joke!
[16:04:46] <Exitium> Hmmm, unfortunately, the lead dev just went into a meeting -.-
[16:04:59] <NodeX> for a quick fix you can add this to your code...
[16:05:09] <Exitium> timeout = -1? :)
[16:05:17] <NodeX> MongoCursor::$timeout = -1;
[16:05:50] <Exitium> We did, but then we noticed a major slowdown, but when tailing the mongo logs, the longest running query was 278ms… :/
[16:06:18] <Exitium> We've currently got our hosts checking all the hardware, switches, firewalls and cables for a possible problem
[16:06:33] <Exitium> Just wondered if it was a known 2.2 issue
[16:06:33] <NodeX> the slowdown is because it's not timing out
[16:06:52] <Exitium> Ahhh, well, that does make sense :P
[16:06:55] <NodeX> they started appearing for me at 2.2 also in certain queries, so I adjusted some of my indexes and it went away
[16:07:15] <Exitium> An index issue then?
[16:07:30] <NodeX> I think the reporting got a little better perhaps in 2.2 but I also updated the driver so it could be that too
[16:07:35] <NodeX> for me it was indexes
[16:07:44] <NodeX> an upsert on one iirc
[16:08:00] <Derick> NodeX: so I *am* getting an Xmas card? :-)
[16:08:00] <NodeX> - upsert on an unindexed field against 4m docs
[16:08:09] <NodeX> Yes Derick : dont worry ;)
[16:08:11] <NodeX> :P
[16:08:28] <Exitium> We only have 2 or 3 indexes on that collection I think, not 100% sure, I didn't write the software, but I'll add that to the list to explore
[16:08:33] <NodeX> if you add domain sockets back you can have two cards ;)
[16:08:58] <NodeX> also if it's an update a safe=true might stall it
[16:09:13] <NodeX> if for instance it's waiting for a "w=1" from the node
[16:09:28] <Exitium> No safes methinks
[16:09:58] <NodeX> :S
[16:10:00] <Exitium> Nope, just checked a portion of the code, just a normal update
[16:10:19] <NodeX> I would say it's index bound and that's what's causing the timeout
[16:10:28] <NodeX> unless it's something really spooky
[16:11:12] <Exitium> Problem is, I've only really been told of the issue and been told to fix it lol, I didn't actually write the code or I'd have a better understanding of what's going on -.-
[16:11:34] <Exitium> I don't even know how many indexes they're using, or need etc.
[16:11:56] <NodeX> monitor the query log
[16:12:09] <Exitium> Is it possible to downgrade to the prior version temporarily?
[16:12:12] <NodeX> look for the query that it happens most on, it will tell you what happened
[16:12:15] <NodeX> then add the index
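A sketch of that workflow in the 2.2-era shell (collection and field hypothetical):

    db.events.find({user_id: 12345}).explain()
    // "cursor" : "BasicCursor" plus a large "nscanned" means a full collection scan
    db.events.ensureIndex({user_id: 1})
    // re-run explain(): "BtreeCursor user_id_1" confirms the index is being used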
[16:12:23] <MongoDBIdiot> same as with upgrading
[16:13:03] <Exitium> We have an event tomorrow, so just need it working at 100%, then I can focus more on debugging it
[16:13:08] <NodeX> Exitium : you can just stop that one and spin up another one
[16:13:11] <NodeX> using the same data dir
[16:13:50] <Exitium> Okies, that could work. No data incompatibilities?
[16:14:02] <NodeX> not from 2.0
[16:14:06] <Exitium> I doubt it, but best to ask
[16:14:15] <NodeX> 1.1 might be a problem :P
[16:14:15] <Exitium> I think we were 2.0.6 before, not 100% sure
[16:15:44] <Exitium> brb
[16:19:20] <Exitium> back
[16:24:52] <Exitium> Thanks for the help and pointers NodeX, much appreciated. I may be back, but hopefully we find a good fix :)
[16:26:20] <NodeX> no probs, good luck
[16:27:17] <MongoDBIdiot> hopefully not
[16:33:58] <NodeX> what's the point in talking MongoDBIdiot if you're going to be not only unhelpful but a hindrance
[16:34:01] <NodeX> ?
[16:34:22] <MongoDBIdiot> how are you?
[16:34:26] <MongoDBIdiot> channel police?
[16:34:57] <_m> NodeX: Just /ignore a fool
[16:35:51] <NodeX> it's funny that trolls actually have nothing better to do than be retarded in a channel whose sole purpose is to help people
[16:36:02] <MongoDBIdiot> done
[16:37:05] <NodeX> bye bye
[16:37:13] <NodeX> time for food
[16:37:17] <MongoDBIdiot> NodeX: i thought you are nominated as mongodb fool?
[16:38:01] <NodeX> tut tut
[16:38:39] <NodeX> sorry, I didn't hear you?
[16:38:45] <MongoDBIdiot> then stfu
[16:38:56] <NodeX> or what?
[16:39:05] <NodeX> what exactly are you going to do LOL
[16:40:04] <TTimo> he's good at what he does. very dedicated.
[16:40:11] <NodeX> lol
[16:40:20] <NodeX> he is dedicated to being his nickname
[16:40:34] <NodeX> a person very angry at the world :/
[16:40:42] <NodeX> I feel sorry for him
[16:41:02] <NodeX> has probably never touched a naked member of the opposite sex
[16:41:16] <MongoDBIdiot> NodeX: how could you survive your abortion?
[16:41:22] <NodeX> quite easily
[16:41:29] <NodeX> I had suckers for feet
[16:41:33] <MongoDBIdiot> i thought they get rid of idiots in life pretty early :-)
[16:41:53] <NodeX> dude, what's the difference between your mom and my washing machine
[16:42:07] <MongoDBIdiot> what's the difference between an idiot and you?
[16:42:09] <MongoDBIdiot> none:)
[16:42:10] <NodeX> when I dump a load in my washing machine she doesn't follow me round for three days
[16:42:37] <NodeX> is that honestly the best you can come up with?
[16:42:54] <NodeX> what are you like 12?
[16:43:00] <MongoDBIdiot> your mother fucked with a pig?
[16:43:08] <NodeX> she did
[16:43:12] <NodeX> that's how I was born
[16:43:17] <MongoDBIdiot> as we can see
[16:43:20] <MongoDBIdiot> and you're the result
[16:43:23] <MongoDBIdiot> freak
[16:43:27] <NodeX> Correct
[16:43:45] <MongoDBIdiot> can someone kick this idiot?
[16:43:49] <NodeX> lol
[16:43:53] <NodeX> no probs
[16:44:32] <MongoDBIdiot> ciao, time for bed
[16:44:35] <MongoDBIdiot> *g*
[16:44:39] <NodeX> you and MacYET both have the same attitude, and you're both german
[16:44:48] <NodeX> you give your fellow countrymen a bad name
[16:45:00] <NodeX> or perhaps you are he
[16:45:01] <_m> This conversation is an epic facepalm.
[16:45:23] <NodeX> I like to do my bit for charity
[16:45:32] <NodeX> he probably doesn't get played with at home
[16:46:06] <NodeX> lmao
[16:46:12] <NodeX> oink oink
[16:55:49] <MongoDBIdiot> here we go *g*
[16:55:56] <NodeX> welcome back
[16:56:06] <MongoDBIdiot> yes, my dear
[16:56:08] <NodeX> I missed you
[16:56:19] <NodeX> my life felt incomplete without you to talk to
[16:56:21] <MongoDBIdiot> i thought so
[16:56:36] <MongoDBIdiot> mine as well, I missed you, worthless piece of IT crap
[16:56:41] <NodeX> so tell me, why are you such an arrogant obnoxious person?
[16:56:52] <MongoDBIdiot> you're Eliza?
[16:56:59] <MongoDBIdiot> may I tell you my problem?
[16:57:01] <NodeX> maybe
[16:57:10] <NodeX> if you answer my question
[16:57:16] <NodeX> are you related to MacYET?
[16:57:37] <MongoDBIdiot> obnoxious is in the eye of the beholder
[16:57:41] <MongoDBIdiot> perhaps look into the mirror
[16:58:04] <NodeX> I am and I see me but please answer why you're so angry at the world
[16:58:12] <NodeX> or at least angry at #mongodb
[16:58:21] <MongoDBIdiot> Weltschmerz!
[16:58:34] <NodeX> doesn't comply
[16:58:48] <MongoDBIdiot> do you love me?
[16:58:55] <NodeX> very much
[16:59:05] <MongoDBIdiot> do you want to lick my dick?
[16:59:16] <NodeX> only if it's not clean
[16:59:21] <MongoDBIdiot> perfect
[16:59:24] <MongoDBIdiot> and shaved
[16:59:49] <NodeX> but seriously, why do you hate everyone
[17:00:14] <NodeX> are we going to read about you on the news tomorrow where you went to a school with a gun and shot someone for bullying you?
[17:00:50] <_m> That awkward moment when CouchDB stopped relaxing…
[17:01:02] <MongoDBIdiot> we only shot mongodb idots
[17:01:06] <MongoDBIdiot> :-)
[17:01:29] <NodeX> we?
[17:02:09] <MongoDBIdiot> i me you
[17:02:11] <MongoDBIdiot> all
[17:02:13] <MongoDBIdiot> the world
[17:02:15] <MongoDBIdiot> the collective
[17:02:19] <NodeX> ok cool beans
[17:02:26] <NodeX> so why are you so angry at "we"
[17:02:44] <MongoDBIdiot> you are truly the #mongodb idiot here
[17:02:53] <MongoDBIdiot> IQ close to zero I would assume
[17:03:05] <NodeX> that being said, why are you angry at everyone?
[17:03:09] <MongoDBIdiot> which comes close to the average IQ of MongoDB devs
[17:03:21] <NodeX> cool story kid, but again why so angry?
[17:03:39] <MongoDBIdiot> you bother me, sister
[17:03:52] <NodeX> how so?
[17:04:01] <NodeX> because I know what I am doing, and that annoys you?
[17:04:04] <MongoDBIdiot> you little dirty cock sucker :)
[17:04:08] <MongoDBIdiot> now bed time :)
[17:04:21] <NodeX> lolol
[17:14:46] <jrdn> lmao
[17:17:51] <tystr_> lol...wtgff
[17:21:27] <NodeX> dang XBL update
[17:23:32] <jrdn> MongoDBIdiot is my hero
[17:25:28] <NodeX> mine too
[17:25:41] <NodeX> when I grow up I want to be just like him
[17:42:52] <Almindor> is there a way to get a dynamic document in csharp without mapping it to a class?
[17:43:21] <Almindor> we have subdocuments which could contain virtually anything in them and we need to get them into mono/csharp and work with them
[18:01:03] <NodeX> Almindor : that's kind of a large point of mongo / nosql so it should be possible
[18:26:16] <snizzzzle> I'm considering switching my entire database from mysql to mongo but I've heard that there are some reliability issues with mongo. Specifically, data corruption I've heard has been a problem. Is this true and what are some issues that I should familiar myself with?
[18:26:50] <NodeX> define corruption
[18:27:11] <TTimo> you shouldn't run standalone servers in production. basically you need to look at a 3 node replica set as your starting point
[18:27:29] <TTimo> that means a bit more infrastructure and setup difficulty
[18:27:32] <snizzzzle> NodeX: A thread was killed in the middle of execution and has corrupted a row of data
[18:27:59] <snizzzzle> TTimo: Are you referring to my question?
[18:28:09] <Gargoyle> snizzzzle: MySQL and Mongo are two totally separate things. I doubt you want to "switch" from one to the other. More likely, you are "Going to rewrite your app and make use of mongo in the new version"?
[18:28:11] <TTimo> yeah
[18:28:59] <snizzzzle> Gargoyle: No, mongo integrates nicely using an ORM. Implementation would be the same.
[18:29:30] <Gargoyle> snizzzzle: That's odd. Mongo is not relational!
[18:29:43] <NodeX> snizzzzle : in a data store sense maybe but the functionality differs
[18:30:18] <Gargoyle> Derick: Are you around?
[18:30:29] <snizzzzle> Gargoyle: I know but django facilitates the use of this. I will first consider using mysql for my main purpose database and then using mongo as a side db for collections of data.
[18:31:07] <TTimo> snizzzzle: mongoengine or django-mongo ?
[18:31:47] <snizzzzle> CTO didn't specify
[18:33:14] <snizzzzle> TTimo: We would be starting with using mongo as a side option and then eventually pushing everything over to mongo but I wanted to see if there are reliability and data corruption issues....
[18:33:21] <TTimo> anyway, there are plenty of options to balance how fast versus safe you want to be with your data
[18:34:09] <NodeX> snizzzzle : if a thread dies in a mysql operation what happens?
[18:34:32] <TTimo> yeah that particular example is probably no different
[18:34:39] <NodeX> :D
[18:35:04] <TTimo> I have yet to see something like that happen in mongo, but I've only been running this stuff for a few weeks :)
[18:35:07] <snizzzzle> NodeX: Mysql should be atomic in all operations.
[18:35:19] <NodeX> should be or is?
[18:35:21] <snizzzzle> NodeX: Either the write happens or it doesn't
[18:35:31] <NodeX> mongo is the same
[18:35:40] <NodeX> fire and forget ;)
[18:36:00] <snizzzzle> NodeX: InnoDB is
[18:36:06] <snizzzzle> NodeX: pretty sure
[18:36:12] <Gargoyle> snizzzzle: Mongo just gives you the option of not waiting around to find out! :)
[18:36:27] <TTimo> there are some interesting options though. you can have your operations wait until some number of replication slaves report having replicated your change
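In the 2.2-era shell that knob is the w parameter of getLastError, issued after a write; drivers expose the same thing as a write-concern/"safe" option. A sketch:

    db.orders.insert({sku: "abc", qty: 1})
    db.runCommand({getLastError: 1, w: 2, wtimeout: 5000})
    // returns once 2 members have the write, or errors after 5 seconds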
[18:36:31] <NodeX> snizzzzle : you're talking about transactions?
[18:36:37] <NodeX> if so they do not exist in mongo
[18:36:46] <snizzzzle> NodeX: oh
[18:37:33] <snizzzzle> NodeX: We're trying to get past the mysql issue of doing migrations on production. It takes down our service about 10 minutes for a single added column...
[18:39:18] <NodeX> I dont know what that means sorry
[18:39:41] <TTimo> snizzzzle: yeah you'll get that
[18:39:52] <TTimo> seems like a rather light reason to undergo such a transition though
[18:40:08] <TTimo> if you are expecting a drop-in replacement, you'll have some surprises
[18:40:18] <TTimo> although that depends how complex your RDBMS code is
[18:40:34] <snizzzzle> TTimo: It's enough to become a problem for us
[18:40:34] <TTimo> if you're not really doing anything that needs to be transactional to work, you might be fine
[18:40:44] <TTimo> oh you know the problem you have now
[18:40:51] <TTimo> do you know the problem you're about to take on :)
[18:41:11] <aster1sk> Hey mongoers, attempting to perform an "OR LIKE" but not having much luck.
[18:41:17] <snizzzzle> TTimo: No, haha. That's why I'm here
[18:41:42] <Gargoyle> aster1sk: Your looking for $or and regex
[18:41:55] <NodeX> aster1sk : you wont find anything that's efficient
[18:42:02] <aster1sk> NodeX: Yup.
[18:42:17] <aster1sk> It's the order I need.
[18:42:28] <aster1sk> Does the $or wrap the $regex or the other way around?
[18:43:28] <mordocai_work> Hello, I have a question on how to best do something the "mongo way" or nosql way. I have an inventory management app I'm working on and I found that on certain things I need what amounts to an enumeration in sql. I want to be able to store a list of options to be able to be used for a dropdown, but I want them in the database so they can be changed easily through the web interface. What is the best way to do this with mongo? (I
[18:43:33] <Gargoyle> aster1sk: IIRC it $or: [ {}, {}, {} ] with your conditions in the {} as normal.
[18:43:34] <NodeX> hoiw you mean aster1sk ?
[18:44:11] <Gargoyle> aster1sk: If you anchor your regex at the start, it will use an index (as long as you don't specify case insensitive option) .
[18:44:16] <aster1sk> Ok I have an audio player app (it's really cool) -- I have a document with id3 tags from mutagen.
[18:44:35] <aster1sk> I'm searching artist, track title or album.
[18:44:53] <aster1sk> Regex is case insensitive.
[18:45:07] <NodeX> you might be better putting them in an "idx" field
[18:45:10] <NodeX> as an array
[18:45:10] <Gargoyle> aster1sk: Only if you tell it to be!
[18:45:10] <aster1sk> It's not a massive document and performance is not a huge concern, it's not a public facing app.
[18:45:35] <mordocai_work> I of course thought of just making a collection containing them, but I was wondering if there was a better way.
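A sketch of the collection-of-options approach, which is the common answer here (names hypothetical):

    db.options.insert({_id: "condition", values: ["new", "used", "refurbished"]})
    db.options.findOne({_id: "condition"}).values        // feed the dropdown
    db.options.update({_id: "condition"},                // edited via the web UI
                      {$addToSet: {values: "for-parts"}})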
[18:45:38] <aster1sk> Gargoyle: yeah, sorry I meant I had '/query/i'
[18:46:00] <Gargoyle> aster1sk: Yeah. That won't use an index. But /^query/ will
[18:46:10] <aster1sk> Don't care about that.
[18:46:20] <aster1sk> I care about being able to search more than one item.
[18:47:03] <NodeX> aster1sk : better to store them in another field, e.g. idx_field : ['title','artist','trackname'] ... all lower cased and split on space
[18:47:04] <aster1sk> $regex : { title : { $regex : 'query' } } works perfectly
[18:47:16] <NodeX> also reversed for LIKE %%
[18:47:28] <aster1sk> Hmm, yeah that makes sense, but it's too late to switch that all around.
[18:47:53] <aster1sk> You guys are very focused on performance (for good reason) but for this case I don't care about that.
[18:48:07] <aster1sk> I just want to be able to search multiple keys.
[18:48:26] <NodeX> that above is the best of both worlds
[18:48:35] <Gargoyle> aster1sk: I think it just $or: [ {title: whatever}, {artist: whatever}, etc]
[18:48:45] <aster1sk> It's true, perhaps on my import I can just concat the strings
[18:49:01] <aster1sk> I'll give it a shot.
[18:50:52] <aster1sk> data['search'] = audio['title'][0] + " " + audio['artist'][0] + " " + audio['album'][0]
[18:50:55] <aster1sk> Added that, should work.
[18:51:53] <aster1sk> Extremely fast guys, thanks!
[18:52:09] <aster1sk> Such a simple solution -- you guys are the best.
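For reference, both variants discussed above in the shell (collection name and search term hypothetical); the first searches the separate fields with $or, the second uses aster1sk's concatenated search field. As Gargoyle noted, unanchored case-insensitive regexes cannot use an index:

    db.tracks.find({$or: [{title:  /beatles/i},
                          {artist: /beatles/i},
                          {album:  /beatles/i}]})
    db.tracks.find({search: /beatles/i})   // single concatenated field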
[18:53:40] <aster1sk> This project is actually really neat, I may open it up on github.
[19:50:09] <tjr9898> I'm running into an error with the pymongo getting started step of the class
[19:50:25] <tjr9898> I'm running ubuntu and error is at http://pastebin.com/TNM7Xvs9
[19:50:57] <diogogmt> I'm having trouble setting the content-type for a GridFS file. I'm using the node module gridfs-stream.
[19:51:09] <diogogmt> I'm passing the contentType in the options object: http://pastebin.com/sReENDVw
[19:51:30] <diogogmt> but the content type is being set as binary
[19:51:34] <diogogmt> any suggestions?
[19:53:12] <diogogmt> The official mongodb GridFS doc page says to pass the content-type in the options obj: http://mongodb.github.com/node-mongodb-native/api-generated/gridstore.html
[19:55:28] <diogogmt> anybody?
[20:29:21] <modcure> is there like a explain table in mongo ?
[20:29:27] <modcure> describe
[20:29:38] <modcure> want to see the structure of the document
[20:30:17] <mids> there is no schema
[20:30:29] <mids> you could look at a single document in the collection
[20:30:35] <mids> db.foo.findOne()
[20:34:55] <modcure> mids, thanks
[20:39:14] <newbie22> *: Is anyone out there ???
[20:40:54] <BurtyB> no
[20:42:02] <Nerp> when I run rs.stepdown() on the primary of one of my shards, it causes most of the mongos processes attached to it to crash, is there a better way to step a primary down that will prevent this from happening?
[20:49:28] <TTimo> pymongo requiring explicit closure of connections to replicaset is being a royal PITA
[20:56:53] <diogogmt> The GridStore of the mongodb native nodejs driver doesn't set the contentType for a gridfsFile if the mode is set to +w (edit). If the mode is set to w (truncate) then the contentType is properly set, does anybody know why?
[21:05:10] <electricowl> Hi. Any suggestions for dealing with seemingly random MongoCursorExceptions in PHP?
[21:06:13] <Gargoyle> I'm thinking about reworking our models to use mongo directly to better leverage the drivers capabilities and to reduce memory usage. Probably best description would be some kind of hybrid model/odm. Anyone heard of anything similar?
[21:06:25] <ryan_> getting 'MongoCursorException' with message 'couldn't send query:' on php mongodb driver 1.2.12 - is the only option to downgrade?
[21:06:56] <Gargoyle> ryan_: Upgrade?
[21:07:16] <ryan_> isn't 1.2.12 the latest driver?
[21:07:28] <Gargoyle> you could go for the beta.
[21:07:29] <Gargoyle> :)
[21:07:54] <ryan_> will that fix the issue?
[21:08:02] <Gargoyle> Dunno!
[21:17:18] <newbie22> *: hello everyone, I am having trouble getting mongodb to run on my 32 bit windows XP system. Anyone have any reccomendations ????
[21:17:45] <newbie22> Error Message starts : "the procedure entry point interlockedcompareexchange64"
[21:18:05] <Gargoyle> newbie22: Buy a Mac ?
[21:18:08] <Gargoyle> :P
[21:18:24] <Gargoyle> Install ubuntu!
[21:19:18] <Gargoyle> newbie22: Also, I think you are going to run into many problems along the way if you use a 32 bit system.
[21:20:02] <newbie22> Gargoyle: well they still have available a 32 bit download for windows .........
[21:22:18] <Gargoyle> newbie22: Can't offer much more help - been windows free (apart from web testing) for years. 32bit = old and mongo = new. I think the support will be limited.
[21:23:29] <newbie22> Gargoyle: you are correct, it is.............. I have been reading about it, so I will be purchasing a PC to install linux on next month... This is not my PC, so I cannot do a complete reconfig
[21:23:41] <Gargoyle> newbie22: Did you read the note on the download page? Says pretty much the same thing.
[21:23:58] <newbie22> Gargoyle: Yes I did, thanks for the help...
[22:29:30] <aster1sk> 'what a difference a key made'
[22:29:38] <TTimo> ahah
[22:29:42] <TTimo> words to live by
[23:39:15] <Hoverbear> Hi all, I'm working with mongojs (https://github.com/gett/mongojs) and would like to have a reference in my foo document that links to a user (via ID) in my user document…. Could I do that within mongo, or do I need to process it myself in callbacks?
[23:41:06] <woodzee> i have a setup that has 3 shards that are all replicated, 9 instances all together. i want to change the replica sets to use different host names on the secondary members. each time i try and remove the existing entries i get { "$err" : "socket exception", "code" : 11002 } when i run db.printShardingStatus()
[23:41:29] <woodzee> mongo version 2.0.4
[23:43:01] <Hoverbear> Oh man, I should just use mongoose