PMXBOT Log file Viewer


#mongodb logs for Wednesday the 5th of September, 2012

[00:11:56] <timeturner> how do you guys deal with temp users (which need to confirm their accounts) and switching those accounts over to a full user when they do confirm?
[00:12:07] <timeturner> currently I just have two separate collections
[00:12:12] <timeturner> tempusers and users
[00:12:49] <svenstaro> timeturner: why not keep shit simple and set a flag?
[00:13:11] <timeturner> flag on the user?
[00:13:14] <svenstaro> ye
[00:13:16] <timeturner> so just one collection
[00:13:19] <svenstaro> yep
[00:13:45] <timeturner> but the problem is that I would have to check if the "code" field exists every single time
[00:13:50] <timeturner> for the confirmation code
[00:14:20] <svenstaro> in the other circumstance you'd have to check whether a user existed in a whole collection!
[00:14:31] <timeturner> which I already do
[00:14:41] <timeturner> oh you mean the tempusers
[00:14:51] <svenstaro> looking up a single flag is rather fast
[00:14:58] <svenstaro> since you already have the user
[00:15:06] <svenstaro> but finding a user every time is a lot slower
[00:19:17] <timeturner> So when a user logs in with email and pass I would: query by id (which has a unique index) and return the whole doc (which I do anyway), then check whether the 'code' field exists. If it does, I'd tell the user they haven't confirmed yet; if there's no 'code' field, I'd check their pass against the pass in the db (via hash check etc.) and then if that completes
[00:19:18] <timeturner> successfully I would shove all the relevant fields into their session on the server
[00:19:25] <timeturner> If I'm thinking about this correctly
[00:20:12] <timeturner> I would actually have to make the 'code' field indexed as well
[00:20:30] <timeturner> sparse, unique index
[00:20:41] <timeturner> since I would have to query by that when the user confirms
[00:21:03] <timeturner> or I can just shove their email into the query string I guess
[00:21:15] <timeturner> saves an index
[00:21:22] <Jester01> and adds some security :)
[00:21:34] <timeturner> definitely
[00:21:51] <timeturner> sweet so what do you think about my plan above? :D
[00:22:35] <timeturner> I mean basically I'm trying to figure out if I can make it any simpler
[00:22:51] <Jester01> sounds simple enough to me
[00:23:02] <svenstaro> a second collection definitely is not simple
[00:23:07] <timeturner> yeah
[00:23:19] <timeturner> it was a headache maintaining it anyways
[00:23:31] <timeturner> plus I had to deal with two-phase commit crap...
[00:23:44] <timeturner> I have no idea why I thought that was a good idea in the first place
[00:24:57] <svenstaro> heh
[00:25:06] <svenstaro> always strive for the simplest solutions
[00:25:27] <timeturner> definitely. awesome, thanks!
[00:25:32] <svenstaro> no problem
[00:25:38] <timeturner> going to try this... :D
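The single-collection scheme timeturner settles on above can be sketched in a few lines. This is a hypothetical in-memory stand-in (plain dicts instead of a real `users` collection; `new_user`, `confirm`, and `login` are invented names), assuming the presence of the `code` field is what marks an unconfirmed account:

```python
import secrets

def new_user(email, password_hash):
    """Unconfirmed users carry a 'code' field; confirmed users don't."""
    return {"email": email, "pass": password_hash,
            "code": secrets.token_hex(16)}

def confirm(user, code):
    """On confirmation, drop the 'code' field ($unset in a real update)."""
    if user.get("code") == code:
        del user["code"]
        return True
    return False

def login(user, password_hash):
    if "code" in user:               # still unconfirmed
        return "please confirm your account"
    if user["pass"] != password_hash:
        return "bad credentials"
    return "ok"

u = new_user("a@example.com", "hash123")
assert login(u, "hash123") == "please confirm your account"
confirm(u, u["code"])
assert login(u, "hash123") == "ok"
```

In real MongoDB the confirmation step would be an update with `$unset` on `code`, and the sparse unique index mentioned above keeps confirmed users (who lack the field) out of that index entirely.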
[01:45:11] <jtomasrl> what does a Replica config do?
[02:06:15] <emocakes> jtomasrl
[02:06:17] <emocakes> replicates
[04:07:34] <dirn> timeturner: not sure if you're still around. I've been on and off IRC all day so I'm not sure, did you ever get an answer to your question?
[06:37:01] <NodeX> anyone alive who's adept with the aggregation framework
[06:37:02] <NodeX> ?
[07:32:24] <[AD]Turbo> hi there
[07:32:44] <apetrescu> G'day
[07:34:13] <emocakes> lol
[07:34:21] <emocakes> are you from 'down under' apetrescu
[07:34:30] <emocakes> toss another shrimp on the barbie
[07:34:31] <emocakes> haha
[07:34:34] <emocakes> silly australians
[07:35:58] <apetrescu> :P
[07:44:24] <NodeX> I've got to say that the aggregation framework is not consistent
[07:45:29] <NodeX> I have 3 different queries, 2 that work with $match and give expected results and one that doesn't but if I run it without $match I get expected results, it's very strange
[07:45:52] <cmex> good morning
[07:46:31] <Aartsie> Goodmorning !
[07:52:30] <Derick> Gargoyle: up again
[07:52:53] <Gargoyle> Derick: Morning.
[07:53:24] <Derick> moin :)
[07:53:59] <Gargoyle> Derick: I was wondering if xdebug could provide any insight into my apache segfault? I enabled debug level logging and coredumping, but the last segfault left no core dump, and nothing useful in the logs.
[07:54:31] <Gargoyle> I've got a copy of the server running under a vm, but I can't get the bloody thing to crash!
[07:54:39] <Derick> heh
[07:54:42] <Derick> typical
[07:54:51] <Derick> xdebug will tell you til which line the script runs
[07:56:02] <Gargoyle> Derick: Would be a start. At least if I can narrow it down then perhaps I could force a crash under more controlled conditions.
[07:56:06] <NodeX> Derick : how stable is 1.30 beta1?
[07:56:13] <Derick> NodeX: beta2 you mean?
[07:56:20] <Derick> it has one annoying bug which is fixed
[07:56:36] <Derick> if you don't use auth with replicasets, it should (no warranties!) work better than 1.2.12
[07:57:20] <NodeX> i dont and dont, is the aggregate() helper inside it?
[07:57:45] <Derick> yes
[07:58:06] <Derick> (it has a small memleak though in beta2), but I'm about to merge that fix
[07:58:12] <kali> NodeX: is this about your aggregation issue ? the inconsistency comes from the php driver ?
[07:58:23] <NodeX> cool, this aggregate framework using db->command() is a nightmare
[07:58:29] <Derick> it works :-)
[07:58:30] <NodeX> kali, no
[07:58:40] <kali> NodeX: ok
[07:59:03] <NodeX> the inconsistency comes from Mongo giving different results for different queries, in one of them it shouldnt even give a result but does
[07:59:37] <NodeX> All I can narrow it to is when Mongo can't find a key it is summing everything else in the pipeline and returning that
[07:59:48] <NodeX> (I assume it's intended behaviour)
[08:00:02] <NodeX> Derick : is it a simple pecl update?
[08:00:09] <Derick> to beta2, yes
[08:00:12] <NodeX> slash php restart
[08:00:24] <Derick> beta2 has an annoying bug with secondaries though
[08:00:33] <Derick> Gargoyle found that one
[08:00:47] <Derick> we will probably do RC1 on Friday
[08:01:06] <Gargoyle> Derick: I'm currently running HEAD. :S ;)
[08:01:15] <NodeX> err upgrade is trying to install 1.2.2
[08:01:17] <Derick> Gargoyle: yeah, that's best for now
[08:01:24] <Derick> NodeX: pecl upgrade mongo-beta
[08:01:39] <NodeX> thanks
[08:01:53] <kali> NodeX: all right, at least, it's predictable ?
[08:02:00] <kali> NodeX: and reproducible ?
[08:02:38] <NodeX> kali, yes
[08:02:49] <NodeX> mainly it's a problem with $sort
[08:03:00] <NodeX> it seems $sort has to live outside the pipeline on certain queries
[08:03:22] <NodeX> if you put it inside mongo moans about large result sizes above 16mb
[08:03:41] <NodeX> I'll try and nail it down with some examples and submit a report later if I get time
[08:06:33] <NodeX> Derick : upgrade killed my app server :/
[08:06:40] <Derick> how?
[08:06:48] <Derick> you are not doing this in production, are you?
[08:07:46] <NodeX> Fatal error: Uncaught exception 'MongoConnectionException' with message 'Couldn't connect to ':27017': Couldn't get host info for '
[08:08:02] <NodeX> Of course I am doing it in production lol, why else would I ask if it's stable :P
[08:08:46] <NodeX> I guess there is another bug for you! ... I connect with Sockets not tcp ..
[08:09:58] <Derick> NodeX: hmm, what? I didn't know that was even possible :S
[08:09:59] <NodeX> mongodb:///tmp/mongodb-27017.sock <------ No worky on the 1.3 beta2 as a connection string
[08:10:19] <NodeX> it's a faster connect as you dont involve the TCP stack lol
[08:10:31] <Derick> yeah, I know
[08:10:48] <Derick> it's funny, as our connection string specs don't even mention this... and hence I didn't implement it
[08:11:03] <cmex> i have a question about replica set
[08:11:11] <Derick> NodeX: also, never run a beta in production :P
[08:11:42] <NodeX> It's not in the docs on the PHP site, I arrived at it with trial and error
[08:11:54] <cmex> is it something on system level need to be configured when master server going down ? or on aplication level?
[08:11:59] <NodeX> Derick : beta in production.. too late now!!
[08:12:13] <Derick> NodeX: hmm, I will bring this up - I think it's something that was totally overlooked
[08:12:50] <NodeX> it's nps, at least you know about it now
[08:13:29] <Derick> nps?
[08:13:40] <NodeX> no probs
[08:13:43] <Derick> ah
[08:13:55] <Derick> I'd better head into work!
[08:13:57] <Derick> bbl
[08:14:08] <NodeX> ;)
[08:19:22] <jwilliams> https://jira.mongodb.org/browse/SERVER-4328 shows that db level lock is closed, but collection level locking ticke is still open.
[08:19:40] <jwilliams> does that mean mongodb 2.2 still use global lock?
[08:20:15] <jwilliams> or lock can be applied to collection w/t affecting other collections?
[08:21:20] <kali> jwilliams: in 2.2 the lock is database per database
[08:21:40] <kali> jwilliams: before 2.1/2.2, it was process wide
[08:22:06] <kali> jwilliams: so it's "one step small" and all that
[08:22:14] <kali> "one small step"
[08:22:16] <cmex> someone?
[08:22:43] <kali> cmex: honestly i don't understand what you ask
[08:23:17] <cmex> kali we have 3 servers as said in tutorial about replica , the question is whats happened if master server is going down
[08:23:26] <cmex> the server im writing and reading from
[08:23:46] <cmex> is it something i need to do on aplication level or system level about it?
[08:24:00] <jwilliams> kali: what's the difference between lock on process wide and lock on database? or aren't prior to 2.1 the lock is always on database (global lock) already?
[08:24:00] <NodeX> an election takes place doesn't it?
[08:24:14] <kali> cmex: the remaining host will elect a new primary, and your drivers will reconnect to the new master (after the first fail query)
[08:24:52] <cmex> what do u mean remaining host ... soorry for noob questions
[08:25:00] <kali> jwilliams: if you have two databases in your mongo, (as in "show db", "use my_database") they have separate locks
[08:25:07] <cmex> im calling to a server with ip
[08:25:13] <cmex> not hostname
[08:25:27] <kali> cmex: a host is a server
[08:25:50] <jwilliams> kali: get it. thanks.
[08:25:51] <kali> cmex: when the primary goes down, the secondary elect a new primary among themselves
[08:26:59] <cmex> ok lets say i have 3 servers 212.x.x.1 , 212.x.x.2 , 212.x.x.3 . when 212.x.x.1 is master, how does 212.x.x.2 know if something happens and it must become the master, and become a slave again when the master comes back up?
[08:27:21] <wereHamster> cmex: periodical checks
[08:27:43] <kali> cmex: first of all, the preferred terminology is primary and secondary, not master and slave (which are referring to the archaic mecanism)
[08:27:55] <cmex> checks between 3 servers ?
[08:28:09] <kali> cmex: yes, they're checking their siblings availibility
[08:28:44] <cmex> ok lets say 212.x.x.2 is now master, but from the application im trying to write to 212.x.x.1
[08:28:50] <cmex> it still fails to write
[08:29:05] <cmex> is it something im missing ?:)))
[08:29:31] <cmex> kali?
[08:29:32] <kali> cmex: when opening a connection, the driver gets the current cluster configuration from the server
[08:29:53] <kali> cmex: so when the primary goes down, the driver has what it needs to reconnect
[08:30:15] <cmex> ok i got it. and it sets a new connection string automatically?
[08:30:32] <kali> cmex: and the driver accepts several hostname in its connection string, in order to be able to start when a host is down
[08:31:16] <cmex> kali: so i need to set several servers in connectionstring?
[08:31:55] <cmex> sorry guys i understend my questions are newbiews
[08:34:06] <cmex> ?
[08:35:20] <Gargoyle> cmex: This is mine, for example:- mongodb://ironton:27017,kenton:27017,kirtland:27017
[08:36:00] <cmex> ok i dont need to say in the connection string which one is primary?
[08:36:34] <cmex> and when the primary is going down it automatically sets a new primary server and connects to it?
[08:36:38] <cmex> am i wrong?
[08:37:36] <cmex> i just want to understand the mechanism of this thing
[08:43:16] <kali> cmex: yeah, just list all the servers in the connection string, the driver will manage
[08:43:29] <cmex> kali: thanks alot
[08:43:35] <cmex> thanks alot to all
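Gargoyle's connection string above simply lists every replica-set member as a seed; any reachable seed tells the driver the current cluster layout and primary. A tiny helper to build such a string (`seed_list_uri` is a hypothetical function name; it assumes every member uses the same port):

```python
def seed_list_uri(hosts, port=27017):
    """Build a replica-set seed list: the driver tries each host in turn
    and learns the real cluster layout from whichever one answers."""
    return "mongodb://" + ",".join(f"{host}:{port}" for host in hosts)

uri = seed_list_uri(["ironton", "kenton", "kirtland"])
# → "mongodb://ironton:27017,kenton:27017,kirtland:27017"
```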
[08:50:50] <Gargoyle> I'm in segfault heaven!
[08:58:04] <Gargoyle> I've found the request causing it, and now to track it down to a bit of code.
[09:09:14] <NodeX> Finally narrowed it down, it seems in certain queries Mongo wants sort in one place of the pipeline and in others it wants it outside the pipeline
[09:09:19] <NodeX> strange
[09:10:15] <NodeX> it's totally dependent on the query, and furthermore the query will still execute both ways, but one way will return a SUM of everything in one result and the other will return the (expected) array of say pages viewed plus the counts
[09:18:41] <Derick> Gargoyle: if you could run it in gdb, that'd be awesome. Are you using apache or php-fpm?
[09:18:59] <Gargoyle> Derick: Apache.
[09:19:04] <stevie-bash> I'm looking for mongodb20-10gen_2.0.6_amd64.deb, Is there a archive mirror?
[09:19:32] <Gargoyle> I've found the request causing it. trying to narrow down now. But it doesnt happen on my local machine.
[09:23:57] <Gargoyle> Derick: It's happening when I call mongo->save() after a short burst of mongo->update(blah, array('$set' => array(blah)))
[09:26:00] <Gargoyle> Just going to make sure it's nothing else in my save() code before it gets to the mongo bit.
[09:31:14] <Derick> can you reproduce on CLI? that'd make things easier
[09:31:22] <Gargoyle> Derick: It's happening when I call collection->save($data, array('safe' => 1))
[09:32:24] <Gargoyle> If i take out the safe option, it doesn't segfault.
[09:32:55] <Derick> k
[09:32:59] <Derick> that's a possible
[09:33:00] <Gargoyle> I'll see if I can get a CLI to behave the same.
[09:33:13] <Derick> to make it crash faster on CLI : export USE_ZEND_ALLOC=0
[09:34:29] <Gargoyle> Obviously, this is why it wont segfault locally. It's not a replSet
[09:35:03] <Derick> ah
[09:35:19] <NodeX> Gargoyle : does it still segfault when you do array('safe'=>(int)1) ?
[09:35:35] <Derick> NodeX: there is no difference
[09:35:55] <NodeX> most of mongo/php needs casting ... how come not this
[09:36:11] <Derick> nothing needs casting - if so, that could be a bug
[09:36:34] <NodeX> with the driver everything needs casting else it gets added as a string
[09:36:44] <NodeX> with the exception of an array
[09:36:56] <Derick> uh, no
[09:37:07] <NodeX> when did that change?
[09:37:10] <Gargoyle> NodeX: But if it is a 1 in code (not '1', or anything else that php might think of a 1)
[09:37:11] <Derick> it never did
[09:37:26] <Derick> but if you use a "1" instead of a 1, then (of course) it will be stored as a string
[09:37:32] <NodeX> Derick, that's untrue sorry
[09:37:37] <Derick> no, it's not
[09:37:56] <Derick> show me an example :)
[09:38:04] <NodeX> if you use for example... array('$set'=>array('foo'=>1)) .. mongo inserts it as "1" not 1
[09:38:27] <Gargoyle> NodeX: You might have just earned a gold star!!!
[09:38:28] <NodeX> in the php driver
[09:38:33] <NodeX> not on the shell;
[09:38:33] <Gargoyle> Lemme double check this!
[09:38:44] <Derick> NodeX: nonsense
[09:38:59] <NodeX> LOL ok, I'll prove it one sec
[09:40:22] <Gargoyle> NodeX: No star for you. must have been my typing!
[09:40:36] <Gargoyle> $success = $collection->save($data, array('safe' => (int)1)); Still segfaults
[09:41:03] <Derick> Gargoyle: can you show that in a 5 line script or so?
[09:41:05] <NodeX> safe is true/false, try true a sec
[09:41:18] <NodeX> ()yes I know 0=false, 1=true in php
[09:41:20] <Gargoyle> Derick: Going to try. Gimme 15-20
[09:42:00] <Derick> cheers
[09:42:10] <Derick> NodeX: true is indeed distinct from 1
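The disagreement above comes down to BSON comparisons being type-sensitive between strings and numbers: the driver stores whatever PHP type you give it, and a document saved with `"1"` will not match a query for `1` (numeric types, by contrast, do compare across int/float). A small in-memory sketch of that matching rule (`matches` is an invented helper, not driver or server code):

```python
def matches(doc, query):
    """Equality matching, type-sensitive the way BSON is for
    strings vs numbers; Python equality happens to mirror this
    ('1' != 1, but 1 == 1.0)."""
    return all(doc.get(field) == value for field, value in query.items())

docs = [{"foo": "1"}, {"foo": 1}]
assert [d for d in docs if matches(d, {"foo": 1})] == [{"foo": 1}]
assert matches({"foo": 1.0}, {"foo": 1})   # numeric types cross-match
```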
[09:42:43] <Gargoyle> Derick: Only problem, is that ALL calls to save() in our app come through to the same place - and they are not all failing.
[09:42:51] <Zelest> Derick, using MongoDB and PHP, do you know of any nice way to encrypt fields inside a document?
[09:43:08] <Derick> Gargoyle: fun :-)
[09:43:15] <Derick> Gargoyle: the export that I showed might help
[09:43:24] <Derick> or if it doesn't, try running it with valgrind
[09:43:47] <Gargoyle> It's something specific to this record - but I'll try and get it into a script first
[09:43:53] <Derick> awesome, cheers
[09:46:35] <NodeX> Zelest : mcrypt?
[09:47:02] <Derick> Zelest: yeah, hm, that's tricky
[09:47:10] <Derick> as awlays, where are you storing your key? :)
[09:47:16] <Derick> NodeX: I wrote mcrypt ext too :)
[09:48:02] <Derick> NodeX: made a script yet btw? :)
[09:48:03] <Zelest> Derick, the idea is memcache. ;)
[09:48:14] <Derick> ah
[09:48:33] <Zelest> how strong/good is mcrypt?
[09:48:45] <Derick> it depends on the algo and keys really
[09:48:50] <Derick> don't use rot13 f.e.
[09:48:54] <NodeX> Derick : I am 100% positive that the PHP driver at least didn't used to cast
[09:49:06] <Derick> NodeX: the driver doesn't change types for documents
[09:49:08] <NodeX> I wrote a whole wrapper round the driver to cast everything
[09:49:09] <Zelest> aah, is it a native thingie or does it use something like openssl in the background?
[09:49:22] <Derick> it's native - through the libmcrypt library
[09:49:27] <Zelest> aah
[09:50:24] <NodeX> it seems it does cast now, but i remember a problem a long time ago where I was inserting ints and they ended up as strings or w/e then I would search them in the same code and they would not return unless I cast them on the insert and the query
[09:50:44] <Zelest> what is cast?
[09:50:45] <NodeX> 18+ months ago maybe
[09:50:58] <Zelest> I assume you mean MCRYPT_CAST_256 ?
[09:51:03] <NodeX> no
[09:51:05] <Derick> some algorithm
[09:51:05] <Zelest> oh
[09:51:08] <NodeX> I am talking to Derick
[09:51:23] <Zelest> yeah, MCRYPT_CAST_256 is the algo it uses. :P
[09:51:31] <Zelest> http://www.php.net/manual/en/mcrypt.ciphers.php
[09:51:37] <Derick> Zelest: it's one of them it can use
[09:51:49] <Derick> pick: MCRYPT_RIJNDAEL_256 (libmcrypt > 2.4.x only)
[09:52:06] <Zelest> :o
[09:52:26] <Zelest> strongest/slowest? or why that one?
[09:52:35] <Derick> strongest
[09:52:51] <NodeX> I use that one^
[09:52:59] <Zelest> i see
[09:53:11] <Zelest> I consider writing some sort of locker-site.. or something
[09:53:35] <Zelest> not a community of any form, but a little "for fun" page where people can store files or what not in a secure way
[09:56:41] <Zelest> it's not really needed or so, mostly for fun and to give it a try to build something secure :)
[09:59:19] <NodeX> Derick : I will find the Jira's that I posted about it and 10gen's responses to them later
[09:59:24] <Derick> sure
[09:59:25] <NodeX> I cannot recreate it
[09:59:39] <Derick> i'm curious to what that was :)
[09:59:39] <NodeX> I believe it must have been fixed in a driver since then
[10:00:07] <NodeX> at the time I remember Kristina telling me it was intended behaviour and to cast things on insert and query
[10:00:21] <NodeX> ergo I wrote a wrapper to do that
[10:00:54] <Derick> odd
[10:01:42] <NodeX> I thought it may have been on just one case maybe floats or something
[10:01:59] <NodeX> it was 3 laptops ago so I dont have my original testing code!
[10:10:56] <Gargoyle> script kiddies!
[10:10:59] <Gargoyle> [client 95.211.148.236] File does not exist: /var/www/w00tw00t.at.blackhats.romanian.anti-sec:)
[10:11:56] <Gargoyle> quickly followed by just about every combination of "phpMyAdmin" you can think of!
[10:18:00] <ppetermann> /var/www.. disgusting :p
[10:20:34] <Derick> nope! it's niiiice
[10:26:43] <ppetermann> i prefer /opt
[10:26:57] <Gargoyle> Derick: Won't crash in the CLI :/
[10:27:16] <NodeX> lol, a dir is a dir
[10:27:45] <Zelest> /usr/local/www/(apache2/nginx)/data ;-)
[10:27:56] <Gargoyle> ppetermann: /opt of for optional software!
[10:28:04] <Zelest> then just symlink it to /home/<user>/www.domain.tld :-)
[10:29:03] <ppetermann> Gargoyle: /var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files.
[10:29:45] <ppetermann> Applications must generally not add directories to the top level of /var. Such directories should only be added if they have some system-wide implication, and in consultation with the FHS mailing list.
[10:29:46] <Gargoyle> ppetermann: At the moment, this web app IS variable data! ;p
[10:30:26] <ppetermann> to be completely correct it should be below /srv ;)
[10:30:59] <Gargoyle> I think I actually used /srv on a gentoo setup many moons ago!
[10:31:47] <NodeX> I use a totaly separate cryptolooped mountpoint inside /home
[10:32:29] <ppetermann> for what you serve through http?
[10:33:18] <Derick> Gargoyle: no errors with the export and valgrind either?
[10:33:32] <Gargoyle> from the CLI?
[10:33:37] <Derick> it should be in /local really, yes
[10:33:47] <Derick> Gargoyle: yes
[10:34:35] <Gargoyle> export made no difference
[10:35:05] <Derick> meh
[10:35:22] <Derick> Gargoyle: can you paste your command line items please?
[10:35:37] <Gargoyle> DCC ?
[10:36:06] <Derick> or a pastebin
[10:36:09] <Derick> (or /msg)
[10:36:30] <Gargoyle> Derick: http://pastie.org/private/nliupe4hwallu3poj7h5a
[10:37:17] <Gargoyle> Essentially, that is what the app is doing. When it crashes apache.
[10:38:15] <Gargoyle> We've switched to using update() and there is the odd call to save() not yet cleaned up - didn't think it would do much harm if an item was updated and then saved right away.
[10:40:36] <Derick> Gargoyle: i meant, what did you run on the CMD?
[10:40:45] <Gargoyle> export USE_ZEND_ALLOC=0; php crashTest.php
[10:41:01] <Derick> ah, can you do: export USE_ZEND_ALLOC=0; valgrind php crashTest.php
[10:42:51] <Gargoyle> Derick: Looks gobble-di-gook to me! ;)
[10:42:54] <Gargoyle> http://pastie.org/private/debzxklqcc0lvbw5lrrq
[10:43:11] <Derick> not to me
[10:43:14] <Derick> it's found a bug
[10:43:22] <Gargoyle> \o/
[10:43:28] <Gargoyle> ;)
[10:43:45] <Derick> that's trunk?
[10:44:16] <Gargoyle> yeah. Built it last night around 10 from a fresh clone of github master branch
[10:47:30] <Derick> thanks for helping with that one
[10:47:36] <Gargoyle> no problem.
[10:48:04] <Gargoyle> We go live around 10pm, so you have plenty of time to fix it! ;)
[10:48:12] <Gargoyle> :P
[10:49:30] <Derick> hah
[10:50:24] <Gargoyle> Wonder why that particular doc upsets apache more than others.
[10:51:12] <Derick> i don't know, but could you dump that whole document for me too? (feel free to mail me if it's private info)
[10:53:29] <Gargoyle> Derick: I take it that DCC failed since it crashed xchat?
[10:53:38] <Derick> yes :-)
[10:54:03] <Gargoyle> /msg me your email addy
[10:54:36] <Derick> i think you can guess it :-)
[10:55:13] <Gargoyle> ha ha
[10:55:25] <Gargoyle> indeed. Almost as good as mine, that!
[11:05:26] <Derick> let me see if I can reproduce without repl set too
[11:07:20] <Derick> yup
[11:07:33] <Derick> will fix after lunch :-)
[11:08:16] <Gargoyle> I've been keeping you buzy this week! ;)
[11:12:52] <Zelest> Derick, off-topic, but as for mcrypt, what does the ECB, CBC, CFB, OFB, NOFB modes mean/do?
[11:25:34] <Zelest> nvm, found it. :-)
[11:34:37] <Derick> Zelest: different cipher chaining algorithms
[11:35:12] <Zelest> mhm, i found http://www.chilkatsoft.com/p/php_aes.asp which was useful to read
[11:41:44] <NodeX> Zelest, it's a very good idea to salt what you're encrypting too
[11:46:03] <Zelest> ok? correct me if I'm wrong, but the idea of salting is to make each set of encrypted data unique, right? for example, if two users use the same password, the two passwords encrypted should still be different, right?
[11:48:09] <NodeX> yes
[11:48:34] <NodeX> you can add the salt to either the crypt key or the password then encrypt
[11:48:42] <NodeX> then do the reverse on the ay out
[11:48:45] <NodeX> way out *
[12:03:00] <NodeX> anyone have an idea why db.collection.aggregate([{"$match":{"ymd":20120904}},{"$group":{"_id":"$comment","total":{"$sum":1}}},{"$sort":{"total":-1}}]); gives me the total count but if I drop the $match it gives me comments + count ?
[12:03:10] <NodeX> it's driving me insane
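For reference, here is what NodeX's pipeline is supposed to compute, replayed over plain dicts with the standard library (a toy model, not the server's aggregation engine; the sample `events` data is made up):

```python
from collections import Counter

events = [
    {"ymd": 20120904, "comment": "home"},
    {"ymd": 20120904, "comment": "home"},
    {"ymd": 20120904, "comment": "about"},
    {"ymd": 20120905, "comment": "home"},
]

# {$match: {ymd: 20120904}} — filter first, so only that day is grouped
matched = [e for e in events if e["ymd"] == 20120904]
# {$group: {_id: "$comment", total: {$sum: 1}}} — count per comment
totals = Counter(e["comment"] for e in matched)
# {$sort: {total: -1}} — highest count first
result = [{"_id": c, "total": n} for c, n in totals.most_common()]

assert result == [{"_id": "home", "total": 2}, {"_id": "about", "total": 1}]
```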
[12:09:19] <Aartsie> Is there a tool for mongoDB like PhpMyAdmin ?
[12:09:48] <Derick> a few, yes
[12:10:30] <Aartsie> Derick: can you tell wiche one ?
[12:10:32] <Derick> there is a list: http://www.mongodb.org/display/DOCS/Admin+UIs
[12:10:46] <Aartsie> Derick: Thank you !
[12:10:52] <Derick> rockmongo is probably what you should try first
[12:10:59] <Derick> Gargoyle: found it, and fixed it
[12:12:54] <NodeX> Derick : is there some docs somewhere for the new aggregate() helper in the php driver?
[12:18:58] <Derick> NodeX: http://docs.php.net/manual/en/mongocollection.aggregate.php
[12:20:56] <NodeX> thanks, it's driving me insane why $match wont work on certain queries so I need to check i'm calling it correctly
[12:22:58] <Derick> NodeX: suggestions for improving this docs, please add a comment at https://jira.mongodb.org/browse/PHP-476
[12:24:47] <Derick> Gargoyle: the mongoid I don't have to redact, right?
[12:25:05] <Zelest> NodeX, mhm, I've just seen so many retarded solutions for salting where they use a static salt for all "rows" .. meaning it's just a little tricker to crack (as it's probably longer) but multiple "rows" still have the same encrypted data. :P
[12:27:52] <NodeX> I'll make some examples Derick because the examples are kind of hard to follow as they dont have sort and so on in them, once i have my head round it I'll add it all ;)
[12:28:37] <Derick> awesome!
[12:28:46] <NodeX> Zelest : use a salt per doc
[12:28:59] <Zelest> NodeX, indeed.
[12:29:37] <Zelest> NodeX, or make it "magic" by using more data for the salt (like, realname, uid, login, etc)
[12:29:45] <Zelest> or is that a bad idea?
[12:34:37] <NodeX> it's for brute force so the less common the better
[12:35:28] <Zelest> ah
[12:35:46] <Zelest> or.. wait
[12:36:41] <Zelest> if I generate a random string for salting and store that in the db.. or if I use the username for salting.. what difference does that make really? :P
[12:37:11] <Zelest> the idea, as far as I get it, is to protect the remaining data after one "row" is cracked.
[12:53:04] <Derick> Gargoyle: fixed in trunk
[13:06:00] <luqasn> hi there, i got a very strange problem with mongodb not persisting all properties of the DBObject I pass to the java driver, does anybody have any pointer on how I could diagnose the issue?
[13:06:29] <luqasn> it looks like it just ignores some of the fields, which is very strange
[13:09:15] <Gargoyle> Sorry Derick, Was grabbing some lunch. What was that about the ID?
[13:12:47] <NodeX> luqasn : can you perhaps turn your query into json and pastebin it ?
[13:13:01] <NodeX> it sounds like you're using or not using $set
[13:17:03] <Lujeni> hello - i want upgrade my replica set. Should i upgrade arbiter at first and then mongod ?
[13:17:10] <NodeX> I must say this pipeline approach to aggregation is a genius idea
[13:19:40] <m4rtijn> hi
[13:21:44] <m4rtijn> im using mongoid 3 with ruby 1.9.3 - I have a Thread object with a Array of participant ids.. how would I search for a Thread which has all participant_ids I give it
[13:26:35] <luqasn> thx NodeX, you were right, I forgot that my worker was accessing the same database, but with outdated classes via a mapper and so deleted fields on save()
[13:33:52] <DiogoTozzi> Morning guys
[13:36:47] <DiogoTozzi> Does anybody knows where can I get the development version 2.3?
[13:40:14] <Zelest> https://github.com/mongodb/mongo ?
[13:43:56] <m4rtijn> Im looking for something like:
[13:44:02] <m4rtijn> thread = MsgThread.where('participants.id' == thread_participants.participants.map{|participant| participant.id} )
[13:44:06] <m4rtijn> but working :)
[13:44:37] <NodeX> so you want every single id matched in your query to be returned?
[13:45:21] <m4rtijn> a thread has :: participants [ {:id => 1},{:id=>2} ]
[13:45:27] <m4rtijn> i need to find that thread
[13:45:48] <m4rtijn> but, i dont know how many participants a thread has.. can be 2, or 20
[13:46:09] <NodeX> I dont understand ruby, I can tell you how to do it on the shell?
[13:46:25] <m4rtijn> how?
[13:46:37] <m4rtijn> ah.. yes
[13:46:39] <NodeX> field:["id","id2","id3"]
[13:46:41] <m4rtijn> shell I would know as well
[13:46:53] <m4rtijn> I need mongoid syntax though..
[13:47:01] <m4rtijn> and im a bit weirded out :)
[13:47:10] <NodeX> can't help you there sorry
[13:47:14] <m4rtijn> np, thx anyway
[13:47:25] <NodeX> but that's the query you need from the mongo shell ^
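One caveat on the shell query above: `field: ["id1","id2","id3"]` matches the array *exactly* (same elements, same order), which only works if you know every participant. When a thread may contain extra participants, MongoDB's `$all` operator ("contains all of these values") is usually what's wanted, e.g. `db.threads.find({"participants.id": {$all: [1, 2]}})`. A dict-level sketch of the two semantics (`exact_match`/`all_match` are invented helpers):

```python
def exact_match(doc_array, query_array):
    """What field: [...] does: whole-array equality."""
    return doc_array == query_array

def all_match(doc_array, query_values):
    """What {field: {$all: [...]}} does: every queried value present,
    extra elements and ordering ignored."""
    return set(query_values) <= set(doc_array)

thread = ["id1", "id2", "id3"]
assert not exact_match(thread, ["id1", "id2"])  # exact match misses
assert all_match(thread, ["id1", "id2"])        # $all still matches
```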
[13:49:24] <cmex> someone tried to do a replica set with mongo 2.2?
[13:53:21] <Azoth> hi, anyone here have started a replica set with mongo 2.2 (on windows server)?
[13:53:55] <Azoth> having some issues starting the set
[13:55:34] <Azoth> no one ?
[13:56:14] <NodeX> keep calm
[13:56:18] <Azoth> :)
[13:56:22] <Azoth> always
[13:56:39] <NodeX> patience is a virtue
[14:00:09] <cmex> "exception: need most members up to reconfigure, not ok
[14:00:46] <cmex> Nodex : do u know this?
[14:01:00] <Derick> cmex: it's "you" in English
[14:01:05] <Azoth> excacly the same problem
[14:01:09] <Derick> and the message is as clear as it can be :)
[14:01:43] <cmex> getting "exception: need most members up to reconfigure, not ok " when trying to add a member to the replica set
[14:01:51] <cmex> Derick: ?
[14:02:01] <cmex> error 13144
[14:02:03] <Derick> right, so what does your set look like now?
[14:02:29] <cmex> did initialize to 1 member
[14:02:40] <cmex> and i want to add another member with rs.add
[14:02:44] <cmex> and getting this error
[14:02:49] <cmex> *with
[14:04:55] <cmex> do i need to do initialize on all members or just the primary (master)
[14:04:57] <cmex> ?
[14:05:07] <Derick> paste the output of rs.getStatus() into a pastebin
[14:06:53] <cmex> i dont have any output for rs.status and not for rs.GetStatus()
[14:07:13] <Derick> sorry
[14:07:16] <Derick> rs.status()
[14:07:21] <cmex> ok
[14:08:04] <cmex> http://pastebin.com/tKG9qjtP
[14:08:36] <cmex> Derick:http://pastebin.com/tKG9qjtP
[14:08:44] <Derick> right
[14:08:52] <Derick> so, now you want to add a secondary, right?
[14:08:56] <cmex> yep
[14:09:03] <Derick> is the secondary running?
[14:09:08] <cmex> yes
[14:09:20] <Derick> on which host / port?
[14:09:21] <cmex> do you want its status?
[14:09:32] <Derick> yes, that'd be good too
[14:09:39] <cmex> its on another pc on organization's lan
[14:11:43] <cmex> on port 27017 he's pasting it right now
[14:11:52] <cmex> http://pastebin.com/1qcK6MPv
[14:11:55] <cmex> ok thats it
[14:12:08] <cmex> |Derick:http://pastebin.com/1qcK6MPv
[14:12:30] <cmex> Derick:http://pastebin.com/1qcK6MPv
[14:12:41] <Derick> looks
[14:12:42] <Derick> good
[14:12:58] <Derick> now in machine one, run: rs.addHost('hostname:27017');
[14:14:53] <Derick> (using the correct hostname of course)
[14:16:53] <cmex> TypeError: rs.addHost is not a function (shell):1
[14:17:42] <cmex> Derick: TypeError: rs.addHost is not a function (shell):1
[14:18:08] <Derick> it's just add(
[14:18:13] <Derick> ugh, i need to check before I write
[14:19:03] <cmex> "exception: set name does not match the set name host
[14:19:13] <cmex> uff its not my day :))
[14:20:32] <cmex> Derick: and if i try to put a mongo name it shows me "exception: need most members up to reconfigure, not ok"
[14:21:05] <Derick> cmex: you do need to make sure the setnames are the same on each started server
[14:22:32] <cmex> Derick : you mean this one ? -replSet <name here>
[14:22:35] <cmex> ?
[14:24:07] <Gargoyle> Can I get some lazyweb help? - http://pastie.org/4668405
[14:24:10] <Derick> cmex: yes
[14:24:19] <cmex> Derick:thanks now trying
[14:24:35] <Derick> Gargoyle: why $and?
[14:24:46] <Derick> $and is default
[14:25:11] <Gargoyle> Derick: I need the second two together.
[14:25:27] <Derick> you'd get that without $and too
[14:26:29] <Gargoyle> Derick: Would I get results with type = 'Feature' and legacy_blah = 0?
[14:27:23] <Gargoyle> actually. I think I want not!
[14:27:25] <Derick> not understanding
[14:27:40] <Derick> i would rewrite this as:
[14:28:11] <Derick> 'name' => $regex, 'type' => array( '$ne' => 'feature' ), 'legacy' => 1 )
[14:28:53] <Derick> or ,if you want not for the last two:
[14:29:05] <Derick> 'name' => $regex, 'type' => 'feature', 'legacy' => 0 )
[14:29:53] <Gargoyle> No they need separating. 1 sec
[14:30:13] <Derick> why?
[14:30:49] <Derick> Gargoyle: btw, does trunk work for you now too?
[14:32:02] <Gargoyle> I just need to search by name, but exclude the ones where type == 'Feature' AND legacy == 1
[14:32:23] <Gargoyle> Derick: Not recompiled. Can test it again later.
[14:32:42] <Derick> ( 'name' => $regex, 'type' => array( '$ne' => 'Feature' ), 'legacy' => array( '$ne' => 1 ) ) then
[14:33:11] <Gargoyle> no.
[14:33:52] <Gargoyle> My home made pseudo query language coming up...
[14:34:15] <Derick> ?
[14:34:42] <Derick> oh
[14:35:08] <Gargoyle> never mind. I need to negate the last two as a pair.
[14:35:41] <Gargoyle> This yields 30 or so results { type: {$ne: 'Feature'}, legacy_IsBusinessType: 1 }
[14:35:47] <NodeX> Gargoyle : did I see you mention your app was launching tonight at 10pm?
[14:36:00] <Gargoyle> I need to search the rest of the collection by name.
[14:36:05] <Gargoyle> NodeX: Been delayed
[14:36:13] <NodeX> was gonna say lol
[14:36:32] <Derick> ( 'name' => $regex, '$not' => array( 'type' => 'Feature', 'legacy' => 1 ) ) then
[14:37:26] <Gargoyle> that's what I have Derick, getting 0 results.
[14:37:35] <Derick> or, perhaps better:
[14:38:16] <NodeX> does $not work on its own, I thought it was a meta operator against others
[14:38:39] <NodeX> "The $not meta operator can only affect other operators. The following do not work. For such a syntax use the $ne operator."
[14:38:43] <NodeX> yup, just checked
[14:40:12] <NodeX> you want something like this Gargoyle : $and: [ { type: {$ne : 1 }}, { legacy: { $ne: 51 } } ]
[14:40:14] <Gargoyle> this shows how tired I am!
[14:40:20] <NodeX> 51 -> 1
[14:41:07] <NodeX> putting it all together db.foo.find({'name'=>$regex, $and: [ { type: {$ne : 1 }}, { legacy: { $ne: 51 } } ]});
[14:41:38] <Gargoyle> NodeX: I need the reverse.
[14:42:18] <timeturner> how do I set a ttl on specific documents in a collection?
[14:42:21] <Gargoyle> { type: {$ne: 'Feature'}, legacy_IsBusinessType: 1 } gives 30 or so entries from the collection. I need to search the other few thousand by name.
[14:42:40] <timeturner> For example I want to remove a user that hasn't confirmed their account yet
[14:42:46] <timeturner> from the users collection
[14:42:59] <NodeX> you said you wanted name -> regex where the other 2 dont equal 1
[14:43:02] <Gargoyle> Making an 'ignoreMeForSpecialSearch': 1 field any min now -
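[Editor's note: what Gargoyle wants — exclude only the documents where both conditions hold together — is exactly what $nor expresses, and it avoids both the $ne pair (too broad) and bare $not (only works against operators, as NodeX quotes below). A sketch, with field names taken from Gargoyle's messages and invented sample documents:]

```javascript
// Hypothetical sample documents standing in for the categories collection:
const docs = [
  { name: 'Alpha', type: 'Feature', legacy_IsBusinessType: 1 }, // should be excluded
  { name: 'Beta',  type: 'Feature', legacy_IsBusinessType: 0 }, // kept
  { name: 'Gamma', type: 'Place',   legacy_IsBusinessType: 1 }  // kept
];

// The query shape: $nor negates its sub-document as a single unit,
// which is what "exclude type == 'Feature' AND legacy == 1" needs:
//   db.categories.find({ name: /alph/i,
//                        $nor: [ { type: 'Feature', legacy_IsBusinessType: 1 } ] })

// The same predicate in plain JS, showing which documents survive:
const excluded = d => d.type === 'Feature' && d.legacy_IsBusinessType === 1;
const kept = docs.filter(d => !excluded(d));
console.log(kept.map(d => d.name)); // [ 'Beta', 'Gamma' ]
```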
[14:44:14] <NodeX> timeturner : http://docs.mongodb.org/manual/tutorial/expire-data/
[14:45:47] <timeturner> that would be for all documents though right
[14:47:09] <NodeX> read the docs about the index
[14:47:14] <NodeX> but yes in effect
[14:47:26] <NodeX> you can't do what you want with mongo without a queue or something
[14:47:46] <timeturner> a queue? how would I set that up
[14:48:00] <NodeX> kestrel / gearman or w/e
[14:48:12] <timeturner> or how do most people deal with this issue of users and tempusers
[14:48:37] <NodeX> remind them to activate via email!
[14:48:41] <NodeX> after a few delete them
[14:49:32] <timeturner> earlier I had two different collections, tempusers and users, but the atomic transactions across documents made it difficult, so I decided to consolidate both into one users collection
[14:50:00] <NodeX> what are temp users in your situation?
[14:50:45] <timeturner> temp users are those who have just registered and have been sent a code to their email to confirm their account
[14:51:07] <timeturner> so the only difference between users and temp users (in terms of fields) is that tempusers have the code field
[14:51:19] <timeturner> and then I have to transfer them over to the users collection
[14:51:23] <timeturner> but that's a pain
[14:51:36] <timeturner> since that involves a two-phase commit type of thing
[14:51:51] <timeturner> so yesterday I decided to just put the code field in the users collection
[14:52:08] <timeturner> and check if the code field was there after retrieving the entire doc every time
[14:52:12] <timeturner> when the user logs in
[14:52:51] <timeturner> but now I don't know how to have the documents that are older than 24 hours automatically deleted without setting up another index on the code field and running through all of the docs without a cron job or something
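[Editor's note: the TTL indexes in the tutorial NodeX linked can in fact get close to per-document expiry, because the TTL monitor only removes documents that actually have the indexed date field. A sketch using 2.2-era ensureIndex syntax; the field names (unconfirmedAt, code) are made up for illustration:]

```javascript
// TTL monitor deletes docs whose 'unconfirmedAt' is older than 24 hours.
// Documents without the field are never touched, so confirmed users are safe.
db.users.ensureIndex({ unconfirmedAt: 1 }, { expireAfterSeconds: 86400 })

// On registration, give the unconfirmed user the TTL field:
db.users.insert({ _id: 'alice@example.com', code: 'abc123',
                  unconfirmedAt: new Date() })

// On confirmation, $unset both fields so the TTL monitor ignores the doc:
db.users.update({ _id: 'alice@example.com', code: 'abc123' },
                { $unset: { code: 1, unconfirmedAt: 1 } })
```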
[14:54:00] <NodeX> personally what I would do if I did things in a temp way is store the user in redis/memcache with an expiring key
[14:54:17] <NodeX> when the user comes to activate pull the data then insert it
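[Editor's note: NodeX's redis approach can be sketched with plain redis-cli commands; the key name, TTL, and payload below are invented, and in an application you would use a driver rather than the CLI:]

```shell
# Store the pending registration under a key that expires in 24 hours (86400 s).
redis-cli SETEX tempuser:alice@example.com 86400 '{"pass":"<hash>","code":"abc123"}'

# On confirmation: fetch the data, insert it into MongoDB's users collection,
# then delete the temp key. If the user never confirms, redis drops it for you.
redis-cli GET tempuser:alice@example.com
redis-cli DEL tempuser:alice@example.com
```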
[14:55:42] <timeturner> and then persist the stuff in redis to ensure that I still have those users if redis has to restart?
[14:55:54] <NodeX> yeh
[14:56:02] <NodeX> redis persists anyway
[14:56:15] <timeturner> well currently I have it to persist nothing
[14:56:20] <timeturner> since it's all session data anyways
[14:56:29] <NodeX> I'm pretty sure you have to turn off persisting, don't you
[14:56:35] <timeturner> yeah
[14:57:01] <timeturner> persisting causes a performance penalty does it not?
[14:57:03] <Gargoyle> ok. I'm going bananas now. is there something wrong with this: db.categories.update({ type: {$ne: 'Feature'}, legacy_IsBusinessType: 1}, {$set:{noautocomplete:1}}, false, true);
[14:57:08] <NodeX> I use redis for a similar thing, I store online users in it with their locations etc
[14:57:09] <timeturner> if I set everything in redis to persist
[14:57:22] <NodeX> timeturner : iirc it does a fsync once a minute
[14:57:27] <NodeX> similar to mongo
[14:57:37] <timeturner> arg
[14:57:46] <timeturner> hmm
[14:57:48] <NodeX> ask in #redis to confirm
[14:58:07] <NodeX> Gargoyle : what error
[14:58:29] <Gargoyle> nope. I just don't get the new property in my 30 results.
[14:58:54] <NodeX> db.categories.find({ type: {$ne: 'Feature'}, legacy_IsBusinessType: 1}).count();
[14:59:25] <Gargoyle> 32
[15:00:25] <Gargoyle> nm
[15:00:33] <Gargoyle> rockmongo is being a dick!
[15:00:52] <Gargoyle> I really think I should give up for the rest of the day!
[15:01:03] <NodeX> lol
[15:01:06] <NodeX> wrong query?
[15:02:29] <Gargoyle> Just needing to refresh the entire page before you see the new field!
[15:02:40] <NodeX> never used it
[15:03:11] <Gargoyle> NodeX: Don't start!
[15:03:32] <NodeX> shell or driver only for me!
[15:04:09] <Gargoyle> NodeX: It's evil. But quite handy for some things - so you keep it around!
[15:06:38] <NodeX> out of interest Gargoyle : how come you used apache with php and mongo
[15:06:58] <NodeX> most people who want speedy apps steer clear of apache
[15:25:31] <cmex> Derick: ok we have reinstalled the mongo and now..
[15:25:37] <cmex> Derick:http://pastebin.com/4uyJmew7
[15:26:05] <cmex> Derick:is it ok ?
[15:27:48] <Derick> reinstalled?
[15:28:07] <Derick> cmex: looks good
[15:28:13] <Derick> it should come out of recovery state in a bit
[15:28:33] <cmex> Derick: it takes time to come out from recovery?
[15:28:49] <Derick> some time, sure
[15:29:30] <cmex> Derick:ok so we'll wait :)
[15:34:01] <Derick> cmex: shouldn't take that long
[15:34:06] <Derick> what's showing up in the log
[15:34:15] <cmex> still recovering
[15:35:32] <Derick> what's in the logfile?
[15:39:25] <cmex> Derick:Wed Sep 05 18:15:36 [rsHealthPoll] replSet member xxx:27017 is now in state SECONDARY
[15:40:16] <cmex> Derick: and in rs.status its still recovering
[15:41:16] <cmex> Derick: last line in log : Wed Sep 05 18:40:40 [rsBackgroundSync] replSet not trying to sync from Master:27017, it is vetoed for 301 more seconds
[15:48:10] <cmex> someone?
[15:48:13] <cmex> derick?
[15:48:25] <Derick> sorry, I got to go now - meeting soon
[15:48:40] <cmex> good luck and thanks Derick
[15:52:43] <NodeX> cmex : you can pay for support with 10gen
[15:52:56] <NodeX> I remember you saying you liked to pay for apps and support
[15:53:27] <cmex> Nodex: thanks i like your answers and help :)
[15:53:45] <NodeX> you can ring them and they can advise you
[15:54:25] <cmex> thanks cap
[15:54:26] <cmex> :)
[15:56:46] <cmex> does someone know why data from primary is not copied to secondary? :( this is the pastebin http://pastebin.com/27KSxLZ9
[15:57:50] <cmex> NodeX: maybe this time try to help? :))
[16:04:48] <NodeX> me help ?
[16:05:03] <NodeX> dude if I could help you I would but I dont know the answer
[16:05:56] <cmex> NodeX:thanks anyway
[16:14:42] <Azoth> can I do manual find operations on a secondary in a replica set?
[16:16:06] <Azoth> for an unknown reason (to me) I can't do show collections on the secondary computer
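[Editor's note: Azoth's symptom is the default behavior, not an error — secondaries reject reads unless the connection opts in. In the 2.2-era shell:]

```javascript
// In a mongo shell connected to the secondary:
rs.slaveOk()            // per-connection; without it, find() and 'show collections' fail
db.mycollection.find()  // reads now allowed on this secondary
```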
[16:27:08] <enw> FWIW - Yesterday I had a bunch of questions related to auth issues querying from a ReplicaSet SECONDARY with no auth. The problem was.... the PRIMARY had auth enabled. No problem now that they're all noauth.
[16:27:18] <enw> Just sharing.
[16:28:59] <Azoth> is auth an option in replica set ?
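[Editor's note: yes, auth works with replica sets, but every member must agree — which is exactly the mismatch that bit enw above. Members authenticate to each other with a shared key file; paths and the set name below are placeholders:]

```shell
# Every member gets an identical key file; --keyFile implies --auth.
openssl rand -base64 64 > /etc/mongodb-keyfile
chmod 600 /etc/mongodb-keyfile
mongod --replSet rs0 --keyFile /etc/mongodb-keyfile --dbpath /data/db
```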
[16:41:31] <juanjosegzl> hello
[16:42:01] <juanjosegzl> I have a question, is it possible to do a batch upsert?
[17:17:26] <zanefactory> any tips for reducing replication lag? for some reason, i had a fail in my 3 node replica set, and the original primary is now a secondary and all hosts are reporting replication lag over 50k secs
[17:54:28] <Dr{Who}> a few times now I have had my secondary servers fall behind because the hardware is not as beefy as the primary, but when this happens so far they have never recovered. They seem to fall behind and then are not able to catch up even if all or most traffic stops on the primary. Any ideas what could cause this?
[18:10:34] <R-66Y> is there a $size for associative arrays?
[18:10:44] <R-66Y> just to return how many keys an associative array has?
[18:11:51] <Dr{Who}> I recived a DR102 https://jira.mongodb.org/browse/SERVER-4890 but I am running 2.2.1-pre seems as if it was fixed but here it is at that point seems like replication stopped
[18:15:19] <Dr{Who}> looks like https://jira.mongodb.org/browse/SERVER-6816 is what happens after the DR102
[18:16:51] <Dr{Who}> hmm maybe fixed yesterday
[19:35:02] <eka> hi all... working with the new aggregation framework... is it possible to have nested arithmetic operations? like { $add: [ 1, { $add: [ '$count', 1 ] } ] }
[19:48:02] <crudson> eka: 1) try it and see what happens! 2) nesting $add doesn't make much sense; it can take an array of items, and any of which can refer to an existing attribute
[19:55:14] <Dr{Who}> E5-2690 e5-2650 or ?? for the best IO performance
[19:55:26] <Dr{Who}> err sry wrong channel
[19:57:04] <eka> crudson: was just an example of nesting... I want to subtract the result from a mod
[19:58:00] <crudson> eka: it'll work :)
[19:58:10] <eka> crudson: thanks
[19:58:37] <crudson> eka: if you get totally stuck ask for help, but look at $project examples and it should click
[19:59:00] <eka> crudson: I was about to use $project :) thanks
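[Editor's note: eka's subtract-from-a-mod case nests cleanly inside $project, as crudson says. The pipeline fragment below uses made-up field names, and the plain-JS lines show the same arithmetic on a sample document:]

```javascript
// Aggregation operators take arrays as operands and nest freely inside $project:
const pipeline = [
  { $project: { leftover: { $subtract: [ { $mod: [ '$count', 10 ] }, 1 ] } } }
];

// The same arithmetic applied to a sample document in plain JS:
const doc = { count: 27 };
const leftover = (doc.count % 10) - 1;
console.log(leftover); // 6
```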
[19:59:55] <jmar777> [ANN] Just launched my new blog built on (and focusing on) Node.js, MongoDB, etc.: http://devsmash.com/ </spam>
[20:11:33] <Bilge> Do sharding and replication solve the same kinds of problem?
[20:11:59] <eka> no
[20:12:34] <eka> Bilge: http://www.mongodb.org/display/DOCS/Sharding+Introduction
[20:14:14] <Bilge> So it's kind of like RAID0 and RAID1?
[20:14:20] <Bilge> One duplicates the data and the other shares it?
[20:14:40] <eka> Bilge: yes... you can use replica behind sharding
[20:15:00] <eka> Bilge: each shard node can be replicated just for security... read that link
[20:16:43] <Bilge> I don't think you mean security
[20:16:47] <Bilge> You mean integrity right?
[20:23:04] <eka> Bilge: right, sorry
[21:48:48] <Gavilan2> Can you put "documents" inside of other "documents"? Or to simulate that I need to put a document id inside of the first document?
[22:00:39] <wereHamster> Gavilan2: google 'mongodb embedded documents'
[22:07:12] <Gavilan2> thx
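[Editor's note: the embedded-documents answer wereHamster points Gavilan2 at, in one small example — all names invented:]

```javascript
// A document can contain sub-documents and arrays of sub-documents directly;
// no separate collection or manual id reference is needed for 1:1 / 1:few data.
const user = {
  _id: 'u1',
  name: 'Gavilan',
  address: { city: 'Madrid', zip: '28001' },  // embedded document
  posts: [ { title: 'Hello', votes: 3 } ]     // array of embedded documents
};
console.log(user.address.city);   // Madrid
console.log(user.posts[0].votes); // 3
```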
[22:10:30] <Bilge> Can a binary field be used as an index?
[22:10:49] <Bilge> I mean, can it be indexed and also used as an id
[22:19:48] <wereHamster> sure
[23:01:00] <fumduq> I'm getting repl lag when I upgrade to 2.2
[23:01:19] <fumduq> syncedTo: Tue Sep 04 2012 21:43:26 GMT+0000 (UTC)
[23:01:20] <fumduq> = 90979 secs ago (25.27hrs)
[23:01:40] <fumduq> idle, just rm'd the files on the secondary and resynced
[23:01:51] <fumduq> doesn't seem to be trying to catch up at all
[23:01:56] <fumduq> just counting up
[23:35:08] <EvanCarroll> this bug tracker is entirely too difficult to use
[23:35:12] <EvanCarroll> [ERROR] [Wed Sep 5 23:32:21 2012] MAKE TEST failed: PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'inc', 'blib/lib', 'blib/arch')" t/*.t t/threads/*.t
[23:35:16] <EvanCarroll> saving floating timezone as UTC at /home/ubuntu/.cpanplus/5.14.2/build/MongoDB-0.45/blib/lib/MongoDB/Collection.pm line 296.
[23:35:20] <EvanCarroll> Argument "zzz" isn't numeric in int at t/bson.t line 241.
[23:35:22] <EvanCarroll> t/bson.t ............ ok
[23:35:25] <EvanCarroll> t/collection.t ...... ok
[23:35:27] <EvanCarroll> That's with the version pushed out today.
[23:35:29] <EvanCarroll> t/connection.t ...... ok
[23:35:32] <EvanCarroll> t/cursor.t .......... ok
[23:35:34] <EvanCarroll> t/database.t ........ ok
[23:36:15] <EvanCarroll> After using this JIRA, I honestly think RT was probably not that shitty.
[23:41:07] <sirpengi> EvanCarroll: never had to administer JIRA, but I've liked the frontend (though I've only used mongo's)
[23:43:17] <EvanCarroll> I've even filed an issue before, and I can't figure out how to file an issue.
[23:43:23] <EvanCarroll> That said, I don't half care.
[23:43:53] <EvanCarroll> It's another proprietary issue/bug tracker. I just don't understand how so many of them miss the mark on usability
[23:44:05] <EvanCarroll> RT was extremely usable, it was just ugly as a horse's ass.
[23:44:16] <EvanCarroll> I think CPAN just had an extremely old version.
[23:45:00] <EvanCarroll> I could never figure out why there was no Markdown in the comments, or some method of indicating you were replying to other people, or why they chose to make replies re-open tickets, or a few other things. But, it was really simple to use.
[23:45:35] <EvanCarroll> I had an issue last week with Mongo where I wanted to file a suggestion on the Ubuntu packaging, it took me like 3 minutes to figure out how it was supposed to be sorted.