#mongodb logs for Friday the 30th of January, 2015

[00:07:57] <VooDooNOFX> sssilver: update(data={"c.bb": 1234}) should update only that key
[04:43:46] <hicker> Is there any way to get a list of items that were changed when updating a document? Not natively, right?
[04:44:37] <cheeser> well, the only things change are what you tell mongo to change...
[04:46:09] <hicker> You're right, I guess it's sort of a dumb question :-P
[04:46:15] <cheeser> :D
[04:46:52] <hicker> I guess I'd have to get the original document first, then update, then compare
[04:50:32] <cheeser> if you're just doing one document, you can use http://docs.mongodb.org/manual/reference/method/db.collection.findAndModify/
[04:52:12] <hicker> Oh, hmm, does that give me the original document before updating?
[04:52:46] <cheeser> http://docs.mongodb.org/manual/reference/method/db.collection.findAndModify/#return-data
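A minimal shell sketch of what cheeser is pointing at: by default findAndModify returns the document as it was before the update, so the original can be compared against the new state afterwards. The collection, field, and _id values below are placeholders, not taken from the discussion.

    // returns the pre-update document unless new: true is passed
    var original = db.items.findAndModify({
        query: { _id: 42 },
        update: { $set: { status: "processed" } },
        new: false   // false is the default: hand back the original document
    });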
[04:54:12] <hicker> Thank you cheeser :-)
[04:54:16] <cheeser> np
[07:18:39] <reese> hi, I have a problem dropping an empty collection: "Dropping collection failed on the following hosts: ...., code: 16338". Could you help me?
[07:19:46] <joannac> there should be an error message
[07:19:51] <joannac> reese: what was the error message?
[07:21:00] <reese> ns not found
[07:21:40] <joannac> sharded cluster?
[07:21:44] <reese> yes
[07:22:50] <joannac> go to that shard and check if the collection is there
[07:27:21] <reese> no that collection is not present on shard locally
[07:28:06] <__dan___> hi
[07:28:55] <reese> joannac, but why is it not present on the shard when the router has information that this collection exists on the shard?
[07:28:57] <__dan___> how can i figure out why my mongo query is so slow?
[07:29:08] <joannac> reese: I dunno, did someone drop it on the shard directly
[07:29:09] <joannac> ?
[07:29:23] <joannac> i can't tell you what happens on your system
[07:29:24] <reese> joannac, impossible
[07:29:33] <joannac> __dan___: .explain() at the end
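For reference, a minimal sketch of what joannac means (collection and filter are placeholders): appending .explain() to a query makes the shell return the query plan, including which index was used and how many documents were scanned, instead of the results.

    db.mycollection.find({ status: "active" }).explain()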
[07:30:18] <joannac> reese: when was the collection created?
[07:30:32] <reese> long time ago
[07:31:03] <reese> joannac, could it be a problem from when we added a new shard or removed a different shard?
[07:31:36] <reese> joannac, ok nevermind but how can i fix it?
[07:31:55] <joannac> fix what? the collection is gone
[07:32:16] <reese> joannac, but i still see this collection from router in collections list
[07:33:21] <joannac> reese: okay. I can tell you how to manually hack it but it's going to risrupt your cluster
[07:33:26] <joannac> *disrupt
[07:33:48] <joannac> do you need the collection name again?
[07:34:05] <joannac> was the collection sharded?
[07:34:58] <reese> ok joannac if it could disturb my cluster i will not do this :)
[07:35:12] <reese> joannac, thanks a lot for your explanation
[07:36:14] <__dan___> so explain() claims that it took 2 millis
[07:36:21] <__dan___> but actually its taking like 10 seconds
[07:36:24] <__dan___> :S
[07:37:01] <joannac> __dan___: repeatably 10 seconds?
[07:37:06] <__dan___> yep every time
[07:37:06] <joannac> what does the log entry say?
[07:41:35] <__dan___> http://pastebin.com/ekG2qRYz
[07:42:17] <joannac> __dan___: um, what does the mongod log say?
[07:42:23] <__dan___> o
[07:43:33] <__dan___> http://pastebin.com/J6vi88de
[07:43:57] <joannac> that's 3 seconds, but close enough :p
[07:44:15] <joannac> pastebin the explain
[07:44:57] <__dan___> http://pastebin.com/TmEreisF
[07:45:44] <joannac> that's not the same explain for the query in your log
[07:45:51] <__dan___> oh
[07:46:00] <__dan___> well i do a count() on the cursor before iterating
[07:46:03] <__dan___> and then do explain()
[07:46:15] <joannac> sigh
[07:46:16] <__dan___> is it only explaining the non-count bit
[07:46:45] <joannac> that makes no sense
[07:46:56] <joannac> you cannot do a count() on the cursor before iterating
[07:47:02] <joannac> what language are you coding in?
[07:47:04] <__dan___> apparently in php u can
[07:47:22] <joannac> get rid of the count
[07:47:29] <joannac> just do the query and the explain()
[07:49:28] <__dan___> http://pastebin.com/f45QV1Ua
[07:49:38] <__dan___> without the count()s it takes only 1 second in browser
[07:49:46] <__dan___> which doesn't really make sense to me
[07:49:59] <__dan___> since on cli count() is very fast
[07:50:19] <joannac> I have no idea what you are doing
[07:50:35] <__dan___> i have a collection 'minmatches'
[07:50:37] <joannac> your query is on the field "ma"
[07:50:50] <joannac> your explain shows it's using the index on T: -1
[07:50:58] <joannac> that makes zero sense
[07:51:07] <__dan___> oh because its sorting
[07:51:08] <__dan___> by t
[07:51:26] <joannac> there's no sort in your query in the logs
[07:51:53] <joannac> get rid of the sort
[07:52:27] <__dan___> uh it's only logging the count
[07:52:33] <__dan___> when i remove it it logs nothing
[07:52:53] <joannac> right, because only long running ops get logged
[07:53:15] <joannac> okay, so only running the count with the query, fast or slow?
[07:53:44] <__dan___> yeah its definitely the count that is slow
[07:53:53] <joannac> it's not the count that is slow!
[07:53:54] <joannac> argh
[07:54:02] <joannac> do you have an index on the "ma" field?
[07:55:40] <__dan___> nope
[07:55:41] <joannac> okay
[07:55:41] <__dan___> thats probably it
[07:55:42] <__dan___> i forgot i had that in the query
[07:55:42] <joannac> YES
[07:55:43] <__dan___> :)
[07:55:45] <__dan___> will adding an index on ma speed up count?
[07:55:52] <__dan___> aha
[07:55:53] <__dan___> indices have counts in them? or it has to iterate over everything still
[07:55:53] <joannac> when you do a query on the field "ma" and there is no index, you have to go through the WHOLE COLLECTION
[07:55:54] <joannac> one by one
[07:55:54] <__dan___> yeah ok, that makes sense
[07:55:56] <joannac> read every document into memory and check the "ma" field
[07:55:57] <joannac> that is SLOW
[07:55:57] <__dan___> right
[07:55:58] <joannac> when you have an index on MA, all you have to do is read the index, find all the entries where ma:true
[07:56:00] <joannac> that is FAST
[07:56:27] <__dan___> so it still has to traverse the whole index though?
[07:56:28] <joannac> no
[07:56:28] <__dan___> i mean that will certainly be faster
[07:56:31] <joannac> it only traverses the section that matches
[07:56:38] <joannac> i.e. all the entries when ma:true
[07:56:45] <__dan___> ok right, probably like 50% of entries
[07:56:46] <joannac> and not the ones where ma:false
[07:56:49] <joannac> right
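A sketch of the fix joannac is describing, using the collection and field names mentioned above: with an index on "ma", a count on that predicate can be answered from the index entries where ma is true instead of reading every document into memory.

    db.minmatches.ensureIndex({ ma: 1 })      // createIndex() in newer shells
    db.minmatches.find({ ma: true }).count()  // now served by the index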
[07:57:24] <joannac> If you haven't already, go do one of the MongoDB University courses
[07:57:33] <joannac> this is like very 101 stuff
[07:58:24] <__dan___> yeah sorry
[07:58:28] <__dan___> i am dumb ;0
[08:01:26] <__dan___> cant believe i spent so long trying to find the issue and i was thinking the query was something else :(
[08:01:38] <joannac> lol
[08:01:40] <joannac> happens to the best of us
[09:10:36] <reese> how can i prepare query like this? db.collection.update( { "reading_value": { $gt: 1000 } }, { $set: { reading_value: reading_value/1000 } } )
[09:10:42] <reese> this is correct? db.InputReading310.update( { reading_value: {$gt:100} }, { $set: { reading_value: {$multiply:[{$divide: [reading_value, 1000]},100]} } }, { multi: true } )
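Nobody answers reese here, but for reference: $divide and $multiply are aggregation operators and are not valid inside a classic update's $set, so the second query above will not work as written. One common workaround in the shell is to iterate a cursor and update each document with the value computed on the client side (collection and field names taken from the question):

    db.InputReading310.find({ reading_value: { $gt: 1000 } }).forEach(function (doc) {
        db.InputReading310.update(
            { _id: doc._id },
            { $set: { reading_value: doc.reading_value / 1000 } }
        );
    });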
[09:16:33] <Sp4rKy> Hi. I'm trying to set failIndexKeyTooLong=false in config file (mongo 2.6)
[09:16:44] <Sp4rKy> but it looks like mongod refuses to start whenever I add it
[09:17:01] <Sp4rKy> I add "setParameter: failIndexKeyTooLong=false"
[09:17:08] <Sp4rKy> to mongod.conf
[09:46:03] <Claus> Hi all
[09:46:45] <Claus> Can anyone hel me
[09:46:51] <Claus> Can anyone help me?
[09:47:05] <Zelest> Doubt anyone can answer that without knowing your problem?
[09:47:20] <Claus> Ok ;)
[09:47:42] <Claus> I want to update subdocs of a document
[09:49:50] <Claus> my document is { "name": "prova", "instance": [{"prova1": "c", "f":"b"}, {"prova1":"d", "f":"n"}]}
[09:50:42] <quattr8> is there any information out already on how easy it will be to migrate from 2.6 to version 3 of mongodb with wiredtiger compression?
[09:51:39] <Claus> i want to update multiple instances (subdocs) at the same time
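Claus never gets an answer in this log; for reference, the positional $ operator only updates the first matching array element, so one hedged workaround at the time was to rewrite the whole "instance" array in a single $set. The collection name and the new values below are placeholders, while the filter matches the document shown above.

    db.coll.update(
        { name: "prova" },
        { $set: { instance: [ { prova1: "c", f: "x" }, { prova1: "d", f: "y" } ] } }
    );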
[09:59:14] <lxsameer> hey guys, is there anything like group_by for mongo?
[10:41:48] <robopuff> Hi guys, I'm new to mongodb (and nosql overall), and I'm trying to create an application that's going to use this db. I've got a collection of items, which contains a pricing embedded document, and I need to filter items based on price, but the currency can change. Currently the pricing embedded document contains a price value and a currency string. I've got a second document with currency exchange rates, but mongo >= 2.4 disabled usage of
[10:41:48] <robopuff> db inside $where - I've also tested functions, but mongo tells me that db is unknown. Any idea how to manage this?
[11:15:09] <joannac> Left_Turn: aggregation framework
[11:15:11] <joannac> oops
[11:15:19] <joannac> lxsameer: aggregation framework
[11:15:30] <lxsameer> jonasliljestrand: thanks
[11:15:59] <jonasliljestrand> i see what u did there ;)
[11:16:11] <jonasliljestrand> Really, no problem! ;)
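A minimal sketch of the aggregation-framework equivalent of a group_by that joannac points to above, with made-up collection and field names: $group collects documents under a key and accumulates values per group.

    db.orders.aggregate([
        { $group: { _id: "$customer_id", total: { $sum: "$amount" }, count: { $sum: 1 } } }
    ])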
[11:16:34] <joannac> Sp4rKy: refuses to start? what's the error?
[11:16:51] <joannac> heh, lol
[11:18:16] <Sp4rKy> joannac: nothing in log
[11:18:44] <Sp4rKy> I finally solved the issue by adding this option to the command line
[11:18:48] <Sp4rKy> (using etc/default)
[11:19:41] <joannac> okay
[11:19:48] <joannac> what were you putting in your config file?
[11:22:22] <joannac> actually, i don't think you can set it in a conf file
[11:22:57] <Sp4rKy> k
[11:22:59] <joannac> I hope you have a good reason for needing it
[11:24:22] <Sp4rKy> well, neither http://docs.mongodb.org/manual/reference/configuration-options/#setParameter nor http://docs.mongodb.org/manual/reference/parameters/ say that some of the setParameters can't be added to config file
[11:24:36] <Sp4rKy> so my first idea was "ok, let's put this config param in config file"
[11:24:48] <joannac> hm
[11:24:50] <Sp4rKy> but I'm fine with /etc/default file as well
[11:30:38] <joannac> Sp4rKy: shrug
[11:30:42] <joannac> i just tried it and it worked
[11:35:27] <Sp4rKy> hmm strange
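For reference, the documented way to set this in a 2.6-style YAML config file nests the parameter under setParameter rather than using the key=value form Sp4rKy quoted; the command-line equivalent is also shown. This is a sketch of the documented syntax, not a copy of Sp4rKy's file.

    # mongod.conf (YAML format, MongoDB 2.6+)
    setParameter:
       failIndexKeyTooLong: false

    # or via the command line / /etc/default wrapper:
    # mongod --setParameter failIndexKeyTooLong=false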
[11:51:23] <djlee> Hi all, wondering if you could help me. I've never really used mongo's aggregation stuff, and to be honest, i usually sit inside an ORM rather than executing raw queries. I want to essentially get a list of user ids from a collection, grouped by "user_id", with the "rating" field summed (then i want to order desc on the sum of rating later, but haven't got to that point yet).
[11:51:28] <djlee> I've got this query so far: http://pastebin.com/CyzgEkfQ
[11:51:52] <djlee> it's returning the correct number of results, but the rating sum is always zero
[11:52:06] <djlee> am i doing something silly in that query?
[12:14:05] <StephenLynx> djlee
[12:14:06] <StephenLynx> you there?
[12:14:27] <StephenLynx> in the _id you can just _id: "$user_id"
[12:14:50] <StephenLynx> no need to set an object for it
[12:15:57] <StephenLynx> and I will have to see your model to understand what the rating is
[12:15:57] <djlee> Thanks StephenLynx. Good to know. just starting to understand the expression stages etc. It's actually quite a nice structure once you get your head out of SQL
[12:16:10] <StephenLynx> yes
[12:16:32] <djlee> StephenLynx: turns out the ORM layer was storing the integers as strings in mongo, hence it was trying to sum strings, not ints
[12:16:49] <StephenLynx> it is very programmer friendly if your language supports json like javascript.
[12:17:01] <StephenLynx> hm, yeah. I had that problem once.
[12:17:05] <StephenLynx> but not because of that
[12:17:15] <StephenLynx> because I was passing strings where I should use ints.
[12:17:28] <StephenLynx> what is ORM?
[12:19:06] <djlee> StephenLynx: I'm using Eloquent, part of Laravel (A PHP framework). However it was my own fault, i presumed since i had set the validator to validate the field as an integer, it would validate the type too, but it doesn't. I just had to cast the integer manually
[12:19:23] <StephenLynx> ugh
[12:19:34] <StephenLynx> I would avoid frameworks and such. but thats just me.
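Putting the two points above together, a sketch of the aggregation djlee describes (collection name assumed): group on user_id, sum the rating, then sort descending on the sum. $sum only adds numeric values, which is why string-typed ratings came back as zero until they were cast to integers.

    db.ratings.aggregate([
        { $group: { _id: "$user_id", totalRating: { $sum: "$rating" } } },
        { $sort: { totalRating: -1 } }
    ])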
[12:33:11] <kelt> hi #mongodb
[12:34:26] <kelt> on the docs it says not to use text search on production systems: http://docs.mongodb.org/v2.4/tutorial/enable-text-search/
[12:35:03] <kelt> but if you're not going to use it on production systems... then why would you ever use it at all? it's not like we are going to use it on development systems only...
[12:37:11] <StephenLynx> that is 2.4 docs. the current version still says that?
[12:37:46] <StephenLynx> no, it doesn't http://docs.mongodb.org/v2.6/core/index-text/
[12:37:54] <StephenLynx> back in 2.4 it was a beta feature.
[12:38:01] <StephenLynx> with heavy performance issues.
[12:38:03] <kelt> StephenLynx: oh... X_x
[12:38:14] <kelt> StephenLynx: my bad... so in 2.6 it is all good then?
[12:38:18] <StephenLynx> seems so.
[12:38:27] <StephenLynx> "Changed in version 2.6: MongoDB enables the text search feature by default. "
[12:39:06] <kelt> StephenLynx: ah, gotcha
[12:39:22] <kelt> StephenLynx: yeah, donno how I got on the 2.4 docs lol, it was a google search and I clicked on it :)
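For reference, a minimal sketch of text search in 2.6+, where the feature is enabled by default and no setParameter is needed (collection and field names are placeholders): create a text index, then query it with $text.

    db.articles.ensureIndex({ body: "text" })
    db.articles.find({ $text: { $search: "mongodb" } })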
[13:55:47] <RoyK> hi all. seems last update of centos7 broke the mongo_* munin plugins, giving me a 'connection refused'. any ideas?
[13:56:09] <StephenLynx> hold on, let me check on my centOS vm
[13:56:21] <StephenLynx> oh wait
[13:56:29] <StephenLynx> munin plugins?
[13:56:34] <RoyK> yeah
[13:56:39] <StephenLynx> nvm, I don't have that.
[13:56:49] <StephenLynx> what are these?
[13:57:01] <RoyK> https://github.com/munin-monitoring/contrib
[13:57:13] <RoyK> that sort of thing - for monitoring all sorts of stuff
[13:57:21] <RoyK> mongodb included
[13:57:34] <StephenLynx> hm
[13:57:52] <StephenLynx> yeah, no idea.
[15:06:40] <robopuff> Hi guys, I've got a problem - I have to filter collection items based on a min/max price that the user enters, but they can enter it in USD while the item's price is set in GBP. Since mongodb has disabled "db" in a where query, what are the possibilities to do such a thing?
[15:23:15] <robopuff> http://stackoverflow.com/questions/28239005/mongodb-calculating-currency-exchange-on-the-fly
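One common way around the $where restriction, sketched here with assumed collection and field names: look up the exchange rate first, convert the user's bounds into the stored currency in the application, and then run an ordinary range query that can use an index on the price field.

    // fetch the GBP->USD rate from a separate rates collection (names assumed)
    var rate = db.rates.findOne({ from: "GBP", to: "USD" }).rate;

    // convert the user's USD bounds into GBP and use a plain range query
    var minUsd = 10, maxUsd = 50;
    db.items.find({
        "pricing.currency": "GBP",
        "pricing.value": { $gte: minUsd / rate, $lte: maxUsd / rate }
    })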
[16:52:20] <roadrunneratwast> does mongo use up more storage than an SQL db?
[16:52:47] <cheeser> that's ... an odd question
[16:53:35] <roadrunneratwast> thanks
[16:53:54] <roadrunneratwast> probably a dumb one
[16:53:56] <roadrunneratwast> skip it
[16:54:09] <neo_44> roadrunneratwast: why did you want to know?
[16:54:10] <StephenLynx> I don't really know, but I would guess it doesn't.
[16:54:30] <StephenLynx> it has much less metadata.
[16:54:32] <roadrunneratwast> well, i guess is optimization
[16:54:37] <roadrunneratwast> emumerations
[16:54:53] <roadrunneratwast> i have an array of DETAILS
[16:55:05] <roadrunneratwast> in SQL, i could do an ENUM or a SET
[16:55:06] <neo_44> roadrunneratwast: there is no reason to compare SQL and NoSql...they are used for different things
[16:55:23] <roadrunneratwast> but in NoSql I am going to have to store an array of strings
[16:55:41] <neo_44> possibly....not positively
[16:55:47] <neo_44> depends on your access pattern
[16:55:54] <roadrunneratwast> explain
[16:56:03] <roadrunneratwast> and also what do you mean they are used for different things
[16:56:04] <roadrunneratwast> ?
[16:56:18] <neo_44> Mongo is a Document storage engine
[16:56:35] <neo_44> so if you want to store the object in your code exactly the same...use mongo
[16:57:06] <neo_44> but you can still break up the object from your code into different collections, databases, etc in mongo
[16:57:10] <StephenLynx> or if your priority on performance is higher than it is on relations and fault-proof
[16:57:12] <neo_44> really depends on the use case
[16:57:29] <neo_44> mongo is fault tolerant...out of the box
[16:57:33] <neo_44> great point
[16:57:42] <roadrunneratwast> ok
[16:57:43] <StephenLynx> does it has transactions?
[16:57:44] <neo_44> it also scales very easy
[16:57:47] <neo_44> no
[16:57:50] <StephenLynx> see
[16:57:54] <neo_44> mongo no transactions...
[16:58:03] <neo_44> so if you need them use a SQL database
[16:58:06] <roadrunneratwast> it's a simple crud application
[16:58:08] <StephenLynx> as I said
[16:58:16] <neo_44> mongo isn't great at aggregations...though it is getting better
[16:58:22] <roadrunneratwast> oh
[16:58:29] <StephenLynx> mongo is less concerned with faults.
[16:58:42] <StephenLynx> it is more concerned with performance
[16:58:46] <roadrunneratwast> ok
[16:58:53] <neo_44> roadrunneratwast: i use a data access layer in every application... so my application is talking to mysql, mongo, elastic search
[16:58:57] <neo_44> and any other engine i need
[16:59:02] <cheeser> StephenLynx: i don't think that's true at all.
[16:59:14] <StephenLynx> it doesn't even have transactions, cheeser.
[16:59:22] <neo_44> mongo isn't concerned with ANY thing...it allows the application to worry
[16:59:24] <cheeser> that's a different question.
[16:59:28] <roadrunneratwast> ok
[16:59:44] <StephenLynx> they obviously didn't design it with "no data can be lost" in mind
[16:59:45] <neo_44> mongo is powerful...but with that power comes complexity in the application layer
[16:59:55] <cheeser> StephenLynx: it's strongly consistent
[17:00:18] <neo_44> StephenLynx: that is my point...the application layer should have a retry for data loss
[17:00:45] <neo_44> mongo is only concerned with the data that makes it from the application to the database
[17:00:45] <neo_44> after mongo has it....it isn't going anywhere
[17:01:12] <neo_44> but Mongo is not the only database you should use in any application
[17:01:20] <StephenLynx> ok, neo, so you said the application should be responsible for it.
[17:01:31] <neo_44> with mongo , yes
[17:01:32] <StephenLynx> so are you saying that mongo is not concerned with it?
[17:01:42] <cheeser> what is "it?"
[17:01:43] <neo_44> it isn't concerned with data it hasn't received
[17:02:03] <StephenLynx> transactions are not just about that.
[17:02:19] <neo_44> sure...they affect race conditions
[17:02:19] <StephenLynx> what if you need to rollback an operation because another operation error'd?
[17:02:38] <cheeser> that's an application concern with mongo. typically.
[17:02:45] <neo_44> then you should make you application smart enough to handle it
[17:02:49] <neo_44> your*
[17:02:49] <StephenLynx> so mongo is not concerned with it?
[17:02:53] <neo_44> no
[17:03:02] <StephenLynx> so you are conceding the point I made?
[17:03:04] <cheeser> the quick answer is to use embedded docs and a single update though that isn't always possible or advisable
[17:03:16] <cheeser> StephenLynx: when you say "it," what do you mean?
[17:03:21] <neo_44> StephenLynx: with mongo you shouldn't have those dependencies
[17:03:33] <StephenLynx> but that is the whole point I made.
[17:03:42] <StephenLynx> mongo is not concerned with data integrity.
[17:03:47] <neo_44> yes it is
[17:03:48] <cheeser> yes, it is.
[17:03:54] <StephenLynx> it doesn't have transactions.
[17:03:57] <neo_44> doesn't need them
[17:03:59] <cheeser> it just doesn't support multidocument transactions.
[17:04:02] <neo_44> it is not relational
[17:04:09] <cheeser> those are two vastly different things.
[17:04:13] <StephenLynx> it doesn't need because it is not designed to need it.
[17:04:20] <StephenLynx> because it is not concerned with it.
[17:04:27] <StephenLynx> thats what I said.
[17:04:36] <cheeser> if by "it" you mean relational integrity, yes.
[17:04:49] <cheeser> but you still refuse to clarify "it."
[17:05:02] <StephenLynx> data integrity.
[17:05:19] <StephenLynx> transactional integrity.
[17:05:19] <cheeser> mongo cares about data integrity
[17:05:26] <neo_44> 100%
[17:05:28] <cheeser> it doesn't support transactions.
[17:05:36] <cheeser> which has been said a thousand times now.
[17:05:37] <neo_44> doesn't need transactions
[17:05:43] <cheeser> you're conflating two concerns
[17:05:48] <cheeser> at least two
[17:05:50] <neo_44> lol
[17:09:13] <Jorge_> Hello
[17:09:20] <roadrunneratwast> ola
[17:12:07] <neo_44> roadrunneratwast: what would you like to do? I can give you an example in mongo...that is optimized
[17:12:20] <roadrunneratwast> neo_44
[17:12:29] <roadrunneratwast> neo_44: you are a prince among men
[17:12:45] <roadrunneratwast> i am still hacking and will post later. i am a n00b to mongo
[17:13:08] <roadrunneratwast> thanks for being a pal
[18:45:19] <tbo_> does anyone know if it's possible to populate mongoose references on save? rather than when they're queried?
[19:56:19] <roadrunneratwast> Hi I am a mongo noob. Can folks give me feedback whether this: http://plnkr.co/edit/ANr7Fy53yTHrE8624lgt?p=catalogue seems like a reasonable backend model for this wizard: https://guttersnipe.herokuapp.com/#/resources/wizard/start
[19:57:38] <roadrunneratwast> Should I worry about modeling the TAXONS (type is "food", "medical", "housing") as strings? It seems wasteful? Or am I just worrying too much about nothing?
[20:06:15] <_newb> i'm saving a string with non-standard characters to mongodb and it's storing just fine: “A Leslie Knope In A World Full Of Liz Lemons” recommended by Medium Staff
[20:06:43] <_newb> but when i try to construct an email with this string in the message, the characters are transformed into goofy ones ... any suggestions?
[20:08:25] <roadrunneratwast> Are you sure about character encoding?
[20:09:04] <_newb> roadrunneratwast: no, what do you suggest? please
[20:09:07] <roadrunneratwast> That would be my guess. You might need to encode or decode the string to the appropriate format
[20:09:17] <roadrunneratwast> I am even more of a noob than you are
[20:09:52] <_newb> roadrunneratwast: even if it stores (and looks GREAT) in mongodb?
[20:10:15] <roadrunneratwast> Well. It could be base64 encoding and you might need a different encoding
[20:11:05] <roadrunneratwast> I ran into this problem in a different context and just had to run a simple decoding function
[20:12:18] <_newb> roadrunneratwast: the original content looks like this: Subject: =?UTF-8?Q?=E2=80=9CA_Leslie_Knope_In_A_World_Full_Of_Li?= =?UTF-8?Q?z_Lemons=E2=80=9D_recommended_by_Medium_Staff?=
[20:12:48] <roadrunneratwast> and then what happens ?
[20:15:36] <_newb> roadrunneratwast: mongodb stores it the right way, but when i insert it into a message it gets all weird
[20:15:40] <roadrunneratwast> most likely the string was mangled somewhere along the way. either you can try to fix the way it was transmitted or try to reencode it.
[20:16:02] <_newb> roadrunneratwast: okay, i guess i was pointed in that direction, just gotta figure that out
[20:16:14] <roadrunneratwast> does the message support UTF 8 encoding?
[20:16:27] <roadrunneratwast> maybe you need to convert it to ASCII.
[20:16:38] <roadrunneratwast> Again, these are just guesses on my part.
[20:19:38] <iksik> hello
[20:20:58] <iksik> is it possible to maintain unique index across all subdocuments?
[20:20:59] <_newb> roadrunneratwast: it's phpmailer
[20:21:30] <roadrunneratwast> what does the string look like now?
[20:22:29] <roadrunneratwast> I know I was crawling an gmail inbox and I had to deal with base64 encoding
[20:24:19] <roadrunneratwast> https://www.base64decode.org/
[20:36:48] <_newb> roadrunneratwast: that stackoverflow i pasted earlier seems like it's the way to go
[20:36:58] <_newb> roadrunneratwast: thank you for digging thru this with me
[20:37:05] <roadrunneratwast> okey doke
[20:37:06] <roadrunneratwast> good luck
[20:37:33] <_newb> roadrunneratwast: ;) thank you. happy friday.
[20:37:40] <roadrunneratwast> u 2
[21:28:00] <GothAlice> joannac: I just ran an experiment and told Exocortex to give me the oldest things ever stored. "Blowing up the Whale.AVI" from Jan 29, 1999 is the oldest (originating from when I *first* started coding up the project) and http://cl.ly/2A1h2K0P1g1H (I believe from AdCritic) is the third oldest thing circa May 2000. My dataset can legally vote in some places. :D
[21:33:52] <cheeser> oh, adcritic. i loved that site.
[21:34:44] <GothAlice> Yeah. T_T Sad victim of copyright.
[21:35:15] <cheeser> sadly, now it's all on youtube.
[21:36:57] <GothAlice> T'was a battle that couldn't be won, but there were still casualties along the way. :/
[22:00:09] <kexmex> how is Munin compared to Graphite+Diamond?
[23:38:33] <zzzzz> I'm having trouble trying to install the latest version of mongodb on Debian Jessie, it fails with http://pastebin.com/kmyunjEc
[23:39:58] <zzzzz> dpkg --configure -a doesn't fix it, nor does the latest version