PMXBOT Log file Viewer


#mongodb logs for Friday the 17th of April, 2015

[01:11:39] <macwinner> hi, any pointers on good example of how to organize mongoose model code where some of the models have subdocuments?
[09:27:27] <macwinner> does wiredtiger compress gridfs collections?
[09:32:57] <nixstt> I keep getting session.commit_transaction: memory allocation: Cannot allocate memory on a 4gb ram vps, I set wiredtiger to only use 2GB ram, nothing else runs on this server
[09:34:04] <nixstt> Even if I set it to only use 1GB (which seems to be the min.) it still happens
[10:11:29] <pamp> hi
[10:12:41] <pamp> I'm getting low performance in a cluster on Azure. Should I move the cluster to AWS?
[10:12:53] <fontanon> Hi everybody ... is there a way to keep my unique keys in a collection when sharding it? Mongo complains because in order to keep unique keys that keys must figure in the shard key.
[10:13:19] <pamp> will performance improve in the AWS cloud?
[10:30:44] <joannac> pamp: um, no. figure out why the performance is bad.
[10:31:01] <joannac> i don't think it'll be a azure vs aws problem
[10:31:26] <joannac> fontanon: http://docs.mongodb.org/manual/tutorial/enforce-unique-keys-for-sharded-collections/
[10:31:51] <fontanon> joannac, let me have a look, thnks
[10:33:02] <fontanon> joannac, I like the option 3
[10:34:32] <kmtsun> hello everyone
[10:35:16] <fontanon> joannac, how can I guarantee those unique identifiers? My unique keys are generated with chance.guid http://chancejs.com/#guid
[10:53:58] <arussel> I have a replica set with M1 (primary), M2 (secondary) and M3 (arbiter). Can I assume that if my application connects exclusively to M1, then it will work as expected
[10:54:12] <arussel> but if it connects exclusively to M2 it will fail
[11:14:05] <cheeser> arussel: if you connect directly to M2, the driver will find its primary and connect to that one as well.
[12:21:23] <pamp> .
[12:22:23] <arussel> cheeser: thanks
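The discovery behaviour cheeser describes is also why drivers are normally given a seed list of several members rather than a single host; the hostnames and replica set name here are illustrative:

```
mongodb://M1:27017,M2:27017/?replicaSet=rs0
```

With `replicaSet` specified, the driver treats the listed hosts only as seeds and routes writes to whichever member is currently primary.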
[13:15:34] <nixstt> My mongodb replicaset crashes right after the startup status with out of memory in the logs
[13:15:54] <nixstt> it's a 4gb vps with wiredtiger 3.0.2 and cache size set to 1G
[13:20:38] <arussel> what is wrong with: 'journal.enabled=false' in 2.6 ?
[13:20:51] <arussel> I get: Error parsing INI config file: unknown option journal.enabled
[13:26:51] <cheeser> in the yaml version?
[13:26:56] <cheeser> http://docs.mongodb.org/v2.6/reference/configuration-options/
[13:27:14] <cheeser> perhaps this is what you want: http://docs.mongodb.org/v2.6/reference/configuration-options/#storage.journal.enabled
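The YAML form cheeser links looks like this (a minimal sketch; the old INI-style format instead used the `nojournal = true` option, which is why `journal.enabled` is rejected there):

```yaml
storage:
  journal:
    enabled: false
```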
[13:28:31] <fatmcgav> afternoon all... I'm trying to set-up MongoDB using Puppet, however I think I've found an issue with the Mongo-Shell which is making life difficult...
[13:29:26] <fatmcgav> i'm trying to use 'db.runCommand' to createUser, but am getting an 'auth failed' response... however the exit code is 0
[13:29:48] <cheeser> http://docs.mongodb.org/manual/reference/method/db.createUser/
[13:31:12] <nixstt> cheeser: how much memory would I need for wiredtiger in 3.0.2? I always ran mongodb on 512mb instances with no problem
[13:31:35] <nixstt> now when I try to add a replicaset member (4gb ram) it fails after startup2
[13:33:52] <fatmcgav> @cheeser: I've had a read of the db.createUser doc already, and I know the create is correct... I've got some permission issue that I'm trying to track down...
[13:34:09] <fatmcgav> however the issue at hand is around what appears to be an incorrect exit-code from mongo shell...
[13:34:12] <fatmcgav> see: https://gist.github.com/fatmcgav/e21f1a32399a782152ba
[13:37:08] <fatmcgav> i'd expect a non-zero exit code on the last command...
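fatmcgav's observation matches how the shell normally behaves: its exit code reflects script errors, not command results, so a script has to inspect the result document and call `quit()` itself. A minimal sketch in plain JavaScript (the `exitCodeFor` helper is hypothetical):

```javascript
// Hypothetical helper: map a runCommand result document to an exit code.
// The mongo shell exits 0 unless the script throws or calls quit(), so a
// failed command must be detected and turned into quit(1) by hand.
function exitCodeFor(res) {
  return res && res.ok === 1 ? 0 : 1;
}

// In a shell script one would end with, roughly:
//   var res = db.runCommand({ createUser: "app", pwd: "...", roles: [ "readWrite" ] });
//   quit(exitCodeFor(res));

console.log(exitCodeFor({ ok: 1 }));                        // 0
console.log(exitCodeFor({ ok: 0, errmsg: "auth failed" })); // 1
```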
[13:43:08] <[diecast]> hey everyone, i'm new to mongodb and have a question about how we're currently setting up our database
[13:43:27] <[diecast]> the collections are deleted, then new ones are created and documents are inserted
[13:44:05] <[diecast]> it occurred to me that this might not be the best approach but wanted to ask if that is a common practice?
[13:45:49] <StephenLynx> why are the collections deleted?
[13:46:10] <StephenLynx> what are your requirements that lead to this practice?
[13:49:09] <[diecast]> from what I understand the collections are deleted for simplicity on the mongo management scripts
[13:49:35] <[diecast]> so that the persons who set it up only needed to have lists of db/collections/documents and run shell scripts to iterate over them
[13:50:05] <[diecast]> so all applications are stopped first, then the database is basically re-installed
[13:50:15] <StephenLynx> what
[13:50:21] <StephenLynx> why
[13:50:25] <[diecast]> yes, very bad
[13:50:38] <StephenLynx> yeah, whoever designed that was on crack.
[13:50:42] <StephenLynx> that is not a common practice.
[13:51:23] <StephenLynx> you don't use a db as a temporary file.
[13:51:56] <[diecast]> i've created some ansible tasks that instead look to see if the collection exists and then will inspect the documents to see if they are different from what the new document contains
[13:52:27] <[diecast]> if the collection doesnt exist then it will be created, does that sound right?
[13:53:41] <StephenLynx> no need for that.
[13:53:57] <StephenLynx> if you try to interact with a non-existent collection, mongo will create it.
[13:54:41] <StephenLynx> and if you want document consistency, it is better to just check it manually for a given pattern.
[13:54:56] <StephenLynx> instead of comparing with existing data.
[13:55:24] <[diecast]> that's cool about the collection creation.
[13:55:51] <[diecast]> i'm not sure what manually checking for a pattern would look like
[13:56:12] <[diecast]> i would like this to be programmatic
[13:57:04] <StephenLynx> of course it would be programmatic.
[13:57:19] <StephenLynx> by manually I mean "write your own code".
[13:57:32] <[diecast]> oh, ok ;)
[13:58:01] <[diecast]> what method do I use in mongo to do this kind of check?
[13:58:42] <StephenLynx> afaik, there isn't.
[13:58:49] <StephenLynx> mongo doesn't care about document consistency.
[13:58:58] <StephenLynx> thats where you write your own code that checks for it.
[13:59:00] <[diecast]> ok, understood
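The "write your own code" StephenLynx means could be as small as a shape check run before trusting a document; a minimal sketch (the pattern format here is made up for illustration):

```javascript
// Minimal document-consistency check: verify each expected field exists
// and has the expected type before relying on the document.
function matchesPattern(doc, pattern) {
  return Object.keys(pattern).every(function (field) {
    return typeof doc[field] === pattern[field];
  });
}

var pattern = { name: "string", count: "number" };
console.log(matchesPattern({ name: "widgets", count: 3 }, pattern)); // true
console.log(matchesPattern({ name: "widgets" }, pattern));           // false
```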
[14:00:49] <carif> ping
[14:01:03] <carif> i recently upgraded the ubuntu 'mongodb-org' meta package from 2.8 to 3.0.2 using mongodb's package repo 'http://repo.mongodb.org/apt/ubuntu/dists/trusty/'. As I understand it, the mongo configuration file format is now yaml, but my /etc/mongod.conf is still in the older ini format.
[14:01:43] <deathanchor> hmm... mongo cfg servers don't need much resources right? What's usually the first bottleneck for a cfg?
[14:01:53] <carif> Does 3.0.2 use the yaml format?
[14:02:16] <nixstt> carif: both formats worked for me but I just changed to yaml
[14:12:21] <carif> nixstt, by "change to yaml" do you mean you transcribed the contents of /etc/mongod.conf into a new mongod.conf that was in yaml format? In other words, you did it yourself?
[14:12:38] <nixstt> yes
[14:12:49] <nixstt> http://dba.stackexchange.com/questions/82591/sample-yaml-configuration-files-for-mongodb
[14:14:05] <carif> vg, ty, good pointer; I'm still hoping to confirm if the mongodb guys did it for me
[14:19:24] <carif> i just broke apart the .deb, it arrives in the old format
[14:20:45] <nixstt> I was wondering as well; it didn’t ask me to replace the config file when I upgraded
[14:21:04] <nixstt> I replaced it myself. I like the yaml format; it wasn’t that hard to do
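For anyone doing the same transcription, a minimal 3.0-style YAML mongod.conf might look like this (paths are illustrative; the cache size matches the 1 GB discussed earlier):

```yaml
storage:
  dbPath: /var/lib/mongodb
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
```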
[15:16:03] <naiquevin> Hi, I am facing a problem where on dropping a database, the result says that it's dropped but the db is still there although it's shown as (empty). Mongodb version 2.4.12. Any ideas?
[15:19:49] <cheeser> sharded or no?
[15:24:56] <naiquevin> not sharded. To provide more context, dropDatabase is issued in the "teardown" of a test suite.
[15:39:09] <cheeser> naiquevin: are you issuing any commands against that database after? what are you doing to see that it's still there?
[16:23:10] <dberry> what is the correct order for upgrading from 2.4 to 2.6 in a sharded replica set?
[16:23:59] <dberry> can I upgrade the secondaries first or do I need to go mongos, configsvrs, then mongod
[17:14:00] <gothos> Hello, anyone here familiar with casbah? I'm trying to extract data from an array that has some object in it, it looks like this: "perfData" : [ { "pi" : "3.1" , "minute" : "3"}, ... ]
[17:14:31] <gothos> I'm getting a MongoDBList but can't find a way to get the object within, any recommendations? :)
[17:39:02] <itisit> does anyone have a good example of puppet modules for deploying mongo that let multiple instances be added to a replica set?
[17:50:47] <wavded> I have three mongo 2.6.x boxes that we are going to retire and use three new 3.0.x boxes. Do I need to upgrade the old ones to 3.x or can I replicate and make the new 3.x boxes the PRIMARY and then shut down the others?
[17:53:17] <cheeser> you can just replicate
[17:53:53] <wavded> cheeser: thx much!
[17:54:01] <cheeser> np
[17:59:43] <deathanchor> I have to do that from 2.2 to 3.0 for a sharded cluster :( first to 2.4, then 2.6, then 3.0
[18:00:19] <cheeser> mms automation makes that painless. of course, i don't think 2.2 is supported in automation...
[18:00:26] <deathanchor> it isn't
[18:01:08] <deathanchor> eh, it's a rolling upgrade that will take a while
[18:06:50] <cheeser> if you can get to 2.4 on your own, mms can take you the rest of the way.
[18:49:36] <revoohc> anyone have their mongodb.conf file renamed to mongod.conf during 2.4->2.6 upgrade?
[19:00:10] <int3nsity> hi guys =), got a question. I'm doing the model for a db. Employers give jobs. I don't know whether to nest them in the same collection or make another collection for Jobs, with an id pointing to the employer
[19:03:05] <cheeser> jobs should probably go in their own collection
[19:05:04] <int3nsity> i've been told that the denormalization option reads faster
[19:05:14] <int3nsity> how do I know when normalization or denormalization is better?
[19:05:36] <StephenLynx> that depends on how you will handle the query.
[19:05:50] <StephenLynx> the less requests you have to do to the db, the faster, usually.
[19:06:00] <int3nsity> it's better practice to normalize?
[19:06:02] <StephenLynx> no
[19:06:07] <StephenLynx> much on the contrary.
[19:06:10] <cheeser> but you also have to consider lifecycles and sizes of your data.
[19:06:25] <StephenLynx> you normalize on the exceptions.
[19:06:35] <StephenLynx> for example:
[19:06:47] <StephenLynx> I had forums and threads and posts.
[19:06:47] <cheeser> a document can only grow to 16MB so you want to make sure you won't outgrow that if you embed documents.
[19:07:00] <cheeser> there are also querying and usage to consider.
[19:07:17] <StephenLynx> because of the limit and querying sub-arrays has its own limitations, I created separate collections for threads and posts.
[19:07:32] <StephenLynx> but forums also have a list of mods, settings and other things.
[19:08:04] <StephenLynx> since for these I don't need to read just a few, and I don't expect to have more than 10 or so on each
[19:08:10] <StephenLynx> I kept these as sub-arrays on forums.
[19:08:40] <GothAlice> StephenLynx: The only limit you'll encounter on those sub-arrays is when there are more than 3 million words of replies. (That's a lot of words.) Embedding replies within the thread is one of the most interesting cases of denormalization and optimization for data locality, I feel.
[19:08:55] <StephenLynx> that was just one of my reasons.
[19:09:00] <GothAlice> (And when you hit that limit, it's pretty trivial to add a special type of reply to the end that links it to a new thread.)
[19:09:11] <StephenLynx> I usually want to just read some of the data
[19:09:19] <StephenLynx> and with threads I want to sort them too.
[19:09:38] <StephenLynx> doing that with sub-documents seemed to be worse than with separate collections.
[19:09:57] <GothAlice> Except suddenly you have way more queries to issue in order to render any given page.
[19:10:03] <StephenLynx> no.
[19:10:11] <StephenLynx> no more queries.
[19:10:37] <StephenLynx> I worked with that in mind so I wouldn't have to perform any additional queries. I didn't take that decision lightly.
[19:11:09] <StephenLynx> it was after some deliberation that I refactored it to use separate collections instead of sub-arrays.
[19:11:55] <StephenLynx> keeping in mind that I didn't use an ODM nor mongo's field references, so there are no hidden queries.
[19:12:01] <GothAlice> User requests page three of the replies to thread X in forum Y. To render the page you need to: load the forum, load the thread, load the paginated replies. If you've denormalized the details that will be rendered for any given thread into the replies you've duplicated a fair amount of data, and must issue additional updates to keep those "caches" updated. If you didn't, well, three queries.
[19:12:10] <StephenLynx> nope.
[19:12:22] <StephenLynx> this is what I do:
[19:12:47] <StephenLynx> I take the forum, the thread and the page
[19:13:03] <StephenLynx> my posts duplicate both of the forum and thread they belong to.
[19:13:21] <StephenLynx> so I don't have to load the forum, I look directly for the posts I am looking for.
[19:13:35] <StephenLynx> one query.
[19:14:04] <StephenLynx> and since the forum and thread identification never change, I don't have to update anything to keep track of that.
[19:14:20] <StephenLynx> want to see my code where I list posts?
[19:14:59] <StephenLynx> ah, theres another detail you are forgetting.
[19:15:18] <StephenLynx> my back-end is purely MVC. It just outputs json.
[19:15:45] <StephenLynx> I will never output both the forum along with the posts.
[19:17:05] <StephenLynx> Ah, indeed I do a second query for the thread data. Because when the user refreshes it, thread data like time of last post changes.
[19:18:30] <StephenLynx> but using unwind to handle the thread posts was not possible because a thread can have zero posts.
[19:21:13] <StephenLynx> so anyway I would need to make a second query to get the thread data, since I don't have a guarantee it will have posts and will output its own data.
[19:22:12] <StephenLynx> so yeah, while using fake relations for everything is suicidal with mongo, there are pretty valid cases for it.
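The single-query scheme StephenLynx describes can be sketched like this; the collection and field names are guesses for illustration, not his actual schema:

```javascript
// Posts carry denormalized forum and thread identifiers, so one indexed
// query fetches a page of posts without loading the forum document first.
var page = 3;
var pageSize = 20;
var query = { forum: "tech", thread: 42 };
var skipped = (page - 1) * pageSize;

// In the mongo shell, roughly:
//   db.posts.find(query).sort({ creation: 1 }).skip(skipped).limit(pageSize)

console.log(skipped); // 40
```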
[19:44:06] <int3nsity> it's better to get to know mongo or go for a DB solution like dynamoDB?
[19:44:12] <int3nsity> pros/cons?
[19:44:48] <cheeser> you should learn mongodb
[19:45:39] <int3nsity> yeah i get that, but what are the pros/cons of db solutions like dynamoDB
[19:47:11] <StephenLynx> what is dynamoDB?
[19:47:38] <StephenLynx> and maybe you can get more informations about dynamoDB on a place that is dedicated to dynamoDB.
[19:48:24] <StephenLynx> ah, its from amazon.
[19:48:32] <StephenLynx> I wouldn't touch it with a mile long pole.
[19:48:39] <DeliriumTremens> you could ask in ##aws
[19:49:23] <StephenLynx> I bet it's even proprietary.
[19:51:18] <StephenLynx> omg, you are not even able to use it outside amazon's services.
[19:51:23] <StephenLynx> what a pile of junk.
[19:51:52] <StephenLynx> yeah, int3nsity, if you don't want to be amazon's bitch, don't use it.
[19:52:45] <StephenLynx> as a rule of thumb, if you depend on a certain company's service to use a certain product, you should not use the product. It is designed just to keep you hooked on the company's service.
[19:53:15] <StephenLynx> and then obviously the product is proprietary.
[19:53:25] <StephenLynx> so they could literally do anything to it and you wouldn't even know.
[19:56:20] <StephenLynx> not to mention it will never be as good as software that is created by the community's contribution.
[20:31:01] <pjammer> can you control resident memory
[20:31:47] <sadmac> pjammer: like... with his mind? XD
[20:32:08] <sadmac> s/his/your/
[20:32:31] <pjammer> sortof. More like is it just a calculation, or is there a setting that says "grab it all son!"
[20:32:47] <sadmac> don't know then
[20:33:02] <pjammer> maybe i'm too far down the rabbit hole for what i need to do.
[20:38:52] <PedroDiogo> what's your guys' input on Azure? I'm eligible for Microsoft's BizSpark program, so I'm thinking of using one of their VPS to deploy MongoDB using MMS
[20:49:47] <StephenLynx> my forumhub is up on google play :3 https://play.google.com/store/apps/details?id=com.cheshire.lynxhub
[20:49:56] <StephenLynx> if you guys want to check it out, I use mongo for it.
[21:32:16] <krisfremen> PedroDiogo: more problems than it's worth... at least in my experience
[21:32:37] <krisfremen> PedroDiogo: AWS is much more stable and straightforward
[21:38:57] <PedroDiogo> what kind of problems? did you use it with MMS ?
[21:39:09] <PedroDiogo> tks krisfremen
[21:47:20] <krisfremen> PedroDiogo: not with mongo, with couchdb and mysql, it doesn't really deliver on performance. this was about 6-7 months back. stuff could've changed by now though. cloud is moving faster than I can keep up
[21:47:51] <krisfremen> performance was subpar compared to gce and ec2
[21:49:00] <krisfremen> ram and cpu were the issue. for disk the difference wasn't as great
[21:49:11] <PedroDiogo> hm, ok! tks for the input! ;)
[21:49:25] <PedroDiogo> i think it is actually cheap when it comes to GB/$
[21:49:47] <krisfremen> it was the cheapest at the time, iirc
[21:49:51] <krisfremen> it could still be
[21:49:57] <krisfremen> and with bizspark, it might be even cheaper
[21:51:10] <PedroDiogo> yeah, i'm still waiting to know what discount I'll have, but the regular price is not that expensive https://azurevps.com
[21:51:21] <PedroDiogo> will probably use it with for mongo first
[21:52:14] <PedroDiogo> god damn weather..
[21:57:40] <Streemo> Is there a way to query a denormalized collection and have only the desired subdocuments come back?
[21:59:00] <Streemo> example: {field: [{don't want},{want},{want},{don't want},{don't want}]}. Do a query so that i get back {field: [{want},{want}]}. like projecting out the ones that don't match my query
[21:59:43] <Boomtime> http://docs.mongodb.org/manual/tutorial/project-fields-from-query-results/
[22:01:42] <Streemo> I don't think a basic projection is what i am talking about
[22:02:20] <Streemo> I'm thinking more of a projection of array elements, kind of like elemMatch but allowing all array elements which match my query, not just the first one
[22:05:03] <Boomtime> Streemo: you can't select "all" matching array elements at this time, i think there is a server feature request open for that
[22:05:21] <Streemo> ok
[22:05:29] <Streemo> would you recommend using aggregation as a workaround?
[22:05:42] <Boomtime> your choices are basically to select only the first element that matched, or the whole array
[22:05:57] <Boomtime> yes, aggregation can do exactly what you want
[22:06:47] <Boomtime> i recommend using two $match stages, use the first one to filter to just those documents which contain a match, then $unwind and $match again
[22:07:08] <Boomtime> the reason is because only the first $match in this case can use an index
[22:07:59] <Streemo> ok thanks, I'll try it out
[22:09:52] <Streemo> you know what, I might as well normalize the data, since I'm gonna have to make two queries anyways
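For reference, Boomtime's two-$match recipe can be sketched with illustrative names (an `items` array whose elements are matched on a `want` field):

```javascript
// Stage 1 filters whole documents and can use an index on "items.want";
// $unwind emits one document per array element; stage 3 then keeps only
// the matching elements -- all of them, not just the first.
var pipeline = [
  { $match: { "items.want": true } },
  { $unwind: "$items" },
  { $match: { "items.want": true } }
];

// In the mongo shell, roughly:
//   db.coll.aggregate(pipeline)
```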
[22:41:57] <harttho> When dropping a database, what causes it to show up as (empty) instead of gone entirely?
[22:42:03] <harttho> using db.dropDatabase()
[22:50:09] <Boomtime> harttho: you're using the mongo shell?
[22:50:28] <Boomtime> try restarting that shell, you may be seeing a local cache ghost
[23:13:27] <fullstack> ghost
[23:14:36] <Gevox> Hello, im executing the following command in my mongo shell and it does not return anything. It just hangs there
[23:14:37] <Gevox> http://i.imgdady.com/bFY1TH.jpg
[23:14:41] <Gevox> Can someone explain to me what is wrong?
[23:16:16] <Boomtime> Gevox: are you referring to the last .find() issued?
[23:16:27] <Boomtime> i see no hang, it returned immediately with no results
[23:16:43] <Gevox> <Boomtime>: Yes, oh is that a return with no result?
[23:16:54] <Boomtime> .find() returns a cursor, if you don't store it anywhere then the shell will try to print the results
[23:17:06] <Boomtime> (or at least the first few results)
[23:17:12] <cheeser> 20
[23:17:57] <harttho> Boomtime: Twas a caching issue, thanks
[23:18:49] <Gevox> Boomtime: Can you please look at the result of the db.Events.find() - do you see? the record i'm looking for using db.Events.find({title: "Title #1"}) exists, though you said it does not return anything
[23:19:28] <Boomtime> how do you know the record exists?
[23:19:41] <Gevox> because its there at the list of results i got from the .find()
[23:19:47] <Gevox> it showed me all the records in the db already
[23:19:56] <Boomtime> show me that output
[23:19:58] <Gevox> oh you dont see the title, sorry
[23:20:22] <Gevox> Boomtime: http://i.imgdady.com/ciU26l.jpg
[23:21:58] <Boomtime> check your title field values versus what you are looking for
[23:22:09] <Boomtime> "Test #1" != "Title #1"
[23:23:31] <Gevox> db.Events.find({"title" : Title #1})
[23:23:31] <Gevox> 2015-04-18T01:22:28.447+0200 E QUERY SyntaxError: Unexpected token ILLEGAL
[23:23:48] <Gevox> i think it's because of the "#", that's why i put it in between quotes
[23:24:24] <Boomtime> Gevox: your documents have the value "Test #1"
[23:24:45] <Boomtime> a search for the value "Title #1" quite correctly does not match
[23:25:03] <Gevox> Boomtime: You have the right to kill me, sorry.
[23:25:05] <Gevox> but yet, > db.Events.find({"title" : Test #1})
[23:25:06] <Gevox> 2015-04-18T01:23:49.222+0200 E QUERY SyntaxError: Unexpected token ILLEGAL
[23:25:21] <Boomtime> yeah, quotes is correct, that's just JSON syntax
[23:25:25] <Gevox> it worked with quotes
[23:25:41] <Boomtime> yeah, it worked in the sense that you got zero results
[23:26:17] <Boomtime> ok, you've corrected the search term and with quotes is correct, that's just the way JSON syntax is
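The ILLEGAL-token error is plain JavaScript syntax, not MongoDB: an unquoted value must parse as a JS expression, and `#` is not a valid token there. For example:

```javascript
// Quoted, the value is an ordinary string literal and '#' is harmless.
var good = { title: "Test #1" };
console.log(good.title); // Test #1

// Unquoted, the shell tries to evaluate the value as code and fails:
//   db.Events.find({ title: Test #1 })  // SyntaxError: Unexpected token ILLEGAL
```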
[23:27:04] <Gevox> Ok, can you figure out why this method never returns true then? http://pastebin.com/e0QipYFf
[23:28:33] <Boomtime> 2 suggestions: print exceptions, and print when it reaches 'return false'
[23:28:46] <Boomtime> otherwise you don't know where that code is going
[23:30:04] <Gevox> Boomtime: Would it be too much if i asked you to show it for me? I don't know what exception should i throw and catch
[23:30:31] <Boomtime> i have no idea either, it's your code
[23:30:45] <Gevox> ok, thank you
[23:30:48] <Boomtime> you have a try/catch, but you don't print anything when it hits the catch
[23:31:02] <Gevox> i don't have a catch
[23:31:02] <Boomtime> just print something, preferably the exception message
[23:31:21] <Boomtime> actually, you're right, you don't either..
[23:31:29] <Gevox> the catch needs a parameter (the exception type). What is it?
[23:31:32] <Boomtime> put one in, catch everything, print a line
[23:31:33] <Gevox> this is what i need to know
[23:31:55] <Boomtime> dunno, what is that? java? how to do you catch all in java?
[23:32:15] <Gevox> catch (Exception e)
[23:32:15] <Gevox> {
[23:32:15] <Gevox> System.out.println(e);
[23:32:15] <Gevox> }
[23:32:56] <Boomtime> are you trying to learn mongodb and java simultaneously?
[23:33:37] <Boomtime> what you printed looks about right, but i'm not big on java sorry
[23:35:00] <Gevox> Boomtime: only mongodb, but i don't use exceptions usually, the compiler handles them for me
[23:35:10] <Gevox> i know them, but i don't practice them much
[23:36:00] <Boomtime> there is no compiler in the world that can handle exceptions for you; that is a contradiction in terms. exceptions are exceptional: you need to handle them or your program will simply crash when they happen
[23:46:42] <d4rklit3> hi
[23:47:31] <d4rklit3> if i have a collection, Project, Project.categories = [ObjectID(abc1234),...], How do i query Where(project.categories contains category.slug === 'digital') or something to that effect?
[23:47:36] <d4rklit3> is this possible
[23:52:30] <d4rklit3> it seems like i need 2 steps for this
[23:53:02] <Gevox> <d4rklit3>: I'm a new mongoDB guy, but i think you can do it through a search for the category key, then take that as input for a 2nd query that looks up slug in that category. I can write some code if you want
[23:53:56] <d4rklit3> i think this is wat i need http://stackoverflow.com/questions/11303294/querying-after-populate-in-mongoose
[23:54:08] <d4rklit3> i am using mongoose
[23:55:12] <Gevox> idk what mongoose is, sorry
[23:55:27] <Gevox> but the post you sent, is related
[23:55:39] <Gevox> "you don't say?" - i see it on your face.
[23:56:32] <d4rklit3> heh
[23:56:55] <d4rklit3> this seems like someting to use sql for
[23:56:56] <d4rklit3> lol
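For the record, the two-step lookup d4rklit3 settled on can be sketched like this; the collection names are hypothetical:

```javascript
// Step 1: resolve the slug to a category id.
// Step 2: find projects whose categories array contains that id --
// matching a scalar value against an array field matches any element.
//
// In the mongo shell, roughly:
//   var cat = db.categories.findOne({ slug: "digital" });
//   db.projects.find({ categories: cat._id });

// The second query's shape, with a stand-in id:
var catId = "abc1234";
var projectQuery = { categories: catId };
console.log(JSON.stringify(projectQuery)); // {"categories":"abc1234"}
```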