PMXBOT Log file Viewer


#mongodb logs for Wednesday the 6th of May, 2015

[07:58:55] <Petazz> Hi! How can I import (restore?) this sample collection? http://docs.mongodb.org/manual/reference/bios-example-collection/
[08:08:07] <Petazz> I guess that is what the shell output is, is there any way of importing it nicely?
[09:14:11] <amitprakash> Hi, is it possible to allow users to drop documents but not entire collections?
[09:21:48] <joannac> amitprakash: http://docs.mongodb.org/manual/reference/privilege-actions/
[10:38:28] <mcl_wtn> hi all, I've got a question in regards to mongodb deployment - I've got three servers that I can use, so from the docs I believe I have enough for a 3 node replica set - however I just want to make sure that I don't need a routing server in this setup. The MMS docs do not talk about routing servers even in the sharded replica set tutorials... any guidance on this will be much appreciated :)
[10:46:42] <fxmulder_> well now mongodb is crashing
[10:46:43] <fxmulder_> Wed May 6 04:38:41.833 [journal] LogFile::synchronousAppend failed with 12632064 bytes unwritten out of 12632064 bytes; b=0x7f2d67f04000 errno:5 Input/output error
[11:03:49] <remonvv> \o
[11:05:43] <Derick> hi
[12:28:46] <leporello> Hi. A question again :)
[12:29:10] <leporello> I have a mongoose schema with array of nested documents (rating)
[12:29:39] <leporello> How should I update an element in this array?
[12:30:04] <StephenLynx> update operation using the push operator.
[12:30:24] <leporello> StephenLynx, but if I need to update existing field?
[12:30:48] <StephenLynx> ?
[12:30:53] <StephenLynx> ah
[12:30:54] <StephenLynx> I see.
[12:31:00] <StephenLynx> ok
[12:31:03] <leporello> two steps update?
[12:31:05] <StephenLynx> no
[12:31:24] <StephenLynx> in the query block you do something like {ratings:your match condition}
[12:31:37] <StephenLynx> and on the update block you do {'ratings.$':the value}
[12:31:48] <StephenLynx> $ will be replaced by the index found on the match block.
[12:31:52] <StephenLynx> query*
[12:32:07] <leporello> aha. I'll try, thanks
[12:32:13] <StephenLynx> just don't forget to use the $set operator
[12:32:26] <leporello> and upsert: true?
[12:33:59] <StephenLynx> that depends.
[12:34:08] <StephenLynx> if you want to upsert :v
[12:34:16] <StephenLynx> upsert is related to the document.
[12:34:19] <StephenLynx> not subdocuments.
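[Editor's note] StephenLynx's two-block approach might look like the following in the mongo shell; the collection name, field names, and the `postId`/`userId` variables are illustrative, not from the conversation.

```javascript
// Query block matches a specific array element; $ in the update block
// then refers to the index of the element that matched.
db.posts.update(
  { _id: postId, "ratings.userId": userId },  // query block: locate the element
  { $set: { "ratings.$.score": 5 } }          // update block: $set just that one
);
```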
[12:34:33] <leporello> seems it will be easier to make two routes
[12:34:40] <leporello> for creating and modifying
[12:34:47] <lost_and_unfound> Greetings, apologies if I am using the incorrect terms. I would like to get an idea which collections (tables) are using up the most disk space on a database. I tried > db.stats()
[12:36:51] <cheeser> you can call stats() on each collection.
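[Editor's note] A short loop over `db.getCollectionNames()` does this in one go; a sketch to run in the mongo shell against the database in question (the size ranking is the editor's addition, not from the chat).

```javascript
// Print each collection's on-disk storage size in bytes, largest first.
db.getCollectionNames().map(function (name) {
  return { name: name, size: db.getCollection(name).stats().storageSize };
}).sort(function (a, b) {
  return b.size - a.size;       // descending by size
}).forEach(function (c) {
  print(c.name + ": " + c.size);
});
```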
[12:39:01] <lost_and_unfound> cheeser: Thanks, seems to be in line with what I require
[13:00:26] <cheeser> np
[13:14:15] <jecran> Hi guys
[13:14:48] <jecran> Wondering is it possible to export the mongo command line results to a text file?
[13:15:06] <deathanchor> jecran: mongo --eval is your friend
[13:15:16] <jecran> deathanchor: thanx
[13:15:39] <deathanchor> jecran: you should really just make a js script and then run it like mongo script.js > outfile
[13:16:02] <deathanchor> if you want an example of a simple script I can gist it for you
[13:16:24] <jecran> Plz. You are very helpful deathanchor :D
[13:19:46] <deathanchor> https://gist.github.com/deathanchor/461baaee48ac569445c4
[13:20:03] <jecran> Cool I will check it out
[13:21:47] <deathanchor> jecran: fyi, if you are using newer mongo, you should use MongoClient instead of the Mongo class
[13:22:03] <jecran> deathanchor: I am new to this. dbo.adminCommand , is that an exec function?
[13:22:27] <deathanchor> jecran: it's the same as db.adminCommand in the shell
[13:23:30] <jecran> cool thanks
[13:43:35] <fxmulder_> this is not going well for me
[13:43:48] <fxmulder_> mongodb won't even start anymore on this replica
[13:44:13] <fxmulder_> it doesn't output or log anything, if I run it through strace I see I am getting a sigbus
[13:46:44] <hqxor> Hello, can anyone help me building cxx driver please
[13:51:02] <hqxor> hello..
[13:51:04] <cheeser> fxmulder_: did you try running mongod directly with that config file?
[13:57:48] <jecran> deathanchor: Your code works great in the command line. I am actually using 'exec(mongo showdbs.js)'. I am getting the full result, not just the names of the databases. Any ideas?
[13:58:37] <fxmulder_> cheeser: I did, I ended up cleaning out the data directory and it started
[14:01:17] <lost_and_unfound> Greetings, is there an alternative to db.repairDatabase() to reclaim space, like a config setting? We recently noticed one of the collections is written to and deleted from intensively and the space is not reclaimed. So our current option to reclaim the space is to run db.repairDatabase(), which comes with its own challenges
[14:02:28] <lost_and_unfound> it is our db.Collection.chunks that grows exponentially
[14:03:00] <deathanchor> jecran: sorry a bit busy with work, did you modify the script?
[14:03:48] <jecran> Yes. Now I am getting the FULL collection result lol
[14:04:05] <jecran> Well, not just the database names anyway
[14:04:06] <deathanchor> gist the code
[14:05:32] <cheeser> lost_and_unfound: repair is the only way. or use wiredtiger
[14:12:35] <lost_and_unfound> cheeser: the problem came in where a file is requested from the data (+-250K line entries) and is periodically retried. Within a span of 1 hour the mongodb grew by 4GB and the space was not reclaimed from the OS/FS. The data is statistical and periodical, so no more than 3 months worth of data is kept. Will http://docs.mongodb.org/manual/core/capped-collections/ be advisable then?
[14:13:05] <cheeser> possibly, yeah
[14:22:39] <jecran> deathanchor: the showdbs.js. Perfect result in the command line https://gist.github.com/galas0330/395d57e61c146f1f31df
[14:24:00] <jecran> deathanchor: https://gist.github.com/galas0330/92f3b07bdb559c9f5a91 I have tried a few variations of this, and the end result is one MASSIVE log of data lol
[14:26:52] <jecran> I am trying to get the simple 3 lines of console data and pipe it back into a .js file.... so close
[14:27:03] <lost_and_unfound> cheeser: From what I understand from the capped-collections, size is limited per collection. Currently we log all data to a single collection. So this means we should rather create multiple collections based on the stats data... currently db.StatsAll would then become db.StatsProductX, db.StatsProductY, db.StatsProductZ
[14:28:49] <cheeser> lost_and_unfound: size is limited in bytes or document count. might not be the best option for time bound caps
[14:31:13] <lost_and_unfound> cheeser: understood, however I can look at the current rate and count of data logged per month and make a prediction on the size required to house the data within the required time period
[14:42:33] <pjammer> does anyone use a single replica set in production? If so, how are you ensuring that your CNAME is always pointed to the primary after an election?
[14:43:48] <deathanchor> pjammer: you specify all the members server1:port,server2:port,server3:port
[14:44:03] <pjammer> with mongos and sharding, i get that you can point it to the 'server' where this is set but if an election happens at 3 in the morning, aren't you hosed?
[14:45:22] <cheeser> the seed list is just the entry point. the driver will find your primary from the cluster definition
[14:45:49] <deathanchor> pjammer: http://docs.mongodb.org/meta-driver/latest/legacy/connect-driver-to-replica-set/
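[Editor's note] The seed list deathanchor and cheeser describe is just a connection string naming every member; the driver uses it as an entry point and then tracks the current primary itself, so an election at 3 a.m. needs no CNAME change. An illustrative URI (host names and set name assumed):

```
mongodb://server1:27017,server2:27017,server3:27017/mydb?replicaSet=rs0
```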
[14:47:24] <pjammer> so i set this in mongoid
[14:49:10] <pjammer> thanks guys and gals
[14:49:29] <pjammer> mind you i think i am getting sewered by mongoid's magic.
[14:49:40] <pjammer> or the guy writing the queries.
[14:49:58] <pjammer> in my app.
[14:55:26] <deathanchor> how do I negate a { $regex : /somthi/ }?
[14:57:29] <deathanchor> got it $not
[14:58:42] <deathanchor> crap that doesn't work
[14:58:45] <deathanchor> can't use $not with $regex, use BSON regex type instead
[14:59:00] <deathanchor> oh
[14:59:27] <deathanchor> { $not : /somthing/ }
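[Editor's note] The working form deathanchor landed on, written out as a full query for clarity (collection and field names are assumptions; the regex is kept verbatim from the chat):

```javascript
// $not takes a bare regex literal, not a { $regex: ... } document
db.items.find({ name: { $not: /somthi/ } });
```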
[15:14:38] <jecran> deathanchor: I ended up just outputting > test.json file. Is it possible to remove the console header data? here is my output. https://gist.github.com/galas0330/ce55eaa37f7ceac8e52e Do i need to manually edit this file to remove the first couple of lines?
[15:17:49] <deathanchor> you can do a | tail +2 to skip the first line
[15:18:43] <deathanchor> sorry | tail -n +3 to skip first 2 lines
[15:18:49] <cheeser> mongo -q
[15:18:57] <cheeser> sorry --quiet
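[Editor's note] Both suggestions work; `mongo --quiet` suppresses the banner at the source, while the `tail` variant is plain POSIX and strips it after the fact. File names below are illustrative, and the mongo line is commented because it needs a running server.

```shell
# Preferred: suppress the shell banner at the source
# mongo --quiet showdbs.js > outfile

# Fallback: drop the first two banner lines from captured output
printf 'MongoDB shell version: 2.6.6\nconnecting to: test\n{ "ok" : 1 }\n' | tail -n +3
```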
[15:19:02] <deathanchor> cheeser: thx, new info for me :)
[15:20:30] <jecran> cool
[15:20:38] <jecran> tail is bash only, right?
[15:21:07] <deathanchor> it's a unix command
[15:21:20] <jecran> ok
[15:21:23] <deathanchor> but use mongo --quiet like cheeser said
[15:22:13] <pamp> hi
[15:23:41] <jecran> Yes --quiet works. Finally got the 2 lines of .json text I was looking for lol..... thanx guys
[15:25:02] <pamp> Which one is best for mongodb cluster performance, two shards with robust servers (8 cores, 64gb ram, 3tb hdd) or 4 shards with less robust machines (4 cores, 32gb ram, 1.5tb hdd)?
[15:25:57] <deathanchor> pamp, depends on use-case and what you shard on.
[15:28:20] <pamp> the main requirement is reading, but once a day we have to do heavy writes
[15:28:50] <deathanchor> why not use the secondaries for reads also?
[15:30:09] <pamp> but overall what is the best approach, robust machines, or more machines but less robust
[15:31:28] <pamp> deathanchor : I dont understand what you say
[15:31:51] <pamp> with why not use secondaries for reads
[15:32:53] <deathanchor> depending how your app works and driver, you can set readPreference to SecondaryPreferred so that the load gets spread out to the other members of the set.
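[Editor's note] Depending on the driver, deathanchor's suggestion is typically set in the connection string; a hedged example (hosts, database, and set name assumed):

```
mongodb://server1:27017,server2:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred
```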
[15:33:47] <deathanchor> I only shard when upgrading machines is no longer an option.
[16:19:13] <dewie01> Hi
[16:19:52] <dewie01> I have made a very stupid mistake by using a mongorestore with --drop to a wrong host (production)
[16:20:06] <dewie01> is there any chance of restoring with the journal to a point in time
[16:20:39] <dewie01> i can rollback the changes with the website logging, but that would be a lot of work
[16:20:46] <dewie01> just wanted to ask ..
[16:20:57] <dewie01> any help is appreciated
[16:23:40] <GothAlice> dewie01: Alas, I do not believe you can use the journal to recover from drop commands.
[16:25:39] <dewie01> i was afraid of that
[16:26:21] <GothAlice> dewie01: This doesn't help for this incident, but in the future I can highly recommend setting up http://docs.mongodb.org/manual/core/replica-set-delayed-member/
[16:26:42] <GothAlice> dewie01: We have one of these in-house delayed by 24 hours to allow for easier recovery from user error. ;)
[16:27:13] <dewie01> Gothallice: thanks :) stupid me
[16:27:41] <GothAlice> Mistakes happen; the trick is to learn from them.
[16:27:44] <GothAlice> :)
[16:27:59] <dewie01> clients often don't see it that way :)
[16:28:40] <GothAlice> This is true. "Do you have backups?" means "I can't fix this." (Alice's Law #105.)
[16:30:07] <GothAlice> dewie01: Could be worse. After an Amazon AWS cross-zone failure I had to recover MySQL InnoDB data by reverse engineering the on-disk format. ¬_¬ Took three days… prior to getting my wisdom teeth extracted. Fun times.
[16:49:48] <jecran> deathanchor: final result. With the --quiet and outputting to a .json, this is the final snippet that produces a well formed .json file. Thanx for your help! https://gist.github.com/galas0330/17fe7ea1ac79fdbde6ad
[16:53:13] <deathanchor> HTH
[17:35:30] <frugaldba> has anyone run into a surprise with mongo's new support pricing? (in 2013 you paid per host, regardless of whether it was physical or a VM. in 2014 you could pay once per host, no matter how many mongo VMs you had.) Now they are back to the 2013 pricing model, effectively doubling our support cost.
[17:36:02] <frugaldba> I've seen mongo hosting, but is there any third party mongo support out there?
[17:38:31] <GothAlice> frugaldba: For large-scale deployments, TokuMX may be viable. I haven't compared pricing, though.
[17:39:07] <GothAlice> (TokuMX is a fork of MongoDB with optimizations for large datasets and high-throughput, transaction support, compression, etc., etc. using "fractal trees" instead of btrees.)
[17:40:14] <StephenLynx> any compromises on their design?
[17:41:08] <deathanchor> don't know about support
[17:43:46] <frugaldba> The price might make using it without support an attractive option for us, actually.
[17:46:25] <GothAlice> frugaldba: MongoDB is also free to use, BTW.
[18:28:23] <deathanchor> good to know GothAlice, and you use this in production?
[18:28:38] <GothAlice> deathanchor: I do.
[18:29:07] <GothAlice> MMS also lets you provide your own mongo binaries; this lets me use MMS while continuing to use my enterprisy-non-Enterprise version.
[18:29:49] <GothAlice> (Admittedly with the proviso that if anything goes wrong, it's my fault. ;)
[18:32:53] <deathanchor> of course, everywhere I work, I tell people my middle name is Scapegoat
[18:33:51] <deathanchor> I take the blame for things that are completely out of my control. it sets a precedent that blaming is a waste of time; fix the issue, then fix the root cause instead of playing blame games
[18:56:16] <StephenLynx> daium
[19:07:39] <tehgeekmeister> running into this error I can't find much on the web about. anyone have thoughts? https://gist.github.com/tehgeekmeister/85a93c3c736c0b095c1a
[19:08:17] <tehgeekmeister> i'm basically trying to do a dump with --oplog, and getting an error about it only working with certain auth schemas (which apparently I'm on a newer one of, that is not yet supported)
[19:09:27] <kakashiA1> is there a way to tell mongoose to get the document that has the key name with the value "Peter", even if you query
[19:09:29] <kakashiA1> it with peter?
[19:10:10] <GothAlice> tehgeekmeister: What mongod version is running, and what's the version of mongodump you are trying to use? (mongodump --version)
[19:10:20] <tehgeekmeister> checking
[19:11:51] <GothAlice> kakashiA1: Field names are case sensitive strings. There may be a way (examine mongoose's concept of "middleware", ref: http://mongoosejs.com/docs/middleware.html) but it's generally bad form to implement things like that (and potentially end up with inconsistent keys), esp. as other tools won't know about that translation.
[19:12:41] <tehgeekmeister> 2.6.6 for mongodump, and 3.0.2 for mongod. which, it makes sense, might not work. =P
[19:12:57] <GothAlice> kakashiA1: Unless your question is actually about querying _values_ insensitively. That is possible: http://stackoverflow.com/questions/1863399/mongodb-is-it-possible-to-make-a-case-insensitive-query
[19:13:24] <GothAlice> tehgeekmeister: Yup. To quote Mythbusters, "Well, there's your problem!"
[19:13:25] <GothAlice> ;)
[19:13:44] <kakashiA1> GothAlice: RegEx :/
[19:14:30] <GothAlice> kakashiA1: Yup. The other choice is to denormalize your data: store the original value, and a lowercase'd version, then query the lowercase'd version if you want to search insensitively.
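[Editor's note] GothAlice's denormalization idea, sketched in plain JavaScript so it can stand alone; in MongoDB you would store the extra field on write (`{ name: name, nameLower: name.toLowerCase() }`) and query it with an exact match (`db.users.find({ nameLower: input.toLowerCase() })`). Field and function names here are illustrative.

```javascript
// Store the original value plus a lowercased copy; query the copy.
var users = [
  { name: 'Peter', nameLower: 'peter' },
  { name: 'MARY',  nameLower: 'mary' }
];

function findInsensitive(docs, value) {
  var needle = value.toLowerCase();
  return docs.filter(function (d) { return d.nameLower === needle; });
}

console.log(findInsensitive(users, 'peter')[0].name); // -> Peter
```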
[19:14:35] <tehgeekmeister> I mean, it would be impressive if mongodump remained totally future compatible, but.
[19:14:46] <tehgeekmeister> I do not expect such wizardry of anyone.
[19:15:07] <deathanchor> a witch!
[19:15:50] <kakashiA1> GothAlice: or use a find middleware, that generates all query combinations for that case, okay thanks!
[19:15:53] <tehgeekmeister> got a closer-to-what-i-want error message now.
[19:16:06] <tehgeekmeister> "No operations in oplog. Please ensure you are connecting to a master."
[19:16:14] <GothAlice> kakashiA1: Please don't do that. Mongoose is bad enough as it is. ;)
[19:16:16] <tehgeekmeister> I know there's at least one document in there
[19:16:29] <tehgeekmeister> maybe i need to have done some configuration before I can use the oplog backup option?
[19:16:35] <GothAlice> tehgeekmeister: Is your mongod operating in a replica set?
[19:16:50] <tehgeekmeister> nope!
[19:17:04] <GothAlice> That's a bit of a hard requirement for use of the --oplog option. ;)
[19:17:08] <GothAlice> (Without it running in a replica set, there is no oplog.)
[19:17:11] <tehgeekmeister> I'm happy to put it in a single replica replica set
[19:17:24] <GothAlice> You'll need to add an arbiter for that to work, but it would work.
[19:17:33] <tehgeekmeister> arbiter?
[19:17:36] <GothAlice> Actually, two arbiters. Hmm.
[19:17:44] <tehgeekmeister> i can look it up, but if you have a quick explanation, that's cool.
[19:17:49] <GothAlice> Arbiters are mongod processes that vote on primary elections, but don't store data.
[19:18:07] <GothAlice> If you have a single node replica set, that replica set will freak the heck out. ;)
[19:18:24] <tehgeekmeister> hmm, can't you do master slave style replication? thought you could.
[19:18:29] <tehgeekmeister> does that use the journal instead?
[19:18:32] <GothAlice> Not any more; that's been deprecated.
[19:18:40] <GothAlice> (It's not fault tolerant like replication can be.)
[19:18:43] <tehgeekmeister> as of 2.6?
[19:18:53] <tehgeekmeister> I downgraded, I intended to be on the older version.
[19:18:54] <GothAlice> I believe it was initially deprecated in 2.4.
[19:18:58] <tehgeekmeister> alright
[19:19:25] <GothAlice> Why intend to have an out-of-date deployment from the get go?
[19:19:46] <tehgeekmeister> you say out of date, I say tested.
[19:20:07] <tehgeekmeister> I read the jepsen blog posts. I'm not comfortable with 3.0 yet.
[19:20:24] <tehgeekmeister> Also, I'm not sure if the clients we have would tolerate 3.0 yet, i'd have to investigate that.
[19:21:37] <GothAlice> tehgeekmeister: In terms of normal operation, 3.0 is no different than 2.6 in virtually every way. (It defaults to mmapv1, the same storage engine, etc.) There's a schema bump for authentication (version 3.0 introduces a much stronger authentication protocol) but that's the biggest hurdle.
[19:22:34] <tehgeekmeister> looks like all the data loss bugs i read about were wiredtiger, which isn't default yet, is it?
[19:22:37] <GothAlice> tehgeekmeister: After running my app locally with 3.0 for a week or two, and really hammering it (breaking the wiredtiger storage engine quite badly in my tests) I deployed a standard mmapv1 setup in 3.0 on MMS with effectively two clicks. My application (after bumping the client driver versions for compatibility) didn't even notice.
[19:22:44] <GothAlice> WiredTiger is not the default, no.
[19:23:25] <GothAlice> (It's also got memory issues… those are the big ones I hit. The primary would crash after ~15 seconds of heavy reads and writes. Basically, there was an election every 15-20 seconds during my testing. It was horrifying to watch.)
[19:27:57] <tehgeekmeister> https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-reads <== this is what i was trying to avoid.
[19:28:10] <tehgeekmeister> It looks like there's settings that can make it work, but
[19:28:22] <tehgeekmeister> it's concerning, and I'd like to avoid it as much as possible.
[20:10:10] <tehgeekmeister> GothAlice: looks like i can have a single replica replica set and it gets to a happy state
[20:10:23] <tehgeekmeister> saw a stack overflow post implying this, so i decided to check it out
[20:10:26] <tehgeekmeister> have verified now
[20:10:37] <tehgeekmeister> not sure if that's documented/desired behavior, but it happened
[20:24:49] <shlant> hi all. Do users get replicated in replica sets? or do I need to create those users on each member?
[20:27:05] <kakashiA1> how can I create a query in mongoose to get all documents with the month january and the year 2014?
[20:50:26] <Gevox> shlant: everything gets distributed across your replica set, you do not need to create anything manually.
[20:52:45] <Gevox> kakashiA1: I don't know how to write this in mongoose, but in regular mongo (which i think will be similar) you create a dbobject with the properties you're looking for, e.g. myDbObject.put("month", "january"), and you pass it to the cursor find method
[20:53:55] <jecran> https://gist.github.com/anonymous/6d9a96db3a30352334e1 ..... Hi guys. Here is a link to my code (I have tried using mongodb and mongoose all with the same result). I cannot control my flow at all. The last console.log at the end of the page is the first to appear in my console, then the function is run. Any ideas?
[20:54:28] <jecran> I am getting the proper results otherwise**
[20:56:05] <kakashiA1> Gevox: hmm, didnt get your example, but let me see
[20:58:54] <Gevox> kakashiA1: code sample (in java) http://pastebin.com/3R9GRdez
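[Editor's note] In query-document terms (which mongoose passes through largely unchanged, using a JS Date in place of ISODate), "month January, year 2014" is a half-open date range; collection and field names below are assumptions:

```javascript
// All documents whose createdAt falls within January 2014
db.events.find({
  createdAt: {
    $gte: ISODate("2014-01-01T00:00:00Z"),
    $lt:  ISODate("2014-02-01T00:00:00Z")
  }
});
```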
[21:01:18] <kakashiA1> thanks Gevox, will try to find a mongoose way
[21:04:56] <Gevox> jecran: Have you tried to change the scope of where "data[]" is defined?
[21:05:28] <jecran> Gevox: yes in multiple places. Inside and outside of the function. Inside the inner function lol
[21:05:58] <Gevox> jecran: so data gets printed empty then the function executes?
[21:06:53] <jecran> But again, the function isn't being fired until after the last line of the script is executed. So while I expect some data in the console, followed by "THE END: " I get "THE END: " as the first result in my console followed by everything else.
[21:07:25] <Gevox> what browser are you using?
[21:07:34] <jecran> Yes all correct except that last console.log should be last
[21:07:47] <jecran> no browser just in node
[21:08:15] <Gevox> i haven't done much js so don't expect an expert answer, but i can try with you. Let me verify something fist
[21:08:16] <Gevox> first*
[21:08:25] <jecran> Sure
[21:11:53] <jecran> Gevox: if you run this I hope you see what i see lol
[21:12:46] <Gevox> jecran: http://jsfiddle.net/#&togetherjs=eg10LEHkjN
[21:13:30] <jecran> Gevox: im here
[21:14:00] <Gevox> see ? it does the regular scoping we expect
[21:14:45] <jecran> Yes. But enter mongodb connection.on or equivelant method and that flow changes. Even if you define the connection info first.
[21:15:17] <Gevox> put everything inside the function
[21:15:18] <Gevox> try
[21:15:29] <Gevox> except the data
[21:16:12] <Gevox> you know when you have some weird problem and you can't find the reason, you've got to do some crazy things you wouldn't expect yourself to do. But that's what is left for you actually, so just do it :p
[21:18:26] <jecran> Gevox: ok im following along
[21:18:40] <Gevox> jecran: take what I put in the fiddle and try it out
[21:19:11] <Gevox> jecran: http://pastebin.com/vG6JtMAT
[21:20:00] <Gevox> move the "start is done" to the very bottom of the start please.
[21:20:25] <jecran> :P
[21:21:49] <Gevox> did it work?
[21:22:06] <jecran> Have to rearrange a few other vars, I will let you know.
[21:26:43] <jecran> Gevox: no go. If you go back to the page, I did need to leave the var declaration at the top of the file, but I could declare them inside of the function. Same result though, the last console.log is still showing first :(
[21:27:44] <Gevox> jecran: take this code into ##javascript and they will get it solved for you in a nanosecond or so
[21:27:54] <jecran> lol thank you
[21:28:22] <Gevox> If you did this and nobody helped you, i can continue playing with it. But i'm telling you the fastest option you have right now
[21:30:29] <jecran> i appreciate the help. If i get this fixed in a timely manner I will def let you know the result
[22:08:14] <jecran> Gevox: no luck. tried a couple of rooms, and got a handle of links revolving javascript events lol. Back on the hunt
[22:08:21] <jecran> *handful
[22:26:16] <Gevox> jecran: i will let my friend look at this for you when he comes online, he is a js guy. do something else you might have to do and i will ping you back if he figures out the issue
[22:26:42] <StephenLynx> hey
[22:26:47] <StephenLynx> I am pretty handy with js too
[22:26:48] <StephenLynx> whats up
[22:34:42] <socratic> what is this place and why haven't i heard of it
[22:35:42] <StephenLynx> I dunno, I was just told there was going to be candy :c
[22:47:37] <jecran> Gevox: final answer, after a long time of playing: make my function a callback when I close my db connection. connection.close(callback); So stupidly simple with good results
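[Editor's note] jecran's fix in outline, using the Node driver of that era (URL, collection, and query are placeholders): the final log runs inside the close callback, so it can no longer print before the query completes.

```javascript
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
  if (err) throw err;
  db.collection('items').find().toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs);
    // close(callback): fires only after the work above is done
    db.close(function () {
      console.log('THE END');
    });
  });
});
// anything down here still runs first -- it is outside the callbacks
```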
[22:49:42] <Gevox> jecran: glad it worked eventually for you :)
[22:52:37] <jecran> Gevox: https://jsfiddle.net/af403eus/ if you're interested :D
[22:53:02] <jecran> Your idea got me in that direction
[23:04:23] <laurentide> are the best instructions for installing mongodb at http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/ ? I am copying and pasting the steps and failing on step 4
[23:04:29] <laurentide> i am using ubuntu 12.04
[23:05:30] <StephenLynx> what error are you getting?
[23:06:26] <laurentide> * shaisnir has quit (Remote host closed the connection)
[23:06:34] <laurentide> hah whoops
[23:06:48] <laurentide> pastebin.com/ajY0ZwaE
[23:08:07] <StephenLynx> hm
[23:08:43] <StephenLynx> it seems you did everything right so far
[23:09:54] <StephenLynx> try removing the already installed packages
[23:09:59] <laurentide> oeky doke
[23:10:08] <StephenLynx> because it seems there is an issue with the ones provided with ubuntu
[23:10:18] <StephenLynx> "You cannot install this package concurrently with the mongodb, mongodb-server, or mongodb-clients packages provided by Ubuntu."
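[Editor's note] The quoted conflict means Ubuntu's own packages have to go before the docs' install step can succeed; a sketch of the cleanup (package names come from the message above, the mongodb-org install target is from the linked tutorial):

```shell
# Remove Ubuntu's distro MongoDB packages, then retry the docs' steps
sudo apt-get remove mongodb mongodb-server mongodb-clients
sudo apt-get update
sudo apt-get install -y mongodb-org
```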
[23:14:54] <laurentide> StephenLynx, thank you, i'll give it a shot