#mongodb logs for Sunday the 15th of July, 2012

[00:31:14] <sessy> When using non-ASCII characters on OS X 10.6, backspace will REPLACE the non-ASCII chars with a ?, or delete 2 chars if deleting before the non-ASCII char.
[00:31:36] <sessy> this is while using mongo console
[00:37:05] <deoxxa> sessy: actually no
[00:37:27] <sessy> Seems to be https://jira.mongodb.org/browse/SERVER-2939
[00:37:48] <deoxxa> sessy: what it's doing is deleting one byte, which leaves an invalid utf8 sequence, which displays as a "?"
[00:37:52] <deoxxa> just wanted to make that distinction
[00:38:20] <sessy> Still leaves me with a broken command :)
[00:38:46] <deoxxa> correct
[00:40:20] <deoxxa> makes me wonder why they don't just use readline or something that works properly
[00:40:53] <sessy> Will dev 2.1.2 work? Going to try now..
[00:41:17] <deoxxa> looks like it will
[00:43:49] <sessy> Yes it did. 2.1.2 is ok. THX
[01:34:40] <jmar777> we're running a replica set, with very low volume but consistent writes (~10/minute). all of a sudden, queries have begun returning with very high latency (~20 seconds, whereas before they took just a couple ms)
[01:35:12] <jmar777> the only lead I think I've found so far is that mongotop reports local.oplog.rs at 1000ms for both read and write, consistently
[01:35:34] <jmar777> any pointers on where to dig next? not finding much on google regarding really high oplog read/write times
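
A few shell-side checks that could narrow this down (standard mongo 2.x shell helpers; the deployment details here are unknown, so this is only a generic starting point):

    rs.status()                     // member states and replication lag
    db.currentOp()                  // long-running operations show a high secs_running
    db.printReplicationInfo()       // oplog size and time window
    db.serverStatus().globalLock    // lock contention counters
    db.serverStatus().extra_info    // page faults point at disk-bound reads
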
[10:27:31] <prabuinet> hi
[10:39:48] <mids> hey prabuinet
[10:40:54] <prabuinet> hi mids,
[10:43:41] <DigitalKiwi> hrm
[10:44:38] <DigitalKiwi> if storing stuff with gridfs to be served with the nginx module, is it better to generate a unique filename or is it alright to use the id?
[10:45:20] <mids> id is already unique
[10:46:48] <DigitalKiwi> i know with incremental IDs, like in postgres, they say not to use the id because it reveals too much about your data, but i don't know if that stance applies to mongo ids too
[10:51:15] <jY> it does not
[10:51:34] <jY> cause mongo ids are not incremental
[10:53:19] <DigitalKiwi> k
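
For context, ObjectIds begin with a creation timestamp rather than a counter over the whole collection, which the shell can show directly:

    var id = ObjectId()
    id.getTimestamp()   // creation time from the leading 4 bytes
    // consecutive ids mix timestamp, machine, pid, and a per-process counter,
    // so they reveal roughly when a document was made, not how many exist
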
[10:55:01] <DigitalKiwi> one more for now: when you link to other documents by id and that document might later be deleted, what's the right thing to do? which I guess is probably pretty app-specific... what are some options?
[10:55:25] <mids> DigitalKiwi: http://www.mongodb.org/display/DOCS/Object+IDs#ObjectIDs-BSONObjectIDSpecification
[10:56:01] <mids> in what case do you link to document ids?
[10:57:07] <DigitalKiwi> users are the first that come to mind
[10:58:30] <kali> DigitalKiwi: my approach is to never delete anything :)
[10:58:44] <kali> DigitalKiwi: i just flag deleted documents
[10:58:46] <DigitalKiwi> i'd considered that
[11:00:31] <DigitalKiwi> does it work pretty well?
[11:13:37] <kali> DigitalKiwi: it does work. but you need to make all the queries aware of the flag in the app, and the indexes too
[11:17:22] <DigitalKiwi> as far as the indexes go, would it pretty much just be a matter of adding the deleted flag to whatever indexes you already have?
[11:19:01] <kali> DigitalKiwi: yes
[11:20:47] <DigitalKiwi> do you just use a boolean or a date of deletion or does it depend?
[11:21:02] <Derick> a boolean is a lot smaller...
[11:21:37] <DigitalKiwi> true
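
A minimal shell sketch of the flag approach kali describes, using a hypothetical posts collection and an assumed existing index on author:

    // flag instead of deleting (someId is a placeholder)
    db.posts.update({_id: someId}, {$set: {deleted: true}})
    // every query must now exclude flagged documents
    db.posts.find({author: "kiwi", deleted: false})
    // and the flag is added to the existing indexes
    db.posts.ensureIndex({author: 1, deleted: 1})
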
[12:51:13] <jondot> is there any way to do findAndModify over a collection (and also get that collection back) ?
[16:25:49] <ahri> hi, i'm pretty new to mongo but i'm trying to use it for a project (to learn these weird ways ;)) and i don't really understand how to get what i would get in SQL with a max() -- https://gist.github.com/3117632 is my collection of node documents and i want to pull out the max value i have for the "paths.depth" property. can anyone point me in the right direction?
[16:27:18] <ahri> i had the idea to map/reduce them but got tripped up because "paths" is a list, and then i found out about aggregation and simply got confused!
[16:28:26] <ahri> with the map/reduce approach i guess i'd have to do two passes; one to get the max depth per document, then another to get max depth for the collection?
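
With the aggregation framework this can be a single pass: unwind the array, then take the max. A sketch assuming a hypothetical nodes collection where paths is an array of subdocuments carrying a depth field:

    db.nodes.aggregate([
        {$unwind: "$paths"},   // one document per paths element
        {$group: {_id: null, maxDepth: {$max: "$paths.depth"}}}   // collection-wide max
    ])
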
[17:08:39] <dob_> Is there any possibility to make a case insensitive sort on a mongodb?
[17:18:16] <Derick> dob_: only with a regular expression search - alternatively, you could store a lowercase variant of the text in a separate field and search against that
[17:39:36] <dob_> Derick: What will that regex look like, if i want every entry in the database?
[17:39:40] <dob_> What about the performance?
[17:39:53] <dob_> sorry, in the collection
[17:40:01] <Derick> it won't perform well as it can't use an index
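
A sketch of the lowercase-shadow-field approach Derick suggests, with a hypothetical items collection:

    // store a lowercased copy alongside the original at write time
    db.items.insert({name: "Apple", name_lower: "apple"})
    // an index on the shadow field keeps the sort cheap
    db.items.ensureIndex({name_lower: 1})
    // sort on the shadow field, display the original
    db.items.find().sort({name_lower: 1})
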
[17:58:12] <wakawaka> Hi.
[17:59:56] <wakawaka> I swapped from one 512 GB SSD to a RAID-0 with a 512 GB and a 256 GB SSD. And now the read performance in MongoDB is kinda sucky :/ .
[18:00:10] <wakawaka> The Production Notes page says "RAID-0 will provide good write performance but provides limited availability, and reduced performance on reads. It is not suggested.".
[18:00:36] <wakawaka> Does RAID-0 reduce read-performance by much?
[18:19:45] <wakawaka> I'm getting reads up to 300 MB/s according to iotop though.
[18:23:29] <wakawaka> At the moment mongo is under heavy read and modest write load. Writes ~2 MB/s, reads ~300 MB/s.
[18:23:56] <wakawaka> I have not changed any indexes.
[18:24:43] <wakawaka> Lots and lots of queries > 100ms where we hit an index. Querying with explain reveals that the index is used both for finding and sorting.
[18:25:29] <wakawaka> Simple queries such as {'product_hash': 'asdasdasdasd'} sort {'date': -1}
[18:25:41] <wakawaka> i have the index product_hash_1_date_-1
[18:26:46] <wakawaka> That index is 6 GB big, and the RAM in the machine is 16 GB.
[18:26:59] <wakawaka> All in all there are 45 GB worth of indexes on this machine....
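
For reference, the explain check wakawaka mentions looks like this in the 2.x shell (the collection name here is a placeholder):

    db.products.find({product_hash: "asdasdasdasd"}).sort({date: -1}).explain()
    // "cursor" : "BtreeCursor product_hash_1_date_-1"  -> the index was used
    // "nscanned" vs "n"                                -> work done per result returned
    // "scanAndOrder" : false                           -> the index also satisfies the sort
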
[18:30:47] <dob_> how does a regex find work to get all entries of a collection while ignoring case when sorting?
[18:30:58] <dob_> Do you have an example for me?
[18:33:10] <_johnny> dob_: perl regex: { title: /asd/i }
[19:03:29] <brillopad> Hi guys - I've got a collection that has four elements that I'll want to index. One of them is the date, and just ONE of the other fields will be used in conjunction with the date per search. Is it best to create three compound indexes with each of these elements and the date together? Or should I do each one separately?
[19:03:43] <brillopad> Sorry, I'm new to this, just learning the joys of Mongo ;)
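
Given that each query pairs exactly one of the other fields with the date, one compound index per query shape would match that access pattern; a sketch with hypothetical field names a, b, and c:

    db.events.ensureIndex({a: 1, date: 1})
    db.events.ensureIndex({b: 1, date: 1})
    db.events.ensureIndex({c: 1, date: 1})
    // a standalone {date: 1} index is only needed if date is ever queried alone
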
[19:14:30] <UnSleep> is there some limit at 1,274,580 documents in a collection?
[19:15:06] <brillopad> UnSleep - using 32-bit or 64-bit?
[19:16:02] <UnSleep> it's on mongolab. i think it's only possible to use mongo correctly in 64-bit, isn't it?
[19:16:44] <brillopad> UnSleep - you can use mongo in 32-bit mode, but there's a file size limit of 2GB
[19:17:45] <brillopad> I've just checked, mongolab uses 64-bit
[19:17:45] <brillopad> Hmmm...
[19:17:45] <brillopad> Strange.
[19:17:46] <UnSleep> the database is now 72 MB with a space limit of 240 MB. i think i must be doing something wrong
[19:17:53] <brillopad> Are you getting an error when you're adding new documents?
[19:18:49] <UnSleep> i just created a new one into the mongolab cp
[19:18:54] <UnSleep> mmmm
[19:21:21] <UnSleep> [err] => quota exceeded
[19:21:32] <UnSleep> [code] => 12501
[19:21:36] <UnSleep> :-?
[19:22:38] <brillopad> UnSleep - what's your quota?
[19:22:46] <brillopad> Oh, you said 240mb
[19:23:08] <brillopad> Couple of things that might add to the size: journal entries - have you added and deleted a ton of documents?
[19:23:30] <UnSleep> i added a million in one minute
[19:23:35] <brillopad> Try running db.repairDatabase()
[19:23:48] <brillopad> That should remove any documents that have been deleted but are still in the journal.
[19:23:52] <brillopad> Will take a bit of time for that to run
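
Comparing db.stats() before and after shows whether the rewrite reclaimed anything; fileSize (the allocated files on disk) is typically what a hosted quota counts:

    db.stats()           // note dataSize vs fileSize before the repair
    db.repairDatabase()  // rewrites the data files, compacting out freed space
    db.stats()           // fileSize should drop if there was reclaimable space
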
[19:24:55] <UnSleep> could it be that mongo is just still trying to create that million?
[19:25:33] <UnSleep> "message": [ "The repairDatabase command was issued, but the request timed out.", "Usually this means the command was successfully started in the database,", "but it will take a while to complete." ]
[19:25:57] <UnSleep> mm strange, i haven't deleted anything lately
[19:26:33] <UnSleep> i was only trying to create 10 million documents to play with the indexes and "relations"
[19:29:09] <UnSleep> oh i think doing that just solved it!
[19:31:12] <UnSleep> do you think it's a good idea to run a repair periodically and automatically?
[19:38:47] <Bilge> Do you think it's a good idea to contract aids periodically?
[19:39:32] <UnSleep> yes it is because you will get immune :)
[19:58:59] <vsmatck> How do I drop a collection named "user:session" in the mongo shell? It doesn't like the ":".
[20:02:41] <wakawaka> vsmatck: try escaping : -> "user\:session"
[20:04:34] <vsmatck> I'd tried that already. Still didn't like it.
[20:05:21] <vsmatck> I can just drop database and use underscore instead. Figured I'd try to learn how to deal with this for future reference.
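
The colon only breaks the db.user:session property syntax; the shell's bracket and getCollection forms accept any collection name:

    db["user:session"].drop()
    // or equivalently
    db.getCollection("user:session").drop()
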
[21:13:30] <dob_> How can I ignore the case when sorting?
[21:17:33] <dob_> using regex on find will not handle the case. I tried db.collection.find({name: /./i}).sort({name:1})
[22:03:03] <vsmatck> Ooh! Background indexes don't block replicas in 2.1? Nice.
[22:04:10] <vsmatck> That really simplifies things for people using replica sets for read scaling.
[22:04:39] <vsmatck> s/indexes/indexing
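
For reference, a background build is requested per index at creation time (the collection and field here are placeholders):

    // builds without holding the write lock for the whole build
    db.users.ensureIndex({email: 1}, {background: true})
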