PMXBOT Log file Viewer


#mongodb logs for Thursday the 7th of February, 2013

[00:58:16] <oinkon> with the python driver, how do i get lines out of a gridfs file as unicode?
[02:52:43] <taf2> i have a collection with a record that i've been recording a timestamp of events within
[02:52:54] <taf2> is it possible to have a capped set within a document?
[04:30:12] <mrandall> How do i rename a collection and update all of the references to use the correct new name?
[04:30:41] <mrandall> do i really have to update each record individually?
[06:06:01] <cinvoke> Hi, I'm following this tutorial and I can't seem to connect to mongo. I don't think I have mongo installed correctly. What's the easiest way for me to tell? http://docs.mongodb.org/manual/tutorial/write-a-tumblelog-application-with-django-mongodb-engine/
[06:12:45] <cinvoke> oh! nvm
[07:10:25] <HHELLD> Hi, All. Is it possible to disable listening on TCP port in mongod? (leaving only unix socket)
[08:35:14] <HHELLD> … ah, bind_ip can be a string to be a unix socket. I wonder if that was even intended.
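For reference, a config-file sketch of what HHELLD describes. This is a hypothetical fragment: whether bind_ip accepts a unix socket path (and whether that disables TCP entirely) is version-dependent, so verify against your mongod build.

```ini
# mongod.conf sketch -- unverified; behavior varies by mongod version
bind_ip = /tmp/mongodb-27017.sock
```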
[08:37:44] <[AD]Turbo> hi there
[11:08:14] <Nodex> http://www.dailymail.co.uk/sciencetech/article-2274388/MI5-install-black-box-spy-devices-monitor-UK-internet-traffic.html#axzz2KD6UaAd2
[11:08:15] <Nodex> :/
[11:14:42] <IAD> Nodex: https, vpn
[11:28:12] <Nodex> not really the point is it
[12:28:47] <remonvv> \o
[13:47:28] <synchrone_> hi everyone
[13:47:42] <synchrone_> so i'm using unicode filenames on my gridfs
[13:48:28] <synchrone_> but with cmd.exe , mongofiles -d db search <path part> gets me messed characters
[13:48:41] <synchrone_> i did chcp 65001, font => Lucida Console
[13:49:01] <synchrone_> but it only displays ANSI squares
[14:12:11] <BadCodSmell> Can mongo use an index like this: index = {a:1,b:1,c:1,d:1} db.abcd.find({a:123,c:321}).sort({c:1, d:1})
[14:12:26] <BadCodSmell> I have a case where explaining this shows that it is not using the composite index
[14:12:35] <BadCodSmell> Even though it should be able to
[14:13:00] <BadCodSmell> {d:1, c:1} I could understand
[14:28:30] <ExxKA> Hey Champs. I am going over old log files from the channel to find an answer to my question, but I can not seem to. How is replication different from a RAID 10 setup in concept? Aren't they both about data preservation and uptime?
[14:33:00] <Killerguy> how can I add a shard in my replica set using the mongo cli with --eval ?
[14:51:03] <ExxKA> j /#raid
[14:51:19] <ExxKA> Nope.. no such luck
[14:51:23] <ExxKA> :)
[14:51:23] <BadCodSmell> Can mongo use an index like this: index = {a:1,b:1,c:1,d:1} db.abcd.find({a:123,c:321}).sort({c:1, d:1})
[14:55:45] <sfa> I think you need to create two separate indexes, one for {a:1, c:1} and the other for {c:1, d:1}
[14:56:26] <BadCodSmell> That's nuts
[14:57:05] <ExxKA> Did you try creating the index you suggested? BadCodSmell
[14:57:11] <BadCodSmell> Yes
[14:57:14] <BadCodSmell> It does not work
[14:57:23] <BadCodSmell> It should do though, it's a btree
[14:57:43] <BadCodSmell> Shouldn't it be a nested index? It should work with the AND portion in the find and the rest in the sort
[14:58:14] <BadCodSmell> For some reason it jumps to using a single index I have on b
[15:07:16] <ExxKA> BadCodSmell.. Have you tried doing more than 1? So one for your long find, and one for the sorts?
[15:12:22] <BadCodSmell> ExxKA: just {c:1} in the sort is fine
[15:12:34] <BadCodSmell> adding another though removes that ability to use the composite index at all
[15:12:41] <ExxKA> Hmm true
[15:14:51] <BadCodSmell> But why?
[15:15:13] <BadCodSmell> This DB is a joke if it can't even handle that
[15:15:34] <ExxKA> I assume it is because it chooses the shortest index that fits the criteria, but I am not skilled enough with mongo to answer that question
[15:16:34] <BadCodSmell> They should be nested in a tree structure
[15:17:16] <BadCodSmell> so it should just be jump to a.b then foreach(c as contents)foreach(contents as item)send(item)
[15:18:02] <BadCodSmell> It should also be choosing the longest
[15:18:34] <BadCodSmell> if you have the indexes a.b and b it should be using a.b for find({a:,b:})
[15:20:00] <BadCodSmell> "All MongoDB indexes use a B-tree data structure. "
[15:21:01] <BadCodSmell> Do I have to do something stupid like create a single big text field to merge?
[15:21:29] <kali> {a:1,b:1,c:1,d:1} db.abcd.find({a:123,c:321}).sort({c:1, d:1}) you meant { a:..., b:...} in the selector right ?
[15:21:39] <BadCodSmell> yes
[15:21:53] <BadCodSmell> find({a,b}).sort({c,d}) can't index
[15:22:08] <BadCodSmell> find({a,b}).sort({c}) can
[15:22:21] <BadCodSmell> No explanation anywhere that I can find
[15:25:50] <kali> BadCodSmell: it should work
[15:27:06] <BadCodSmell> Thanks, finally some confirmation.
[15:27:46] <kali> BadCodSmell: http://uu.zoy.org/v/digera#clef=nueherbrjxqracre
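The rule kali is alluding to: a compound index {a:1,b:1,c:1,d:1} can serve find({a,b}).sort({c,d}) because equality on the leading keys leaves the remaining keys already in sorted order, whereas BadCodSmell's original find({a,c}).sort({c,d}) skips b, so the sort keys are not contiguous with the equality prefix. A minimal sketch of why (plain Node-style JavaScript simulating the index ordering, not the mongo shell; the values are invented):

```javascript
// Simulate an index on {a:1, b:1, c:1, d:1}: entries kept sorted by (a, b, c, d).
const cmp = (x, y) => (x < y ? -1 : x > y ? 1 : 0);
const entries = [];
for (const a of [1, 2]) for (const b of [1, 2]) for (const c of [3, 1, 2]) for (const d of [2, 1])
  entries.push({ a, b, c, d });
entries.sort((x, y) => cmp(x.a, y.a) || cmp(x.b, y.b) || cmp(x.c, y.c) || cmp(x.d, y.d));

// Equality on the full prefix (a, b): the matching index range is already
// ordered by (c, d), so the sort needs no extra in-memory sort stage.
const range = entries.filter(e => e.a === 1 && e.b === 2);
const sortedByCD = range.every((e, i) =>
  i === 0 || cmp(range[i - 1].c, e.c) < 0 ||
  (range[i - 1].c === e.c && cmp(range[i - 1].d, e.d) <= 0));
console.log(sortedByCD); // true: index order can satisfy sort({c:1, d:1})
```

If the filter constrains a but not b, the matching range interleaves different b values, and the (c, d) ordering property is lost, which is consistent with what BadCodSmell observed.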
[15:45:19] <vlad-paiu> Hello. I seem to be having an issue with the Mongo C driver. I'm setting the MONGO_SLAVE_OK flag when performing reads, but the find queries are still directed to the primary server. Is this behavior not supported yet ?
[15:45:41] <Nodex> lenny deeeeee \o/
[15:46:08] <saml> so.. our mongodb has all string values . even for floats
[15:46:18] <saml> how do I do lt, gt queries ?
[15:46:28] <saml> do i need to convert all strings to floats first?
[15:46:38] <saml> can the conversion be made during query?
[15:47:23] <Nodex> convert first
[15:47:38] <saml> so craz.. i think it's because things are imported from csv file
[15:47:50] <saml> lazy peeps didn't even convert to numbers
[15:48:08] <Nodex> that's the bummer about importing from CSV
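Nodex's "convert first" matters because $lt/$gt on strings compare lexicographically, so "10" sorts before "9". A quick plain-JavaScript illustration, plus a hedged sketch of a one-off conversion pass (the collection and field names are invented):

```javascript
// Lexicographic comparison is wrong for numbers stored as strings:
console.log("10" < "9");                          // true  -- "1" sorts before "9"
console.log(parseFloat("10") < parseFloat("9"));  // false -- correct once converted

// Hypothetical one-off migration in the mongo shell (names invented;
// $type: 2 is the BSON string type):
// db.prices.find({ amount: { $type: 2 } }).forEach(function (doc) {
//   db.prices.update({ _id: doc._id }, { $set: { amount: parseFloat(doc.amount) } });
// });
```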
[15:58:35] <vlad-paiu> anybody with knowledge of the mongo C driver ? I took a peek at the MONGO_SLAVE_OK flag.. and it doesn't seem to be used anywhere in the C driver
[16:01:06] <bartzy> Hello
[16:01:16] <bartzy> if I insert 2 million documents per day to a collection
[16:01:31] <bartzy> and delete 95% of them
[16:01:44] <bartzy> will that cause a lot of fragmentation, big data files, other issues ?
[16:01:54] <Nodex> fragmentation certainly
[16:02:10] <bartzy> MongoDB doesn't reclaim storage for removed documents ?
[16:03:00] <Nodex> even if it did, it would still be fragmented until it were filled
[16:09:32] <vlad-paiu> Any help ? Or at least any info if this behavior is supported now or not ?
[16:09:44] <Nodex> dude, have some patience
[16:13:02] <vlad-paiu> ok.. patience mode engaged
[16:17:46] <Nodex> you should post in the group/Google Plus page as well, just in case
[16:17:56] <Nodex> not a lot of C people hang in here
[16:23:11] <algernon> vlad-paiu: fwiw, there's an alternative C driver, which does use MONGO_SLAVE_OK.
[16:24:05] <algernon> vlad-paiu: other than that, as far as I understand, you should be able to set the flag yourself when sending commands to mongodb, on a per-command basis. So the driver won't do much more with it than define the constant
[16:24:17] <algernon> but I haven't looked at the official driver in literally years.
[16:29:04] <vlad-paiu> algernon: thanks for the answer. I see the official driver takes the flags passed on a per command basis and just passes it in the mongo cursor.. but I see no sort of routing logic change being done whether the SLAVE_OK flag is set or not.
[16:29:28] <vlad-paiu> algernon: which alternative C driver ? :) I'd be happy to use it, if it supports this functionality
[16:30:00] <bartzy> NodeX: Sorry for the (big) delay.
[16:30:11] <bartzy> Nodex: What do you mean it would still be fragmented until it were filled?
[16:30:16] <algernon> vlad-paiu: https://github.com/algernon/libmongo-client
[16:30:19] <bartzy> Should I be worried of this kind of workload ?
[16:30:39] <Nodex> I said even if.....
[16:30:39] <bartzy> Lots of inserts (with binary data up to 1-2MB) - and then almost all of it is deleted.
[16:30:51] <bartzy> Nodex: OK - what should I worry about with this kind of workload ^ ?
[16:31:00] <algernon> vlad-paiu: routing logic isn't done by the official C driver, iirc. the flag just says that if whatever is on the other end is a slave, it's ok.
[16:31:08] <Nodex> in gridfs?
[16:31:40] <algernon> mine's a bit more clever, but it's not the smartest thing either.
[16:31:42] <Nodex> mongo pre-allocates files which grow bigger and bigger, up to 2GB iirc
[16:32:09] <algernon> vlad-paiu: if you have questions wrt my lib, I'm happy to help (but got to rush home atm)
[16:32:12] <Nodex> if you're deleting a lot and things span these files you'll get large disk seeks for chunks
[16:32:20] <vlad-paiu> algernon: Ok I see.. thanks for your answers
[16:32:47] <bartzy> Nodex: Not necessarily gridfs. The binary data is up to 2MB ... so why use gridfs ?
[16:33:03] <Nodex> I didn't say use it, I was asking a question
[16:38:00] <vlad-paiu> One more small question... if I set the SLAVE_OK flag to a query directed to a mongos.. will the mongos know to direct the query to a slave ?
[16:45:56] <bartzy> NodeX: So generally if I store inside a collection a lot of 50K-1M files
[16:45:58] <bartzy> then I delete most of them
[16:46:30] <bartzy> The storage space stays the same - but when I then insert the same amount of files again - will mongo allocate more files - or just use the existing ones.
[16:47:43] <Nodex> you should consult the docs for that answer, there are arguments for new allocations and arguments for reclaiming space
[16:48:14] <Nodex> It's not something that I've ever looked at because my deletes are few
[17:33:33] <bartzy> NodeX: I didn't find stuff in the docs about that
[18:40:21] <baegle> Hello, not sure how to craft this query in google and haven't been able to figure it out from the docs. Is there some way, instead of writing the value "6", that I could write an object value AND reference a property in it for the same effect? Example: db.collection.find({x:{frank:6}.frank});
[18:46:08] <kali> baegle: the closest you can do is: find({"x.frank":6 }, { "x.$.frank" : 1 })
[18:46:16] <kali> baegle: (you need 2.2+)
[18:47:13] <baegle> kali: is there a way from inside the shell to check the version?
[18:47:55] <kali> baegle: db.serverStatus().version
[18:48:12] <baegle> I have 2.2.2
[18:48:24] <kali> then it should work
[18:50:32] <baegle> I'm VERY new to MongoDB, so please bear with me. The shell is reporting that find is not defined. Do I have to do db.collectionX.find?
[18:51:36] <kali> yes.
[18:51:48] <kali> sorry, i should have pasted the whole line
[18:55:56] <salentinux> Hi guys, is there a way to get results ordered by ascending distance when using $near or $within?
[19:08:38] <baegle> kali: I'm confused, I'm not sure your example does what I was trying to do, but I'm having trouble deciphering it
[19:09:24] <baegle> kali: if I do db.coll.find({"x.frank":6}, {"x.$.frank":1}); what does the 1 represent? Where does that come in?
[19:26:01] <baegle> I think I figured out what I need: ({x:"5"}).x; returns 5
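What baegle landed on is plain JavaScript object-literal property access, evaluated by the shell's interpreter before the query is sent; the server never sees the object. A minimal sketch:

```javascript
// The expression is resolved client-side by the shell's JavaScript engine:
const threshold = ({ x: { frank: 6 } }).x.frank;
console.log(threshold); // 6

// So a call like db.collection.find({ val: threshold }) would send
// find({ val: 6 }) to the server -- the object literal is gone by then.
```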
[20:24:51] <Leeol2> How would you query for documents where X: 'X', and if there is more than one match then also check Y: 'Y', and so on? The goal would be to get a single document by comparing as few fields as required.
[20:25:05] <Leeol2> (Keywords/etc for me to Google would be great, thank you)
[20:58:31] <bean> Sounds like a use for map reduce
[21:03:32] <kali> or multiple queries
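One way to read kali's "multiple queries" suggestion: add one field at a time and stop as soon as a single document matches. A plain-JavaScript sketch over an in-memory array (a real version would issue one find() per step; the documents and field names are invented):

```javascript
// Narrow candidates field by field until a unique match remains (or fields run out).
function narrow(docs, criteria) {
  let candidates = docs;
  for (const [field, value] of Object.entries(criteria)) {
    candidates = candidates.filter(d => d[field] === value);
    if (candidates.length <= 1) break;   // unique (or no) match: stop early
  }
  return candidates;
}

const docs = [
  { X: 'X', Y: 'Y', Z: 1 },
  { X: 'X', Y: 'Q', Z: 2 },
  { X: 'W', Y: 'Y', Z: 3 },
];
console.log(narrow(docs, { X: 'X', Y: 'Y' })); // -> just the { X:'X', Y:'Y', Z:1 } doc
```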
[21:41:54] <xtat> you guys know of a tool that can convert the journal into something I can read?
[21:45:35] <jmpf> kinda confused on config servers - you have to cp the entire dbpath over first before you convert your replica set to a sharded replica set? - ours is 300G - that means we have to rsync 300G to 3 servers??
[21:46:09] <kali> jmpf: you're indeed very confused :)
[21:47:09] <jmpf> kali: looking @ http://docs.mongodb.org/manual/tutorial/manage-sharded-cluster-config-server/ it seems pretty clear there but then I read about only metadata needs to be on config servers
[21:48:12] <kali> jmpf: make sure you understand every single word of http://docs.mongodb.org/manual/core/sharded-clusters/ before going any further
[21:51:17] <jmpf> kali: yeh, I've been reading it but I'm still confused - the dbpath on our replicated mongods doesn't appear to be just metadata - that's what I don't understand? it's not clear whether or not we have to cp that over
[21:53:21] <kali> jmpf: honestly, you're so confused i feel reluctant to give you a yes/no answer. you haven't grasped the basic stuff, you'll shoot yourself in the foot.
[22:16:51] <_sri> what's the easiest way to force a command reply with QueryFailure flag set and $err value? :)
[22:21:11] <_sri> ah, a ->find('$or' => []) did the trick
[23:13:55] <bartzy> When inserting a document , will it be in RAM immediately if I "find" it , or only after the first find ?