PMXBOT Log file Viewer


#mongodb logs for Monday the 27th of May, 2013

[00:37:30] <wladston> Guys, I just created a python-based abstraction layer
[00:37:36] <wladston> https://github.com/wladston/manga
[00:37:45] <wladston> it's available in pip : pip install manga
[04:55:30] <quang> hi
[04:55:33] <quang> anyone here?
[04:55:43] <quang> i need help making a capped collection
[05:54:20] <Sb8244> Hi
[05:55:01] <Sb8244> Having issues setting up a simple geo index
[05:55:19] <Sb8244> No matter what I do, I always get error 13038 from querying
[05:55:50] <Sb8244> I'm following the 2dsphere index guidelines
[06:05:42] <venturecommunist> what's the simplest way to find the number of records of a query?
[06:07:53] <venturecommunist> in javascript i might say db.somecollection.find({attribute: "attribute"}) which works fine but what i'd really like is db.somecollection.find({attribute: "attribute"}).numberofhitsigot
[06:08:32] <venturecommunist> do i have to index and explain (something i'm just learning about) or is there a simpler way
[06:09:01] <sb8244> hey I'm back and looking for geo indexing help :D
[06:09:29] <venturecommunist> oh i think i found it it's .count()
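The `.count()` answer venturecommunist landed on can be sketched like this. The `find`/`count` helpers below are hypothetical in-memory stand-ins that only model flat equality matching — they mimic what the shell's cursor does, they are not the driver API.

```javascript
// Minimal in-memory sketch of find(query).count() semantics.
// `docs` stands in for a collection; only top-level equality is modeled.
function find(docs, query) {
  return docs.filter(doc =>
    Object.entries(query).every(([key, value]) => doc[key] === value)
  );
}

function count(cursor) {
  return cursor.length; // the shell counts matches server-side instead
}

const docs = [
  { attribute: "attribute", x: 1 },
  { attribute: "other", x: 2 },
  { attribute: "attribute", x: 3 },
];

// In the mongo shell this would be:
//   db.somecollection.find({ attribute: "attribute" }).count()
console.log(count(find(docs, { attribute: "attribute" }))); // 2
```

No index or `explain()` is needed for this — `count()` is answered directly from the cursor's query.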
[06:15:32] <sb8244> okay so
[06:15:39] <sb8244> I'm saving a document that's like this:
[06:15:40] <sb8244> db.test.find( { loc : { $near : { $geometry : { type : "Point" , coordinates : [ 126.93521976470947 , 36.80358081325186 ] } , $maxDistance : 1000 } } } )
[06:15:52] <sb8244> ugh sorry that failed
[06:16:01] <sb8244> I'll make a pastebin because I can't copy from my VM to my host
[06:17:44] <sb8244> pastebin.com/QVG2d8KP
[06:18:07] <sb8244> Holy crap I fail: http://pastebin.com/QVG28dKP
[06:18:38] <sb8244> okay so going with my last message, can anyone help me figure out what the hell is going on? I can't see what I'm doing wrong
[06:58:48] <sb8244> oh I fixed it! Was on 2.0.4 but needed to be on 2.4 for 2dsphere indexes / queries
[07:39:23] <[AD]Turbo> ciao all
[09:14:42] <armetiz> hi there
[09:14:52] <armetiz> i'm benchmarking GridFS for serving static files through nginx
[09:15:09] <armetiz> I have benchmarked static vs nginx-gridfs module
[09:15:26] <armetiz> nginx-gridfs connects nginx directly to the mongod replica set
[09:15:43] <armetiz> nginx-gridfs is definitely slow compared to serving static files
[09:16:30] <armetiz> But I guess the problem comes from nginx-gridfs, which doesn't cache chunks and recomputes the request and chunk assembly every time
[09:16:38] <armetiz> which isn't really efficient
[09:17:29] <armetiz> And before patching nginx-gridfs with a cache system using the md5 of fs.files, I'm wondering if nginx -> GridFS FUSE -> GridFS could be a good solution
[09:17:53] <armetiz> so, do you know the bets solution to mount gridfs ?
[09:17:59] <armetiz> bets = best
[09:21:07] <kobigurk> hello :-)
[09:39:02] <maasaki> i used mongoose with Node JS. I want to have sub documents inside the parent document. I want to have child document as separate collection. I declared like 'children: [childSchema]' inside parent schema. then i save it like var parent = new Parent({ children: [{ name: 'Matt' }, { name: 'Sarah' }] })
[09:39:12] <maasaki> parent.save(callback);
[09:40:00] <maasaki> but child documents are not stored in the separate collection. They are stored inside the parent document itself. Why is that happening?
[09:40:54] <jareiko> is there a MongoDB query modifier that acts like unix 'uniq'?
[09:41:46] <jareiko> I'm making a high score table, and I'd like to get the best N scores, but with each player only appearing at most once
[09:42:58] <jareiko> I could query a larger number of items and do my own filtering
[09:43:06] <jareiko> but I wouldn't know how many items I need
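jareiko's "unix uniq" high-score question maps to a dedupe-then-limit. Below is a pure-JS sketch of that logic; the aggregation pipeline in the comment is the server-side equivalent (assuming MongoDB ≥ 2.2, where the aggregation framework is available), and `topScores` is a hypothetical helper name.

```javascript
// Best N scores with each player appearing at most once.
// Server-side, the same idea would be an aggregation pipeline:
//   [ { $sort: { score: -1 } },
//     { $group: { _id: "$player", score: { $first: "$score" } } },
//     { $sort: { score: -1 } },
//     { $limit: n } ]
function topScores(scores, n) {
  const seen = new Set();
  const out = [];
  for (const entry of [...scores].sort((a, b) => b.score - a.score)) {
    if (seen.has(entry.player)) continue; // keep only each player's best
    seen.add(entry.player);
    out.push(entry);
    if (out.length === n) break;
  }
  return out;
}

const board = [
  { player: "ann", score: 90 },
  { player: "bob", score: 80 },
  { player: "ann", score: 70 },
  { player: "cid", score: 60 },
];
console.log(topScores(board, 2)); // ann's 90 and bob's 80; ann's 70 is deduped
```

This avoids the "query a larger number and filter" guesswork, since the grouping removes duplicates before the limit is applied.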
[09:57:44] <maasaki> how to store sub documents in separate collection using mongoose?
[09:59:04] <ron> if you store them in a separate collection, they are not sub documents...
[10:15:53] <guest1924> Do I need to perform any action after I update my collection with Mongo pecl driver?
[10:16:03] <guest1924> I am not able to query database
[10:16:22] <guest1924> it keeps on returning info DFM::findAll(): extent 0:2000 was empty, skipping ahead.
[10:17:23] <guest1924> Could anyone with knowledge of mongodb and php help me?
[10:17:25] <guest1924> thank you
[10:39:12] <Striki> anyone else having issues with the mongodb ubuntu repository?
[10:39:20] <Striki> deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen
[10:54:13] <remonvv> \o
[11:12:22] <maasaki> how to store sub document in a separate collection while saving the parent collection?
[12:29:09] <EmmanuelJacobs> Hi guys, trying to understand a strange behavior using mongodb 2.4.3 win32.
[12:29:42] <EmmanuelJacobs> I'm inserting documents using a stored function as one of the parameters, and it seems that function is called several times at each insertion.
[12:30:18] <EmmanuelJacobs> I have a counter initialized like this: db.counters.insert( { _id: "uqid", seq: NumberLong(0) } );
[12:31:17] <EmmanuelJacobs> I have a stored function named getUqid which is function () { var ret = db.counters.findAndModify( { query: { _id: "uqid" }, update: { $inc: { seq: NumberLong(1) } }, new: true } ); return ret.seq; }
[12:31:36] <EmmanuelJacobs> When I do three insertions like this: $conn->test->ads->insert(['qid' => new MongoCode('getUqid()') , 'name' => "Sarah C."]);
[12:31:56] <EmmanuelJacobs> I get something like this: > db.ads.find()
[12:31:56] <EmmanuelJacobs> { "_id" : ObjectId("51a34f8bf0774cac03000000"), "qid" : 17, "name" : "Sarah C." }
[12:31:56] <EmmanuelJacobs> { "_id" : ObjectId("51a34f8bf0774cac03000001"), "qid" : 20, "name" : "Michel D." }
[12:31:56] <EmmanuelJacobs> { "_id" : ObjectId("51a34f8bf0774cac03000002"), "qid" : 23, "name" : "Robert U." }
[12:32:42] <EmmanuelJacobs> Did I do anything wrong? Would anybody have a clue what's happening here?
[12:48:39] <EmmanuelJacobs> Nobody ?
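The gaps in EmmanuelJacobs' `qid` sequence (17, 20, 23) suggest the stored JS embedded via `MongoCode` is being evaluated more than once per insert. The usual pattern is to fetch the next sequence number from the client first and insert a plain integer. The sketch below is an in-memory illustration of that pattern; `nextSeq` is a hypothetical stand-in for a driver-side `findAndModify` with `$inc` on the counters collection.

```javascript
// Fetch the sequence number first, then insert the concrete value --
// instead of embedding stored JS (MongoCode) in the document, which the
// server may evaluate more than once per insert.
let counters = { uqid: 0 }; // stands in for db.counters

function nextSeq(name) {
  // stands in for db.counters.findAndModify(
  //   { query: { _id: name }, update: { $inc: { seq: 1 } }, new: true })
  counters[name] += 1; // findAndModify with $inc is atomic server-side
  return counters[name];
}

function insertAd(ads, name) {
  ads.push({ qid: nextSeq("uqid"), name }); // qid is a plain number now
}

const ads = [];
["Sarah C.", "Michel D.", "Robert U."].forEach(n => insertAd(ads, n));
console.log(ads.map(a => a.qid)); // [1, 2, 3] -- exactly one increment per insert
```

With this shape the counter is bumped exactly once per document, so the qids stay consecutive.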
[13:05:41] <cofeineSunshine> hi
[13:18:50] <kobigurk> We want to move to MongoDB on Azure for our production system
[13:19:06] <kobigurk> I'd really like to talk to someone who already did it
[13:19:12] <kobigurk> is there anyone like that here?
[13:46:04] <patcito> hey
[13:46:44] <patcito> if I have the value of an _id, is it possible to get the latest 5 documents in a collection starting from that _id?
[13:47:31] <ron> doubtful.
[13:53:54] <MatheusOl> patcito: you have that one and want the next 5 ones?
[13:54:08] <patcito> no
[13:54:22] <MatheusOl> so?
[13:56:53] <patcito> figured it out (using mongoid) File.where(:_id=>{:$lte=>doc._id}).sort(_id:-1).limit(5)
[13:58:55] <ron> that would work if you don't use a sharded database.
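patcito's mongoid query works because ObjectIds start with a timestamp, so sorting on `_id` roughly follows insertion order (and, as ron notes, that assumption gets shaky across shards). A pure-JS sketch of the same logic, with `latestFrom` as a hypothetical helper and plain numbers standing in for ObjectIds:

```javascript
// In-memory equivalent of
//   File.where(:_id => {:$lte => doc._id}).sort(_id: -1).limit(5)
// ObjectIds embed a creation timestamp in their leading bytes, which is
// why sorting on _id approximates "latest first" on an unsharded setup.
function latestFrom(docs, id, n) {
  return docs
    .filter(d => d._id <= id)                    // everything up to the anchor id
    .sort((a, b) => (a._id < b._id ? 1 : -1))    // newest first
    .slice(0, n);                                // limit(n)
}

const docs = [{ _id: 1 }, { _id: 2 }, { _id: 3 }, { _id: 4 }, { _id: 5 }];
console.log(latestFrom(docs, 4, 3).map(d => d._id)); // [4, 3, 2]
```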
[14:28:42] <failshell> hello. when i disable the balancer on a config node and then run sh.getBalancerState() on a replica set node, it should return false right?
[14:28:50] <kobigurk> regarding Azure on MongoDB - I want to use the RoleEnvironment.Changed event and reconfigure mongo as to the number of replica members existing there. It's relatively simple - is there any reason it wasn't implemented? Am I missing something?
[14:28:57] <kobigurk> MongoDB on Azure* :D
[14:34:24] <baniir> anyone running raid10 ebs setup with ubuntu? I set up according to the docs at http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-amazon-ec2/ but am getting readahead warnings from mongo; i think the readahead settings are not persisting through reboots
[14:36:25] <failshell> hello. when i disable the balancer on a config node and then run sh.getBalancerState() on a replica set node, it should return false right?
[14:50:19] <ExxKA> Hey Guys, I have a question on mongoose that I hope some of you can answer. When I build up an object graph, I later experience that at a certain depth, only the object id's are returned, not the actual object. Why is that?
[15:35:46] <ckd> You around, Derick?
[16:08:05] <eryc> fyi its a holiday in the US
[16:13:19] <OliverJAsh> I want to build an application similar to Twitter, and I'm looking for advice/help/recommendations on how to engineer the back-end (in terms of collections, replication, etc.)
[16:14:09] <OliverJAsh> Currently the `User` document has an array on it called `followingUsers`. When I want to load that user's timeline, I look for `Tweet`s that contain the user IDs in that array.
[16:14:22] <OliverJAsh> However this is surprisingly slow.
[16:14:43] <ckd> isn't Derick UK? or am I thinking of Hannes
[16:15:30] <kali> europe. not sure about uk.
[16:20:47] <hydrawat> Anyone build mongo 2.4.3 on osx with ssl support?
[16:25:25] <double_p> i hate the connection drops for rs.reconfigure() :-( is that contained within shards aswell?
[16:26:46] <Number6> double_p: For a sharded replica set?
[16:27:09] <Number6> the reconfig calls an election - which is why the connections drop
[16:31:34] <double_p> i know when it happens, but i didnt read much about sharding so far. just curious..
[16:31:58] <double_p> the app servers really dont like that "only some (micro)seconds" loss .. or let's say, customer does not :)
[16:34:39] <double_p> just the point that a backup/DR-site slave is sometimes disconnected (vpn drop). and even with a replSet of main/local-slave/dr-slave with priority 3/2/1, they always start to vote when the dr-slave goes dark. and i want to contain that
[18:07:08] <xuewenzhi> Hi, I have a feed system using fan-out on write. I keep a list of feed ids in redis, and save the feed content in mongodb, so every time when i read 30 feed, i have to do 30 query to mongodb, is there anyway to improve it ?
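The standard answer to xuewenzhi's fan-out read is to collapse the 30 point lookups into one `$in` query: `db.feeds.find({ _id: { $in: ids } })`. The sketch below models that single round trip in memory; `fetchByIds` is a hypothetical helper name, not a driver method.

```javascript
// One $in query instead of 30 individual queries:
//   db.feeds.find({ _id: { $in: ids } })
// `collection` stands in for the feeds collection; the Set lookup
// mirrors the server matching each document against the id list.
function fetchByIds(collection, ids) {
  const wanted = new Set(ids);
  return collection.filter(doc => wanted.has(doc._id));
}

const feeds = [
  { _id: "a", text: "first" },
  { _id: "b", text: "second" },
  { _id: "c", text: "third" },
];
console.log(fetchByIds(feeds, ["a", "c"]).map(d => d._id)); // ["a", "c"]
```

One caveat: `$in` does not return documents in the order of the id list, so results still need to be reordered client-side to match the sequence kept in redis.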
[18:12:00] <redfox_> hi all. I have a document and want to remove a subdocument's object. Is an update({}, { $pull: { subdoc: { _id: ... } } }, { multi: true }) as performant as an update({ _id: ... }, { $pull: { subdoc: { _id: ... } } })?
[18:15:13] <redfox_> in other words: how does it affect performance to $pull a subdocument by searching through the whole collection, versus selecting the parent document first?
[18:17:29] <tncardoso> I am experiencing high memory usage in my server. I have a machine with 17GB RAM, and it is using 14GB resident and 60GB virtual. My main concern is the non-mapped virtual memory usage of 14GB. All my databases and collections use less than 8GB. What are the possible causes?
[18:19:49] <double_p> indexes?
[18:19:50] <kali> tncardoso: start there: http://docs.mongodb.org/manual/faq/storage/
[18:22:44] <tncardoso> double_p: kali thanks for the answer. I've already checked this link. The problem is that all my indexes and all the storage have 8GB. In theory I could fit all my working set in memory.
[18:23:11] <kali> tncardoso: what do you mean "have 8GB" ?
[18:23:45] <tncardoso> kali: all collections storageSize + indexTotalSize <= 8GB
[18:23:54] <double_p> tncardoso: put it this way: mongo is a mem-hogger (for virtual). we upgraded from 8G to 96G, and mongo wasn't shy about allocating that
[18:23:58] <double_p> :)
[18:24:07] <tncardoso> hehe
[18:24:18] <tncardoso> and the non-mapped virtual memory? is that ok?
[18:24:20] <kali> tncardoso: how big is the database on disk ?
[18:25:17] <tncardoso> kali: all dbpath takes 26GB
[18:25:41] <kali> there you go
[18:25:43] <double_p> it was 8gig 10min ago
[18:25:51] <kali> mongodb maps all this to RAM
[18:25:52] <kali> twice
[18:26:20] <double_p> yeah, for 'v'
[18:26:31] <tncardoso> kali: and what about the non-mapped virtual memory?
[18:27:13] <double_p> the point in case, 6 fold the mem. mongo goes there... "spread" -- but speed? well :D
[18:30:15] <kali> how do you measure this 14GB ?
[18:31:02] <double_p> kali: du?
[18:31:19] <tncardoso> kali: mms
[18:31:39] <double_p> and for the record, my 45GB disk db is using that 98 happily :D
[18:32:02] <kali> double_p: yeah, usually it's about twice
[18:32:11] <double_p> f(index)
[18:32:50] <ron> finger index?
[18:33:14] <double_p> function/relation to count of index
[18:33:28] <double_p> short, there's no panacea
[18:33:40] <tncardoso> kali: it seems strange to have only 100 connections using 14GB non-mapped
[18:34:19] <double_p> "vmem, here i come"
[18:34:46] <kali> tncardoso: what OS ?
[18:34:56] <ron> DRAGON!
[18:35:19] <double_p> ..flybsd?
[18:35:49] <tncardoso> kali: ubuntu 12.04 x86_64
[18:36:20] <double_p> tncardoso: ubuntu wasnt invented to be "server" :)
[18:37:47] <ron> ubuntu wasn't "invented".
[18:37:49] <kali> tncardoso: i don't know where these 14GB comes. i'm pretty sure it's not the 100 connections
[18:38:22] <kali> double_p: ubuntu is not that bad
[18:38:40] <kali> (and i'm a debianist)
[18:38:45] <tncardoso> hahaha I'll try to file a ticket in jira
[18:39:40] <tncardoso> once i had problems with gentoo (the emerge tree got completely broken). bad times
[18:40:02] <tncardoso> double_p: what do you use for your servers?
[18:41:36] <double_p> tncardoso: depends, from openbsd to solaris, and some linue intermixed
[18:41:44] <double_p> linux..
[18:59:25] <baniir> hi. is anyone here running raid10 on ebs? i changed readahead values with blockdev, but i don't think the settings are persisting through reboots
[19:08:31] <qle> hi
[19:13:42] <ixti> hey all
[19:14:09] <ixti> is it possible to debug somehow what BSON is sent to mongodb
[19:14:17] <ixti> i have kinda strange issue
[19:14:47] <ixti> collection A has documents with field "b", and its values are Integers
[19:15:17] <ixti> but once I map-reduce it, values are Floats on destination collection
[19:16:04] <ixti> I clearly understand that JS does not distinguish Float vs Integer... it's all Number()
[19:16:10] <ixti> but still, kinda weird
[19:16:27] <ixti> as when i use it in Ruby... well Ruby distinguishes them :D
[19:17:46] <qle> hi anyone use capped collections with mongo
[19:17:51] <qle> with mongoose
[19:17:52] <qle> rather
[19:18:01] <ixti> according to BSON specs ints and doubles are different types...
[19:21:32] <ixti> looks like Number is treated as \x01
[19:21:47] <ixti> while what I expect is \x12
[19:22:08] <ixti> and to make sure i should return new NumberLong(v) in reduce function
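ixti's diagnosis can be demonstrated in a couple of lines: the JS engine running map-reduce has only one Number type (an IEEE-754 double), so any plain number a reduce function returns serializes as BSON double (type 0x01) rather than int64 (0x12) — hence the `NumberLong` wrap. The `reduce` function below is a hypothetical minimal reducer, run here as ordinary JS, not inside mongod.

```javascript
// A plain-number reducer: correct value, but inside mongod the result
// would come back as BSON double (\x01). The fix in a real reduce is:
//   return new NumberLong(sum);   // preserves BSON int64 (\x12)
function reduce(key, values) {
  return values.reduce((sum, v) => sum + v, 0); // a plain JS double
}

console.log(reduce("b", [1, 2, 3])); // 6 -- integer-valued...
console.log(1 === 1.0);              // true -- ...but JS can't tell int from float
```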
[22:08:52] <arthurnn> when I am draining a shard, do the reads for the drain happen on a secondary or on the primary? I would assume it is on the primary so it moves the most consistent data. is that right?
[23:25:48] <meltz> hello
[23:26:02] <meltz> i have a question about backups
[23:27:09] <meltz> anyone here?
[23:38:55] <meltz> ???
[23:41:56] <BurtyB> meltz bit quiet in here atm but people might come to life if they know the Q otherwise hang around ;)
[23:42:06] <meltz> haha ok
[23:42:47] <meltz> my question is what is the best way to do a filesystem level backup on a replica set configuration where each database is on a RAID 10 config
[23:43:04] <meltz> and i need to have 0 downtime
[23:43:26] <meltz> i'm thinking of using a secondary
[23:43:42] <meltz> and because it's on raid 10, do I need to lock it?
[23:43:59] <meltz> also do I need it to be a silent secondary for there to be no downtime?