PMXBOT Log file Viewer


#mongodb logs for Friday the 21st of February, 2014

[02:02:44] <nahtnam> Hi is anyone here?
[02:03:29] <nahtnam> I am having a little trouble understanding what a "scalable" database means...
[02:09:27] <nahtnam> nvm
[03:59:43] <wg> If i'm making a threaded comment system, with each comment in its own doc, is an array of objectIDs on parents (articles, say) a good way to reference them?
[04:05:27] <Ferry> Hello
[04:48:07] <nahtnam> Hi!
[05:05:44] <protometa> how does sort work on a lazy cursor?
[05:07:07] <protometa> how does it know what order things go in if it's lazy?
[06:12:00] <caulagi> hello - What does this error mean?
[06:12:01] <caulagi> [MongoError: Can't extract geo keys from object, malformed geometry?:{ type: "Point", coordinates: [ 38.5151758, -121.500241 ] }]
[06:23:56] <caulagi> ah, got it. Should be longitude, latitude. I was using latitude, longitude.
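GeoJSON puts longitude first, so the point from that error would be stored as below; a minimal shell sketch, with the collection and field names made up for illustration:

    db.places.ensureIndex({ loc: "2dsphere" })
    // coordinates must be [ longitude, latitude ]
    db.places.insert({ loc: { type: "Point", coordinates: [ -121.500241, 38.5151758 ] } })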
[06:49:41] <a215> if i run an "in" search with an array of 175 ids, when those ids are indexed, should it be taking 5 entire seconds?
[06:58:58] <a215> i'd paste code but it's running through mongoid
[08:04:24] <jrippon> Hi everyone - I wonder if I can get some advice/thoughts on an upgrade/migration strategy for our sharded cluster of mongodb replicasets to move to new servers?
[08:05:56] <jrippon> at present the servers are running mongo 2.0.1, we've three replica sets which we shard our main db collections across.
[08:08:11] <jrippon> unfortunately, both the replica set members and the mongos configdbs have been configured by IP address, so I can't just clone them and update DNS. I've added replicas on our new servers to each replica set, and I'd thought the best course of action was to start by re-defining the configdbs to use hostnames which I can point at the new servers to migrate them.
[08:11:58] <jrippon> sadly when I brought the stack back up with a hostname for the last configdb in the mongos config (first stage in the "migrating configdb to a new hostname" instructions) one of the three replicasets in db.shards.find() isn't correct, and I get a "warning: bad serverID set in setShardVersion" in my logs
[08:56:46] <stathis> Hi to all, I am new to mongodb and I want to create the following simple schema. I will describe it in relational thinking: 3 tables -> company, product, deals. company is related one-to-many to deals, and product is related one-to-many to deals, so the deals table is the link between company and product. It would have the following structure: deal_id | company_id | product_id | extra_fields. How should I implement this structure?
[08:57:17] <stathis> embedded structure?
[08:58:06] <Nodex> really depends on your access patterns
[09:03:16] <Repox> Hi. I can't seem to find in the documentation how I change a user's role. I have a user I'd like to make dbAdmin - should I just add the user again with the right role?
[09:05:48] <stathis> nodex: what do you mean by access patterns?
[09:13:12] <MatToufoutu> sry for the flood people, bouncer issues :(
[09:15:11] <Nodex> stathis : how you intend on accessing your data
[09:15:27] <Nodex> and it has nothing to do with the driver or language
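One way stathis's relational layout could carry over, assuming reads usually start from a company; the collection and field names below are illustrative only:

    // referenced form: a deals collection that links companies and products, as in the relational design
    db.deals.insert({ company_id: 12, product_id: 7, extra_fields: { price: 100 } })
    // typical access pattern: all deals for a given company
    db.deals.find({ company_id: 12 })

Whether to keep deals referenced like this or embed them inside the company documents is exactly the access-pattern question Nodex raises.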
[10:20:50] <Morgawr> hello, pardon if this is a very noobish question but I'm just starting to learn how to use mongodb, I have a collection with some data and I want to modify a single field, when I try to use update({"id":someNumber},{"fieldToModify":"newValue"}) it just overwrites the whole collection
[10:30:11] <Nodex> use $set
[10:30:37] <Nodex> db.foo.update({id:1},{$set:{"key":"New Value"}});
[10:30:55] <Nodex> and it's not modifying the collection, it's modifying the document
[10:32:17] <Morgawr> Nodex: ah okay, thanks
[10:32:37] <Nodex> document is more analogous to a row in relational
[10:32:41] <Nodex> collection to a table
[10:32:47] <Nodex> database to a ..... database!
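A minimal shell sketch of the difference Nodex describes, using the field names from Morgawr's question:

    // replaces the whole matched document with {fieldToModify: "newValue"}
    db.foo.update({ id: 1 }, { fieldToModify: "newValue" })
    // changes only that one field, leaving the rest of the document intact
    db.foo.update({ id: 1 }, { $set: { fieldToModify: "newValue" } })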
[12:24:54] <movedx> We have a simple cluster: router, three shards and three config servers. When upgrading the MongoDB version, in what order should I shut these down? I'm thinking router first to stop traffic coming in; shards to stop usage of the config servers; then the config servers?
[12:25:01] <movedx> Then upgrade and bring back up in reverse order?
[12:32:14] <bobbytek> Hi all, can someone please point me to documentation on nested projections?
[12:32:29] <bobbytek> For example: db.Release.findOne({},{'submissions.projectKey': 1})
[12:32:35] <bobbytek> Which works in the shell
[12:33:23] <bobbytek> However, according to http://docs.mongodb.org/manual/tutorial/project-fields-from-query-results/#projection, "MongoDB does not support projections of portions of arrays except when using the $elemMatch and $slice projection operators."
[12:33:31] <bobbytek> That seems inconsistent
[12:52:40] <scruz> hello. is it possible to do aggregates with group on an array?
[12:58:21] <bobbytek> https://groups.google.com/forum/#!topic/mongodb-user/iIRcEwU0llk
[13:02:44] <Nodex> bobbytek : I think you're confusing slicing with projection
[13:03:28] <Nodex> you will need to use the dollar operator to extract the piece of the array you desire
[13:05:52] <bobbytek> Nodex, I'm not trying to slice an array
[13:06:06] <bobbytek> I simply want a subset of each array element's fields, for all elements
[13:06:20] <bobbytek> I cannot find any documentation on this, but I know it works in the shell
[13:06:30] <bobbytek> I gave an example in the link above
[13:06:52] <Nodex> I didn't say you were trying to slice an array
[13:07:12] <bobbytek> I didn't say you did :)
[13:07:25] <bobbytek> I am just trying to clarify
[13:07:26] <Nodex> anything you can do in the shell can be done in a driver too
[13:07:56] <bobbytek> Sure, but if you are trying to direct a library implementer to support this feature, it helps if you can point them to the docs :)
[13:08:21] <bobbytek> In this case, it is the mysema querydsl for MongoDB
[13:08:54] <bobbytek> https://groups.google.com/forum/#!topic/querydsl/sxVbXA-IQ18
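For what bobbytek is doing, a dotted projection does return just that field from every element of the array; the caveat in the docs is about selecting particular elements, which needs $elemMatch, $slice, or the positional $. A shell sketch using his own example:

    // each submissions element comes back containing only projectKey
    db.Release.findOne({}, { "submissions.projectKey": 1 })
    // picking a subset of the elements themselves is where $slice/$elemMatch apply
    db.Release.find({}, { submissions: { $slice: 2 } })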
[13:24:11] <scruz> following http://docs.mongodb.org/manual/tutorial/model-tree-structures-with-ancestors-array/. trying to get a query to group books by topic. could anyone help, please?
[13:31:15] <Nodex> scruz : please gist/pastebin your code
[13:32:43] <tscanausa> is there a way to change which shard a database is on?
[13:33:14] <cheeser> you can copy a database to another host.
[13:33:23] <cheeser> db.copyDatabase()
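For reference, the shell helper cheeser means takes the source db, the target db, and optionally the host to pull from; the names below are placeholders:

    // copy "appdata" from another host into "appdata" on the connected server
    db.copyDatabase("appdata", "appdata", "otherhost.example.com")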
[13:35:18] <scruz_> sorry, internet toggled off there
[13:38:06] <scruz_> again, i want to use the book example from the documentation, since it's really close to my use case.
[13:38:46] <Nodex> please pastebin what you have done so far
[14:01:14] <scruz_> Nodex: https://dpaste.de/zsb6
[14:01:33] <scruz_> i'm afraid it just shows i don't understand the idea
[14:02:12] <Nodex> you can probably do it better with aggregation
[14:05:27] <scruz_> what i really want to do is something like 'give me all Stations that are in Chicago', or 'give me all Stations in Nevada'.
[14:09:14] <Nodex> can you pastebin a typical document?
[14:13:29] <scruz> https://dpaste.de/2iaP
[14:14:27] <scruz> if, for instance, i want to get all Motels in Kenya, i'd also like to group by District, for instance.
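Roughly the aggregation Nodex is pointing at, assuming each document has an ancestors array of place names plus type and district fields; the real field and collection names are unknown here since the pastes have expired:

    db.stations.aggregate([
      { $match: { type: "Motel", ancestors: "Kenya" } },     // all Motels anywhere under Kenya
      { $group: { _id: "$district", count: { $sum: 1 } } }   // counted per district
    ])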
[14:39:04] <fdask> hey all. wondering what the best option is for accessing mongo from lua?
[14:46:27] <MmikePoso> Hi. I can't find confirmation for this in the manual - when the secondary instance in replset is 'lagging' behind the primary, it will not respond to queries - during that time it is in startup2 state
[14:46:33] <MmikePoso> but where can I configure the lag amount?
[16:39:16] <wc-> hi all, im trying to forecast some mongo data / memory usage
[16:39:31] <wc-> do index sizes tend to scale linearly as data is added? (assuming we dont add new indexes)
[16:40:36] <wc-> and when we add new data, i know mongo reserves disk space in larger and larger blocks, anyone know any good documentation on that? id like to learn more about it
[16:41:15] <Joeskyyy> This page covers all that pretty well IMO http://docs.mongodb.org/manual/faq/storage/
[16:41:31] <Joeskyyy> Addresses the size of index and the file buffer
[16:45:05] <wc-> ok thank you
[16:45:09] <wc-> this is perfect
[16:45:47] <Joeskyyy> No prob
[17:19:15] <wc-> anyone have recommended aws instances for mongo?
[17:19:24] <wc-> im currently pricing out memory optimized instances
[17:19:38] <wc-> just wondering if anyone has had previous positive experiences with certain instance types
[17:19:52] <cheeser> http://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
[17:21:03] <Nodex> not cheap on AWS lol
[17:24:25] <Joeskyyy> Shameless promotion of Rackspace here (I'm a Racker).
[17:24:50] <Joeskyyy> Object rocket is pretty neat as well, again, bit biased since they're owned by Rackspace
[17:26:47] <Nodex> not a fan of any cloud stuff tbh, find it very expensive for what it is
[17:27:03] <Nodex> cloud -> provisioned*
[17:27:50] <Nodex> each to their own I suppose
[17:27:59] <Joeskyyy> Eh, depends a bit on what you're doing.
[17:28:21] <Joeskyyy> Shared environments can be pretty terrible for REALLY intensive workloads, but with the scalability of mongo, the cloud kinda plays nice.
[17:28:35] <Joeskyyy> And virtualization is getting better and better with making sure resources are properly fenced off.
[17:28:51] <Joeskyyy> No one likes a noisy neighbour (:
[17:28:54] <Nodex> still expensive compared to bare metal
[17:29:22] <Joeskyyy> Again, depends a bit on what you're doing. But I get the counter point.
[17:29:32] <Joeskyyy> If you're using the cloud in a "cloudy" way it's less expensive.
[17:29:46] <Joeskyyy> Meaning spinning up and tearing down constantly, and only leaving the bare minimum up for what you need.
[17:29:52] <Joeskyyy> Then bursting up new instances when you need em.
[17:30:02] <Nodex> if you're spinning up machines to deal with load like netflix do then it's probably cheaper in the long run
[17:30:16] <Joeskyyy> That's why a hybrid environment is the bee's knees. Having your bare metal constantly running, then bursting into the cloud when you need it.
[17:30:47] <Nodex> tbh we just have infrastructure that does queue processing and backups but can handle load when needed
[17:30:58] <Joeskyyy> That's the most optimal, I agree.
[17:31:14] <Joeskyyy> Have a nice redis/RabbitMQ/whatever take all your things and put them in over time.
[17:31:15] <Nodex> but we're in between small and large so an edge case
[17:31:37] <Joeskyyy> Right, it may not work as well for someone who needs data as soon as it's sent away.
[17:31:43] <Nodex> not worth putting it all in the cloud and not worth having racks in DC's
[17:31:47] <Joeskyyy> But if that's the case, you're probably gonna run into issues regardless :P
[17:32:04] <Nodex> tbh for cold storage we do use the "cloud"
[17:33:07] <Joeskyyy> haha, ah the almighty "cloud" and its multiple meanings
[17:33:25] <Nodex> I was doing "cloud" ten years ago before it was trendy
[17:33:34] <Joeskyyy> Hipster Cloud
[17:33:52] <Joeskyyy> Nodex, btw, if you haven't seen ObjectRocket in action, it's pretty damn neat. Closest you can get to bare metal in virtualization.
[17:33:53] <Nodex> yeh lol, cloud CRM before / around same time as salesforce etc
[17:34:01] <Joeskyyy> They use FusionIO cards, and hot damn are they fast.
[17:34:06] <Nodex> I've not seen or heard of it
[17:34:09] <Joeskyyy> Writing to disk is pretty much like writing to RAM
[17:34:13] <Nodex> sweet
[17:34:32] <Joeskyyy> They have a free trial on their site, I shamelessly suggest trying it out to see what it's all about.
[17:34:36] <Nodex> been checking out docker a lot recently, trying to "containerize" for ease
[17:34:42] <Joeskyyy> Yisssss docker <3
[17:34:53] <Joeskyyy> ObjectRocket uses containerizing actually on their baremetal for provisioning
[17:35:03] <Joeskyyy> OpenVZ containerizing, so very very very little overhead.
[17:35:36] <Nodex> it's pretty sweet and easy to spin an AMI up or w/e and your stack is there without too much hassle
[17:35:40] <Joeskyyy> I built a simple mongoDB docker container I pushed to the repos, I never actually load tested it to see if it actually works haha
[17:35:43] <Nodex> no need for chef etc
[17:35:50] <Joeskyyy> right, thats why I love docker.
[17:35:52] <Joeskyyy> Dat repo
[17:36:00] <Nodex> can't wait for it to become 1.0
[17:36:15] <Joeskyyy> They were actually just at Rackspace a few weeks ago and I watched em present on a bunch of stuff
[17:36:19] <Joeskyyy> really smart guys there.
[17:36:27] <Nodex> for real
[17:36:41] <Nodex> nice guys from what I can make out
[17:36:44] <Joeskyyy> Like, I'm pretty technical and yeah, eyes glazed over a few times haha
[17:37:27] <Nodex> I hope this year sees a lot more of it
[17:37:42] <Joeskyyy> Beware, they're already calling containers "Cloud 3.0"
[17:37:44] <Joeskyyy> x_o
[17:37:51] <Nodex> hahaha
[17:37:57] <Joeskyyy> I'm not joking sadly lol
[17:37:59] <Nodex> newest hipster word incoming!
[17:38:08] <Joeskyyy> heard the docker guys call it that, and the ZeroVM guys call it that.
[17:38:13] <Nodex> I can already see the t-shirt with the consolas font!
[17:38:17] <Joeskyyy> Then it caught on with our CEO, so expect Rackspace to say it too :P
[17:38:36] <Nodex> "Docker \n Cloud 3.0!"
[17:39:04] <Joeskyyy> Docker does have some pretty bad ass shirts, got one with their whale on it when they were here.
[17:39:12] <Nodex> nice
[17:39:13] <Joeskyyy> Shirts are like trading cards at Rackspace
[17:39:23] <Nodex> haha
[17:40:09] <wc-> how can i use serverStatus.workingSet.pagesInMemory and serverStatus.workingSet.overSeconds to calculate my working set size?
[17:40:17] <wc-> "Use pagesInMemory in conjunction with overSeconds to help estimate the actual size of the working set."
[17:41:08] <Joeskyyy> db.serverStatus().workingSet.pagesInMemory should do it
[17:41:27] <Joeskyyy> That's assuming you have a workingSet in memory that is
[17:42:01] <wc-> so taking pagesInMemory * 4 gives me the size of the working set in kilobytes?
[17:43:12] <Joeskyyy> I'm not 100% on that actually, most things in mongo are represented in bytes.
[17:43:49] <Joeskyyy> ah, i see your question
[17:43:50] <Joeskyyy> yeah.
[17:43:59] <Joeskyyy> "The default page size is 4 kilobytes: to convert this value to the amount of data in memory multiply this value by 4 kilobytes."
[18:25:25] <ekristen> having some problems with the mongodb node.js driver
[18:25:38] <ekristen> trying to get secondaryPreferred reads working
[18:26:00] <ekristen> but I keep seeing in the logs on my master that readpreference mode is primary
[18:26:09] <ekristen> 1.3.23 mongodb driver version
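One way to ask for secondary reads with that driver generation is the connection-string readPreference option; a minimal sketch with placeholder hosts and replica set name, not a confirmed fix for ekristen's setup:

    var MongoClient = require('mongodb').MongoClient;
    // readPreference in the URI asks the driver to route eligible reads to secondaries
    var uri = 'mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred';
    MongoClient.connect(uri, function (err, db) {
      if (err) throw err;
      db.collection('things').findOne({}, function (err, doc) {
        console.log(doc);
        db.close();
      });
    });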
[19:06:56] <synth__> in php, how would i update a subdocument if my document is like: ----> { "_id" : ObjectId("52e074d91bcfc5ca4dcb57c2"), "description" : "", "email" : "", "extra" : { }, "hide" : false, "mobile" : "", "name" : "", "phone" : "", "photo_url" : "", "uid" : "josh.childers", "visibility" : { "description" : 1, "email" : 1, "phone" : 1, "mobile" : 1 } } <---- and let's say i want to update
[19:06:56] <synth__> visibility["mobile"] to be 0
[19:30:21] <paulkon> how many index intersections can be done per query?
[19:30:38] <paulkon> in the docs it implies that only one intersection can be performed per query
[19:37:44] <cheeser> in 2.4, only one index can be used.
[19:38:18] <wc-> for mms monitoring, is it standard to run just 1 monitor process
[19:38:19] <wc-> for a replica set
[19:38:26] <wc-> or run one on every mongo instance
[19:38:59] <cheeser> i'd imagine you want 1 per instance
[20:05:24] <royiv> Is there any reason MongoDB's virtual size would be less than the RSS?
[21:34:48] <Guest19282> does mongodb have multilingual support ? i wish to store data in hindi and tamil (Indian languages) ?
[21:54:44] <kali> Guest42488: string values are stored in UTF-8, so assuming hindi and tamil are unicode, i think you should be fine for storing
[21:54:52] <kali> wrong guest.
[21:55:03] <kali> the right one has left
[21:55:06] <kali> *sigh*
[22:52:12] <ekristen> hey guys need help with node.js mongodb driver
[22:52:26] <ekristen> is there any way to know if my queries are going to the secondary servers?
[23:32:07] <Waheedi> does the copydb command work with big databases?
[23:32:24] <Waheedi> and how much time could this command take for, say, a 1TB database?
[23:34:08] <cheeser> duration is a function of so many factors.
[23:39:38] <Waheedi> yeah but would that even work with a 1TB database
[23:39:49] <Waheedi> whether its a 2mb cpu or 4000k
[23:40:22] <wc-> not sure if its faster or not but theres also mongodump / mongorestore
[23:40:39] <wc-> no idea which is faster, if i had to guess id say copydb but i dont know
[23:41:09] <Waheedi> copydb seems to work without locking or affecting the current production environment
[23:41:17] <Waheedi> i would guess its the safest way to do it