PMXBOT Log file Viewer


#mongodb logs for Thursday the 12th of June, 2014

[03:13:49] <kotedo> Is there a functioning Erlang driver for MongoDB that supports a ReplSet ?
[03:20:47] <joannac> kotedo: https://github.com/TonyGen/mongodb-erlang suggests it supports replica sets
[03:21:03] <joannac> oh as does https://github.com/mongodb/mongodb-erlang
[03:28:03] <kotedo> joannac: I think I tried these drivers and they both seem to fail ... Are they compatible with Erlang V17 ?
[03:31:45] <joannac> Not sure, sorry. You could file a ticket in the erlang project https://jira.mongodb.org/browse/ERLANG
[03:32:00] <joannac> I'm not sure if there's active work on it right now though
[03:38:17] <kotedo> joannac: Yeah, the (previous Erlang) mongodb driver was pretty bad ...
[06:35:11] <Viesti> Hi. I'm trying out the bulk write API to speed up importing data into a 3-node replica set
[06:36:01] <Viesti> running locally there's a noticeable difference compared to the mongoimport tool, which seems to import one document at a time
[06:36:23] <Viesti> now running this in a development cluster, this wasn't any faster :/
[06:37:05] <Viesti> I have 700 files, each containing about 20,000 JSON documents
[06:37:58] <Viesti> seems that when the secondaries replicate updates from the primary, the updates/s figure from mongostat drops
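Viesti's bulk-import approach boils down to batching documents instead of inserting one at a time. A server-free Python sketch of just the batching step (batch size and document shape are made up; with current pymongo each batch would feed `insert_many(batch, ordered=False)`):

```python
def chunk(docs, size):
    """Yield successive batches of at most `size` documents."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

# Hypothetical documents standing in for one of the JSON files.
docs = [{"n": i} for i in range(45)]
batches = list(chunk(docs, 20))
# Each batch would then go to one unordered bulk insert.
```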
[07:05:53] <Viesti> is there a way to modify the iops limit of an attached volume?
[07:08:59] <rspijker> Viesti: the “iops limit”?
[07:20:50] <Viesti> https://www.evernote.com/shard/s19/sh/0c8da417-31f9-4d6e-8d60-7c3029e10e88/c347c42fd1377de5ac9c6e5eca99f2f8/deep/0/development-primary.png
[07:22:09] <Viesti> https://www.evernote.com/shard/s19/sh/07651dde-acf3-4af1-9088-72a3d2fc8eba/f4bde73ef4631246bd69bd4ee6be7221/deep/0/production-primary-data-volume.png
[07:32:02] <rspijker> Viesti: that looks more like an amazon question
[07:48:19] <Viesti> yep...
[07:51:39] <Viesti> so at best run times I'm seeing something like this: http://pastebin.com/raw.php?i=UPUTZGvf
[07:52:50] <kali> Viesti: no, you can't alter the iops limit, you need to drop the block device and recreate it
[07:53:30] <kali> Viesti: for what it's worth, not provisioning may actually be an option: an unprovisioned disk gets higher iops peaks, but there is no guarantee
[07:53:59] <kali> Viesti: we are not 100% convinced of the benefits of iops provisioning, here
[07:54:44] <Nopik_> hi. if I have a collection with an index on { a: 1, b: 1 }, will adding { a: 1 } and/or { b: 1 } indexes improve query performance? or do such fully overlapping indexes make no sense and just take time to build for no benefit?
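Nopik_'s question comes down to MongoDB's index-prefix rule: a compound index { a: 1, b: 1 } already serves queries on { a }, so a separate { a: 1 } index is redundant, while { b: 1 } can still help. A simplified Python model of the rule (it only checks exact field sets against prefixes, ignoring sort direction and range predicates):

```python
def supported_by(index_keys, query_fields):
    """A compound index can serve a query whose fields form a
    prefix of the index's key list (MongoDB's index-prefix rule)."""
    prefix = []
    for key in index_keys:
        prefix.append(key)
        if set(query_fields) == set(prefix):
            return True
    return False

compound = ["a", "b"]
assert supported_by(compound, ["a"])        # so a separate {a: 1} is redundant
assert supported_by(compound, ["a", "b"])
assert not supported_by(compound, ["b"])    # a {b: 1} index could still be useful
```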
[07:56:28] <Viesti> here's another run, I'm guessing that reads from secondaries slow down writing: http://pastebin.com/raw.php?i=EsXwQL3D
[07:56:38] <Viesti> I might be totally wrong though :)
[07:57:39] <Viesti> kali: yep, was wondering that too, since the iops setting is a kind of upper limit as well (I'm supposing that it should guarantee iops levels too?)
[07:57:49] <Viesti> might be wrong again though :)
[08:09:29] <Zelest> frodo_baggins, http://de2.eu.apcdn.com/full/122279.gif .. is that you? :o
[08:37:27] <djlee> Hi all, we're looking at setting up just a single large instance for mongo on AWS until we get a chance to look at replica sets (loads of scaling work everywhere, and not enough time before the deadline). We're going for a memory-optimised instance for our redis instance. But what is best for mongo? Should we be preferring cpu over memory or vice versa? Or should we be looking at a more general-purpose machine?
[08:38:03] <djlee> We'll probably use EBS instead of the instance storage, as instance storage is, I believe, volatile, so I've discounted all storage-optimised instances
[08:40:45] <kali> djlee: memory first
[08:40:55] <kali> djlee: then iops, then cpu
[08:41:04] <kali> djlee: ymmv
[09:03:13] <djlee> sorry kali, pointless team meeting caught my attention
[09:03:19] <djlee> cheer for the advice kali :)
[09:03:22] <djlee> cheers*
[09:54:50] <rasputnik> i'm building a replica set from a mongo with several interfaces. the set builds, but the primary is connected via the wrong IP. any way to change that?
[09:55:54] <rasputnik> the 2 secondaries show their IPs in rs.status() but the node I created the set on appears as its hostname.
[10:03:20] <lxsameer> hey guys, B is embedded in A; can I query on B to get A?
[10:15:06] <djlee> lxsameer: use dot notation "db.users.find({ 'names.firstname': "lee" });" for example
[10:15:32] <lxsameer> djlee: what if there was a C document which is embedded in B?
[10:16:09] <djlee> lxsameer: assuming you have the data, why not try adding another ".field" and seeing if it works or not?
[10:16:49] <lxsameer> djlee: you mean like names.firstname.something ?
[10:17:39] <djlee> lxsameer: yep
[10:17:47] <lxsameer> djlee: thanks
[10:37:29] <lxsameer> can I index an embedded document field to be unique in the entire collection?
[10:43:14] <joannac> yes?
[10:44:25] <joannac> why wouldn't you be able to?
[10:45:15] <lxsameer> joannac: is the normal unique index unique across the entire collection?
[10:45:23] <joannac> yes
[10:45:50] <joannac> what else could it be?
[10:46:19] <joannac> (not being snarky: is this a docs problem? is it actually not clear?)
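joannac's point, that a unique index constrains the whole collection even when the indexed field lives in an embedded document, can be emulated in Python. (In the shell of that era the real command would be along the lines of db.coll.ensureIndex({'profile.email': 1}, {unique: true}); the field name here is invented.)

```python
def violates_unique(docs, path):
    """Check whether a dotted path holds a duplicate value across docs,
    which is what a unique index on an embedded field would reject."""
    seen = set()
    for doc in docs:
        value = doc
        for part in path.split("."):
            value = value.get(part) if isinstance(value, dict) else None
        if value in seen:
            return True
        seen.add(value)
    return False

docs = [{"profile": {"email": "a@x"}}, {"profile": {"email": "b@x"}}]
assert not violates_unique(docs, "profile.email")
docs.append({"profile": {"email": "a@x"}})   # duplicate anywhere in the collection
assert violates_unique(docs, "profile.email")
```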
[12:14:57] <ev0x> hi guys
[12:15:17] <ev0x> i have a db which is getting slower and slower
[12:15:36] <ev0x> i want to create an index on a field but it is an array
[12:15:51] <ev0x> is it ok to create a hashed index on an array?
[12:16:12] <kali> ev0x: hashed indexes are only meant for sharding. why do you want it hashed?
[12:16:30] <kali> ev0x: show us a typical query and a typical document
[12:16:34] <ev0x> i just want an index in general
[12:16:44] <kali> ok. forget about the hashed but
[12:16:46] <kali> bit
[12:17:11] <ev0x> my query uses a field and i wanted to index the field in hope it would speed up
[12:17:26] <kali> suez. show us a typical query, and a typical doc :)
[12:17:34] <kali> s/suez/sure/
[12:17:46] <ev0x> around 55million records
[12:17:50] <ev0x> ok one sec
[14:18:41] <michaelchum> Hi, I would like to move the physical location of my mongo database to another hard drive (on the same server). Can I just copy the contents of my dbpath to the new drive and point mongod at the new path?
[14:19:23] <cheeser> if you shutdown, etc., properly that should be fine.
[14:19:59] <michaelchum> Oh ok thanks cheeser!
[14:53:12] <saml> how can I find docs whose arr property contains ['a', 'b'] ?
[14:53:42] <saml> {"arr": ["b", "a", "c"]} GOOD, {"arr": ["b"]} BAD
[14:53:59] <saml> coll.find({arr:{$all:['a', 'b']}}) won't do
[14:54:40] <saml> i'm wrong it does
[14:55:16] <Nodex> both a and b or exactly a and b?
[14:55:42] <Nodex> both = $all. exactly = ['a','b']
[14:56:09] <saml> yah $all was what i want
[14:56:15] <saml> it even uses index!
[15:08:00] <gancl_> ASchema = {item: [BSchema]}; ASchema.findOne({'item._id': xx}) gets an array, how to get only one item?
[15:11:13] <gancl_> Hi! how to get only one item of a subdocument? ASchema = {item: [BSchema]}; ASchema.findOne({'item._id': xx}) returns an array
[15:12:12] <gancl_> I use mongoose
[15:13:53] <rspijker> findOne should not return an array
[15:14:28] <rspijker> I don’t know mongoose… it kind of looks like your Aschema collection contains documents that consist of a single field which has an array as a value though
[15:14:53] <rspijker> you sure that you aren’t secretly doing .findOne(…).item ?
[15:24:01] <xelar> hi folks!
[15:25:31] <berkley> Is it possible to do a geospatial query if I have two separate longitude and latitude fields instead of a single location field? Sorry, I'm very new to this.
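berkley's two-scalar-field layout is normally folded into a single GeoJSON Point (longitude first) so a 2dsphere index can be built on it. A hypothetical Python transform (field names are invented):

```python
def to_geojson_point(doc, lon_field="lon", lat_field="lat"):
    """Fold two scalar fields into one GeoJSON Point, longitude
    first, which is the shape a 2dsphere index expects."""
    return {
        "type": "Point",
        "coordinates": [doc[lon_field], doc[lat_field]],
    }

place = {"name": "cafe", "lon": 24.94, "lat": 60.17}
loc = to_geojson_point(place)
# The result would be stored in a single location field and indexed.
```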
[15:38:19] <gancl_> I've also asked here http://stackoverflow.com/questions/24187947/how-to-get-only-one-item-of-a-subdocument-in-mongoose
[15:49:03] <Nodex> gancl_ : look at the positional operator
[15:49:08] <Nodex> as a projection
[15:49:46] <Nodex> http://docs.mongodb.org/manual/reference/operator/projection/positional/
[15:49:47] <gancl_> I don't understand.
[15:50:46] <Nodex> findOne({'item._id': xx}, {'item.$': 1})
[15:50:49] <Nodex> something like that
[15:54:05] <gancl_> Nodex: But I don't know it's the 1st one or the 2nd one
[15:54:16] <Nodex> 1st one what?
[15:54:46] <gancl_> ,{item.$:1}
[15:54:59] <Nodex> I don't understand what you're asking
[15:56:47] <gancl_> Lists.findOne({"item.$._id": itemId}) won't find any result
[15:56:57] <Nodex> that's NOT what I put
[15:57:14] <Nodex> Lists.findOne({'item._id': xx}, {'item.$': 1})
[16:01:27] <gancl_> Nodex: Thanks. It really gets the item
[16:01:36] <Nodex> :)
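The positional projection Nodex suggests, {'item.$': 1}, returns only the first array element matched by the query. A server-free Python emulation of that behaviour (document shape is invented):

```python
def positional_project(doc, array_field, subfield, value):
    """Emulate find({'item._id': x}, {'item.$': 1}): keep only the
    first array element whose subfield matches the query value."""
    for element in doc.get(array_field, []):
        if element.get(subfield) == value:
            return {array_field: [element]}
    return None

doc = {"item": [{"_id": 1, "t": "a"}, {"_id": 2, "t": "b"}]}
hit = positional_project(doc, "item", "_id", 2)
```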
[16:38:30] <niklask> Hello. I am writing my own CMS for multiple websites, and am wondering what the best solution is: one collection per website created in the CMS, or storing all website data in the same collection. Any thoughts?
[16:39:58] <magglass2> I'd use the same collection(s) for all the sites but have a field you can query on to get results specific to a certain site
[16:40:04] <magglass2> niklask: ^
[16:40:55] <niklask> Alright, thanks. That's how I've written it atm; I was just wondering if multiple smaller collections would increase performance.
[16:52:30] <cheeser> niklask: having worked on a large CMS, having one collection with a websiteId (as we called it) worked rather well.
[16:55:39] <niklask> cheeser: I'm just thinking about text indexes. If websiteA wants a blog where you can do text searches on body and subject fields, but websiteB wants to index 1 more field, then it would be better to split it up into different collections, right?
[16:56:05] <cheeser> i don't know. we used solr for that.
[16:56:35] <niklask> Okay, thanks for your time.
[16:56:41] <cheeser> sure
[20:24:18] <talbott> hey mongoers
[20:24:43] <talbott> quick q for you. Is it possible to add a new field to all my (10 million) docs that is the sum of two other fields?
[20:24:46] <ppalludan> hey talbott
[20:24:51] <talbott> hello ppalludan
[20:26:19] <talbott> either as a dynamic field or a real field
[20:26:32] <talbott> (i don't think there are such things as dynamic fields in mongo though?)
[20:35:08] <ppalludan> talbott: wish I could help, but all too noob still :)
[20:46:22] <talbott> no worries
[20:46:26] <talbott> i think i found an answer
[21:02:51] <hocza> If I have a finance application, is it useful in mongoDB to store every registered user's data in a separate database, or does it not matter?
[21:12:34] <cheeser> i wouldn't use separate databases, no.
[21:20:21] <hocza> cheeser: thanks :)