[00:00:35] <joannac> godzirra: "what's wrong with it" is very vague. Do you see an error? Does it not return what you expect? What's your evidence that something is wrong with it?
[00:57:42] <godzirra> joannac: Sorry, I get an error when I run it, saying "$near requires geojson point, given { type: \"Point\", coordinates: [ 36.0, -115.0 ] }"
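That error usually comes from coordinate order: GeoJSON points are `[longitude, latitude]`, and latitude must lie in [-90, 90], so `[36.0, -115.0]` is rejected because -115 is out of range as a latitude. A minimal helper sketch (the function name is illustrative, not from the channel):

```javascript
// GeoJSON order is [longitude, latitude]. Latitude must be in
// [-90, 90], longitude in [-180, 180].
function geoPoint(lng, lat) {
  // If latitude is impossible but would be a valid longitude, the
  // arguments were probably given in the reversed (lat, lng) order.
  if (Math.abs(lat) > 90 && Math.abs(lng) <= 90) {
    [lng, lat] = [lat, lng];
  }
  if (Math.abs(lng) > 180 || Math.abs(lat) > 90) {
    throw new Error("coordinates out of range");
  }
  return { type: "Point", coordinates: [lng, lat] };
}

// The failing point from above, corrected:
const p = geoPoint(36.0, -115.0); // coordinates become [-115, 36]
```

With the corrected point, `$near: { $geometry: p }` should be accepted.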
[02:05:38] <shlant> anyone using mms: is it possible to have a different instance type for arbiters? Can I deploy 2 members with one type and then an arbiter for the same replset with a t2.micro or something?
[02:21:17] <joannac> shlant: sure, you just need to deploy first, then distribute processes the way you want
[06:47:28] <amitprakash> Hi, I have a collection with 10m records on mongo 2.8 (mmapv1 engine).. what would be the fastest way to restore this to a mongo 3.0 instance (wiredTiger)?
[06:48:04] <amitprakash> Currently I am syncing by adding the 3.0 instance as a replicaset member, however it's taking over an hour to sync
[06:48:11] <amitprakash> Is there a better/faster way?
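For a one-off migration, a `mongodump`/`mongorestore` pass is the commonly suggested alternative to waiting for an initial sync; the restore rewrites documents in the target engine's on-disk format. A sketch, assuming example hostnames and paths:

```shell
# Dump from the 2.8 (MMAPv1) instance; host and output dir are examples.
mongodump --host old-host:27017 --out /tmp/dump

# Restore into the 3.0 WiredTiger instance; documents and indexes are
# rebuilt natively in WiredTiger on insert.
mongorestore --host new-host:27017 /tmp/dump
```

Whether this beats an initial sync depends on network and index-build time, so it is worth timing both on a subset first.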
[09:16:17] <nixstt> is this normal disk IO usage for mongodb 3.0.2? http://i.imgur.com/Y8dTQhy.png
[10:48:12] <pamp> hi, is there any equivalent command for touch in wiredtiger?
[11:25:14] <bogn1> so nixstt, from looking at your disk IO graph, I suppose you're pushing time series data into hour documents. We're doing that and have those sawtooth query runtimes as well. Finding out why is on our list. Seek time shouldn't be that big of an issue with structured hashes having minutes on top-level and seconds below. That's only 59+59 jumps as per MongoDB's documentation. Another candidate is record moves due to padding not being available any longer in 3.0.2.
[11:26:34] <nixstt> bogn1: yeah pushing data into hourly documents here, noticed this since testing 3.0.2; also having some memory usage issues, it’s using way too much memory (much more than specified in the config file)
[11:27:04] <bogn1> 3.x removed the padding factor which preallocated dynamically
[11:27:28] <bogn1> with 3.0.2 you have to take care of this yourself
[11:27:49] <bogn1> or use powerOf2 allocation which dramatically increases collection size
[11:28:02] <bogn1> I think the latter is the default
[11:28:05] <Derick> powerof2 is the default now though IIRC
[11:28:28] <bogn1> but that might be too slow as well
[11:28:55] <bogn1> we're looking at this: http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/
[11:29:03] <bogn1> preallocating the stuff up-front
[11:29:38] <bogn1> no need to incrementally increase the document by powerOf2
[11:31:56] <bogn1> I will look into stored procedures for not having to send the bulky {0: {0: Infinity, ...}, ...} structures for every pre-allocation
[11:43:29] <bogn1> I have the upsert parameter set as well, to avoid "exists?"
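The "{0: {0: Infinity, ...}, ...}" pre-allocation structure mentioned above can be generated client-side. A sketch of one hour document per metric with all minute/second slots initialized up front, so later writes never grow the document (names and the Infinity sentinel are illustrative, following the pre-aggregated-reports pattern):

```javascript
// Build a fully pre-allocated hour document for one metric:
// 60 minute buckets, each holding 60 second slots, all initialized
// to a sentinel value so in-place $set updates never move the record.
function preallocatedHour(metric, hourStart) {
  const values = {};
  for (let m = 0; m < 60; m++) {
    values[m] = {};
    for (let s = 0; s < 60; s++) {
      values[m][s] = Infinity; // sentinel meaning "no sample yet"
    }
  }
  return { _id: `${metric}:${hourStart.toISOString()}`, values };
}

const doc = preallocatedHour("disk0.io", new Date("2015-05-01T10:00:00Z"));

// In the shell, the insert-then-update flow would look roughly like:
//   db.metrics.insert(preallocatedHour("disk0.io", hourStart))
//   db.metrics.update({ _id: doc._id }, { $set: { "values.12.30": 42 } })
```

Since every slot already exists, each sample is a fixed-size in-place update, which is what avoids the record moves discussed above.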
[11:43:41] <nixstt> it’s hard for me to preallocate all the data, i’m saving for example the disk IO statistics you saw in that image, sometimes a disk might be added/removed and the whole structure changes
[11:44:09] <bogn1> that's why I have a document per metric
[11:44:56] <bogn1> which also has its implications I'd say
[11:45:53] <nixstt> Yeah I like to just grab a document and have all the metrics in there instead of grabbing everything from different collections
[11:46:26] <nixstt> my application uses quite a bit of memory cause of that but it’s a dashboard type application so it’s not heavily used all the time
[11:47:00] <bogn1> wiredtiger might be interesting for you, it is for us as well, as that doesn't need to pre-allocate
[11:47:17] <nixstt> this is wiredtiger already using it
[11:48:18] <bogn1> but we delayed the move to the new update mechanics
[11:49:28] <bogn1> my not very informed impression is that you should no longer $set, but instead $push
[11:49:34] <Naeblis> What should be the design for a field which can take multiple types of responses? I have a "discussion" model and a "response" model. Response is currently text only, but I'd like it to also include things like polls etc.
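One common answer to Naeblis' question is a discriminator field: every response carries a `kind` (or `type`) field, and the rest of the document's shape depends on it. A sketch, with all names invented for illustration:

```javascript
// Polymorphic "response" documents sharing one collection, with a
// `kind` discriminator selecting the payload shape.
function textResponse(author, body) {
  return { kind: "text", author, body, createdAt: new Date() };
}

function pollResponse(author, question, options) {
  return {
    kind: "poll",
    author,
    question,
    options: options.map(label => ({ label, votes: 0 })),
    createdAt: new Date(),
  };
}

const poll = pollResponse("naeblis", "Best engine?", ["mmapv1", "wiredTiger"]);
```

Readers switch on `kind`, and queries can target one variant with `{ kind: "poll" }`; Mongoose users get the same pattern built in via discriminators.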
[11:55:24] <bogn1> have you seen my direct message?
[11:56:01] <bogn1> and also this seems to be a mixture of the multiple metrics schema with the indexed time-slots: https://www.mongodb.com/presentations/webinar-internet-things-bosch-concept-code
[11:58:33] <bogn1> its memory requirements might be lighter
[11:59:09] <bogn1> but I think the indexed time-slots approach doesn't mix well with WiredTiger but that's just an initial impression
[12:01:07] <bogn1> A question to the broader audience. Does anybody know whether it's possible to see a graph of record moves (due to document growing) in MMS?
[12:02:37] <nixstt> Not sure, haven’t used MMS in a while
[12:02:57] <bogn1> I'm currently tracking it with a cron-job
[12:03:20] <bogn1> and sending the value picked out of serverStatus' json into grafana
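The counter bogn1 is tracking lives at `metrics.record.moves` in the `serverStatus` output on MMAPv1. A sketch of the extraction step such a cron job might perform before shipping the value to Grafana (the guard structure is illustrative):

```javascript
// Pull the record-move counter out of a serverStatus document.
// `metrics.record.moves` is reported by the MMAPv1 engine; the
// defensive lookups return 0 when the field is absent.
function recordMoves(serverStatus) {
  const metrics = serverStatus.metrics || {};
  const record = metrics.record || {};
  return record.moves || 0;
}

// Abbreviated example of the relevant serverStatus shape:
const status = { metrics: { record: { moves: 1234 } } };
const moves = recordMoves(status);
```

In the shell, the same value is `db.serverStatus().metrics.record.moves`.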
[14:12:41] <abyss> I'm reading about sharding and replication. For my requirements sharding is too much so I chose replica. I'm reading about replica: http://docs.mongodb.org/master/MongoDB-replication-guide.pdf and I have question, because that doc mentions about heartbeat.
[14:14:10] <abyss> if three nodes are working: primary -> secondary -> secondary, then everything is ok, because I can add a heartbeat between secondary nodes: secondary <-heartbeat-> secondary and put reads there... But when the primary goes down, one of the secondaries becomes primary... Am I right?
[14:15:09] <abyss> so... What happens with writes? How can I handle that, I mean avoid doing writes to a secondary node... Should I handle this in the heartbeat, or how?
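To abyss' question: no client-side heartbeat is needed for this. Replica set members heartbeat each other, and drivers given a replica-set connection string discover the current primary themselves and route writes to it; after a failover the driver re-discovers the new primary, and a write sent to a secondary is rejected by the server. A sketch of building such a connection string (hostnames, database, and set name are examples):

```javascript
// Build a replica-set connection string. The driver uses the seed
// list and the replicaSet name to find the current primary; reads
// can be steered separately with readPreference.
function replicaSetUri(hosts, setName, readPreference) {
  return `mongodb://${hosts.join(",")}/mydb` +
         `?replicaSet=${setName}&readPreference=${readPreference}`;
}

const uri = replicaSetUri(
  ["db1:27017", "db2:27017", "db3:27017"],
  "rs0",
  "secondaryPreferred"
);
```

The application just reconnects/retries through the driver during the election window; it never has to pick the primary itself.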
[14:16:10] <lietu> I've been reading a bit on some practical aspects of mongodb replication .. apparently regardless of what we set write concern etc. values to, there is absolutely no way to say what data is in the database if you're running a replicaset? if you write with a write concern of e.g. 3 with 4 nodes out of which 2 are down .. it will write to the 2 that are up, and hang indefinitely until more are coming up .. if you set wtimeout it won't "hang"
[14:16:11] <lietu> but you have no idea if something was written or if it will eventually be written to the DB .. so is there a way to rollback the failing write, or kill the query and write the old data back, or how do people deal with this in practice?
[14:16:49] <lietu> and the indefinite hanging is of course not a very practical solution for .. well .. really any purpose
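On lietu's scenario: a write concern is only an acknowledgement threshold, not a transaction. With `{ w: 3, wtimeout: 5000 }`, a timeout does not undo the write on the members that did take it, and it may still replicate later, so the error is genuinely ambiguous. The usual practice is not rollback but idempotent writes plus read-back/reconciliation. A sketch (the driver call in the comment is illustrative):

```javascript
// Acknowledgement threshold: wait for 3 members, but give up waiting
// (NOT undo the write) after 5 seconds.
const writeConcern = { w: 3, wtimeout: 5000 };

// Typical handling pattern, sketched as pseudocode in comments:
//   try {
//     await coll.insertOne(doc, { writeConcern });
//   } catch (err) {
//     // Ambiguous outcome: re-read by _id to learn whether the write
//     // landed, and design writes to be safe to retry (idempotent).
//   }
```

Designing documents with client-generated `_id`s makes the read-back and retry safe, since a retried insert of the same `_id` fails cleanly instead of duplicating data.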
[14:35:59] <unseensoul> How do I access a particular field from a document returned by findOne?
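For unseensoul: `findOne` returns a plain document, so fields are read with ordinary property access. A sketch with invented field names:

```javascript
// What findOne hands back is just a document object:
const doc = { _id: 1, name: "ada", address: { city: "London" } };

const name = doc.name;         // top-level field
const city = doc.address.city; // nested field

// In the mongo shell, the same access chained directly:
//   db.users.findOne({ _id: 1 }).name
```

The only caveat is a `null` result when no document matches, so check for that before dereferencing.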
[20:10:37] <Nepoxx> I've had my fair share of issues with Mongoose, I'll tell you that
[20:11:24] <doc_tuna> question: why are you using an ODM versus just working with the driver?
[20:11:28] <Nepoxx> I kinda like having a schema safety net
[20:11:56] <Nepoxx> Also, I'm coming from Java-Hibernate-Spring world, so I've come a long way :P
[20:12:01] <doc_tuna> i dont think apps generally need that much abstraction
[20:12:46] <StephenLynx> schema safety is useless.
[20:12:59] <StephenLynx> any error related to that is easily caught.
[20:13:35] <Nepoxx> removing Mongoose from my current project is going to be a PITA, however I'll try without on my next one. Or I might try Monk, that looks promising too
[23:06:11] <StephenLynx> "Warning: The TTL index does not guarantee that expired data will be deleted immediately. There may be a delay between the time a document expires and the time that MongoDB removes the document from the database."
[23:13:19] <GothAlice> StephenLynx: Known thing, as you noticed. My use of MongoDB as a cache takes that into account when accessing cached values.
[23:18:13] <GothAlice> I prefer throwing away invalid responses, rather than having MongoDB do the extra document/index scanning, as I'm looking up a unique key (I'm using _id, in fact), so it's less computational power needed overall, esp. if the delete is a fire-and-forget, which it is now.
[23:19:40] <GothAlice> (If the TTL index hasn't gotten to it, in the off chance that I hit the minute window for the entry *I'll* delete it.)
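GothAlice's approach sketched out: because the TTL monitor only runs periodically (roughly once a minute), a fetched cache entry can be logically expired before MongoDB physically removes it, so the client checks the timestamp itself and treats stale hits as misses. Names below are illustrative:

```javascript
// Client-side guard for a TTL-backed cache collection: treat a
// logically expired document as a miss even if the TTL monitor has
// not deleted it yet.
function cacheGet(doc, ttlSeconds, now = new Date()) {
  if (!doc) return null; // genuine miss
  const ageSeconds = (now - doc.createdAt) / 1000;
  if (ageSeconds >= ttlSeconds) {
    // Fire-and-forget cleanup would go here, e.g.
    //   db.cache.remove({ _id: doc._id })
    return null; // stale: within the TTL monitor's delay window
  }
  return doc.value;
}

const fresh = cacheGet({ createdAt: new Date(), value: "hit" }, 60);
const stale = cacheGet(
  { createdAt: new Date(Date.now() - 120000), value: "old" }, 60
);
```

Since lookups are by `_id` (a unique key), the extra check costs almost nothing compared to letting expired documents feed back into the application.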