PMXBOT Log file Viewer

#mongodb logs for Sunday the 10th of May, 2015

[01:00:21] <StephenLynx> hm, couldn't do it. Had to unlink the file for it to update its upload date and be able to compare.
[02:13:53] <jr3> has anyone had any issues with .catch()ing errors from mongoose.save?
[02:15:55] <StephenLynx> I just use the driver, never had issues getting the error.
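The .catch() question above can be sketched like this. Since no server is at hand, `fakeSave` below is a stand-in for the asynchronous save call (mongoose's `doc.save()` and the native driver's `collection.insertOne()` both surface errors as a rejected promise when no callback is given):

```javascript
// fakeSave stands in for doc.save() / collection.insertOne(): it resolves
// on success and rejects with an Error on a (simulated) validation failure,
// so the example runs without a MongoDB server.
function fakeSave(doc) {
  return new Promise((resolve, reject) => {
    if (!doc.name) {
      reject(new Error('validation failed: name is required'));
    } else {
      resolve(doc);
    }
  });
}

// Promise style: errors land in .catch(), as discussed above.
fakeSave({})
  .then(saved => console.log('saved', saved))
  .catch(err => console.log('caught:', err.message));
```

The same shape applies whichever layer you use; if errors seem to vanish, the usual culprit is a missing `.catch()` (or a callback whose `err` argument is ignored).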
[04:24:39] <shlant> anyone have a working IAM policy for MMS? the one in the docs gives me: An error occurred: Policy document must be version 2012-10-17 or greater.
[04:30:13] <krisfremen> shlant: add "Version": "2012-10-17" to your policy
[04:30:44] <shlant> krisfremen: simple enough, thanks
[04:30:58] <krisfremen> I had issues with that one too when I was setting it up
[04:31:17] <krisfremen> dunno why it's not in the sample policy
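The fix above is just a top-level `"Version"` field in the policy document. A minimal sketch of the shape; the action shown is a placeholder, so copy the actual `Statement` entries from the MMS docs and only add the `Version` line:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": "*"
    }
  ]
}
```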
[12:31:29] <vel> ppl, can I completely remove the sharding infrastructure? I have no sharded collections, but sharding is enabled in one of the databases, plus mongos and config servers. can I simply start the primary shard as a plain replica set and write to it directly?
[13:00:23] <vel> short question: is there any difference between a standalone replica set and a single shard running as a replica set? can I simply run the shard as a replica set without mongos?
[13:56:44] <fxmulder> well cloning has made it to 503361151 objects, that's the furthest it's gotten so far
[14:57:17] <Vitium> StephenLynx, Don't you think it's a waste of time to set up an environment and build your own framework such as express?
[14:57:55] <Vitium> As opposed to adopting something that already exists.
[14:58:44] <StephenLynx> if you want to have quality code, no.
[14:58:49] <StephenLynx> or you want to have performance.
[14:58:57] <StephenLynx> one size fits no one.
[14:59:13] <StephenLynx> and besides, preparing the basic foundation that will work as a framework is not that hard.
[14:59:26] <StephenLynx> it only looks hard for people who never tried it.
[15:01:15] <StephenLynx> but again, most programmers working on the web wouldn't be able to write good code if their lives depended on it, so it really might be for the best for those people to use these frameworks.
[15:01:30] <Vitium> StephenLynx, That's what I was thinking.
[15:01:41] <Vitium> I'd probably end up with terrible code quality if I was to do that.
[15:01:59] <StephenLynx> but then you are admitting defeat.
[15:02:21] <StephenLynx> you might as well get in a grave to spare some time.
[15:03:48] <StephenLynx> at least move out of the industry so people won't have to be burdened by your defeatism.
[15:04:06] <Vitium> I'm not in any industry.
[15:04:17] <StephenLynx> don't you work in IT?
[15:04:20] <Vitium> I'm a student.
[15:04:37] <StephenLynx> don't you plan to work in IT?
[15:04:54] <Vitium> Yeah.
[15:05:22] <StephenLynx> so you don't worry about writing good or bad code.
[15:05:30] <StephenLynx> you worry about pushing your limits every
[15:05:31] <StephenLynx> single
[15:05:31] <StephenLynx> day
[15:05:38] <StephenLynx> getting better little by little.
[15:05:53] <StephenLynx> you don't throw the towel before the fight begins.
[15:06:06] <StephenLynx> you think I didn't write shamefully bad code when I was your age?
[15:07:17] <StephenLynx> the sooner you throw out the training wheels, the sooner you start actually learning.
[15:08:29] <Vitium> Thanks, StephenLynx.
[16:16:35] <gothos> Hello! Is it possible to aggregate over all existing members in an object without knowing their names?
[16:17:41] <StephenLynx> by name you mean _id?
[16:18:04] <StephenLynx> and yes, you can handle documents without an _id just fine. you just need one when grouping, afaik.
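The point about grouping can be shown with a minimal pipeline: `$group` requires an `_id`, but it may be `null` to group over the whole collection. A sketch (the `value` field name is assumed for illustration); the pipeline would be passed to `db.collection.aggregate(...)`:

```javascript
// $group must have an _id; _id: null collapses everything into one group,
// so no document field is needed as a grouping key.
const pipeline = [
  { $group: { _id: null, total: { $sum: '$value' }, count: { $sum: 1 } } }
];
console.log(JSON.stringify(pipeline));
```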
[16:26:25] <WojciechO> Hello! I have a problem with the $geoNear query operator and a 2dsphere index. I have 6k docs with locations, and each query using geoNear returns a max of around 550 documents. Is there a limit on how many documents a geoNear query can return, or something?
[16:33:47] <gothos> StephenLynx: no, I have { _id: 0, perfData: { x: 1, y: 2, z: 3 } } and would like to aggregate over x,y,z without knowing their names
[16:34:41] <StephenLynx> their names?
[16:34:48] <StephenLynx> the fields the values belong to?
[16:35:16] <StephenLynx> you want to get 1,2,3 without referring to xyz?
[17:56:36] <ignasr> hi, my "Data Size" is 330MB and database "Size" is 5GB. Why the difference? Is this bad? I am debugging disk IO spikes...
[17:58:13] <StephenLynx> mongo pre-allocates space.
[18:01:37] <preaction> then if a document grows outside its pre-allocated space, mongo has to move it to a new space. if no existing space is big enough, it needs to allocate new disk space. growing documents in-place is a bad idea for disk space
[18:04:12] <gothos> StephenLynx: yes, I actually want to aggregate over x,y,z and return the output of the aggregation function as x,y,z
[18:04:42] <gothos> since I do not know all the x,y,z beforehand
[18:04:52] <StephenLynx> isn't it named x y z?
[18:05:32] <gothos> nope, those are examples, I know there are some members in there but do not know their names
[18:05:42] <StephenLynx> I suggest using an array of objects then.
[18:05:53] <StephenLynx> each object would have a label and a value field.
[18:06:38] <gothos> like perfData: [ {x: 1}, {y: 2} ] ?
[18:06:47] <StephenLynx> no
[18:06:58] <StephenLynx> [{label:'x', value :1}]
[18:07:06] <gothos> ah!
[18:07:19] <StephenLynx> that way you will always know where to find both the value and the name of the field.
[18:07:25] <gothos> yes, that sounds reasonable
[18:07:45] <StephenLynx> but keep in mind this kind of approach has some drawbacks.
[18:09:08] <StephenLynx> I probably would refactor so I could just keep the values in the top level of the array
[18:09:18] <StephenLynx> but then it might have other issues.
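The `[{label, value}]` layout suggested above is what makes the aggregation possible without knowing field names in advance: `$unwind` the array, then `$group` by the label. A sketch, assuming the array is stored under `perfData`; the pipeline would be run with `db.collection.aggregate(pipeline)`:

```javascript
// Unwind perfData so each {label, value} entry becomes its own document,
// then group by label — the field names never need to be known up front.
const pipeline = [
  { $unwind: '$perfData' },
  { $group: { _id: '$perfData.label', avg: { $avg: '$perfData.value' } } }
];
console.log(JSON.stringify(pipeline));
```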
[18:09:32] <gothos> well, to state the bigger picture. I'm currently trying to save nagios check data in mongo, at the moment it looks like this: { check: "name", host: "foo", day: 1, hour: 2, perfData: [ {min: 5, check1: 5, check2: 1}, ... ] }
[18:09:55] <StephenLynx> what does perfdata mean?
[18:10:00] <gothos> I put the actual data in an array since I read that updates are a lot cheaper
[18:10:20] <gothos> performance data returned by nagios checks, ie percentage of used memory
[18:10:36] <StephenLynx> hm
[18:10:49] <StephenLynx> keep in mind documents have a 16mb limit
[18:11:04] <StephenLynx> if you have to keep track of gigantic amounts of data in perfdata.
[18:11:11] <gothos> ie. { min: 1, used: 60, cache: 20 }
[18:11:34] <StephenLynx> yeah, but how many are you expecting per document?
[18:11:44] <gothos> that shouldn't be a problem, there will only be around 12 objects in the array
[18:11:50] <StephenLynx> will you have to query only a few of the entries in perf data?
[18:11:55] <StephenLynx> ah
[18:12:05] <StephenLynx> well, then using a sub array doesn't sound like a problem.
[18:12:13] <StephenLynx> yeah, it will work just fine.
[18:12:15] <gothos> since checks are done around every 5 minutes, and one document per (hostname, check, hour)
[18:12:19] <gothos> okay :)
[18:13:16] <gothos> I'll try to refactor with your suggestions thx :)
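The layout discussed above — one document per (host, check, hour), with one `{label, value}` sample pushed into an array every ~5 minutes, so at most around 12 entries per document — could look like this. The filter/update pair is a sketch (field names assumed from the conversation) and would be run as `collection.updateOne(filter, update, { upsert: true })`:

```javascript
// One perfData sample per 5-minute nagios check.
const sample = { label: 'used', value: 60, minute: 5 };

// Filter identifies the (check, host, day, hour) bucket document; with
// upsert: true the document is created on the first sample of the hour.
const filter = { check: 'memory', host: 'foo', day: 1, hour: 2 };

// $push appends the sample to the perfData array in place, which is the
// cheap update gothos mentioned — no document rewrite on the client side.
const update = { $push: { perfData: sample } };

console.log(JSON.stringify({ filter, update }));
```

Since the array stays small and bounded (~12 entries), this avoids both the 16MB document limit and unbounded document growth.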
[18:20:53] <fxmulder> grr, it just started over again
[18:21:02] <fxmulder> [rsHealthPoll] replset info maildump1:27018 thinks that we are down
[18:21:18] <fxmulder> this is absolutely horrible