[00:01:06] <leandroa> @cheeser thank you! (sorry for the delay, busy deployment day!)
[00:03:11] <GothAlice> leandroa: I know the feels. ¬_¬
[00:12:58] <astromaddie> I'm new to learning mongo. can someone help me with a problem I'm having?
[00:15:08] <astromaddie> I'm not even sure if what I'm trying to do is possible, but I'm trying to sum up a few million arrays (to get an average). the arrays all have the same dimensions.
[00:17:06] <astromaddie> I'm also using mongo by proxy of pymongo. I tried doing nested python loops first but after 13 hours, it wasn't finished -- that's a no-go
[00:30:23] <astromaddie> anyone here not idling? heh
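For astromaddie's problem above (element-wise averaging of millions of equal-length arrays), the work can be pushed server-side with an aggregation pipeline instead of nested Python loops. This is a sketch only: the collection and field names (`frames`, `values`) are assumptions, and the `includeArrayIndex` form of `$unwind` needs MongoDB 3.2+. The pipeline would be passed to pymongo's `collection.aggregate()`; a pure-Python reference of the same computation is included for clarity.

```python
# Assumed schema: each document holds one array in a field named "values".
pipeline = [
    {"$unwind": {"path": "$values", "includeArrayIndex": "i"}},  # one doc per element
    {"$group": {"_id": "$i", "avg": {"$avg": "$values"}}},       # average per position
    {"$sort": {"_id": 1}},                                       # restore array order
]

def elementwise_avg(docs):
    """Pure-Python reference for what the pipeline computes."""
    arrays = [d["values"] for d in docs]
    n = len(arrays)
    return [sum(col) / n for col in zip(*arrays)]

docs = [{"values": [1, 2, 3]}, {"values": [3, 4, 5]}]
print(elementwise_avg(docs))  # [2.0, 3.0, 4.0]
```

On a chat-era 2.6/3.0 server without `includeArrayIndex`, streaming the cursor and accumulating sums client-side in a single pass is still far cheaper than per-document queries.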
[08:00:32] <m3t4lukas> https://github.com/mongodb/mongo-c-driver/pull/222/files?diff=split#diff-a8dc54c42bee460afe8f133af3220069L771 for anyone who also wishes to do that
[09:32:38] <mtree> why count takes so much longer than same query with sort, skip and limit?
[10:26:05] <PasWhoop> Hello there! anyone knows what the output on the mongod terminal window means?
[10:52:01] <mark_at_minds> hello! how well does mongodb support frequent deletes/updates, particularly on indexed columns?
[10:54:04] <gemtastic> Hmm anyone knows why I get "command not found" in the console when I'm running mongod in one terminal and in the bin directory try to run mongo in OSX?
[12:19:37] <tibyke> any idea on getting the unique _keys_ of a nested 'array' in a collection? can it be a oneliner or is it possible only with a loop?
[12:20:23] <StephenLynx> what do you mean by 'array'? why the quotes?
[12:21:33] <tibyke> i have something like { whatever: { foo: 1, bar: 2}}, then { whatever: { foo1: 3, bar1: 53}}, and {whatever: {foo2: 234, bar2: 555}}, and i need foo, foo1, foo2, bar, bar1, bar2
[12:21:59] <tibyke> can it be achieved with an aggregate?
[12:22:52] <StephenLynx> and not just [{foo:1, bar:2}] ?
[12:23:31] <tibyke> thats a good question, i'll convert it to that schema, but im curious if I can get it out from this schema.
[12:23:43] <tibyke> im pretty new to mongo actually :)
[12:24:19] <StephenLynx> all those objects reside in an array of a single document?
[12:25:51] <tibyke> no, they are in several documents, all nested in 'whatever', and i need the distinct collection-wide keys of 'whatever' in the above example: foo, foo1, foo2, bar, bar1, bar2
[12:26:04] <tibyke> every document is nested in 'whatever'
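tibyke's question (collection-wide distinct keys of the nested `whatever` object) can be answered server-side on MongoDB 3.4.4+ with `$objectToArray`; on the 2.6/3.0 servers of this era, a client-side scan (or map-reduce) is the usual answer. A minimal client-side sketch, where `docs` stands in for a pymongo cursor such as `db.coll.find({}, {"whatever": 1})`:

```python
# Server-side equivalent on 3.4.4+ (for reference):
#   [{"$project": {"kv": {"$objectToArray": "$whatever"}}},
#    {"$unwind": "$kv"},
#    {"$group": {"_id": "$kv.k"}}]

def distinct_nested_keys(docs, field="whatever"):
    """Collect every key appearing under `field` across all documents."""
    keys = set()
    for doc in docs:
        keys.update(doc.get(field, {}).keys())
    return sorted(keys)

docs = [
    {"whatever": {"foo": 1, "bar": 2}},
    {"whatever": {"foo1": 3, "bar1": 53}},
    {"whatever": {"foo2": 234, "bar2": 555}},
]
print(distinct_nested_keys(docs))
# ['bar', 'bar1', 'bar2', 'foo', 'foo1', 'foo2']
```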
[12:34:50] <gemtastic> So, in other words; let's say I have a node.js application interacting with my mongodb and it wants to put stuff into a collection that doesn't exist, will it create it or will it throw an error?
[12:41:43] <gemtastic> If you are running any edition of Windows Server 2008 R2 or Windows 7, please install a hotfix to resolve an issue with memory mapped files on Windows.
[13:47:28] <tibyke> StephenLynx, and how would you get every single field/key from every document without the nesting? so eg. {foo: 1, bar: 2}, {foo1: 0, bar2: 434}.
[13:48:40] <gemtastic> I only really started playing around with it today
[13:48:42] <StephenLynx> that creates a new document with the fields directly in the document, no nesting.
[13:49:02] <StephenLynx> so don't bother with windows. you don't run a windows server.
[13:51:24] <gemtastic> StephenLynx: I really don't think it matters when you're just n00bing around.
[13:51:58] <gemtastic> But don't worry, my work environments are in Unix
[13:52:01] <danijoo> I want to move a huge database (400gb) to another server while in production (cant take it down). what do you think is the best way to do so. db.copyDatabase() ?
[13:52:43] <StephenLynx> I have never done anything like that, but isn't putting the new server in as a replica and waiting for it to sync a valid option?
[13:52:56] <vagelis> Hello, I have to make a query in MongoDB and then return the results in an aggregation style :| I am not good with Mongo and I dont know if its possible to combine aggregation with query. Does $match has to do with that? :S
[13:53:01] <danijoo> i had that idea too. not sure whats better though
[13:53:14] <StephenLynx> vagelis, yes. use $match to filter documents.
[13:53:28] <tibyke> StephenLynx, i dont nest them :)
[13:53:34] <danijoo> data loss is not a problem. its more of a giant cache so if entries get lost thats ok
[13:53:36] <vagelis> Cool. Well I tried and I think I'm not doing something correctly
[13:53:53] <tibyke> StephenLynx, thats right, and now i need ['foo', 'bar']. how?
[13:53:58] <danijoo> also both servers are in the same data center so data transfer rate is also high
[13:54:07] <StephenLynx> why you need them in an array instead of an object?
[13:55:01] <tibyke> StephenLynx, its something like a counter for every field, and i need to loop thru every "field" (counter) to get the top5 of them.
[13:55:26] <StephenLynx> you need to sort it according to one of the fields?
[13:55:39] <vagelis> StephenLynx I want it to be like this: { 'Separated_by_a_Field': '$Field', 'My_Results': [... , ... , ...] }
[13:55:42] <tibyke> eg {apples: 5, tables: 8}, {whatever: 15, baskets: 885}. and i need every TOP5 of every article.
[13:55:53] <tibyke> StephenLynx, EVERY field, separately
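For the top-5-per-counter question above: with the flattened schema, keys vary per document, so one client-side approach is to merge all the counters and take the largest. A sketch, with illustrative document contents taken from tibyke's example:

```python
from collections import Counter
import heapq

def top_counters(docs, n=5):
    """Sum each counter field across documents, return the n largest."""
    totals = Counter()
    for doc in docs:
        for key, value in doc.items():
            if key != "_id":          # skip the document id
                totals[key] += value
    return heapq.nlargest(n, totals.items(), key=lambda kv: kv[1])

docs = [{"apples": 5, "tables": 8}, {"whatever": 15, "baskets": 885}]
print(top_counters(docs, 2))  # [('baskets', 885), ('whatever', 15)]
```

On MongoDB 3.4.4+ the same shape can be done server-side with `$objectToArray` + `$unwind` + `$group` + `$sort` + `$limit`; on chat-era servers the client-side merge is the practical route.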
[13:57:56] <gemtastic> Would you say that ElasticSearch has a similar way of storing the data as MongoDB? Feels like everything's called pretty much the same
[13:58:42] <whaley> gemtastic: no, because elasticsearch stores data in lucene index files... I'm reasonably certain mongo's underlying storage engine is not lucene
[14:01:08] <gemtastic> whaley: well, it may not code-wise work in the same way, but it looks like they're trying to follow the same logical pattern on the interface end
[14:01:46] <gemtastic> I mean; ElasticSearch IS a java application after all, I'm guessing Mongo is far from it?
[14:03:54] <whaley> gemtastic: what logical pattern are you referring to btw? (I think this is largely irrelevant, but I'm in the mood for conversing)
[14:05:54] <gemtastic> I was thinking the whole Collections>index>document
[14:07:18] <gemtastic> I've been working kinda much in ElasticSearch (still a n00b there too though, but I've worked a bit with it and gotten the hang of it) and I can't help but to notice that they seem to try to follow the same noSQL pattern, but maybe that just IS the noSQL pattern..
[14:07:32] <gemtastic> I haven't worked with NoSQL apart from what ES does before.
[14:07:43] <mike_edmr> there is no single way to do nosql
[14:08:14] <gemtastic> That's what I thought, but ES seems to be following the same logical pattern for the user and references as Mongo
[14:08:32] <whaley> both mongo and elastic store documents often represented to clients as some json-like structure, but that's where I think the similarities end
[14:08:33] <gemtastic> And I don't know much about NoSQL apart from that it's anything that isn't SQL :P
[14:08:39] <mike_edmr> yeah they have been used in a complementary fashion a lot
[14:08:49] <mike_edmr> you can hook them up with mongo connector or whathaveyou
[14:09:15] <mike_edmr> right, the internals are substantially different
[14:10:13] <gemtastic> Yeah, the internals are nothing alike, but there are benefits to making the UX (can you call it that when we're talking about developers?) consistent, since it decreases the learning curve
[14:10:24] <vagelis> StephenLynx if I use $match only, it returns the results that I want but not grouped. When I insert $group, it returns only the groups. So I guess I have to make a new field and put the results inside, but how do I define the results? $what?
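The shape vagelis describes maps to a `$group` stage with a `$push` accumulator after the `$match`: the filtered documents themselves get collected into the new array field. A hedged sketch (field names `Field` and `status` are placeholders standing in for his schema), with a pure-Python reference of what the pipeline returns:

```python
# Passed to collection.aggregate() in pymongo; $$ROOT pushes the whole document.
pipeline = [
    {"$match": {"status": "active"}},
    {"$group": {"_id": "$Field", "My_Results": {"$push": "$$ROOT"}}},
]

def match_then_group(docs):
    """Pure-Python reference for the pipeline above."""
    groups = {}
    for doc in docs:
        if doc.get("status") == "active":          # $match
            groups.setdefault(doc["Field"], []).append(doc)  # $group + $push
    return groups

docs = [
    {"Field": "a", "status": "active", "x": 1},
    {"Field": "a", "status": "inactive", "x": 2},
    {"Field": "b", "status": "active", "x": 3},
]
result = match_then_group(docs)
print(result["a"])  # [{'Field': 'a', 'status': 'active', 'x': 1}]
```

Pushing `$$ROOT` keeps whole documents; pushing a single field (e.g. `{"$push": "$x"}`) collects just that value per group.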
[15:04:37] <leev> Derick: don't suppose you know how to fix "Primary for rs1/mongodb02:10001,mongodb03:10002 was down before, bypassing setShardVersion. The local replica set view and targeting may be stale."?
[17:10:28] <und1sk0> i think we tried yesterday but weren't able to, but we crashed the db last month (using an older rev of mongodb) under similar circumstances...background reindexing
[17:10:58] <und1sk0> 2.6.7 for the former, 2.6.10 for the latter
[17:11:37] <und1sk0> here's the pre-segfault error: Invalid access at address: 0x38#012
[17:13:37] <und1sk0> the secondary also had a fatal assertion after i recovered the downed primary
[18:00:07] <greyTEO> if a failure occurs in a bulk operation does the entire bulk fail?
[18:00:37] <greyTEO> such as a unique index exception, would all other 999 operations fail..
[18:04:25] <StephenLynx> <und1sk0> Maybe it was fixed in a newer version.
[18:04:36] <StephenLynx> its been a while since 2.6
[18:06:36] <cheeser> greyTEO: i think you send an option with the request to determine whether to fail or not
[18:09:26] <greyTEO> I found this deep in the docs: http://docs.mongodb.org/manual/reference/method/db.collection.initializeUnorderedBulkOp/#error-handling
[18:09:45] <greyTEO> I guess it depends on the driver cheeser ...
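The distinction in the linked docs is: an ordered bulk stops at the first error, while an unordered bulk attempts every operation and reports the errors afterwards. In pymongo that is the `ordered` flag to `bulk_write`, e.g. `collection.bulk_write(ops, ordered=False)`. A toy model of the semantics (no driver or server involved):

```python
def run_bulk(ops, ordered):
    """Each op is a callable; returns (applied_count, errors)."""
    applied, errors = 0, []
    for i, op in enumerate(ops):
        try:
            op()
            applied += 1
        except Exception as exc:
            errors.append((i, exc))
            if ordered:
                break  # ordered bulk: remaining ops are never attempted
    return applied, errors

seen = set()
def insert_unique(key):
    def op():
        if key in seen:  # stands in for a unique-index violation
            raise ValueError("duplicate key: %r" % key)
        seen.add(key)
    return op

ops = [insert_unique(k) for k in ("a", "b", "a", "c")]
applied, errors = run_bulk(ops, ordered=False)
print(applied, len(errors))  # 3 1
```

So a single duplicate-key exception does not doom the other 999 operations, provided the bulk is unordered.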
[18:25:53] <chairmanmow> I am trying to perform an update operation on a record where the values of the subdocument array will be replaced with the array contents of the incoming query. Not getting an error, figured I'd paste a link to a bit of code here in case anyone might see where I might be missing something http://pastebin.com/r0Bdyaev
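The pastebin is no longer available, so this is only a guess at the intended operation: replacing a subdocument array wholesale is usually a plain `$set`, e.g. `collection.update_one({"_id": doc_id}, {"$set": {"sub.items": new_items}})` in pymongo. A frequent silent failure is a filter that matches zero documents (`matched_count == 0`) rather than a server error, which matches the "not getting an error" symptom. Below, a toy model of `$set` with dot notation:

```python
def apply_set(doc, path, value):
    """Minimal model of MongoDB's $set with dot notation."""
    parts = path.split(".")
    target = doc
    for part in parts[:-1]:
        target = target.setdefault(part, {})  # create intermediate subdocs
    target[parts[-1]] = value
    return doc

doc = {"_id": 1, "sub": {"items": [1, 2]}}
apply_set(doc, "sub.items", [9, 8, 7])
print(doc["sub"]["items"])  # [9, 8, 7]
```

When the update "does nothing" without erroring, checking the returned `matched_count`/`modified_count` is the first diagnostic step.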
[18:31:49] <chairmanmow> interesting, I'm junior so I don't make these kinda calls, any reading material you might know of on alternatives/comparisons/etc to mongoose?
[18:33:34] <akoustik> chairmanmow: mongoskin is a possible alternative, if you don't actually need the DBMS sorta stuff.
[18:33:41] <StephenLynx> I know a person who benchmarked it and got mongoose running 600% slower.
[18:36:19] <akoustik> hahahaa the man has a point.
[18:36:40] <chairmanmow> Yeah, I'll have to look into the criticisms and alternatives. I'll research it a bit and see if I can bring anything to my boss's attention when he's back in the office
[18:37:25] <StephenLynx> the best alternative is the regular driver, IMO.
[18:37:31] <StephenLynx> official support, great documentation, performance.
[18:42:12] <akoustik> anyone here get mongo 3.0 running on centos 6.6? the repo on the mongo site has a libstdc++ dependency that apparently is not provided in c6.6
[18:42:38] <akoustik> i have a feeling this is going to be frustrating
[18:43:05] <chairmanmow> Ahh, that looks like it might be useful, thanks StephenLynx for the tip. Now I need to see what I can do to keep myself busy today if it is indeed a mongoose issue
[18:43:36] <StephenLynx> do w/e you have to do, but with the regular driver :v
[18:44:00] <akoustik> StephenLynx: sorry - do you mean that's what you've run it on, or that's all it *can* run on?
[18:47:47] <StephenLynx> it just had an issue with cross-compiling for a while, if I am not mistaken
[18:48:00] <StephenLynx> so it is missing on some versions of the packaged binaries.
[18:48:20] <StephenLynx> nothing that keeps one from building these older versions with SSL
[18:49:15] <akoustik> ah, maybe that's the way to go. now... i don't wanna push it, but... is there any chance i can expect to get a replica set running with members running different versions of mongo?
[18:49:47] <akoustik> i guess i can just start googling this crap. haha
[19:00:52] <StephenLynx> probably not, but I have never run replicas.
[19:11:28] <akoustik> yeah i see no evidence that it's possible.
[19:22:26] <jfhbrook> What kind of things can cause mongo to peg the cpu? Mine's at 600% and getting <10 writes per second
[19:55:41] <jfhbrook> which makes sense given that my writes are limited by how many I can push through---I'm hitting an API and have like 8 in-flight requests at any given time, so low write perf means low throughput
[19:55:47] <morenoh149> anyone working with meteorjs?
[19:56:35] <jfhbrook> anyways, nothing in here screams major screwup
[19:57:50] <jfhbrook> I had a few page fault spikes, biggest one at 0.23 (units?) but those are spikes and don't particularly correlate with overall trends
[20:08:58] <morenoh149> I wanted to chat from a mongo pov
[20:09:15] <morenoh149> since mongo is such a critical part of meteor
[20:22:34] <deathanchor> what's the fast way to get the time out of an ObjectId?
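The first four bytes of an ObjectId are a big-endian Unix timestamp, so the creation time falls straight out of the hex string. In the shell this is `ObjectId("...").getTimestamp()`, and in pymongo it is the `generation_time` property; a stdlib-only version:

```python
from datetime import datetime, timezone

def objectid_time(oid_hex):
    """Creation time of an ObjectId given as its 24-char hex string."""
    seconds = int(oid_hex[:8], 16)  # first 4 bytes = Unix timestamp
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(objectid_time("507f1f77bcf86cd799439011"))
# 2012-10-17 21:13:27+00:00
```

This also enables range queries by time: construct an ObjectId whose timestamp bytes bound the period and the rest zeroed, then compare on `_id`.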
[20:23:15] <jfhbrook> okay, so what I know is that my cpu is pegged, my write throughput is outrageously low, mms doesn't think anything is alarming and the disk looks healthy
[20:51:56] <akoustik> jfhbrook: maybe you already mentioned - are you seeing similar behavior with other collections? did you say that writes used to be fast with this collection, then you started seeing this?
[20:53:46] <jfhbrook> no, no similar behavior with other collections afaik but this is also the only one I'm writing to
[20:53:52] <jfhbrook> when I first started to write to it things were fine
[20:54:05] <jfhbrook> then I got an exponential decay in averaged writes throughput
[20:54:27] <jfhbrook> periods of 100-200 writes per second, then periods of high cpu usage and <10 writes per second
[20:54:41] <jfhbrook> and the latter regime became more and more common until it became entirely dominant
[20:54:51] <akoustik> yeah... so, are there a number of indexes on the collection?
[20:55:21] <akoustik> i'm sure you've seen this, but just in case: http://docs.mongodb.org/manual/core/write-performance/
[20:56:06] <akoustik> the burst behavior you're talking about is pretty weird though, obviously
[20:57:18] <akoustik> you might not be able to, but i'd be curious to see what happens with other interactions. say you remove some or all of the docs in that collection, how does that perform? does it affect performance of subsequent writes?
[21:01:37] <jfhbrook> I have some indices but not an outrageous amount---I believe I index id/revision and maybe mtime
[21:02:43] <akoustik> yeah i wouldn't expect that to be the problem then...
[21:06:45] <akoustik> i wish i had the experience to give better suggestions, but honestly it sounds like you've gone through a pretty good number of tests. at this point, i would just be banging on it until i notice a change of behavior. drop the whole collection, insert a whole bunch of records again, update them, whatever. maybe clone the DB onto a VM or some other host, try it there.
[21:34:21] <greyTEO> has anyone had any issues with the mongo-connector being slow?
[21:54:39] <greyTEO> I think the oplog capped collection started to pop operations off before mongo-connector could process them.
[22:11:04] <lxsameer> guys, which one has better performance generally, embeds documents vs references
[22:36:00] <greyTEO> lxsameer, it depends on what type of performance you are looking for. Embeds usually tend to be more write-heavy, as they can produce duplicate data. References are read-heavy, requiring more lookups to produce the end result
[22:36:44] <greyTEO> references aren't natively resolved by mongo; that all depends on, I'm guessing, the ODM, for that performance
[22:57:44] <greyTEO> im out. everyone enjoy the weekend.
[22:57:46] <Boomtime> right, doing more work to structure your documents in a way that makes reading easy is a good idea because you almost always read more often than you write
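A toy illustration of the embed-vs-reference trade-off discussed above: an embedded schema answers a read with one document fetch, while a referencing schema needs an extra fetch per referenced `_id` (there were no server-side joins in this era; `$lookup` arrived later, in MongoDB 3.2). The post/comment shape is purely illustrative:

```python
# Embedded schema: comments live inside the post document.
embedded = {"_id": 1, "title": "post", "comments": [{"text": "hi"}, {"text": "yo"}]}

# Referencing schema: comments live in their own "collection".
posts = {1: {"_id": 1, "title": "post", "comment_ids": [10, 11]}}
comments = {10: {"_id": 10, "text": "hi"}, 11: {"_id": 11, "text": "yo"}}

def read_embedded(doc):
    return doc["comments"], 1                            # one fetch total

def read_referenced(post_id):
    post = posts[post_id]                                # fetch 1: the post
    found = [comments[c] for c in post["comment_ids"]]   # one fetch per reference
    return found, 1 + len(found)

_, embedded_fetches = read_embedded(embedded)
_, referenced_fetches = read_referenced(1)
print(embedded_fetches, referenced_fetches)  # 1 3
```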
[23:45:08] <jfhbrook> so for those of you following at home, I'm an idiot
[23:45:13] <jfhbrook> the reason my perf was so shitty
[23:45:29] <jfhbrook> was due to some idiosyncrasies in how we do search indexing, each write requires at least one get
[23:45:52] <jfhbrook> and due to me not thinking, when I wanted to reset the database so I could try to run my migration script from scratch, I did db.dropDatabase()
[23:45:58] <jfhbrook> guess what happened to my indexes?
[23:46:09] <jfhbrook> the service ensures indexes but only on start
[23:46:17] <jfhbrook> so I kicked one of the nodes and *bam*
[23:46:26] <jfhbrook> it only took me ALL DAY to figure that out
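The failure mode in jfhbrook's story is easy to model: a lookup that was index-backed degrades to a collection scan once the index is gone, so per-write cost grows with collection size. An index behaves like a dict keyed on the field; the names below (`rev`, the collection size) are illustrative:

```python
docs = [{"_id": i, "rev": "r%d" % i} for i in range(10000)]

def scan(rev):
    """No index: walk every document, O(n) per lookup."""
    return next(d for d in docs if d["rev"] == rev)

index = {d["rev"]: d for d in docs}  # an index: built once, O(1) lookups

def indexed(rev):
    return index[rev]

assert scan("r9999") == indexed("r9999")  # same answer, very different cost
```

As a guard against exactly this, pymongo's `create_index` is idempotent (a no-op if the index already exists), so running it on every startup, not just once, is cheap insurance after a `dropDatabase()`.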