[00:32:49] <hotch> hey everyone, there still isn't a single query method to do an update of an array .. within an object, within … yup … any array. i.e. var query = { x.y.z.array1.array2.email : 'replace@me.com' }
[00:33:56] <hotch> Based on the docs, and since I've never nested this deep for items that ever needed editing, I think this is not possible. The current methodology is painful and the company has their db structured like this all over. I hate writing ~20 lines of server-side code just to do a mass update
[00:34:24] <_m> At present, I don't think this is possible
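(Editor's sketch: as of MongoDB 2.2 the positional `$` operator only reaches one array level deep, which is why the doubly-nested update above can't be done in a single statement. For a single level it works like this; collection and field names here are made up for illustration.)

```javascript
// One level of array nesting works with the positional operator "$".
// The query matches the first element of "contacts" whose "email" equals
// the old address; "$" in the update then refers to that same element.
var query  = { "contacts.email": "replace@me.com" };
var update = { $set: { "contacts.$.email": "new@example.com" } };

// In the mongo shell this would be:
//   db.things.update(query, update);
// For an array nested inside another array, 2.2 has no operator for this;
// the document must be fetched, modified in application code, and saved back.
```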
[02:21:12] <svm_invictvs> Using the mongo db jackson mappers, I'm having some issues. I was curious why they only allow you to update using an object and not a query.
[07:14:51] <boll> On mms there is still a "global lock percentage" when running 2.2.0. How does that make sense?
[07:36:27] <oskie> hello, I had to do a complete replica set resync because one node turned FATAL, and it's taking ages. The index rebuilds in particular are taking a long time. Is there anything you can do to speed this up?
[07:38:28] <oskie> e.g.: Thu Sep 27 06:33:28 [rsSync] build index done 87188186 records 16831.4 secs
[08:28:55] <unknet> I have a user model which embeds a box. I have enabled autosave for 'has_many things' in box model but mongoid doesn't save anything, any ideas?
[10:09:58] <neekl> hello i would like query articles with comment.tag = "test" from that simple article - comment structure
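(Editor's sketch, assuming `comments` is an array of subdocuments embedded in each article: dot notation reaches into embedded documents and arrays, so no join is needed.)

```javascript
// Matches any article where at least one embedded comment has tag "test".
var query = { "comments.tag": "test" };
// Shell: db.articles.find({ "comments.tag": "test" })
```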
[11:11:31] <oskie> I'm getting a lot of Thu Sep 27 11:09:17 [rsSync] Assertion: 10348:$extra: ns name too long ... what really is the limit if you want to do map-reduce ops etc?
[15:18:16] <remonvv> Gargoyle, should be. There's a bit more work to be done but it should be negligible in the grand scheme of things. I've run tests myself and didn't notice a consistent performance difference.
[15:22:54] <Guest____> Okay, there seems to be something I'm not quite understanding. When I start mongod, with "./mongod --dbpath /test/data/db", a completely new folder, and then run "./mongo" in another shell, it still shows all my old databases. Am I doing something wrong? Are databases not stored under data/db? Is it pulling from somewhere else?
[15:25:01] <remonvv> It's pulled from wherever your dbpath points, so the above should give you a fresh environment if that directory really is empty.
[15:25:07] <remonvv> Which it isn't if it finds data ;)
[15:25:31] <remonvv> either that --dbpath is pointing to a directory with data or your mongo isn't connecting to that mongod
[15:28:08] <Guest____> Okay, it's probably the last one. mongo isn't connecting to the mongod I've got running
[15:28:37] <Guest____> because I've created so many new folders, and I see the new "journal" folder and "mongod.lock" being created, but it still brings up old data
[15:28:49] <Guest____> How would I test or troubleshoot how mongo is connecting?
[15:39:30] <Guest____> That's dumb. Homebrew should give an error.
[15:39:33] <remonvv> Good. This is only a little better than asking a question that can be answered by the first Google hit after literally pasting the question into Google.
[16:37:34] <awc737> Hi, with the following use case, could you please advise if Mongo sounds right for me, or sql sounds better?
[16:38:43] <awc737> I have categories, products, and attributes. I can create attributes globally for a category, and they will cascade to each product in that category. I can then de-select specific products.
[16:39:06] <awc737> I should also be able to create attributes at the product level
[16:39:41] <awc737> so basically, there is a many to many relationship, many products can have many attributes. but I do not want to redefine those attributes for each product. I want to reference them
[16:39:58] <NodeX> mongo is probably not right for you then
[16:40:06] <NodeX> that sounds like a heavy Join operation
[16:41:20] <awc737> i really wanted the speed and convenience of document store, and fairly simple integration between mongodb and elasticsearch
[16:42:44] <awc737> say I want to create a Red attribute globally. and product A, B, and C have that attribute. There is no way to do that with Mongo or nosql?
[16:42:56] <remonvv> There is, but it isn't an optimal use case
[16:43:10] <NodeX> unless "Red" doesn't change often
[16:43:32] <NodeX> in which case you can embed "Red" and no join is needed
[16:43:40] <awc737> i feel like optimal use case for my application should still be document store however... in the end
[16:44:01] <remonvv> It will if your attributes never change for the products but that's the opposite of what you said originally ;)
[16:44:22] <awc737> no, I just said it may need to not be referenced for certain products
[16:44:31] <awc737> so red is still red, but I may decide product C does not come in red
[16:44:34] <remonvv> If you define attributes A for category C and all you do is add A to new products that belong to C but never change them afterwards you can denormalize that bit and embed.
[16:49:23] <remonvv> you get flexibility at the expense of, in this case, disk space
[16:50:27] <awc737> say I want to edit product A + B Red, but not C Red, from one location
[16:50:29] <remonvv> there's no global schema so each document contains its own "schema"
[16:50:46] <awc737> I will be using embed to insert the new Red into both locations?
[16:51:36] <remonvv> Well, an update, but yes. If you want to make two products red you'd do something like update({_id:{$in:[PROD1, PROD2]}}, {$set:{color:"Red"}}, false, true)
[16:51:55] <remonvv> that would set that attribute to red, or create the field if it doesn't exist already.
[16:52:16] <remonvv> Read up on it, it's pretty straightforward
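(Editor's sketch, spelling out the update from a few lines up: the two trailing booleans in the shell's `update(query, update, upsert, multi)` signature control insertion and multi-document behaviour. `PROD1`/`PROD2` stand in for real `_id` values.)

```javascript
// upsert: false -- do not insert a new document if nothing matches
// multi:  true  -- update every matching document, not just the first
var query  = { _id: { $in: ["PROD1", "PROD2"] } };
var update = { $set: { color: "Red" } };
// Shell: db.products.update(query, update, false, true);
// $set writes the field on every matched product, creating it if absent.
```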
[17:58:44] <ezakimak> how do i query that two items exist in a single array? doc: { arr: [ "a", "b", "c" ] } query: arr: { $elemMatch: [ "a", "b" ] } ?
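(Editor's sketch of the usual answer here: `$elemMatch` applies several conditions to a *single* array element, whereas "both values are present somewhere in the array" is what `$all` expresses.)

```javascript
// Matches documents whose "arr" contains BOTH "a" and "b", in any order.
var query = { arr: { $all: ["a", "b"] } };
// Shell: db.coll.find({ arr: { $all: ["a", "b"] } })

// A tiny in-memory check of the $all semantics:
function matchesAll(doc, values) {
  return values.every(function (v) { return doc.arr.indexOf(v) !== -1; });
}
```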
[18:14:25] <burley_sf> I have a query where on Mongo 2.0.4 (10gen RPM; 64bit) the slow query log data doesn't match what I get out of running the query with .explain()
[18:15:14] <burley_sf> I read through the change logs and release notes between the most current and 2.0.4 and not seeing anything that makes me think an upgrade would fix it
[18:15:40] <burley_sf> any ideas on why the slow query log would show essentially a full table scan, while .explain() would show the index being used
[18:18:05] <quaa> Can anyone help me figure out why my mongo server (2.2.0) was killed last night? /var/log/messages show Out of memory: Kill process 1315 (mongod) score 210 or sacrifice child; Killed process 1315 (mongod) total-vm:84459796kB, anon-rss:0kB, file-rss:0kB; /log/mongod.log shows many connections open (3000+) which seems high but it doesn't show anything that looks like it crashed.
[18:18:34] <burley_sf> quaa: OOM killer killed it from what you pasted
[18:22:41] <burley_sf> I don't have enough data to state that definitively, but its a guess
[18:22:59] <quaa> Probably my first order of business is to make sure that all my jobs are closing their connections, and maybe limit mongo to a smaller number of connections
[18:23:02] <kali> you need to setup something to monitor your server vitals
[18:26:11] <ezakimak> what's more painful: a) writing a few client-side joins (also handling skip/limit), or b) writing lots of queries that dig into subobjects ?
[18:26:12] <kali> burley_sf: does mms monitor server metrics?
[19:25:26] <Pio> my winner-mode is being weird lately, it works fine at first but after running emacs for a while it starts becoming inaccurate
[19:28:28] <Pio> like I'll winner-undo and then winner-redo and i'll not end up where i started
[20:25:59] <Almindor> is it possible to use $gt on ObjectIds?
[20:26:22] <Almindor> I have a huge collection (without indexes so far) and I want to dump a small part of it for dev server usage (has much less RAM for indexing etc.)
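(Editor's sketch answering the question above: yes — an ObjectId begins with a 4-byte big-endian timestamp, so ObjectIds compare in roughly insertion order and `$gt`/`$lt` can slice a collection by creation time. Pure string construction below; in the shell you'd wrap the result in `ObjectId(...)`.)

```javascript
// Build the smallest possible ObjectId hex string for a given Date:
// 4 timestamp bytes followed by 8 zero bytes (machine/pid/counter zeroed).
function minObjectIdHexFor(date) {
  var secs = Math.floor(date.getTime() / 1000);
  var hex = secs.toString(16);
  while (hex.length < 8) hex = "0" + hex; // pad timestamp to 4 bytes
  return hex + "0000000000000000";        // 12 bytes = 24 hex chars total
}
// Shell: db.big.find({ _id: { $gt: ObjectId(minObjectIdHexFor(cutoffDate)) } })
```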
[20:53:30] <Almindor> if I create an empty collection and do ensureIndex on it, and then do a cloneCollection (without indexes, because the source has none), will the indexes be built as data is imported?
[20:55:59] <gigo1980> hi all, I have a problem in my mongo cluster… my system swaps and I get IO problems, and the mongo shard seems to stall. Load goes up to 80!
[21:13:57] <stefan41> if i have a collection with documents like { name: 'foo', number:7, time: timestamp }, how do i find all of the documents with the most recent timestamp, one per name? (if that makes sense)
[21:14:08] <stefan41> I think that i can do something with group and limit, but not 100% sure?
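(Editor's sketch of one way to do "newest document per name" with the 2.2 aggregation framework, using the field names from the example document: sort newest-first, then group by name and take the `$first` value seen in each group. This idiom relies on `$group` seeing documents in the order `$sort` produced.)

```javascript
var pipeline = [
  { $sort:  { time: -1 } },                  // newest documents first
  { $group: { _id:    "$name",               // one result per name
              number: { $first: "$number" }, // fields from the newest doc
              time:   { $first: "$time" } } }
];
// Shell: db.coll.aggregate(pipeline)
```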
[21:18:34] <stefan41> or, on a more basic level, why would a collection not have an 'aggregate' method?
[21:22:15] <Vile> stefan41: why would it not? probably old version of mongo?
[21:22:37] <stefan41> Vile: yeah, i just saw the little "added in 2.1.0" at the top of the doc page. feel stupid :-)
[21:23:14] <Vile> actually it makes a lot of sense to upgrade to 2.2
[21:24:27] <stefan41> why was it so fast between 2.1 and 2.2? just new features being added?
[21:24:42] <stefan41> (i.e. no big bugs or security issues?)
[21:25:02] <Vile> stefan41: odd versions are development branches, even - releases
[21:25:34] <Vile> i.e. stable versions go 1.6 => 1.8 => 2.0 => 2.2
[21:32:12] <stefan41> Vile: is there a way to sort rows before they get reduced by the aggregator?
[21:32:58] <stefan41> i.e. rows that match a condition, but find the one with the highest timestamp?
[21:37:22] <stefan41> um. is this: http://www.mongodb.org/display/DOCS/Aggregation obsoleted by this: http://docs.mongodb.org/manual/reference/aggregation/ ?
[21:45:03] <Almindor> does anyone know why mongorestore would be like 200kb a second speed? I have an existing huge collection and I'm importing 500mb worth of new data into it
[21:45:20] <Almindor> I thought mongorestore doesn't check for duplicates or existing things
[21:45:25] <Almindor> there are no indexes on the collections
[21:49:22] <Almindor> there are no shards, but there are 2 secondaries in a replica set
[21:52:01] <crabdude> howdy howdy howdy, I want to sort the documents of a collection by the value of a timestamp in another collection. Example: users view blog posts; the ids of the posts they've viewed and the view timestamps are stored on the user document, and I want to get all the posts they've viewed ordered by most recent. What's the best way to do that?
[21:57:46] <ezakimak> i'm guessing either a server-side function (not great), possibly a mapreduce, or client side...
[21:58:44] <ezakimak> opine: would it be worth it to normalize password into its own collection, out of the user collection, to avoid having to do { "password": false } nearly everywhere?
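(Editor's sketch: the usual alternative to normalizing the password out is to keep the exclusion projection in one place. The helper below only mimics what the server-side projection `{ password: 0 }` does to a returned document; its name and shape are illustrative.)

```javascript
// Mimic an exclusion projection { password: 0 } on a plain object.
function withoutPassword(userDoc) {
  var out = {};
  for (var k in userDoc) {
    if (k !== "password") out[k] = userDoc[k];
  }
  return out;
}
// Server-side equivalent: db.users.find(query, { password: 0 })
```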
[22:00:52] <ezakimak> crabdude, don't you already have it in the user document exactly as you need?
[22:01:35] <ezakimak> query the list of ids, then query just those posts, then order them client side
[22:02:31] <ezakimak> may not need to order them client side if you stuff them into a map and iterate the original id list
[22:02:58] <crabdude> @ezakimak So right now I get posts in order from user, but user.posts don't contain the full post object, so I need to get that from the posts collection, so I do a find, but now (the purpose of the question) I have to do a very awkward manual join between the user.posts and the posts (O(n^2))
[22:03:30] <ezakimak> why are you joining? you can find just the ids you need from the list in the user document
[22:03:59] <crabdude> I find in posts collection, but now they're not ordered
[22:04:13] <crabdude> in doesn't preserve ordering(?)
[22:04:22] <ezakimak> right, the result from the find isn't ordered
[22:04:33] <ezakimak> but you have the ordered list you started with on the client side
[22:05:18] <crabdude> @ezakimak yup, so I loop through it, and in each loop iteration, I have to do a find in the array for its associated post (manual join / O(n^2)) right?
[22:05:30] <ezakimak> iterate the find results once to stuff them into a map (eg: posts[post["_id"]] = post) then iterate the original, ordered list, and look up the post from the map
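(Editor's sketch of the map trick just described, written out in plain JavaScript: one pass to index, one pass to reorder, so O(n) instead of O(n^2).)

```javascript
// orderedIds: post ids in the order stored on the user document.
// posts:      the (unordered) result of find({ _id: { $in: orderedIds } }).
function reorderByIds(orderedIds, posts) {
  var byId = {};
  posts.forEach(function (p) { byId[p._id] = p; }); // index by _id
  return orderedIds.map(function (id) { return byId[id]; });
}
```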
[22:05:58] <ezakimak> depends how the map/dict lookup works
[22:08:06] <crabdude> ya…. I've been doing that, it's just painful since obviously it's the sort of thing that is trivial in SQL, so I was hoping maybe there was a map/reduce or aggregation solution that I didn't know about
[22:08:34] <ezakimak> well, i think, conceptually, that's the work that has to be done--no matter where/who does it
[22:10:32] <crabdude> but it could/would be abstracted away
[22:11:04] <crabdude> just hoping maybe there was a map/reduce or aggregation way of doing it (since obviously I'm new to mongo)
[22:11:27] <ezakimak> mongomapper might do some of that, or mongoalchemy
[22:13:50] <crabdude> @ezakimak thanks a lot for the sanity check. I'm thinking maybe the solution is to just store the timestamp in posts.post.user.lastViewed_timestamp
[22:14:00] <crabdude> so these sorts of joins wouldn't be necessary =)
[22:14:10] <ezakimak> i think that may be the mongo-way
[22:14:39] <ezakimak> is there any way to override mongo's choice of _id for the id field, and rename it to simply "id" ?
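(Editor's sketch answering the last question: the server-side field can't be renamed — every document gets `_id` — but the common workaround is a mapping step at the application boundary. The function name is illustrative.)

```javascript
// Rename _id to id on the way out of the database.
function toApi(doc) {
  var out = { id: doc._id };
  for (var k in doc) {
    if (k !== "_id") out[k] = doc[k];
  }
  return out;
}
```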