PMXBOT Log file Viewer


#mongodb logs for Tuesday the 18th of September, 2012

[00:29:16] <_m> tomlikestorock: Switch to ruby. ;)
[00:29:33] <tomlikestorock> work says "no" :(
[00:30:29] <_m> Second google result says, new MongoDate(gmmktime());
[00:31:28] <_m> Also see: http://stackoverflow.com/questions/7021678/how-can-i-handle-datetime-in-mongodb-with-php
[00:32:19] <_m> Think the key is that PHP's DateTime doesn't know anything about Timezone information.
[02:51:55] <wachpwnski> Is there a way I can have mongo drop elements from an array after a limit?
[02:59:05] <wachpwnski> Here is my schema, I am wondering if I can limit the size of the logins array. https://gist.github.com/b383ebc37311a93d4e23
[03:43:37] <jgornick> Hey guys, can I use the aggregation framework to properly query and return on embedded documents? For example, can I create an aggregation query to query embedded documents and return those as the result?
[03:55:20] <crudson> wachpwnski: you can't impose a 'rule' that results in mongodb enforcing that outside of application or maintenance code
[04:05:30] <wachpwnski> crudson: would it be better to just keep a circular queue and reference the user?
[04:15:12] <crudson> wachpwnski: If you want to retain n logins for each user then a single capped collection referencing user would not be a good solution. I'd maintain it on a per user basis. I'd look at, vote for, and keep an eye on this: https://jira.mongodb.org/browse/SERVER-991
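(For reference: SERVER-991 linked above eventually shipped in MongoDB 2.4 as $push with $each/$slice; on the 2.2 servers current at the time of this log the trimming still has to happen in application code. A rough shell sketch of the later syntax, assuming a 50-entry cap — userId and ip are placeholders:)

    db.users.update(
        { _id: userId },
        { $push: { logins: { $each: [ { at: new Date(), ip: ip } ], $slice: -50 } } }
    )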
[04:20:30] <wachpwnski> right now can I get size then append and pop?
[04:22:26] <wachpwnski> also does mongo have a global collection incrementer I can use?
[04:33:47] <jgornick> When you want to unwind a field, do you need to project that field?
[04:33:55] <jgornick> This is in the aggregation framework.
[04:35:35] <crudson> wachpwnski: note that size of array isn't available unless you track it (other than querying $size). You will have to do some manual work to maintain the array. What do you mean by "global collection incrementer"? Like a traditional db sequence?
[04:35:41] <crudson> jgornick: nope
[04:39:17] <wachpwnski> crudson: I am building a forum, so i basically do /forum/<title>.<counter> for seo
[04:39:54] <wachpwnski> Using sql with an auto_incrementing id this makes sense. is there a way that is more practical with mongo?
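(The usual stand-in for an auto_incrementing id is a small counters collection bumped atomically with findAndModify; a sketch, with made-up collection and key names:)

    // grab the next sequence number for forum slugs atomically
    var next = db.counters.findAndModify({
        query: { _id: "forum" },
        update: { $inc: { seq: 1 } },
        new: true,
        upsert: true
    }).seq;
    // the application then builds the /forum/<title>.<counter> URL from `next`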
[05:39:11] <timeturner> anyone use mongodb in conjunction with solr before?
[05:39:37] <timeturner> or clucene
[05:40:53] <IAD> only sphinx, only hardcore =)
[05:42:14] <ron> timeturner: that's quite a common combination.
[05:54:59] <timeturner> I was looking into clucene too
[05:55:14] <timeturner> there seems to be a node client for it too
[05:55:24] <timeturner> clucene would be the fastest of all of these right?
[05:55:49] <timeturner> solr, elasticsearch, sphinx
[05:57:38] <ron> well, clucene is a port of lucene. I don't know how far back it dates. 'fastest' is a problematic description.
[06:01:16] <timeturner> http://clucene.git.sourceforge.net/git/gitweb.cgi?p=clucene/clucene;a=log;h=HEAD
[06:01:43] <timeturner> 1.5 years ago basically
[06:03:41] <ron> that's relatively old. lucene underwent quite a few changes since. however, that doesn't mean clucene isn't good. nevertheless, you should look at the functionality and not only the 'speed'.
[06:04:59] <timeturner> solr it is
[06:06:58] <_m> eww no
[06:07:13] <timeturner> if the search bar in elasticsearch.org is using elasticsearch then it is really slow
[06:07:29] <_m> ...
[06:07:44] <timeturner> ?
[06:07:46] <_m> More things than the search engine determine the result speed for internets.
[06:08:02] <timeturner> like what
[06:08:10] <_m> The web server, first of all.
[06:08:20] <_m> How many nodes are there? Are those applications failing?
[06:08:39] <_m> We have both solr and elasticsearch running in production
[06:08:52] <_m> I have far more problems with the former than the latter.
[06:09:01] <_m> But hey, do your thing yo.
[06:09:17] <timeturner> google is instantaneous. I would expect at least 1/6 - 1/4 the speed of that
[06:09:25] <timeturner> I'll try out elasticsearch though
[06:09:34] <timeturner> since it looks easier too
[06:11:16] <_m> It's easy to configure/run/maintain
[06:11:26] <_m> I haven't had any problems with slowness
[07:37:29] <[AD]Turbo> hola
[08:27:47] <VinX> hello all, I am looking into how to embed mongodb (the server) in a c++ application.
[08:32:29] <algernon> I do not think embedding the server is supported.
[08:40:16] <wiherek> hi. is there a channel specifically for mondodb - nodejs (using 10gen drivers)? or is this the one?
[08:43:25] <ppetermann> wiherek: mongo not mondo, not sure about a node chan though
[08:43:43] <ppetermann> VinX: why do you want to embed the server?!
[09:38:20] <NodeX> anyone here use mongoexport for gridfs?
[09:52:22] <xenocom> I can't find any info on this, I'm hoping someone can point me in the right direction
[09:52:28] <arussel> I'm implementing a search using the 'split words and put them in a keywords field' pattern
[09:52:44] <xenocom> if I want to run mongodb on AWS... or at all, really... how can I compact without pulling my site offline?
[09:52:52] <arussel> ideally, I would do the splitting/setting in a create/update listener, is there something like this?
[09:52:56] <xenocom> do I rely on my replica sets to fill the void while compacting?
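(One common answer is to compact one replica-set member at a time so the others keep serving; a rough sketch of the per-node steps, with a made-up collection name:)

    // on each SECONDARY in turn (the node drops out of service while compact runs):
    db.runCommand({ compact: "mycollection" })
    // once every secondary is done, ask the primary to step down, wait for the
    // election, then compact the old primary the same way:
    rs.stepDown()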
[10:05:40] <NodeX> arussel : can you explain your problem any differently, what you are asking doesn't make sense
[12:08:58] <driek> one of my config servers fails on update config.mongos query, the logs tell me 'exception: Invalid BSONObj size: 0'. I restarted all the config servers but this does not help. Any clue where to look?
[12:10:32] <boll> Is there anything in the mongodb development pipeline focusing on making gridfs work with reasonably sized data sets (ie. 300+ GB worth of file data)?
[12:22:49] <NodeX> what "doesn't" work about it boll ?
[13:00:29] <boll> NodeX: Sorry for the delay
[13:01:03] <boll> NodeX: Anyway, we have 300 GB of stuff in gridfs, and it is causing problems with other writes
[13:01:34] <boll> presumably because mongodb doesn't do anything special with gridfs collections
[13:01:41] <boll> compared to "ordinary" collections
[13:02:15] <boll> So having 300 GB of live data reasonably accessible requires vast amounts of RAM
[13:03:31] <boll> I have an mms-log showing what happens to lock percentages when I drop the 300+ GB gridfs collections
[13:03:51] <boll> https://dl.dropbox.com/u/2233959/lock.png
[13:04:51] <boll> gridfs chunks collections really ought not to be memory mapped
[13:09:54] <amirc> Hi all, Is there a way to count the number of times an item has been queried?
[13:10:12] <NodeX> boll: have you not thought of putting a cache in the middle of your app for this?
[13:10:26] <NodeX> amirc : with a counter
[13:13:24] <driek> I cant seem to disable my balancer, 'db.locks.find( { _id : "balancer" } ).pretty()' shows a lock in state 1. Should I just be more patient to let the balancing round finish, or is something wrong/stale?
[13:15:40] <driek> (v2, according to the docs a state of 2 denotes an active lock)
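(For reference, the documented way to switch the balancer off in this era was a flag in the config database, then waiting for the lock to clear; a sketch:)

    use config
    db.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true)
    // then wait for any in-flight migration round to finish:
    db.locks.find({ _id: "balancer" }).pretty()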
[13:17:17] <amirc> NodeX : You mean to do a 'manual' counter? After each call, run an update with $inc?
[13:22:17] <NodeX> amirc : yes
[13:22:41] <amirc> NodeX : Thanks
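(The 'manual counter' boils down to an $inc issued every time the application serves the item; a sketch with made-up collection and field names:)

    // bump a per-document counter on every read the application performs
    db.items.update({ _id: itemId }, { $inc: { timesQueried: 1 } })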
[13:30:21] <driek> is it possible that stale locks are there because of a config server migration? I did stop the server before rsyncing it tho...
[13:30:40] <boll> NodeX: What sort of cache?
[13:30:50] <boll> NodeX: mongo is acting as the cache
[13:31:16] <boll> it stores various resized and modified versions of source images
[13:36:10] <NodeX> if you shift the grid away from mongo you will only compound the problem, the files are mmapped by the operating system for a reason (MRU)
[13:37:41] <boll> NodeX: keeping the files in a flat file system with logarithmic access times currently seems a lot more plausible
[13:38:07] <boll> (logarithmic in the number of files per directory)
[13:38:43] <NodeX> adding an in memory cache to serve them will keep your app flying
[13:39:28] <boll> problem is that LRU isn't really a terribly good algorithm, as the files get hit more or less randomly
[13:39:42] <boll> so figuring out what to keep in memory is not trivial
[13:39:54] <boll> which I guess is what's also causing mongo to choke
[13:40:29] <NodeX> ^^
[13:41:44] <NodeX> you can be smart about it with something like redis ... use the id as an expiring key, check the key before checking the DB - if exists then serve from memory/cache else serve from Mongo.. will eat a bit of ram / disk but will relieve your system a bit
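(A rough Node.js sketch of that check-the-cache-first idea; loadFromGridFS is a hypothetical helper standing in for however the bytes come out of Mongo, and the one-day TTL is an arbitrary choice:)

    var redis = require("redis").createClient();

    // loadFromGridFS(id, cb) is hypothetical: it fetches the image bytes from Mongo.
    function getImage(id, cb) {
        var key = "img:" + id;                      // the id doubles as the cache key
        redis.get(key, function (err, cached) {
            if (cached) return cb(null, new Buffer(cached, "base64")); // cache hit
            loadFromGridFS(id, function (err, bytes) {
                if (err) return cb(err);
                redis.setex(key, 86400, bytes.toString("base64"));     // expire after a day
                cb(null, bytes);
            });
        });
    }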
[13:42:44] <boll> except if old files are as likely to get hit as new ones, that doesn't really gain me a lot, does it?
[13:43:04] <NodeX> your app takes care of invalidating ...
[13:43:37] <NodeX> or do you mean infrequent requests?
[13:44:26] <boll> it's a completely generic image manipulation service (written in php), which downloads images, manipulates them, and returns the manipulated image to the caller
[13:44:37] <boll> it then stores the original and the manipulated image
[13:45:03] <boll> it really has no way of knowing what images to expire
[13:45:30] <boll> since all images are theoretically equally likely to be requested at any given time
[13:45:33] <NodeX> I imagine in your case the original is not really needed after it's been manipulated?
[13:46:21] <boll> The original may be 150 MB, and the caller usually requests 4 or 5 different "versions" of it, so keeping the original around makes a bunch of sense
[13:46:35] <NodeX> but not after it's been manipulated?
[13:47:16] <boll> yeah, cause different callers (or client software) often requests different sizes of the same image (depending on the screen size of the client)
[13:47:47] <boll> so we often end up with maybe 20-50 versions of the same image over time
[13:47:53] <boll> most of them very small
[13:48:08] <NodeX> if it was me I would keep hot data in a cache for X days (dependent on RAM) and store the originals outside of mongo after manipulation
[13:48:41] <boll> Yeah, we may end up having to do that, it's just that gridfs is damn convenient :)
[13:48:52] <NodeX> if a new request to manipulate comes down the wire it gets added to the queue (load the original from the FS) and start all over again
[13:49:30] <NodeX> or... setup a second mongod to house originals - as they are infrequently accessed they'll hardly ever be mmapped
[13:50:18] <boll> Yeah, a hybrid solution may be the way to go. Thanks.
[13:50:42] <NodeX> grid is nice for portability
[13:52:39] <alz> hi
[13:53:00] <alz> does anyone know the correct way to build the shared mongoclient on 2.2.0?
[13:53:04] <alz> --sharedclient doesn't seem to do it anymore
[16:03:05] <gyre007> anyone here use the mongo community cookbook ? we are and just found out that it keeps restarting mongo every time chef-client runs...needless to say the restart forces the master to change to slave...and some other node becomes master..
[16:03:10] <gyre007> really annoying :)
[16:45:37] <Almindor> what's the fastest way to update references? say I have a list of users with ObjectIds pointing to some collection A but I want to "exchange" that with another collection B and A is linked to B (each A has some link to B, not vice versa tho!)
[16:45:46] <Almindor> I tried a simple script to walk through them but it's uber slow
[16:47:13] <ron> you mean that updating ALL the records in your collection is slow? who would have guessed...
[16:47:41] <Almindor> ron: 10 per second? come on
[16:47:45] <Almindor> my 286 could do better
[16:47:55] <ron> then run it on 286 ;)
[16:52:45] <ron> seriously though, it probably depends on your script.
[16:53:06] <NodeX> and safe updates
[16:53:12] <NodeX> and index bound updates
[17:00:23] <ron> well, that's not surprising considering... well... considering you CAN'T modify the _id.
[17:03:48] <Vile> ron: it was faster than update even if it would not be for _id
[17:04:24] <Vile> due to the fact that you don't have to re-index, but rather create new indexes only
[17:04:27] <ron> well, the whole idea of actually having to update the _id field baffles me, but okay.
[17:04:47] <Vile> ron: was due to bug :(
[17:04:53] <ron> did you not delete the old entry?
[17:05:11] <Vile> no, i just deleted the old collection when it was done
[17:05:23] <Vile> and renamed new one
[17:05:30] <ron> I see.
[17:05:54] <ron> well, just goes to teach you that you shouldn't fight against objectid as the _id :)
[17:07:39] <Vile> ron: in my case it makes sense. my ids are unique. Only problem is that the software that was feeding them was feeding wrong values
[17:08:08] <Vile> and those should not be modified, just deleted or replaced
[17:08:30] <Vile> (and record size is ~64 bytes)
[17:09:32] <ron> Vile: well, okay.
[17:09:49] <ron> and I only say okay because I'm hungry and going to eat :p
[17:12:45] <Vile> anyway, in that case objectID is completely useless, because it will not be used at all
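(The rebuild-and-rename approach described above looks roughly like this in the shell; fixId and the collection names are placeholders:)

    // copy every document into a new collection with the corrected _id
    db.items.find().forEach(function (doc) {
        doc._id = fixId(doc._id);          // fixId is whatever repairs the bad values
        db.items_fixed.insert(doc);
    });
    // once the copy is verified, swap the collections (true drops the old target)
    db.items_fixed.renameCollection("items", true)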
[17:55:10] <Neptu> i have a question about querying
[17:56:01] <Neptu> i have a collection where some fields are A = 1 and other ones are A = 2, is it possible to get a query to retrieve all of them that have the bigger integer
[17:56:06] <Neptu> A is like a version
[17:56:14] <Neptu> so i want to get the latest one
[17:56:30] <Zelest> sort by A and limit?
[17:56:42] <Zelest> or am I misunderstanding the question?
[17:58:34] <Neptu> Zelest: the number of elements on A can be different than B
[17:58:42] <Neptu> so limit is kind of not good
[17:58:46] <cmendes0101> find({A: {$gt: 1}})
[17:58:47] <cmendes0101> ?
[18:00:26] <Neptu> not really, you know, case 1 but it can be 3, 4 or 5, I want to get the max of it
[18:00:58] <Neptu> my querying in mongo is quite noob still
[18:00:59] <Neptu> ...
[18:01:20] <cmendes0101> oh, I'm still new to mongo. but I saw that in the Aggregate framework. $max. That might be what you want.
[18:01:41] <cmendes0101> Or you can do a find, sort by A then pull the last or first value, however you sorted
[18:02:14] <cmendes0101> actually do a find, sort then limit 1
[18:02:37] <Neptu> mmmmmmm
[18:03:50] <cmendes0101> find({}).sort({A: 1}).limit(1)
[18:04:59] <cmendes0101> Actually.. Desc: find({}).sort({A: -1}).limit(1)
[18:06:53] <Neptu> cmendes0101: all that will be supported by the cpp driver?
[18:07:36] <cmendes0101> Well that was cli, but I would assume so. They're very basic commands. You just need to convert to talk with the cpp driver
[18:07:47] <Neptu> mmm
[18:07:59] <Neptu> donno how much this can take with 19M rows
[18:08:01] <Neptu> let me see
[18:08:48] <cmendes0101> I think thats where the aggregate framework comes into play, it can handle more (I think)
[18:08:59] <Neptu> db.f.find({$min: {name:"barry"}, $max: {name:"larry"}, $query:{}});
[18:09:16] <Neptu> but i need the max of all the number as agregation
[18:11:34] <cmendes0101> I guess I don't know what your dataset looks like or what you're doing but check out http://docs.mongodb.org/manual/reference/aggregation/, Search for $max. It's for grouping but maybe it will work for you
[18:12:36] <ludof> Hey, we need some help here for a "special" operation. We're moving from 1 mongos + 20 single shards to 3 mongos + 3 replicated shards. We're using mongodump from the old mongos to grab 800 gigs of data (one collection). Yet the process "almost" fails, it's terribly slow.
[18:14:17] <ludof> I/O is low on the active shard (100 reads/sec, less than 10 writes/sec), ram usage is low, load average is also low, mms gives us 2 pieces of information: an average of 80 page faults and an average of 4Mbits on the network.
[18:14:46] <ludof> We have absolutely no insight into why it's very very slow or why it fails.
[18:19:35] <Neptu> i do not understand how i do a $max
[18:20:12] <Neptu> .find({A:'lala',B:{$max: 1}}) ??
[18:26:50] <cmendes0101> Neptu: You do not use the .find, you use .aggregate()
[18:27:06] <cmendes0101> The link I posted has examples of that command
[18:27:40] <cmendes0101> Then as a field in $group, it would be something like $max: {"$A"}
[18:28:11] <Neptu> ok
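($max lives inside a $group stage as the expression for an output field; a minimal sketch using the field A from the question above, collection name made up:)

    // _id: null groups the whole collection, so this returns one doc with the largest A
    db.coll.aggregate({ $group: { _id: null, maxA: { $max: "$A" } } })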
[18:29:46] <rcreasey> quick question regarding ports for a sharded mongo configuration
[18:30:08] <rcreasey> what ports are default to use for mongod/mongos/config server?
[18:30:13] <rcreasey> http://www.mongodb.org/display/DOCS/Production+Notes
[18:30:25] <rcreasey> mentions 27017
[18:30:46] <rcreasey> but i'm confused about what they're referring to as the 'mongod --shardsvr'
[18:31:02] <rcreasey> is that just the mongod that a mongos connects to in a sharded configuration?
[18:31:47] <rcreasey> http://www.mongodb.org/display/DOCS/Simple+Initial+Sharding+Architecture
[18:31:50] <rcreasey> aha, yeah.
[18:33:44] <cmendes0101> There's a lot of people here but no one really talks
[18:40:22] <augustl> what's a good way to implement audit trails for documents? I was thinking of having a "history" array on the document itself, copying the entire document in on every change. I guess this'll work, a bit worried about the 16mb limit on document size though
[18:42:03] <ludof> augustl: a version key in your collection
[18:42:08] <ludof> version incremented
[18:54:28] <Neptu> damn i cannot find an aggregation of a max of that row...
[18:58:48] <augustl> ludof: the reason I liked the history array was that I thought that would be an atomic operation, but on second thought it won't.. So I might as well use that
[18:59:18] <ludof> if you need atomic
[18:59:37] <ludof> augustl: why not just use a version key + safe write on this collection?
[19:03:05] <augustl> ludof: I'll look into safe writes, haven't heard of them before
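(A sketch of the version-key-plus-safe-write idea, with the history array folded in; all field names are made up, and the getLastError check is what "safe write" amounts to in the shell:)

    // read the current doc, then update only if nobody bumped the version meanwhile
    var doc = db.articles.findOne({ _id: id });
    db.articles.update(
        { _id: id, version: doc.version },            // matches only if still unchanged
        { $set:  { body: newBody },
          $inc:  { version: 1 },
          $push: { history: { version: doc.version, body: doc.body, at: new Date() } } }
    );
    var res = db.getLastErrorObj();   // res.n === 0 means someone else updated first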
[19:05:49] <Neptu> "errmsg" : "exception: a group specification must include an _id", wtf
[19:09:33] <cmendes0101> Neptu: pastebin your code
[19:10:14] <cmendes0101> maybe sample document or something too
[19:14:26] <Neptu> i think im doing this in a couple of queries
[19:15:25] <arussel> NodeX: still here ?
[19:16:10] <arussel> I want to update a field of a document when the document is updated/created. Is there something like a listener/trigger I could use?
[19:17:32] <ron> there are no triggers in mongodb.
[19:20:51] <arussel> what would be the way to create/update a keywords field using other fields?
[19:21:16] <arussel> does it have to be outside mongo?
[19:21:20] <ron> can you give a more elaborate sample?
[19:21:33] <arussel> yes, mongo doc :-)
[19:22:19] <arussel> http://www.mongodb.org/display/DOCS/Full+Text+Search+in+Mongo
[19:22:31] <arussel> -> Text search
[19:22:39] <NodeX> arussel : that's an appside problem (your update problem)
[19:22:46] <arussel> I want _keywords to be updated inside mongo
[19:23:13] <arussel> in most SQL db, I could use a trigger to update the field
[19:23:30] <NodeX> use SQL then ;)
[19:24:19] <arussel> is there a reason for not implementing it or is it just that no one motivated enough needed it
[19:24:52] <NodeX> because a key value store is built for speed and schemaless properties and most of all scalability
[19:25:30] <NodeX> things like that should -always- be appside logic, it just so happens a lazy developer added them directly into SQL
[19:25:46] <ron> actually, I believe it's a matter of demand. should the community ask for it, they may consider implementing it. it _will_ complicate sharded databases and such.
[19:26:13] <NodeX> and slow things down too much so don't hold your breath
[19:27:00] <ron> well, wrt slowing things down, I believe it's up to the user to decide what's more important to them.
[19:27:22] <ron> that's my view, though.
[19:27:43] <NodeX> it would slow down the core as it's effectively an extra index to check for what to do / what to update
[19:27:51] <arussel> I'm very new to mongo so apologies for the question, but isn't a document always persisted as a whole (so sharding doesn't really matter)?
[19:27:56] <NodeX> so I really don't see it being implemented ever
[19:28:12] <arussel> if I want key value store I use redis
[19:28:29] <NodeX> lol @ sharding
[19:28:37] <ron> well, some other nosql dbs offer triggers (to an extent), so I wouldn't say 'ever'.
[19:28:44] <NodeX> what about other documents when your collection grows out of your RAM?
[19:29:00] <NodeX> ron : "other" being the operative word ;)
[19:29:21] <ron> NodeX: it's a cutthroat business, dude ;)
[19:29:39] <NodeX> yeah, fortunately mongo seems to lead the way in NoSQL type datastores
[19:29:57] <ron> for better or worse.
[19:30:20] <NodeX> mongo is not trying to replace SQL, if people want triggers or other SQL based things then use SQL lol
[19:30:28] <arussel> well, that was not the expected answer, but thanks for your help :-)
[19:30:41] <NodeX> what was the epxected answer?
[19:30:45] <NodeX> expected*
[19:30:52] <ron> 'it can be done' ;)
[19:30:56] <arussel> dude, you want to put a trigger, just do this and that
[19:31:08] <NodeX> [20:21:19] <NodeX> arussel : that's an appside problem (your update problem)
[19:31:12] <NodeX> Like that you mean?
[19:31:28] <NodeX> ;)
[19:31:53] <arussel> no more like: That could be an appside problem, but if you want, you can do it inside mongo too.
[19:32:18] <NodeX> but you can't do it in mongo so why would I say you could lol
[19:33:36] <NodeX> and regarding your sharding issue/comment .. documents are more analogous to rows in SQL and collections to tables
[19:33:55] <NodeX> A single document probably wouldn't get sharded
[19:34:43] <arussel> then updating a field of a doc using field from the same doc wouldn't be a problem in a sharded DB.
[19:34:53] <arussel> "Your code must split the title above into the keywords before saving. Note that this code (which is not part of Mongo DB) could do stemming, etc. too. (Perhaps someone in the community would like to write a standard module that does this...)"
[19:35:24] <arussel> when they say module, does this mean the module would update the doc, and the code would talk to the module ?
[19:38:04] <NodeX> if you're looking to do LIKE %% style queries in mongo you're going to be disappointed
[19:39:16] <Neptu> sort.($lala: -1) should give me a reverse list starting from the highest number to the smaller number right??
[19:39:22] <arussel> I need to implement a simple full text search and try to find the best way to do it
[19:39:30] <Neptu> sort.($lala: -1) should give me a reverse list starting from the highest number to the smaller number right??
[19:39:41] <NodeX> Neptu : ye
[19:39:49] <arussel> should my client be using mongodb is very debatable, but not the point.
[19:39:57] <NodeX> arussel : look at Solr/elastic search
[19:40:21] <NodeX> (for your FTS needs)
[19:41:27] <arussel> thanks
[19:42:08] <Neptu> NodeX: does sort work fine with NumberLong() ??
[19:42:28] <NodeX> as far as I know Neptu
[19:42:43] <NodeX> it wouldn't be much good if it didn't!!
[19:44:27] <Neptu> http://pastebin.com/1RX1pmsr
[19:44:30] <Neptu> check this out
[19:44:39] <Neptu> because does not make sense to me
[19:46:17] <NodeX> sort({ct:-1});
[19:46:21] <NodeX> not $ct
[19:46:24] <Neptu> ok
[19:47:01] <Neptu> now we are talking :)
[19:47:05] <NodeX> ;)
[19:49:10] <Neptu> still did not figure out how aggregation works properly
[19:49:11] <Neptu> it's a shame
[19:49:36] <NodeX> aggregation would use $ct as a var name in sort
[19:49:45] <NodeX> so you weren't totally mad!
[19:50:21] <Neptu> ok
[19:51:02] <Neptu> how will i write an aggregation that gives me max on ct and all of the vf
[19:51:27] <NodeX> the sum of vf?
[19:52:58] <Neptu> no
[19:53:12] <Neptu> i want all vf when ct is max
[19:53:28] <Neptu> .aggregate($group : { _id:'$vf','ct':{$max : 1} });
[19:56:23] <NodeX> you can pretty much build it from the examples here http://docs.mongodb.org/manual/reference/aggregation/
[19:58:12] <Neptu> im following exactly that on the group example
[19:59:10] <NodeX> perhaps you just need to change your group and add a match
[19:59:39] <Neptu> ?
[19:59:55] <NodeX> I think the logic should be $match : {ct:{$max:1}},$group:{_id:'$vf'}...
[20:00:03] <Neptu> aggregate($group : {'ct': { $max : '$ct'} });
[20:00:19] <Neptu> well im trying this kind of approach
[20:00:26] <Neptu> let me follow your advice
[20:00:47] <NodeX> My brain is fried for the day, if you can wait until tomorrow I'll help you work it out
[20:02:02] <Neptu> SyntaxError: missing ) after argument list (shell):1
[20:02:10] <Neptu> strange
[20:02:18] <NodeX> yes mine was psudeo lol
[20:02:25] <NodeX> psuedo*
[20:02:44] <NodeX> grab a match example from the docs and change the values ;)
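(One way to get "all vf where ct is max" without fighting a single pipeline is two steps; a sketch, collection name assumed — in the 2.2 shell aggregate() returns { result: [...], ok: 1 }:)

    // step 1: find the largest ct in the collection
    var maxCt = db.coll.aggregate({ $group: { _id: null, max: { $max: "$ct" } } }).result[0].max;
    // step 2: fetch every document carrying that ct, projecting only vf
    db.coll.find({ ct: maxCt }, { vf: 1 })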
[20:03:58] <Neptu> ai ai
[20:04:45] <Neptu> can u have a limit and a match?
[20:05:20] <gigo1980> hi, how can i do a db.mycollection.find() inside the map of an MR on a sharded collection?
[20:05:45] <NodeX> Neptu : I've never tried, I have to go get some sleep. Good luck
[20:05:53] <Neptu> thanks
[20:05:54] <Neptu> bye
[20:09:00] <Neptu> mmmmmmm when u do a find().sort() the find gets executed first and then it sorts it out??
[20:09:09] <Neptu> or it sorts it before the find?
[20:30:10] <_Tristan> I have users and items. Users have many items, and most of my reads will be looking up an individual item by its id, along with its owner's name. My question is, should I store users and items in separate collections, or should I store the items within the user's document?
[20:32:48] <_m> Store the items in a separate collection
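(A sketch of what that can look like: items live in their own collection, reference their owner, and carry a denormalized copy of the owner's name so the common read stays a single query; all names made up:)

    db.items.insert({ _id: 1, ownerId: 42, ownerName: "alice", title: "widget" })
    // the frequent read: one item by id, owner name already on it
    db.items.findOne({ _id: 1 })
    // db.users remains the source of truth for user details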
[20:38:59] <cmendes0101> Neptu: Not 100% but it should do find first
[20:46:14] <mcav> If I have an index of {a: 1, b: 1}, is it redundant to have separate indexes for {a: 1} and {b: 1}? A quick test of the repl seems like it uses a btreecursor even if querying for just "a".
[20:51:49] <mcav> nm, found it in the docs
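(What the docs say matches that repl test: a compound index serves any query on a leading prefix of its keys, so a separate { a: 1 } index is redundant but { b: 1 } is not. A quick shell check, collection name made up:)

    db.coll.ensureIndex({ a: 1, b: 1 })
    db.coll.find({ a: 5 }).explain()   // BtreeCursor on the compound index (a is a prefix)
    db.coll.find({ b: 5 }).explain()   // BasicCursor: b alone is not a prefix, so { b: 1 } still earns its keep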
[21:25:47] <tystr> hmm
[21:26:09] <tystr> anyone happen to know if updating mongo with yum will overwrite my config files?
[21:26:13] <tystr> I'm assuming no
[22:22:06] <cmendes0101> How can I round a number in aggregate framework?
[23:45:58] <_m> cmendes0101: Assuming you'll need to do something like: http://bit.ly/bZfT7N
[23:46:38] <_m> Also see the "skip" examples on the same page.
[23:47:10] <_m> While they aren't exactly "aggregation framework" docs, you'll likely be able to do the same sort of operations.
[23:48:03] <cmendes0101> Well I need to round a number. Like 2.5 to 3.
[23:48:23] <cmendes0101> and im using aggregate(). Not sure how to execute something like that in there
[23:49:19] <_m> Again, that same method *should* work. Try it.
[23:49:36] <_m> Either that, or map/reduce your data.
[23:50:38] <cmendes0101> I've tried many options. I haven't specifically tried a map reduce since I didn't think I could execute that within the aggregate command but I'll see what that triggers
[23:51:15] <cmendes0101> Basically I'm doing $group to group by an ID and those records contain a number of seconds. I'm trying to get minutes rounded up. I have the minutes returning but they're not rounded up, so the totals are not correct
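(The 2.2 aggregation framework has no $round or $ceil, but rounding up to whole minutes can be done with its arithmetic operators, since ceil(s/60) = (s + 59 - ((s + 59) mod 60)) / 60; a sketch with made-up collection and field names:)

    db.sessions.aggregate(
        { $group:   { _id: "$itemId", secs: { $sum: "$seconds" } } },
        { $project: { minutes: { $divide: [
              { $subtract: [ { $add: [ "$secs", 59 ] },
                             { $mod:  [ { $add: [ "$secs", 59 ] }, 60 ] } ] },
              60 ] } } }
    )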