PMXBOT Log file Viewer


#mongodb logs for Friday the 16th of January, 2015

[00:45:06] <bufferloss> how should I model joined data in a document store?
[00:49:13] <bufferloss> like what if I have a typo, such as "Californea" and I need to change that to "California"
[00:49:29] <bufferloss> if it's in a bunch of places, literally, then I have to search and replace in lots of documents in lots of fields, right?
[00:49:39] <bufferloss> is it still reasonable to use "joins" for this type of thing?
[00:49:45] <bufferloss> like a "states" collection?
[00:53:18] <Boomtime> bufferloss: consider that you appear to be trying to optimize an irregular condition - sure, typos occur, but unless they make up the majority of your operations you shouldn't be too concerned with what it takes to perform - optimize the _common_ operations
[00:54:02] <bufferloss> Boomtime, well, so then lets say it's not typos
[00:54:17] <bufferloss> lets say there's legitimately some times when, not for a typo reason, an update in one place must be made in others
[00:54:59] <bufferloss> Boomtime, also, I'm working with or have encountered in my problem domains some data structure approximately similar to this: http://pastie.org/9834385
[00:56:21] <bufferloss> Boomtime, so, for example, if an agent alias changes on the level of available aliases, then that change should reflect down to the agents
[00:57:34] <bufferloss> Boomtime, here's maybe a slightly better example: http://pastie.org/private/2mc9nz4e8lsygevgmstq
[00:57:54] <bufferloss> Boomtime, lets say that "white spider" is a call sign that means something important, e.g. when it's used, the agent is to pick up a package at a given location
[00:58:14] <bufferloss> then lets say "white spider" is discovered by the enemy, so the call sign is changed to "brown recluse" or something
[00:58:29] <bufferloss> Boomtime, I would need/want to change the "aka" field of anything that had been white spider
[00:58:58] <bufferloss> I'm also leaving out a tad bit of data here, such as for example a flag indicating the usage/purpose of the call sign, but I'm just saying I come across very similar things in some of the problem domains I work with
[00:59:31] <bufferloss> just, the need to have some kind of "join style" behavior that usually tends to become cumbersome if I need to maintain a running record/log/knowledgebase of all the places that a thing may need to be updated
[01:00:03] <bufferloss> Boomtime, so is it still generally recommended that I just make those updates? or is it reasonable to use something similar to a foreign key and a join as I might have used in a "traditional" RDBMS
[01:13:40] <Boomtime> bufferloss: although a "join" is not possible, you can still use references and resolve them yourself all you like and, yes, it is reasonable, sometimes even unavoidable
[01:14:09] <bufferloss> Boomtime, ah ok, what are references, and by that, I mean, where would I find the docs on that? :)
[01:14:17] <Boomtime> my point before was about challenging the assumption that storing the same data twice is a bad thing
[01:14:48] <Boomtime> bufferloss: a reference is a generic term, a placeholder for whatever you choose to use to refer to a link between data
[01:15:11] <Boomtime> there are no joins in mongodb
[01:15:28] <bufferloss> Boomtime, ok, so within the same query I can't cobble together a document that comes from two separate collections right?
[01:15:36] <cheeser> no
[01:15:36] <Boomtime> correct
[01:15:39] <bufferloss> ok thanks
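A minimal sketch of the reference approach Boomtime describes, using a hypothetical "states" collection resolved by a second query in application code (collection and field names are invented; there is no server-side join):

    // one document per state; fixing the typo touches only this one place
    db.states.insert({ _id: "CA", name: "Californea" })
    db.agents.insert({ name: "Alice", state_id: "CA" })

    // correct the value once, in the referenced collection
    db.states.update({ _id: "CA" }, { $set: { name: "California" } })

    // resolve the reference ("join") manually with a second query
    var agent = db.agents.findOne({ name: "Alice" })
    var state = db.states.findOne({ _id: agent.state_id })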
[02:11:15] <jayjo> Is there a functional difference between running a designated database server on AWS or Azure, or hosting a VM that solely runs a server daemon?
[02:17:18] <jayjo> If my goal is to run a designated mongo server, and AWS and Azure don't give that option (but they do give the option for SQL, Postgres, MySQL), am I losing out on something by spinning up another instance whose only purpose is to host the database?
[02:17:38] <jayjo> Are the other options optimized in some way
[02:18:26] <Boomtime> https://aws.amazon.com/marketplace/cp/MongoDB
[02:19:19] <Boomtime> jayjo: you can either deploy your own system, as with any database product, or get somebody to host it for you, as with any database product - i'm not sure what difference you are looking for
[02:19:53] <Boomtime> one would hope that any hosting provider which hosts an instance for you does so with some knowledge of how to optimize it
[02:26:40] <jayjo> Boomtime: great resource, thanks. It looks like they're running Amazon Linux. I guess my question is less mongo-specific, but I am using mongo. Is there an advantage to using the services in the link you sent me or deploying your own system? Other than standard sys admin tasks, is the machine optimized for databases?
[02:27:14] <jayjo> Other than eliminating the admin tasks from what you're required to do, I mean
[02:28:03] <Boomtime> i have no idea for those ones, those are the images AWS hosts directly - but the same is true for MSSQL/Oracle/etc.. how much are the images on AWS optimized for these? and how do you know?
[02:29:05] <Boomtime> to the question: "can you do better by optimizing a machine yourself" almost certainly the answer is yes in every case
[02:30:02] <Boomtime> the machines you purchase from providers aren't optimal for your case - they are the middle of the road that is most applicable for most use cases
[02:30:41] <Boomtime> you don't purchase hosting for optimization, you purchase it for convenience
[02:31:18] <jayjo> Right, understood.
[02:31:47] <jayjo> But it seems that the machine images that have the database hosted are just VMs that save you from installing and setting up your database yourself.
[02:32:19] <jayjo> base level they are all just EC2 instances, solely running the database
[02:32:32] <Boomtime> sure, what else would you expect?
[02:33:30] <jayjo> I don't know, I guess. I'm learning.
[02:33:57] <Boomtime> when you purchase hosting you do so on specification (cpu, ram, disk) - regardless of how that specification is met you should get what you paid for
[02:34:26] <harttho> word
[02:36:52] <jayjo> OK thanks.
[04:36:37] <here4thegear> whenever I type mongod at command line I get an error addr already in use and it closes down. Is there a way to allow multiple connections?
[04:38:09] <cheeser> mongod starts the server. mongo is the client.
[04:38:34] <here4thegear> cheeser ah thanks..
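For reference, the "addr already in use" error usually means a mongod is already listening on the default port; the client is started with mongo instead. A rough sketch (the second server's port and data path below are made up):

    # talk to the already-running server
    mongo

    # or run a second server on its own port and data directory
    mongod --port 27018 --dbpath /data/db2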
[09:48:40] <davy> hi guys, i have the following data : {"data" : "test", "details" : [ {'a' : 'a', 'b':'b' }, {'c' : 'c', 'd':'d' }]}, is there a way to add a new key in every hash i have into the details array ?
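One way to do what davy asks in a 2.x-era shell is to walk the collection and rewrite each document, since there is no single update operator here for "every element of an array". Collection and key names in this sketch are made up:

    db.mycoll.find({ details: { $exists: true } }).forEach(function(doc) {
        doc.details.forEach(function(item) {
            item.newKey = "value";   // the key being added to every hash
        });
        db.mycoll.save(doc);
    });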
[10:09:33] <ammbot> Anyone know an erlang client that supports replica sets ?
[10:17:29] <Guest41220> hi, morning
[10:18:24] <gregf_> i need to install the latest mongo client on ubuntu 12.04
[10:19:25] <gregf_> i've followed some steps from here: http://www.mkyong.com/mongodb/how-to-install-mongodb-on-ubuntu/
[10:19:47] <gregf_> i previously had version : 2.0.x
[10:20:27] <gregf_> after following the steps mentioned in the link(uninstall) it got upgraded to 2.4.12
[10:20:51] <gregf_> i need the latest one 2.6.x. please if someone could help
[10:20:52] <trepidaciousMBR> What happens if I use a Java driver version higher than the version of the database I'm connecting to?
[10:24:52] <trepidaciousMBR> Ah, in fact it looks like there is no actual correspondence between the Java driver version and Mongo version - driver is at 2.12 and mongo at 2.6?
[10:28:46] <gregf_> ah nevermind. found it :)
[12:53:21] <Raffaele> hello. I'm making a change to cross-compile for MIPS. I hit a compilation failure because it tries to use posix_fallocate() which isn't supported by the uclibc for my target platform
[12:54:36] <Raffaele> I have a couple of ways of fixing this but I was wondering if there's a need to discuss these options, or if I just do the change and it'll get reviewed once I submit the patch
[12:54:45] <Raffaele> hope it makes sense
[13:09:12] <StephenLynx> from what I read
[13:09:17] <StephenLynx> discussing it is encouraged on the forums
[13:09:40] <StephenLynx> but I can't tell you for sure. I just know they have clear guidelines for contributions
[13:18:51] <olso> hey guys, im using mongoose. i need a way to map or iterate over a large collection (150k documents)
[13:19:41] <cheeser> i'm not a mongoose guy, but what's not working?
[13:20:37] <olso> i am not sure what to look for cheeser
[13:20:54] <cheeser> when you ran your code, what error did you get?
[13:22:42] <olso> its not an error, im just not sure what to do because the ID is some hash
[13:22:45] <olso> cheeser
[13:23:01] <olso> im used to autoincrement
[13:23:18] <cheeser> the _id field is probably an ObjectId
[13:23:21] <StephenLynx> yeah
[13:23:25] <StephenLynx> you can use that or set your own id
[13:23:34] <StephenLynx> when inserting the objects
[13:24:14] <StephenLynx> I prefer to use something of my own because I know mongo won't act upon it on its own
[13:24:34] <cheeser> i just let the db/driver assign the ID because IDs are meaningless.
[13:24:39] <olso> are hashes as ID more performant or why is the default ID type a hash?
[13:24:50] <cheeser> the default is not a hash
[13:26:15] <olso> i hope that .reIndex function can do something that can change ID so i dont have to recreate the collection
[13:26:28] <cheeser> _id can not be changed once set
[13:28:19] <olso> actually http://mongoosejs.com/docs/api.html#model_Model-increment is maybe what im looking for?
[13:28:31] <olso> no wait
[13:28:33] <olso> nope
[13:31:58] <olso> cheeser, maybe http://mongoosejs.com/docs/api.html#query_Query-stream would work for me?
[13:32:31] <cheeser> work in what way? you haven't said what's broken in the first place.
[13:32:54] <StephenLynx> you can always iterate through a pointer list obtained with find
[13:34:57] <olso> i DONT want to insert the whole collection into the code with collection.find() and then iterate over it
[13:35:10] <cheeser> find() returns a cursor...
[13:35:27] <cheeser> i think you're pre-worrying about a problem that doesn't exist
[13:37:35] <olso> robomongo confused me i think, it returned all data from collection with .find()
[13:37:46] <olso> I think that this http://stackoverflow.com/a/24222084 is what im looking for cheeser
[13:40:43] <cheeser> well, that's different from how most of the other drivers work. java and c#, e.g., just iterate the cursor. they don't store the entire result set in memory.
[13:41:00] <cheeser> i'm not *entirely* sure that's what http://mongodb.github.io/node-mongodb-native/api-generated/cursor.html#each is saying either.
[13:41:37] <cheeser> i think it's just saying that it holds the entire batch in memory which would be consistent with the other drivers.
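For what olso is after, the mongoose stream API linked above iterates the cursor one document at a time instead of materialising all 150k documents at once; a rough sketch (model name "Item" is hypothetical, and later mongoose versions superseded .stream() with .cursor()):

    var stream = Item.find({}).stream();

    stream.on('data', function(doc) {
        // handle one document at a time; the full result set is never held in memory
    });
    stream.on('error', function(err) { console.error(err); });
    stream.on('close', function() { console.log('done'); });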
[13:59:23] <Lujeni> Hello - can i add a downtime or ack a specific host or alert through MMS or the public API ? thx
[14:09:56] <StephenLynx> cheeser then use a find
[14:10:31] <StephenLynx> with a query
[14:14:35] <cheeser> StephenLynx: hrm?
[14:21:27] <MadWasp> hello guys, i’m using mongodb with spring-data-mongodb repositories. my problem is some inserts in my application, they take 33-40ms per insert. is that a normal behavior?
[14:25:29] <MadWasp> i’m doing them in a loop, but even when i save them all at once the time it takes is just the sum of all the single times
[14:58:51] <StephenLynx> cheeser find({myid:456})
[15:03:22] <cheeser> StephenLynx: why are you telling me that?
[15:03:36] <StephenLynx> oh
[15:03:37] <StephenLynx> yeah
[15:03:39] <StephenLynx> wrong person :v
[15:03:44] <cheeser> :)
[15:04:06] <StephenLynx> it was to the dude that thought he had to iterate through his collection in the code
[15:05:11] <MadWasp> can somebody help me with my problem? :(
[15:06:20] <StephenLynx> my suggestion is to stop using spring
[15:06:33] <StephenLynx> and go vanilla with mongo.
[15:06:40] <StephenLynx> try that and benchmark it.
[15:12:48] <MadWasp> i took some further looks and setting writeConcern to UNACKNOWLEDGED or ERRORS_IGNORED fixes the problem. is that a good solution?
[15:14:01] <StephenLynx> if it fits, you sits :3
[15:14:13] <StephenLynx> like, that is just meant to provide safety for you
[15:14:21] <StephenLynx> if you don't need that, then it's ok to turn it off.
[15:14:59] <StephenLynx> but seriously, I would give vanilla a shot.
[15:15:16] <MadWasp> ok, i’m gonna try that
[15:15:30] <MadWasp> we’re using a replication cluster over 3 nodes, can that be an issue?
[15:22:26] <StephenLynx> don't know, never used replication or clusters
[15:22:32] <MadWasp> ok
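A shell-level illustration of the trade-off MadWasp is weighing (collection name is invented): an unacknowledged write returns immediately but swallows errors, while batching documents into a single insert call cuts round trips without giving up acknowledgement.

    // default (w:1): each insert waits for the server to acknowledge it
    db.events.insert({ n: 1 })

    // unacknowledged: returns at once, but duplicate-key and other errors are lost
    db.events.insert({ n: 2 }, { writeConcern: { w: 0 } })

    // batching many documents into one insert amortises the round trips
    db.events.insert([{ n: 3 }, { n: 4 }, { n: 5 }])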
[15:30:57] <kakashiA1> hey guys, does mongoose have any kind of built-in promises?
[15:44:10] <StephenLynx> no idea. tried asking in their channel?
[15:46:43] <royo1> Hi, I have a question regarding indexes and aggregation in mongodb.
[15:46:43] <royo1> Say I have a collection which stores documents that contain a polygon representing an area, as well as another embedded document with name, financial transactions and other information (with raw size between 256kb and 1mb which need indexes). Say net profit. And may want to search for net profit for polygons that my coordinate falls within. Will I be able to create an index or combination of indexes that cover the queries if I am using the db.aggregate function?
[15:48:21] <StephenLynx> use a field of a sub-object as part of the index?
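A sketch of the shape royo1 describes, with invented names: a 2dsphere index supports the point-in-polygon lookup, though it won't be a covered query in the strict sense, because the large embedded document still has to be fetched from the collection.

    db.areas.ensureIndex({ shape: "2dsphere" })

    // areas whose polygon contains a given point, projecting only the nested figure
    db.areas.find(
        { shape: { $geoIntersects: { $geometry: { type: "Point", coordinates: [ -73.97, 40.77 ] } } } },
        { "details.netProfit": 1 }
    )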
[16:25:04] <Guest37025> We meet again royo
[16:25:12] <royo1> yes. hello
[16:25:27] <royo1> hello Guest37025 .
[16:25:34] <royo1> nice number you have at the end of your name
[16:30:49] <cheeser> he's not just a number... he's a name and number.
[16:34:14] <Guest37025> i am not an animal
[16:34:38] <LB|2> smh
[16:34:58] <Guest37025> for anybody who has some advice in terms of mongo
[16:35:13] <Guest37025> I am in the process of creating a geospatial collection that will need to scale
[16:35:34] <Guest37025> I want to have embedded documents ranging from 256 KB to 5 MB in each of the geospatial documents
[16:35:58] <Guest37025> My fear is that over time there will be so many documents that when I do a geospatial search for all points near a given point, the read query will become extremely slow
[16:36:11] <Guest37025> I want to know how well mongo scales with geospatial indexes with embedded data
[16:36:20] <Guest37025> and how the performance works compared to doing 2 round trips in mongo
[16:36:26] <Guest37025> first trip getting ids
[16:36:39] <Guest37025> second trip getting data from a second collection based on those ids
[16:36:40] <Guest37025> also
[16:36:49] <Guest37025> does postgis scale better and read a lot faster than mongo
[16:37:12] <Guest37025> the problem with postgis is i would do a query to postGis for one round trip to get ids
[16:37:25] <Guest37025> then deal with the latency of going to mongo for a second roundtrip to get info based on these ids
[16:37:36] <Guest37025> any suggestions on how to set up this data and which is better is appreciated
[16:38:01] <LB|2> can you provide sample data of how the proposed collection will look?
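A hypothetical sample of the two layouts being weighed above (all names are invented): everything embedded in the geospatial document, versus a slim geo collection plus a second fetch by _id.

    // 1. embedded: one geo query, but every document carries 256 KB - 5 MB of payload
    db.places.insert({ loc: { type: "Point", coordinates: [ -73.97, 40.77 ] },
                       payload: { /* 256 KB - 5 MB of embedded data */ } })

    // 2. referenced: slim geo collection, then a second fetch in "details" by the same _id
    db.places.insert({ _id: 1, loc: { type: "Point", coordinates: [ -73.97, 40.77 ] } })
    db.details.insert({ _id: 1, payload: { /* large data */ } })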
[16:54:06] <syadnom> hi all. mongo newbie/moron here. need some quick help. I have a collection with 9 objects in it. I want to output just 2 pieces of data from each object. db.camera.find().pretty() gives me everything, I just want the 'host' and the 'UUID' fields.
[16:54:36] <LB|2> the second parameter in the find will allow you to project the fields you want
[16:55:00] <LB|2> db.camera.find({},{host:1,UUID:1}).pretty()
[16:55:03] <StephenLynx> that
[16:56:22] <syadnom> ok, for the sake of learning (your answer gave me good output), how does that construct work?
[16:56:45] <syadnom> find( {} < what curly braces here?
[16:57:05] <LB|2> the first curly braces is where your query goes
[16:57:39] <syadnom> ok ,so within the first I would add whatever conditions for the find, then the second is the output?
[16:57:54] <LB|2> correct
[16:59:01] <syadnom> so, db.camera.find({ "host": "host ip"},{host:1,uuid:1}) < this filters the output
[16:59:07] <LB|2> when you perform a normal db.camera.find() it is equivalent to Select * from camera
[16:59:25] <LB|2> correct
[16:59:37] <syadnom> ok, good. I'm a sql admin, mongo is a mindf*ck lol
[16:59:47] <LB|2> ha
[17:00:21] <syadnom> so, can I output this as a line of csv easily?
[17:01:39] <syadnom> looks like mongoexport might do that for me
[17:02:13] <LB|2> yup
[17:04:17] <syadnom> bingo. mongoexport --port 7441 --db av --collection camera --csv --fieldFile fields
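For reference, --fieldFile points at a plain text file with one field name per line; a sketch of what the "fields" file above presumably contains:

    host
    uuid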
[17:05:54] <jiffe> so from what I understand mongodb doesn't cleanup disk usage after deletions / migrations
[17:07:06] <LB|2> no it doesn't, last time i heard
[17:07:31] <LB|2> i remember hearing that at the mongodb conf
[17:07:42] <LB|2> the 'Swiss Cheese' effect
[17:08:17] <LB|2> however, there was a way to correct that but unfortunately the solution escapes me at the moment
[17:09:30] <syadnom> alright, next bit. I have this query: db.recording.find({},{cameraUuid:1,endTime:1}) the end time is in 'numberlong', how can I get this to output in a 'date' format?
[17:10:04] <LB|2> are you familiar with javascript?
[17:10:05] <jiffe> looks like repairDatabase will do it, but I would need more than 50% available space
[17:10:23] <syadnom> not really.
[17:11:04] <jiffe> or I could delete all data and resync from another replica
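The two recovery options jiffe mentions, in shell form (database and collection names are placeholders): repairDatabase rewrites the data files and needs enough free disk to do so, while compact defragments a single collection in place.

    db.repairDatabase()
    db.runCommand({ compact: "mycollection" })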
[17:15:04] <LB|2> @syadnom is the enddate an epoch time stamp?
[17:15:44] <syadnom> LB|2, I think so. the linux date command is able to convert it to something that seems legit. `date --date=@endTime` works.
[17:16:25] <LB|2> here is a possible solution
[17:16:35] <LB|2> db.recording.find({},{cameraUuid:1,endTime:1}).forEach(function(doc){
[17:16:35] <LB|2> var endTime = new Date(doc.endTime);
[17:16:35] <LB|2> db.recording.save(doc);
[17:16:35] <LB|2> })
[17:17:21] <LB|2> db.recording.find({},{cameraUuid:1,endTime:1}).forEach(function(doc){
[17:17:21] <LB|2> var endTime = new Date(doc.endTime);
[17:17:21] <LB|2> doc.endTime = endTime;
[17:17:21] <LB|2> db.recording.save(doc);
[17:17:21] <LB|2> })
[17:17:36] <LB|2> the second one is the correction
[17:18:54] <syadnom> I may just do this in shell. I'm already way outside my knowledge in mongo (obviously). I'm just doing a sanity check on these numbers to see if date is giving me proper output
[17:19:50] <syadnom> ls
[17:19:54] <syadnom> ignore that
[17:21:31] <syadnom> dang, numbers don't jive.
[17:24:28] <LB|2> sorry to hear
[17:25:28] <syadnom> LB|2, thanks for the help.
[17:25:58] <LB|2> no problem
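A guess at why the numbers didn't jive above: `date --date=@...` takes epoch seconds, while JavaScript's Date constructor takes milliseconds, so if endTime really is epoch seconds the conversion needs a factor of 1000:

    var endTime = new Date(doc.endTime * 1000);   // seconds -> milliseconds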
[18:02:43] <Guest37025> LB have you ever built a payments system with someone and was it the greatest experience you have ever had?
[18:03:25] <LB|2> no it wasn't. The person I worked with couldn't remember a damn thing
[18:03:38] <Guest37025> perhaps in long term memory
[18:03:44] <LB|2> He would forget everything he wrote the day before
[18:03:44] <Guest37025> maybe his/her brain worked like ram
[18:03:57] <Guest37025> more likely he would forget everything while he was writing it
[18:10:05] <Tyler_> Hello $mongodb!
[18:10:29] <Tyler_> I'm trying to do a find and limit it to the most recent message of every conversation
[18:11:07] <Tyler_> So I'm looking for sender: any person, recipient: logged in person. And then limit it to 1
[18:11:31] <Tyler_> but I think if I limit it, I'm only going to get 1 record... when I want to limit it to one record for every group of messages
[18:11:51] <StephenLynx> use the command aggregate and add a group block.
[18:12:00] <LB|2> +1
[18:12:01] <Tyler_> thanks StephenLynx!
[18:12:04] <StephenLynx> no problemo
[18:12:13] <Tyler_> aggregate = mongodb command?
[18:12:17] <StephenLynx> yes
[18:12:21] <Tyler_> ok, I'll look it up
[18:12:28] <StephenLynx> it's like a find, but allows for much more.
[18:12:30] <Tyler_> can you type a quick example so I know what I'm looking at please?
[18:12:36] <Tyler_> oh nice
[18:13:05] <Tyler_> db.collection.aggregate(sender:personA, receiver:logged in user)
[18:13:18] <StephenLynx> no, you have to specify what is the purpose of each block
[18:13:28] <LB|2> here is a ref
[18:13:29] <StephenLynx> it's not like a find where it knows the first is a match and the second a projection
[18:13:29] <LB|2> http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/#db.collection.aggregate
[18:13:50] <StephenLynx> keep in mind its usage varies depending on the driver.
[18:13:55] <Tyler_> I love you guys
[18:13:57] <Tyler_> Thank you!
[18:14:02] <StephenLynx> so make sure you look into your language's driver specifications
[18:14:20] <Tyler_> I'm using javascript/node.js/express
[18:14:23] <Tyler_> and angular front end
[18:14:28] <StephenLynx> if I remember correctly
[18:14:39] <StephenLynx> in node you must give an array of objects
[18:14:46] <Tyler_> yeah toArray is necessary
[18:14:49] <StephenLynx> no
[18:14:52] <StephenLynx> it isn't
[18:14:55] <Tyler_> lol oh
[18:14:59] <StephenLynx> let me check my code
[18:15:18] <StephenLynx> aggregate already returns an array
[18:15:24] <StephenLynx> unlike find that returns pointers
[18:15:45] <StephenLynx> https://gitlab.com/mrseth/bck_leda/blob/master/operations.js line 204
[18:16:10] <StephenLynx> in your case you will set to _id the value that you want to use to group the results.
[18:16:24] <StephenLynx> you don't explicitly group stuff.
[18:16:26] <StephenLynx> you use _id
[18:19:12] <Tyler_> hmmmm
[18:19:59] <Tyler_> do I need $match in there StephenLynx?
[18:20:28] <StephenLynx> unless you want every single object from your collection
[18:20:29] <StephenLynx> yes.
[18:20:59] <StephenLynx> in an aggregate everything is optional.
[18:21:09] <Tyler_> hmmmm
[18:21:15] <Tyler_> weird. When I do $match it breaks
[18:21:23] <StephenLynx> like everything in a "select * from table"
[18:21:24] <Tyler_> but $group returns every single object in the collection
[18:21:44] <StephenLynx> remember to push in your group
[18:22:16] <Tyler_> Oh I see. There's $match and $group in there as parameters
[18:22:36] <Tyler_> so I have $group for receiver: logged in user
[18:22:47] <Tyler_> and you were saying I need a $match in there too? Anything else?
[18:23:01] <StephenLynx> a project is good
[18:23:02] <StephenLynx> too
[18:23:13] <StephenLynx> because otherwise it will just give you everything from the objects
[18:23:26] <StephenLynx> similar to * instead of specifying the fields you wish to read.
[18:24:17] <Tyler_> hmmm
[18:24:20] <StephenLynx> now, you want one record for every group.
[18:24:24] <Tyler_> when I add match in there, the code breaks
[18:24:27] <Tyler_> yes sir
[18:24:28] <StephenLynx> show me your model.
[18:24:33] <Tyler_> the schema?
[18:24:36] <StephenLynx> yes
[18:24:43] <Tyler_> we don't really have one but I can show you the json
[18:24:45] <Tyler_> one sec
[18:25:06] <StephenLynx> if you don't document your model, it will be really messed up.
[18:25:13] <StephenLynx> because mongo doesn't enforce schemas.
[18:25:24] <StephenLynx> it just inserts stuff
[18:25:28] <StephenLynx> and reads them
[18:25:46] <Tyler_> { _id: 54b80335984265fe0ba495f6,
[18:25:46] <Tyler_> receiver: 5485ef86028950a880f7f878,
[18:25:46] <Tyler_> sender: 5485ef86028950a880f7f878,
[18:25:48] <Tyler_> message:
[18:25:50] <Tyler_> { read: false,
[18:25:52] <Tyler_> sender: [Object],
[18:25:54] <Tyler_> message: 'message2',
[18:25:56] <Tyler_> subject: 'message' },
[18:25:58] <Tyler_> date: Thu Jan 15 2015 12:13:09 GMT-0600 (CST) },
[18:26:03] <Tyler_> the other guy coded that, so it isn't super clean
[18:26:16] <StephenLynx> a pastebin would be better, but lets take a look
[18:26:22] <Tyler_> lol thanks
[18:26:26] <StephenLynx> i can't see an array there.
[18:26:27] <cheeser> yes. use a pastebin
[18:26:27] <Tyler_> sorry
[18:26:36] <Tyler_> #n00b
[18:26:44] <StephenLynx> what is your logic to group them?
[18:26:53] <Tyler_> recipient
[18:26:56] <Tyler_> sender
[18:27:03] <Tyler_> right?
[18:27:12] <StephenLynx> don't know, its your system :v
[18:27:16] <Tyler_> hahaha
[18:27:20] <StephenLynx> you wish just one message per recipient?
[18:27:22] <StephenLynx> is that it?
[18:27:23] <Tyler_> Okay so the recipient = user
[18:27:30] <Tyler_> and I want one message per sender
[18:27:34] <StephenLynx> ok
[18:27:41] <StephenLynx> and from all these messages
[18:27:44] <StephenLynx> how do you select this one?
[18:28:32] <Tyler_> find({recipient: logged in user}) and $group by sender
[18:28:52] <StephenLynx> wait
[18:29:00] <StephenLynx> so you wish all messages the logged in user received
[18:29:04] <StephenLynx> and group them by sender?
[18:29:12] <Tyler_> I'd like all the conversations
[18:30:13] <Tyler_> so when I do find({recipient: logged in user and sender: person A OR recipient: person A & sender: logged in person}), that gives me 1 conversation
[18:30:22] <Tyler_> and I'm trying to get a list of conversations
[18:30:56] <Tyler_> and I'm learning how to use the aggregate function to do that
[18:33:14] <StephenLynx> so you want the messages the logged in user received
[18:33:23] <Tyler_> yes sir
[18:33:35] <StephenLynx> and what is the role of the sender?
[18:33:50] <Tyler_> limit 1 message per sender
[18:33:54] <Tyler_> to most recent one
[18:33:56] <StephenLynx> ok
[18:34:11] <StephenLynx> that will be a little complex, but can be done
[18:34:16] <Tyler_> nice
[18:34:24] <Tyler_> This isn't a common query for you guys?
[18:35:00] <StephenLynx> are the messages already ordered from oldest to newest?
[18:35:16] <Tyler_> I did a sort by date
[18:35:26] <StephenLynx> but are they saved like that?
[18:35:31] <Tyler_> nope
[18:35:40] <StephenLynx> ok
[18:35:41] <Tyler_> there's a date object and the sort is -1
[18:37:49] <Tyler_> so would it be $project: date?
[19:05:39] <StephenLynx> back
[19:05:44] <StephenLynx> first you do a match with the receiver
[19:05:55] <StephenLynx> then you do a project with just the information you want from it
[19:06:06] <StephenLynx> then you do a sort on the time
[19:07:06] <StephenLynx> then you group and just set the data based on the sender. because it has been already sorted, it will keep the last message
[19:07:16] <StephenLynx> from that sender
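Putting StephenLynx's recipe together as a shell pipeline (the collection name is a guess, loggedInUserId is a placeholder, and the field names come from the document pasted earlier): match on the receiver, trim the fields, sort newest first, then group by sender so $first keeps only the most recent message per sender.

    db.messages.aggregate([
        { $match: { receiver: loggedInUserId } },
        { $project: { sender: 1, message: 1, date: 1 } },
        { $sort: { date: -1 } },
        { $group: {
            _id: "$sender",                     // one result per sender
            message: { $first: "$message" },    // newest message, thanks to the sort
            date: { $first: "$date" }
        } }
    ])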
[19:32:31] <DrLester> Im adding lots of records in parallel into mongodb and sometimes those records will have the same name, and if that's the case i don't want to add them. In other words id like the name to be unique. is there a way to do that in mongodb ?
[19:32:50] <cheeser> a unique index?
[19:33:12] <DrLester> if i add that and i try to add a record with an existing name it will give me an error ?
[19:33:37] <cheeser> that's how unique indexes work, yes. just like any database.
[19:37:24] <DrLester> and can i have multiple indexes ?
[19:38:31] <cheeser> of course
[19:41:03] <DrLester> i tried that but i still have name duplicates: collections.nodes.ensureIndex({ name: 1},{unique: true},function(){
[19:41:16] <StephenLynx> {name:1,otherfield:1}
[19:43:25] <cheeser> is there existing data in the collection?
[19:44:58] <DrLester> not when im adding the index
[19:55:27] <DrLester> ok, found it, i needed to do collection.nodes.dropIndex({name:1})
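For the record, what DrLester ran into is a common gotcha: ensureIndex won't turn an existing plain index on the same key into a unique one, so the old index has to be dropped first. A minimal shell sketch (collection name taken from the snippet above):

    db.nodes.dropIndex({ name: 1 })
    db.nodes.ensureIndex({ name: 1 }, { unique: true })

    db.nodes.insert({ name: "a" })
    db.nodes.insert({ name: "a" })   // now fails with a duplicate key error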