[01:02:48] <frodo_baggins> If not, I guess I'll just do a find. :D
[01:03:35] <frodo_baggins> But, it seemed that previously, I was able to have the result callback parameter be set to the document that was being updated, the complete document, so, if there's an option I can set to have that functionality, that would be awesome.
[01:03:51] <frodo_baggins> In the meantime, it seems that result is returning just a number.
[01:04:12] <frodo_baggins> I swear it returned a complete document before.
[01:07:31] <frodo_baggins> Ah, it appears I should use findAndModify.
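A sketch of what frodo_baggins is after, assuming the node.js driver: `findOneAndUpdate` (the modern successor to `findAndModify`) hands the callback/promise the full document rather than a count. The semantics are simulated against a plain array here so no server is needed; the field names are hypothetical.

```javascript
// Simulates findOneAndUpdate's returnDocument option: 'before' yields the
// pre-update document, 'after' yields the post-update one. The increment
// stands in for an { $inc: { hitCount: 1 } } update.
function simulateFindOneAndUpdate(docs, filter, incField, opts) {
  const doc = docs.find(d => d._id === filter._id);
  if (!doc) return null;
  const before = { ...doc };
  doc[incField] = (doc[incField] || 0) + 1;
  return opts.returnDocument === 'after' ? { ...doc } : before;
}

const docs = [{ _id: 1, hitCount: 2 }];
const updated = simulateFindOneAndUpdate(docs, { _id: 1 }, 'hitCount',
                                         { returnDocument: 'after' });
// updated -> { _id: 1, hitCount: 3 }, the complete document, not a count
```

With the real driver this would be roughly `await coll.findOneAndUpdate({ _id: 1 }, { $inc: { hitCount: 1 } }, { returnDocument: 'after' })`; older 3.x drivers spell the option `returnOriginal: false`.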
[01:10:47] <moleWork> mongodb driver is so so so weird
[01:12:27] <moleWork> it gives me an error on an insert {"err":"connection to [servername:27017] timed out"}
[01:12:27] <moleWork> so I've added retry logic, thinking the write concern meant that it didn't write to disk... but it did write, because the retry logic is now getting duplicate key errors. so i'm not sure how to tell whether something was written to disk or not... it most certainly isn't if(!err) on the insert callback
[01:23:19] <moleWork> Disconnects can happen from time to time even without an aggressive firewall setup. Before you get into production you want to be sure to handle them correctly.
[01:23:32] <moleWork> not sure how to handle them correctly is what my problem is
[01:24:05] <moleWork> i'm guessing at logic as to what happened when i get a timeout error
[01:27:00] <frodo_baggins> Can I not use $inc and $setOnInsert in the same query?
[01:27:47] <frodo_baggins> I have $inc incrementing a field called hitCount, and I have $setOnInsert initializing hitCount to 0 if it's not inserted.
[01:28:05] <frodo_baggins> Or I should say, if it doesn't exist.
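frodo_baggins's combination fails because the server rejects an update in which two operators target the same path, e.g. `{ $inc: { hitCount: 1 }, $setOnInsert: { hitCount: 0 } }`. The `$setOnInsert` initializer is also unnecessary: on an upsert, `$inc` creates a missing field as if it started at 0. A hypothetical helper to spot such conflicts:

```javascript
// Returns paths that appear under more than one update operator — the
// condition MongoDB rejects with a "conflict at <path>" error.
function conflictingPaths(update) {
  const seen = new Set(), dups = [];
  for (const op of Object.keys(update)) {
    for (const path of Object.keys(update[op])) {
      if (seen.has(path)) dups.push(path);
      seen.add(path);
    }
  }
  return dups;
}

const bad  = { $inc: { hitCount: 1 }, $setOnInsert: { hitCount: 0 } };  // rejected
const good = { $inc: { hitCount: 1 }, $setOnInsert: { createdAt: 1 } }; // fine
```

So the fix is simply to drop `hitCount` from `$setOnInsert` and let `$inc` handle both the insert and the update case.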
[02:01:47] <moleWork> this can't be the right logic for all situations but fixed my problem... basically result is not set, i get an err.err: " [servername:27017] timed out" but doc._id is set and the document is indeed written to the database
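One common way to make moleWork's situation safe, sketched here with a stand-in for `collection.insertOne`: pre-assign the `_id`, retry on transient errors like timeouts, and treat a duplicate-key error (server code 11000) on a retry as "the first attempt actually landed". The flaky insert below is a simulation, not driver behavior.

```javascript
// Idempotent insert with retries. insertFn is a stand-in for a real
// driver call; a duplicate-key error means an earlier attempt succeeded.
async function insertWithRetry(insertFn, doc, retries = 3) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      await insertFn(doc);
      return 'inserted';
    } catch (err) {
      if (err.code === 11000) return 'already-written'; // earlier try landed
      if (attempt === retries) throw err;               // give up
      // otherwise transient (e.g. timeout): retry with the same _id
    }
  }
}

// Simulated server: the write lands but the reply times out, so the
// retry sees a duplicate key — exactly the pattern from the log above.
const store = new Set();
const flakyInsert = async (doc) => {
  if (!store.has(doc._id)) {
    store.add(doc._id);
    throw Object.assign(new Error('connection timed out'), { code: 'ETIMEDOUT' });
  }
  throw Object.assign(new Error('duplicate key'), { code: 11000 });
};
```

This matches the observation that `doc._id` being set plus a later duplicate-key error is the only reliable signal that the timed-out write went through.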
[03:15:41] <stratos13_> hey guys having a little problem here... I'm querying MongoDB and the data I get back is like this: { year: 2005, VIN: 18048124, make: 'Toyota', model: 'Corolla', price: 123, ... etc
[03:16:09] <stratos13_> how do I get it to omit the ' (apostrophe) for the make and model variables
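For what it's worth, the apostrophes around `'Toyota'` are not stored in the document at all — they are just how the shell / `console.log` renders string values (numbers like `year` print bare). Serializing the document makes the actual types visible:

```javascript
// The quotes are display-only: doc.make is a plain string with no
// apostrophes in it. JSON.stringify shows the stored values directly.
const doc = { year: 2005, make: 'Toyota', model: 'Corolla' };
const json = JSON.stringify(doc);
// json -> {"year":2005,"make":"Toyota","model":"Corolla"}
```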
[07:22:14] <sobersabre> I'm using python client to connect to a mongodb.
[07:22:57] <sobersabre> when I'm dropping a collection, do all its indexes get dropped as well, or do I have to run collection.drop_indexes() before dropping the collection ?
[09:32:04] <ranman> sobersabre the indexes are dropped as well
[14:01:03] <iphands> hey, I am starting a project where I hope to use mongo -> sleepy/drowsy -> AngularJS ... but I have some questions about where document level ACLs should be implemented
[14:01:14] <iphands> can anyone here offer advice regarding this?
[14:01:54] <iphands> I suspect that I want to have mongo -> sleepy/drowsy -> some light layer to enforce document query restriction -> angular
[14:02:48] <iphands> but I wonder if sleepy or drowsy have already solved that by letting you "plug in" your own ACL "business logic"
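One common shape for the "light layer" iphands describes, regardless of whether sleepy/drowsy support ACL plugins: AND the caller's query with a server-side ownership filter, so a client can never widen its own scope. The field names (`ownerId`, `sharedWith`) are hypothetical.

```javascript
// Wraps an untrusted client query in an ownership restriction. Because
// the two clauses are combined with $and server-side, nothing the client
// puts in userQuery can escape the ACL filter.
function restrictQuery(userQuery, userId) {
  return {
    $and: [
      userQuery,
      { $or: [{ ownerId: userId }, { sharedWith: userId }] },
    ],
  };
}

const q = restrictQuery({ status: 'published' }, 'user42');
// q only ever matches documents user42 owns or that are shared with them
```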
[14:02:55] <Nodex> you will need to do this in your app
[14:07:01] <iphands> Nodex: so you would put an application in between sleepy or drowsy
[14:08:38] <Nodex> I don't know what they are sorry
[15:00:53] <Froobly> hi, i'm using another nosql db that has no way of renaming indexes within documents. i'm trying to rename a slug index for nicer routing. in one location i have a flat list of large objects in another document. when "renaming", i copy to the new index, and i create an alias { aliasOf: 'new-slug' } at the old one so that routing forwards with the old url. now rather than copy/delete the big tree of data, i was thinking about creating aliases for these in reverse, so that the new slug points to a new tree which is really an alias to the old tree. does anyone foresee any problems with this approach?
[15:48:04] <hallas> I have an aggregate pipeline that works awesomely. But! I would like to know if I can somehow combine the query itself and the explain option without running the query twice.
[15:48:34] <hallas> I wanna have the resulting documents, but also know how many I could get in total (I have a $limit and $skip in my query)
[15:48:55] <hallas> So I can have pagination but also know how many pages there are in total. Can I do this without two separate queries?
[16:21:39] <q85> hallas: no, adding explain to the query returns the explain document (even though it executes the query to generate the explain).
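On newer servers than this discussion likely assumes (MongoDB 3.4+), there is a single-round-trip answer to hallas's question: a `$facet` stage can return one page of results and the total match count from the same aggregation. A sketch of the pipeline shape:

```javascript
// Builds an aggregation that returns { results: [...], totalCount: [{ n }] }
// in one pass: one facet pages through the matches, the other counts them.
function paginatedPipeline(match, skip, limit) {
  return [
    { $match: match },
    {
      $facet: {
        results:    [{ $skip: skip }, { $limit: limit }],
        totalCount: [{ $count: 'n' }],
      },
    },
  ];
}

const pipeline = paginatedPipeline({ active: true }, 20, 10); // page 3 of 10
```

The caller would run `coll.aggregate(pipeline)` and compute total pages from `totalCount[0].n`.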
[20:26:22] <daidoji> cheeser: :-( fair enough, is there a way to force a BSON type when I load? or do I just magic my way into the db ie) field + 0.0 or something?
[20:34:31] <garietyxxx> Hi, #mongodb, I'm building an app with multiple models with structure: Post, Photo, Event, etc.
[20:35:06] <garietyxxx> But I can't wrap my head around whether mongodb will work. Because what if I want to query all those models from the database at once? I'd have to do three separate queries.
[20:35:18] <garietyxxx> So would Postgres work better, and should I use JOINs?
[20:35:24] <cheeser> unless you embed some documents in others
[20:48:24] <daidoji> garietyxxx: in mongodb your best bet is to denormalize (ie embed documents within each other). if you have a data set with a bunch of joins you're better off with postgres probably
[20:48:41] <daidoji> unless you have a Lot with a capital L of joins in which case maybe a graph database would be best
[20:50:28] <daidoji> after that its just personal preference
[21:24:18] <daidoji> q85: it's pretty simple, it passes if I pass it only like 2 documents with a { query } filter to mapReduce, but fails on the full dataset
[21:27:22] <q85> it'll be a minute before I can get to it.
[21:43:15] <moleWork> are there any good examples of a node.js application that uses mongodb that is "production quality"? as soon as I start load testing my app everything gets ugly really quick
[21:44:46] <moleWork> i think most of my problems are with the node.js mongodb driver and not mongodb itself
[21:48:03] <moleWork> it seems to me like you need a lot of error handling logic for every database operation because you can get a lot of random errors when the driver is in certain states
[21:50:06] <moleWork> it doesn't seem to me like you can use the internal _state objects of the mongoclient or the db objects reliably
[23:33:47] <larkith> I added a new member to a rs today and now I'm getting "not master or secondary; cannot currently read from this replSet member" every hour from the app
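larkith's hourly error usually means reads are being routed to the new member while it is still in initial sync (RECOVERING/STARTUP2 state), when it is neither master nor secondary. One hedged mitigation is to add such a member hidden with priority 0 so drivers skip it until it has caught up; a sketch of the member entry that would go into the replica set config (host name is hypothetical):

```javascript
// Member config document for rs.reconfig(): hidden members are invisible
// to client reads, and hidden requires priority 0 (never elected primary).
const newMember = {
  _id: 3,
  host: 'newhost:27017',
  hidden: true,
  priority: 0,
};
```

Once initial sync finishes, the member can be reconfigured back to a normal visible secondary.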