[01:32:04] <pomke> Hey :) I'd like to do an update (upsert) with a match query {_id : myData.id} < Where myData.id may be null (javascript), which should indicate that a new record should be created (upsert rather than update)
[01:32:30] <pomke> What I'm getting is a new object where _id is literally set to null
[01:45:05] <joannac> if myData.id === null, insert, else upsert
[01:45:29] <joannac> the upsert is as you have it now, with a match query {_id : myData.id}
[01:45:54] <pomke> But I kind of expected it would replace the null with a new ObjectID
[01:49:10] <pomke> I'm probably doing something silly.. but I'm receiving an object that may or may not exist in the collection, and I want to: update it if it does or create it if it doesn't. I can check if it has an id, and conditionally call update or insert, I just thought update with upsert somehow wrapped those two into one
[01:50:27] <pomke> Sorry if this is a newbie question :(
[02:00:41] <joannac> upsert says "does a document with the match clause exist? if so, update, otherwise, insert"
[02:01:19] <joannac> the implicit statement there is "add what's in the match part to the document"
[02:01:41] <joannac> whereas what you want is more like if A, then B, otherwise C
[02:01:53] <joannac> where B and C have no relation to A
[02:03:05] <joannac> if you want to do an upsert with query part {_id: null}, that implies you want to either (a) update the document with {_id: null} or (b) insert a document with {_id: null}
[02:03:54] <pomke> So the best approach is to just conditionally call insert?
[02:07:34] <pomke> joannac: thanks for your help btw :)
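The conditional insert-or-update joannac describes (branch on the id yourself, never put a literal null into the match query) can be sketched in Python. The collection is simulated as a plain dict keyed by _id, and `new_id` stands in for the driver's ObjectId generation; all names here are illustrative:

```python
import uuid

def new_id():
    # Stand-in for the driver's ObjectId generation (illustrative only)
    return uuid.uuid4().hex

def save(collection, data):
    """Insert when the incoming object has no id, otherwise update in place."""
    if data.get("_id") is None:
        data = dict(data, _id=new_id())   # never store a literal null _id
        collection[data["_id"]] = data    # insert branch
    else:
        collection[data["_id"]] = data    # update branch, matched by _id
    return data["_id"]
```

With a real driver the two branches would be `insert_one` and `replace_one` (or `update_one` with `upsert=True` once the _id is known to be real).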
[02:40:18] <danecando> anybody here familiar with mongoose
[03:54:49] <pomke> If I have a collection of objects with an array of IDs, ie { tags : [ObjectID, ObjectID, ObjectID], ...} And I want to find all matches where the object tags array contains a specific ObjectID
[03:56:06] <pomke> I thought I'd need to do something like: find({ tags : { $elemMatch : SomeObjectID }})
[03:56:43] <pomke> But $elemMatch takes an object, I guess for doing $gt, $lt etc
[03:56:56] <pomke> but I want to match it explicitly to one value
[04:01:33] <joannac> what do you want the output to be?
[04:03:10] <pomke> All the objects that have SomeObjectId in their tags array
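For the record, no $elemMatch is needed for this: a plain equality query like {tags: someId} matches any document whose tags array contains that value. A minimal sketch of that matching rule, with documents as plain dicts and strings standing in for ObjectIds:

```python
def matches_tag(doc, tag):
    # Mongo's equality match against an array field succeeds
    # if any element of the array equals the queried value
    value = doc.get("tags")
    if isinstance(value, list):
        return tag in value
    return value == tag

docs = [
    {"name": "a", "tags": ["t1", "t2"]},
    {"name": "b", "tags": ["t3"]},
]
hits = [d["name"] for d in docs if matches_tag(d, "t2")]
```

The real query would simply be `find({"tags": some_object_id})`; $elemMatch is only needed when multiple conditions must hold on the *same* array element.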
[04:15:13] <phrozensilver> can someone take a look at my problem real quick? https://gist.github.com/rdallaire/241dd48262ec856620c1
[04:15:50] <phrozensilver> Im trying to post to my mongodb using mongoose and the name data posts fine but I can't get the nested objects to fill for some reason
[04:16:08] <phrozensilver> wondering if my syntax is possibly wrong?
[08:32:46] <joannac> I don't think there's an operator called "$day"
[08:32:56] <joannac> so I would guess that's the problem
[08:44:24] <rspijker> tadeboro: you probably need $dayOfYear/Month/Week instead...
[08:48:58] <tadeboro> joannac, rspijker: Yep, you're both correct. I took the code from some online tutorials and it was bad.
[08:49:18] <tadeboro> $dayOfWeek is what I was after. Thanks for help.
[09:06:18] <ic3st0rm> i have to work with currencies. all calculations are done with bigdecimal and in the minor unit of the currency. the output is a number without comma. is it safe to store this in mongodb as a number? 20$ would be stored as 2000
[09:08:56] <ic3st0rm> alright, but with mongoose i cant say insert it as integer, it automatically gets inserted as normal number (float)
[09:08:59] <remonvv2> ic3st0rm: If you're asking if it's safe to store any bigdecimal value in mongo's native integer format the answer is no. That said, there are no real world currency amounts that exceed 2^63.
[09:09:06] <remonvv2> So practically, it's probably safe.
[09:09:16] <ic3st0rm> not bigdecimal value but .toString() of bigdecimal
[09:09:33] <remonvv2> Why would you want to do that?
[09:10:59] <ic3st0rm> i need to store orders of my customers.
[09:11:55] <ic3st0rm> when a user buys 2 pieces for 10$ each, i calculate how much he must pay (amount * rate) and save the total
[09:14:47] <ic3st0rm> all calculations are done in cents, and i store the amount, rate and total in cents too.
[09:37:29] <ic3st0rm> yea but bigdecimal outputs in cents so i always have numbers without decimal places
[09:37:37] <remonvv> So you'd be converting to one of those three first (most likely a double) and then store it. Alternatively it's being serialized to a string in which case...well...not sure what $inc does with that. Error probably.
[09:38:01] <remonvv> ic3st0rm: Yes, what I'm saying is that that "output" has to be converted to an integer type by you explicitly.
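The explicit conversion remonvv means might look like this. Python's Decimal stands in for the BigDecimal type ic3st0rm is using, and `to_cents` is an illustrative helper name:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(amount):
    """Convert a decimal currency string to an integer number of cents."""
    return int((Decimal(amount) * 100).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

# 2 pieces at $10.00 each, everything stored and computed in integer cents
rate = to_cents("10.00")      # 1000
quantity = 2
total = quantity * rate       # 2000
```

Storing `rate` and `total` as integers keeps $inc and other numeric operators usable, which a serialized string would not.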
[10:14:23] <ic3st0rm> i plan a small site where users can buy/sell stuff
[10:24:31] <remonvv> rspijker: Where did you get that? Afaik that's only common for systems that manage funds or transactions and I'm not aware of any global laws for currency storage accuracy beyond that. Also, I think that's decimal places for the major currency unit (e.g. USD, not USD cents).
[10:26:56] <rspijker> remonvv: certainly not global. Fairly sure the local laws here governing accountancy practices make some requirements about accuracy. And it would indeed be on the major unit (I don’t think I claimed otherwise?) although that wouldn’t really matter conceptually.
[10:30:15] <remonvv> rspijker: I didn't mean you claimed otherwise, was just curious about the requirements. I think the law is more about how transactions themselves are handled (which may affect storage requirements). E.g. if you add one amount to another (or have to multiply for discounts, interests and so on) the operation has to be performed with specific accuracies and rounding rules. Interesting topic though. I (briefly, zzzz) worked for a comp
[10:31:24] <remonvv> rspijker: And it doesn't matter conceptually but it does affect how "valid" int64 is as a currency storage type.
[10:32:24] <rspijker> remonvv: I doubt there is any currency for which you will realistically get to the bounds of int64 ;) Although supposedly you could if you have some hyperinflated currency…
[10:32:27] <remonvv> rspijker: I'm not intimately familiar with any laws governing this by the way, it's an interesting topic though.
[10:32:46] <rspijker> and currency like bitcoin which is, theoretically, infinitely divisible...
[10:32:49] <remonvv> rspijker: So if ic3st0rm is making an e-shop for a prince in Zimbabwe he has to worry eh
[10:33:03] <rspijker> that was the hyperinflation country i was thinking of, yes :P
[10:33:46] <rspijker> it is indeed interesting. I work for a company active in the billing/rating/invoicing space and so I deal with it a little
[10:33:55] <rspijker> although the legal/compliance stuff is handled by someone else
[10:34:59] <remonvv> rspijker: I worked for a few more involved with it and I've researched into HFT systems and bank systems in general but that's about it. Currently we do little more than integrating with payment providers (which incidentally usually deliver amounts in cent accuracy).
[10:36:45] <rspijker> we deal with a couple of telcos. They usually charge something like 5 cents per MiB for instance, but the information we get (CDRs) has quantities in KiB and they do say in the fine print that they charge per KiB (sometimes per 10KiB for some reason). So you get prices of 5/1024 cents
[10:36:48] <remonvv> I'd always advise against BigDecimal/BigInt types in currency handling code and in favor int64. The amount of subtle bugs that can surface when having to move BigDecimal like values between different systems is daunting.
[11:03:40] <Derick> having any luck with data recovery?
[11:04:28] <ernetas> Hi! mongorepair failed, so did mongodump, maybe because of missing .ns file. I'll later try to recover that too, although I'm not sure if that will be possible.
[11:04:52] <ernetas> I'm now trying to use https://github.com/MongoHQ/purplebeard to extract BSON data, but I'm getting "UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte". Crap!
[11:24:17] <ernetas> Looks like there isn't going to be a(n easy) solution?
[12:55:51] <denispshenov> Hi everyone. Could you please look at the following question and see if you can help, thanks. http://stackoverflow.com/questions/25059249/how-to-match-multiple-subdocument-in-mongodb
[13:01:18] <rspijker> denispshenov: well… the two $’s won’t work. Because you have no guarantees that the reader or like is found...
[13:01:49] <rspijker> you can do it with aggregation and a $unwind
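The $unwind stage rspijker mentions emits one copy of the document per array element, after which a plain $match can select the wanted subdocuments (which is why two positional $'s aren't needed). A rough Python model of that stage; the field name "views" is illustrative, not taken from the linked question:

```python
def unwind(docs, field):
    """Rough model of aggregation's $unwind: one output doc per array element."""
    out = []
    for doc in docs:
        for element in doc.get(field, []):
            copy = dict(doc)       # shallow copy, as $unwind preserves other fields
            copy[field] = element  # the array is replaced by a single element
            out.append(copy)
    return out

docs = [{"_id": 1, "views": [{"user": "reader"}, {"user": "like"}]}]
flat = unwind(docs, "views")
```

In the real pipeline this would be `[{"$unwind": "$views"}, {"$match": {...}}]`.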
[13:39:37] <squish> Hi, I'm just learning how to use mongodb via pymongo, and I have a beginners question: Is there something like the .distinct() method, that returns a list of possible keys from within a collection? distinct() only returns values.
[13:47:24] <rspijker> there is no way to get all existing keys from within a collection
[13:47:36] <rspijker> short of retrieving every document and iterating every single one
[13:47:59] <rspijker> ic3st0rm: you can use $group to get distinct values, but distinct already does that directly
[13:48:01] <Derick> i think stackoverflow has a map/reduce job to do that though
[13:50:51] <squish> Okay. I thought I must be missing something, because this functionality seemed like something that should exist. But if it doesn't I can learn on. Thank you, guys.
[13:52:43] <rspijker> squish: mongo is schema-free.. So the amount of _possible_ keys in a certain collection is nearly unlimited
[13:57:01] <feathersanddown> a user name like "foo_name bar_second_name" and to search for "second" text
[13:57:05] <squish> This would return the keys for a single object. My problem is more, that I have a collection which includes documents with slightly different schemas. Some have a few keys added, which others don't.
[13:58:09] <rspijker> squish: then it’s more difficult. Still quite easily doable. But you would have to iterate over every document to make sure you get all the keys...
[13:59:00] <rspijker> squish: something like this http://pastie.org/private/5dizeq7tiyuifjms5qwva
[13:59:07] <squish> This was my first idea, but I thought, that there might be some server-side method doing exactly that.
[13:59:14] <rspijker> if your collection is large, it will take a long time...
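The iterate-everything approach (the pastie link above has since expired) amounts to folding each document's top-level keys into a set. A standalone sketch, with the pymongo cursor replaced by a plain list so it runs without a server:

```python
def collect_keys(docs):
    """Union of top-level keys across all documents; cost is O(collection size)."""
    keys = set()
    for doc in docs:
        keys.update(doc.keys())
    return keys

docs = [
    {"_id": 1, "name": "a"},
    {"_id": 2, "name": "b", "extra": True},
]
all_keys = collect_keys(docs)
```

With pymongo, `docs` would be `collection.find({})`, and as noted above this gets slow on a large collection; the map/reduce recipe Derick mentions does the same union server-side.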
[15:01:07] <ruairi> feathersanddown: Inside a collection, you have documents. Inside those documents, you have fields. Are you asking if you can create a text index on a field in a document?
[15:02:14] <feathersanddown> fields inside documents that are inside documents
[15:02:39] <feathersanddown> seems text fields inside a document only
[15:03:19] <rspijker> feathersanddown: you need to specify the field(s) to index
[15:03:34] <rspijker> if that’s a subdocument, you can use the dot notation to specify it
[15:03:46] <rspijker> in your case: object_foo.field_to_search
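Dot notation names a path into nested subdocuments; the lookup it denotes can be modelled like this (field names taken from rspijker's example, document contents illustrative):

```python
def get_path(doc, path):
    """Resolve a Mongo-style dotted path inside a nested document."""
    for part in path.split("."):
        doc = doc[part]
    return doc

doc = {"object_foo": {"field_to_search": "foo_name bar_second_name"}}
value = get_path(doc, "object_foo.field_to_search")
```

The text index itself would then be created with e.g. `collection.create_index([("object_foo.field_to_search", "text")])` in pymongo.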
[15:35:55] <rspijker> without knowing your usecase any better than the vagueness you are giving me atm, I can’t tell you whether or not that will work for you
[15:45:38] <flippyhead> Hi! Anyone have suggestions on how to "bundle" mongodb with an OSX Application ?
[15:58:42] <culthero> Hello, can you somehow use a b-tree query on a range to reduce the scanned results on a full text field in the same collection? For instance you have a timestamp, then a text field containing tweets?
[15:59:55] <culthero> When I do an explain just using the btree, it is lightning fast, but if I add in the $text: {$search: "text search here"} }) it scans over all records inside the fulltext index, rather than the subset of records in the index that fall within that range
[16:00:12] <culthero> erm, all the records that contain "text search here"
[16:00:20] <culthero> meaning that no matter what, as the data set grows the search will slow
[16:02:49] <denispshenov> Everyone, if you have time could you please see this question again. http://stackoverflow.com/questions/25059249/how-to-match-multiple-subdocuments-in-mongodb
[16:58:24] <culthero> So before I rebuild a 10gb index, does someone know if a compound index on a fulltext field + a b-tree field (such as a date) will be.. what is the word, selective? (IE, range + text)
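For reference: in a compound text index, any keys that precede the text key must be matched by *equality* in the query, so a date range cannot narrow the text scan that way. One workaround is to bucket the timestamp into an equality-friendly field (e.g. a day) and put that before the text key. A sketch of both query shapes, with illustrative field names:

```python
from datetime import datetime, timedelta

# The range + full-text query as run today: the range predicate
# cannot be used as a compound-text-index prefix
now = datetime(2014, 8, 1)
query = {
    "timestamp": {"$gte": now - timedelta(days=2), "$lt": now},
    "$text": {"$search": "text search here"},
}

# Equality-friendly alternative: store a truncated "day" field and index
# e.g. coll.create_index([("day", 1), ("tweet", "text")]); then query
# per day with an equality match that the compound text index can use
day_query = {"day": datetime(2014, 7, 30), "$text": {"$search": "text search here"}}
```

Whether this is acceptable depends on how wide the typical ranges are (one query per bucketed day, in the worst case).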
[17:43:02] <Almindor> does anyone know how to convert a BsonDocument into a class? It works directly with Cursor<myclass> but I need it to be done one by one due to possible errors
[18:34:36] <cozby> hi guys, I have a simple app that connects to mongodb (via mongoose) and shoves data in it. It inserts quite a bit of records, and I'm seeing a lot of Error: connect ETIMEDOUT when connecting to my mongo instance
[18:34:36] <mikebronner> Books::all() should get you all books
[19:35:02] <jamesaanderson> I have a bikes collection and a rentals collection. My rentals collection has a bike_id and my bikes collection has a user_id. How should I go about finding all rentals with a status of "pending" and bike user_id of "xyz"? Am I using MongoDB too much like a relational DB and if so how should I fix this?
[19:37:28] <stefandxm> i am more curious what bikes you have :)
[19:41:23] <jamesaanderson> Haha just a road bike. With a relational db I could do a join but obviously that isn't possible in MongoDB so I'm thinking I'm going to have to redesign my schema somehow
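The usual document-database answer here is an application-level join in two queries: fetch the ids of the user's bikes, then the pending rentals whose bike_id is $in that set. Simulated with plain lists, using the collection and field names from jamesaanderson's description:

```python
bikes = [
    {"_id": "b1", "user_id": "xyz"},
    {"_id": "b2", "user_id": "abc"},
]
rentals = [
    {"_id": "r1", "bike_id": "b1", "status": "pending"},
    {"_id": "r2", "bike_id": "b2", "status": "pending"},
    {"_id": "r3", "bike_id": "b1", "status": "done"},
]

# Query 1: bikes.find({"user_id": "xyz"}) -> collect the bike ids
bike_ids = {b["_id"] for b in bikes if b["user_id"] == "xyz"}

# Query 2: rentals.find({"status": "pending", "bike_id": {"$in": list(bike_ids)}})
pending = [r for r in rentals
           if r["status"] == "pending" and r["bike_id"] in bike_ids]
```

The other common fix is to denormalize: copy user_id onto each rental at write time, trading the second query for some duplicated data.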
[19:42:44] <stefandxm> dunno. i dont use mongodb for 'that' :)
[19:57:56] <jonyfive> hello, does anyone know how to register for an exam? https://www.mongodb.com/products/training/certification says "Opens Jul 31" but there's no link to click or anywhere i see to register
[20:10:43] <BlakeRG> does Mongo generally perform better on many smaller shards or a single beefy machine
[20:22:33] <pasichnyk> I have a large collection (currently 500gb and about 500M documents but growing), and all documents have an ISODate() property, which i need to do filtering queries on. In this case the ISODate() is an exact date of an event, however when i do my queries, i'm ok with simply daily resolution (i.e., find me events from 2 days ago). I'm scared to add the ISODate() into my index, for fear that
[20:22:33] <pasichnyk> the cardinality of it will bloat the index. Is it suggested to add something with a better cardinality (i.e., just a date without further precision) and index on that instead? That would obviously take more storage space but seems like it might be a better choice? Suggestions appreciated. :)
[20:24:31] <stefandxm> tbh, i dont think you will save much
[20:24:42] <stefandxm> an isodate should easily fit a 64bit integer
[20:38:06] <feathersanddown> Can I search for a subdocument field and then return only subdocuments instances ?
[21:00:37] <pasichnyk> stefandxm, but does the fact that the cardinality is super high (new events coming in constantly) mean that index is going to be super huge?
[21:11:04] <daidoji> pasichnyk: stefandxm is right, an index should easily hold an isodate, however, you are correct in that typically it's not a good idea to index by date due to the index b-tree depth increasing with cardinality
[21:11:24] <daidoji> pasichnyk: thus, as you get more events, your index depth grows, which is what you probably don't want
[21:12:25] <pasichnyk> daidoji, yeah, i really just want to be able to go limit down documents i'm looking at by a date or date range, so it doesn't have to scan through .5TB (and growing) of documents everytime i need to find something that i know the date of.
[21:13:05] <daidoji> pasichnyk: then its probably best to index by a key that's a truncated version of whatever the most common type of date range you're looking for, but you can keep it in ISODATE format
[21:13:16] <pasichnyk> daidoji, because with the precision of ISODate() basically all of my documents will have unique datetimes... :/
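daidoji's suggestion of indexing a truncated date can be sketched as storing a day-resolution copy of the timestamp alongside the exact one, so every event from the same day shares an index key (field names illustrative):

```python
from datetime import datetime

def truncate_to_day(ts):
    """Day-resolution copy of an exact timestamp, for a lower-cardinality index."""
    return datetime(ts.year, ts.month, ts.day)

event_time = datetime(2014, 8, 1, 13, 45, 12)
doc = {"ts": event_time, "day": truncate_to_day(event_time)}

# An index on "day" then serves queries like
# find({"day": datetime(2014, 7, 30)})  i.e. "events from 2 days ago"
```

The day field stays a real BSON date, as daidoji notes, so range queries on it still work when more than one day is needed.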