[00:43:18] <Boomtime> "official" you say.. can you provide the link?
[00:45:21] <Boomtime> here is the entirety of the "official" reference to docker i can find from mongodb: https://docs.mongodb.com/manual/administration/production-notes/#hardware-considerations
[00:46:28] <kexmex> just looking at Dockerfile and stuff
[00:46:29] <Boomtime> right, but mongodb has no control over what other people/groups/organizations choose to do with the community build of mongodb - including repackaging it
[00:47:00] <Boomtime> and there are a lot of them - someone here might still be able to help you, but you'd probably get a better response from a docker forum
[00:47:42] <kexmex> "Docker links automatically propagate exposed ports of one container as shell variables to another container. In this way, the second container can dynamically adjust network settings upon startup without the need to modify an image nor configurations."
[03:22:55] <jasvir> but it says illegal start of expression. ';' expected
[03:23:27] <jasvir> can anyone please tell me how to use geaonear query in db.runCommand() ?
[03:37:52] <Boomtime> @jasvir: what you pasted was mongo shell syntax - you'll need to check the java docs to see how to pass that to the java driver
[03:52:59] <jasvir> Boomtime: Unable to find any solution.
[03:56:36] <jasvir> I tried this: http://api.mongodb.com/java/3.2/ . There is runCommand option available
[04:10:50] <Boomtime> jasvir: what you posted is not even remotely valid java syntax, which would be obvious to anyone who knows java - i feel like trying to help you is going to start at the "java 101" level - do you actually know java?
[04:11:33] <Boomtime> if you don't you should either learn java first, or just write shell scripts if you can get away with it
[04:19:30] <jasvir> Boomtime: If I'm doing something, it's obvious that I know enough java to run queries. It is also understood that I have not directly started working on geonear. If you don't know the answer, it's fine. Don't try to show off or something. I don't need your advice about what to learn and what not to. Checking docs is a valid answer to those queries where you know that the answer is there in the docs. If you don't know, don't advise that
[04:25:35] <Boomtime> so given you're good to go with java, what you need to do is convert the command into a bson object containing the command at the top-level and the parameters embedded in the same object - basically a straight conversion of the JSON object you pasted
[04:26:33] <Boomtime> that object instance can be given directly to the command method - and you'll get a bsonobject back in the reply, note it's actually cursor but the driver might not know that, you'll need to map it yourself
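The shape Boomtime describes can be sketched as a plain object: the command name is the top-level key and the parameters are embedded in the same object. The collection name "places" and the coordinates below are hypothetical examples, not from the conversation.

```javascript
// Sketch of the geoNear command document: the command name ("geoNear")
// must be the first, top-level key; its parameters sit in the same object.
// Collection name and coordinates are invented for illustration.
const geoNearCommand = {
  geoNear: "places",                                        // command + target collection
  near: { type: "Point", coordinates: [-73.97, 40.77] },    // [lng, lat]
  spherical: true,
  limit: 5,
};

// In the Java driver this same structure would be built as a Document and
// handed to the command/runCommand method; here we only show the shape.
const topLevelKeys = Object.keys(geoNearCommand);
```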
[05:04:37] <Lonesoldier728> Hey guys having difficulty figuring out which is the best way to grab data as a user scrolls further down
[05:05:07] <Lonesoldier728> http://stackoverflow.com/questions/37084646/using-mongodb-to-send-back-paginated-data I included my code there if anyone has an idea how I can go about using offset without receiving duplicates
[05:14:57] <Boomtime> @Lonesoldier728: don't use skip(), it doesn't work the way you hope it does
[05:15:16] <Lonesoldier728> What would be the best way then
[05:15:24] <Lonesoldier728> to paginate it in a way
[05:15:26] <Boomtime> the reason first of all -> https://docs.mongodb.com/manual/reference/method/cursor.skip/
[05:15:35] <Boomtime> "The cursor.skip() method is often expensive because it requires the server to walk from the beginning ..."
[05:16:46] <Boomtime> so why not add predicates that uniquely identify the item to find first (and everything else be "greater" than)? the sort will take care of the rest
[05:16:50] <Lonesoldier728> so instead of offset use the last item as the starting point?
[05:17:05] <Lonesoldier728> the problem there is that a lot of them are on the same level
[05:17:23] <Boomtime> hence the 'uniquely' requirement
[05:17:25] <Lonesoldier728> so there is no greater or less than - maybe alphabetically
[05:17:36] <Lonesoldier728> plus the view count but two fields for sorting is that an issue?
[05:18:00] <Lonesoldier728> essentially it will sort all the docs based on count then again on alphabetical while keeping the count intact
[05:18:32] <Boomtime> sort on two fields is fine - it has the same performance requirements - make sure the fields are in the index in the same order to assist
[05:18:35] <Lonesoldier728> Or should I just add another field specifically for cataloging the position of the items and update that catalog every day or so
[05:19:35] <Boomtime> if all else fails, you can always add the _id as the final field to the index and sort on that too - it guarantees a unique starting position
[05:20:14] <Boomtime> but if you can do it without that give that a go first - use what you have as much as possible, every field added costs memory, etc
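The keyset pagination Boomtime is suggesting can be sketched in memory: sort on viewCount descending with _id ascending as the unique tiebreaker, then fetch the next page with "after the last seen (viewCount, _id)" predicates instead of skip(). The data and page size below are made up for illustration.

```javascript
// In-memory sketch of keyset pagination with a unique (_id) tiebreaker.
const docs = [
  { _id: 1, viewCount: 9 },
  { _id: 2, viewCount: 7 },
  { _id: 3, viewCount: 7 },
  { _id: 4, viewCount: 7 },
  { _id: 5, viewCount: 3 },
];

// Comparator equivalent of .sort({viewCount: -1, _id: 1})
const byCountThenId = (a, b) => b.viewCount - a.viewCount || a._id - b._id;

// Equivalent of find(<after-last-seen predicates>).sort(...).limit(n):
// everything strictly "after" the last seen (viewCount, _id) pair.
function nextPage(last, n) {
  return docs
    .filter(d => !last ||
      d.viewCount < last.viewCount ||
      (d.viewCount === last.viewCount && d._id > last._id))
    .sort(byCountThenId)
    .slice(0, n);
}

const page1 = nextPage(null, 2);      // first two documents
const page2 = nextPage(page1[1], 2);  // next two, no duplicates even on ties
```

The same idea maps directly to a real query with `$lt`/`$gt` predicates plus an index on `{viewCount: -1, _id: 1}`.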
[05:21:05] <Boomtime> i should write an article on efficient pagination, it comes up often
[05:21:29] <Boomtime> there is a *much* simpler method though, if you can do it; keep the cursor
[05:24:05] <Lonesoldier728> Yeah I saw keeping the cursor, the problem is a person can load up items 10 minutes or even an hour or so later possibly, so the cursor cannot stay open that long
[05:24:37] <Boomtime> no worries, you always have _id
[05:24:53] <Lonesoldier728> but is the _id in order? The one mongodb creates?
[05:25:03] <Boomtime> yes, but that doesn't actually matter
[05:25:19] <Boomtime> what matters is that it is unique
[05:25:30] <Boomtime> you already have a preferred sort order right?
[05:25:40] <Lonesoldier728> How would I sort by the id and the view count like this --- .sort({viewCount: -1, _id: 1})
[05:27:03] <Boomtime> i mean, why is it a problem to index?
[05:27:20] <Lonesoldier728> That is another reason why duplicates might show up which might be an issue, since by the time a user requests to load more the viewCount could have gone up a few positioning it differently in the sort
[05:27:34] <Boomtime> yes, but that would be the correct result yes?
[05:28:07] <Boomtime> unless you save a snapshot of the database at the time they make the first request, you cannot possibly preserve the order of the following pages
[05:28:22] <Lonesoldier728> right so my solution was to have a fake order
[05:28:34] <Lonesoldier728> by giving them positions
[05:28:41] <Boomtime> so you don't want to sort by viewCount?
[05:28:41] <Lonesoldier728> in a new field that will update daily
[05:29:39] <Boomtime> you can sort by whatever you like, i'm not the one with the requirements - i'm only suggesting _id after whatever you sort by as a way of unambiguously deconflicting duplicates
[05:30:04] <Boomtime> if you have a method that can assure unique positions, then you don't need the _id suggestion
[05:30:12] <Lonesoldier728> To show the most popular items first and being able to grab more data as a user scrolls down
[05:30:55] <Boomtime> a script that runs once a day and assigns a sort order position to every entry would certainly work - it just seems like bringing a bazooka to a poker game
[05:31:00] <Lonesoldier728> the only issue is that the popular items are changing constantly - so to solve that and avoid duplicates I would have to add another field that identifies it with today's
[05:31:19] <Lonesoldier728> Well if you are playing poker with Iran you might just have to : )
[05:32:01] <Lonesoldier728> And you can take that in either way, whether because they will bring it, or because you probably can put it up as a bet and they will count it as money :)
[05:34:44] <Boomtime> anyway, you have ideas, several i think, what works for you will depend on precisely the behavior you want and how the implementation will perform, etc. good luck
[05:35:22] <Lonesoldier728> thanks, should I index the explorePosition field - is it necessary because I know indexing takes space too
[05:35:36] <Lonesoldier728> and if I change it each day - it will have to re-index no?
[05:36:54] <Boomtime> if you change a field value the index is updated as part of the document update - these two things are atomically performed
[06:42:10] <jasvir> Hey there. I am having two fields in my document which are longitude and latitude. I want to create 2d index out of that. Can anyone please tell that how can I do that.
[10:33:48] <CreativeWolf> I'm trying to get mongodb working on Pine64 and running into trouble
[10:34:34] <CreativeWolf> Getting an error message "2016-05-07T15:56:10.575+0530 I STORAGE [initandlisten] exception in initAndListen: 18656 Cannot start server with an unknown storage engine: mmapv1, terminating"
[10:35:35] <CreativeWolf> Googled for about 20+ hours and couldn't get anything that helps
[10:35:46] <CreativeWolf> Can someone help please?
[10:57:30] <hit_> Hi I have a lot of documents and they all have a field, the values for that field are only a small set of about 1000 values. I want to clean all the documents and leave only one for each value of that field. How can I do that?
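hit_'s cleanup can be sketched in plain JS: group documents by the field and keep only the first seen per value. Against a real deployment this is typically done by grouping on the field (distinct values or an aggregation) and deleting all but one _id per group. The field name "category" is a made-up example.

```javascript
// Keep one document per distinct value of a field; collect the rest for deletion.
const docs = [
  { _id: 1, category: "a" },
  { _id: 2, category: "b" },
  { _id: 3, category: "a" },
  { _id: 4, category: "b" },
];

// First document seen per value is the keeper.
const keep = new Map();
for (const d of docs) {
  if (!keep.has(d.category)) keep.set(d.category, d._id);
}

// Everything whose _id is not the keeper for its value gets deleted.
const toDelete = docs
  .filter(d => keep.get(d.category) !== d._id)
  .map(d => d._id);
```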
[13:27:14] <zerOnepal> Hi there, I am adding new members to my current replica set; currently there are 3 nodes, and I am adding one secondary and one arbiter... total of 5...
[13:27:34] <zerOnepal> my question is, how do I ensure the newly added member cannot be elected as primary
[14:24:44] <logikos> is it possible to have mongodb store the files under each user's account ... can i control where files are stored via php? .. this would make backup of the db files with the account easier
[14:25:42] <StephenLynx> that sounds like a bad idea.
[14:28:24] <StephenLynx> and your application uses the system accounts?
[14:28:40] <StephenLynx> why not have them to be users on the database itself?
[14:31:51] <logikos> I have little to no experience with mongo .. just had an idea and thought it may be a good fit .. but basically I write php applications that can be used by one or more clients ... was going to make a blog app that would plug into their existing frameworks .. to use mysql i would have to set up a database, a user, and create tables before the app is usable via php
[14:32:03] <logikos> my goal was to build an app using mongo that did not require such setup ahead of time
[14:34:40] <logikos> i've only ever messed with mysql and sqlite .. so i dont know the difference
[14:35:06] <StephenLynx> the database files are the binaries representing the database
[14:35:16] <StephenLynx> gridfs is an abstraction for files stored on mongo.
[14:35:21] <logikos> with sqlite .. i can put the db file in the local account, commit and push it up to a private repo .. pull it down to a different server and it runs with the db intact
[14:37:07] <StephenLynx> mongo is complex and powerful.
[14:37:21] <StephenLynx> unlike sqlite that is simple but limited.
[14:38:17] <logikos> I've recently started using PhalconPHP which has good docs and an ODM for using mongo and after reading about the ODM it seemed worth a try
[14:38:21] <logikos> i've not used nosql before though
[14:38:52] <logikos> i just need to take the time to actually learn how the thing works....
[14:39:07] <StephenLynx> I'd also suggest trying node.js since you are using mongo.
[14:39:16] <StephenLynx> handling json objects is much more intuitive on it.
[14:41:03] <logikos> thanks .. but sadly I currently lack the time required to read up on it all and I dont really need anything that complex, I can use sqlite for this, i just thought if it were simple it would be a good opportunity to learn mongo for this
[14:41:20] <logikos> and figured a blog would be a perfect use case for a document based system rather than a relational database
[14:41:35] <logikos> but i lack to much understanding
[14:42:34] <logikos> what is the best usecase you can think of for mongo
[14:43:14] <StephenLynx> something that isn't heavy on relations and requires handling a large amount of data
[14:43:26] <StephenLynx> and that doesn't have a strict schema
[14:43:32] <logikos> nosql has been on my list to learn for some time now, i have done some reading about it .. some people hate it and say never use it for anything .. others seem to think it is the best fit for everything .. the most accurate answer is to use it when you have non relational data .. but i realy cant think of anything that has no relations
[14:43:37] <StephenLynx> something with many 1-n relations
[14:43:45] <StephenLynx> nosql is a bad term, btw.
[14:50:41] <StephenLynx> i could give my own project as an example.
[14:50:53] <StephenLynx> but I don't know if you are familiar with the kind of software it is
[14:53:31] <StephenLynx> instead of restarting with a different database
[14:53:37] <logikos> yeah... i would need a project that would be useful to me to build
[14:54:26] <StephenLynx> I already told you about its strengths: large datasets, 1-n relations, not reliant on n-n relations
[14:55:56] <StephenLynx> first you have to learn the most basic thing about it:
[14:56:02] <StephenLynx> it doesn't implement relations.
[14:56:22] <StephenLynx> any relation has to be resolved in application code.
[14:56:37] <StephenLynx> even dbrefs are resolved in your application code by the driver.
[14:56:53] <StephenLynx> so let's say that your software requires you to join a fuckload of tables
[14:56:59] <StephenLynx> that would be awful on mongo
[14:57:17] <StephenLynx> because you would have to individually read the data from all collections and perform the join in your software
[14:57:42] <StephenLynx> also because of that, relations are not validated, since foreign keys don't exist.
[14:58:05] <StephenLynx> even with dbrefs, if you reference a field in a different collection and that reference is invalid, mongo won't bother checking it.
[14:58:29] <StephenLynx> this is a core characteristic and is up to you to understand how it impacts your project.
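StephenLynx's point can be sketched in memory: since MongoDB does not resolve relations, a "join" means reading both collections and matching them in application code, and a dangling reference is never validated. The collections "orders" and "customers" below are illustrative.

```javascript
// Application-side join: build a lookup map from one collection, then
// attach the related document (or undefined) to each entry of the other.
const customers = [
  { _id: 1, name: "Ada" },
  { _id: 2, name: "Lin" },
];
const orders = [
  { _id: 10, customerId: 1, total: 30 },
  { _id: 11, customerId: 2, total: 15 },
  { _id: 12, customerId: 9, total: 99 }, // dangling reference: never checked
];

const byId = new Map(customers.map(c => [c._id, c]));
const joined = orders.map(o => ({ ...o, customer: byId.get(o.customerId) }));
// Order 12 silently joins to nothing - there is no foreign-key error.
```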
[15:07:52] <logikos> but everything is relational .. products, orders, customers, inventory, purchasing, accounting, job tracking ..
[15:16:20] <logikos> but suppose some day I get a client who's problem is better solved with mongo or some other document based database system .. i'd rather have some experience with it to know for sure
[15:16:25] <StephenLynx> I am just talking about giving you molds
[15:16:33] <StephenLynx> its hard to describe molds for this
[15:20:25] <logikos> where tables grow vertically rather than horizontally .. kinda
[15:21:08] <logikos> there are so many product attributes, and many of which wont be used by most products that to have a table contain them all horizontally is not realistic
[15:21:16] <logikos> so instead you have a table of attributes
[15:21:26] <logikos> and another table relating products attributes and values
[15:22:48] <kurushiyama> aep: Cologne, if that helps
[15:25:09] <logikos> it lets you store the data, and you can select the data to output for a product .. but it makes many other queries difficult
[15:25:36] <logikos> because to get a product's information it is spread across many rows
[15:26:27] <kurushiyama> logikos: For MongoDB that does not make much sense. Either you'd simply have the attributes in some docs and others in other docs, or you could model an array of subdocs, like {attributes:[{name:"power",value:"moa"}]}
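kurushiyama's subdocument-array model can be sketched in memory: each product carries `{attributes: [{name, value}, ...]}`, and an amazon-style filter selects products having a given name/value pair. The sample products are invented.

```javascript
// Products with an array-of-subdocs attribute model.
const products = [
  { _id: 1, attributes: [{ name: "power", value: "moa" },
                         { name: "color", value: "red" }] },
  { _id: 2, attributes: [{ name: "color", value: "blue" }] },
];

// In-memory equivalent of:
//   find({attributes: {$elemMatch: {name: <name>, value: <value>}}})
const withAttr = (name, value) =>
  products.filter(p =>
    p.attributes.some(a => a.name === name && a.value === value));

const red = withAttr("color", "red"); // matches product 1 only
```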
[15:26:39] <logikos> so if you think of the products on amazon .. how you can select a category and then filter based on various attributes
[15:29:10] <kurushiyama> logikos: You'd have to sanitize the input... As always?
[15:29:16] <logikos> well say i'm adding a new product .. and i go to list the attributes for the product
[15:29:37] <logikos> either there needs to be a table with a pool of attribute options or i enter them free form
[15:30:05] <logikos> if they are entered free form .. for one product cordlength=6 and in another cord-length-6
[15:30:40] <kurushiyama> logikos: For fixed attributes, you'd model differently.
[15:30:52] <kurushiyama> logikos: It really depends on what you _want_
[15:31:40] <logikos> well that is the thing, they are kinda fixed .. in that the same key will be reused for many products but also kinda not fixed because this new product I want to enter has a notable attribute that hasn't been used before .. or that has been used but under a name I can not guess
[15:31:49] <thezanke> o7 Hello #mongodb - quick question.. I'm just ramping up on mongo/mongoose. When I run a `Model.create(stmt, (err, model) => { // handle response });` What's the proper way to only "select" the fields I need from the return object? I don't want to reply with every field in the create response.
[15:31:57] <logikos> and the attribute key list would have to be 10000+ long
[15:31:57] <kurushiyama> logikos: Last time I had a look at amazon, they had whole duplicate trees, let alone the same tag in different spellings.
[15:32:43] <logikos> if they have the same tag with different spellings then that suggests it is free form then?
[15:33:12] <kurushiyama> logikos: I am pretty sure about that. But if you want it fixed: np.
[15:33:21] <kurushiyama> logikos: You have different way to do it
[15:33:37] <logikos> for that many different product types and that many different kinds of attributes to make it fixed would be difficult...
[15:33:43] <logikos> but making it freeform has its draw backs also
[15:34:03] <logikos> i realize there are different ways to do it .. i'm trying to think of what would be the best way
[15:34:20] <logikos> wish amazon.com was open source lol
[15:34:28] <kurushiyama> logikos: What I would do is to display the list of available attributes for a product when the user enters it, and then save them either as an array of subdocs or a simple array, if the attributes are no k/v pairs.
[15:35:09] <kurushiyama> logikos: Yes, that is redundant, but saves you quite some queries.
[15:36:38] <kurushiyama> logikos: But imho, you only shift the problem of having unique attributes from one place to the other, and potentially you limit the user or – worse – the usefulness of the application if you are too strict.
[15:39:59] <kurushiyama> logikos: Probably the best way to deal with it is to have some sort of autocomplete for attribute names. Or, more generically described, to display the existing attributes in one way or the other to choose from. If the user really feels like creating a new one, let him.
[17:15:21] <bruceliu> but other machine worked fine
[17:15:49] <kurushiyama> bruceliu: From which version to which did you update? Did you change the storage engine? For which query did timeAcquiringMicros rise? A bit more info please.
[17:23:16] <kurushiyama> bruceliu: I really can not help you without seeing anything. You refuse to give the update, you refuse to give the results of explain. So HOW should I find out what is going on?
[19:52:30] <zsoc> Under what circumstances would mongo ignore a field's unique: true attribute and allow a duplicate entry?
[19:53:03] <WarAndGeese> So can I safely avoid data loss in mongodb by attempting a sort of double- or triple-entry accounting system, and then running checks to make sure things line up?
[19:53:09] <WarAndGeese> How often does mongodb lose data exactly?
[19:54:02] <WarAndGeese> Like if rather than saving documents when stuff happens, I can save a document when something is attempted, save another when it's completed, and then run checks to make sure everything adds up, does that make sense?
[19:55:09] <zsoc> Oh It's a mongoose bug. I guess I shouldn't be surprised.
[19:55:19] <WarAndGeese> Or I can even save things twice, that way if one document gets lost I would have the other, and then run checks and find out when there are inconsistencies. Does that solve data loss issues?
[20:18:49] <zsoc> When I'm in a callback of find, and I have an array of documents (doc), can I do something like var first = doc[0]; first.field = 'foo'; first.save(); ? I feel like I should be able to do this.
[20:23:11] <zsoc> Yep, I can. I just can't if my var isn't in scope cos i'm a dolt. Alright coo
[21:02:53] <Lonesoldier728> Hey no idea why I am getting this error MongoError: invalid operator: $search on a full text search am I doing something wrong with my query http://stackoverflow.com/questions/37093606/how-to-conduct-a-text-search-in-mongodb
[21:13:44] <zsoc> Lonesoldier728, what is the $search operator?
[21:20:42] <zsoc> Well apparently the way you're using it doesn't quite work as expected lol.
[21:20:55] <zsoc> You're right tho, I had the docs backwards.
[21:21:20] <Lonesoldier728> Right and that is what I am trying to figure out why! )
[21:22:11] <zsoc> I like to blame Mongoose, that's the bane of most of my issues. Maybe the mongoose middleware doesn't like something about $search.
[21:23:49] <zsoc> You might save yourself some trouble by running db.serverStatus()['version'] in the console to make sure there aren't multiple versions hopping around your env
[21:24:34] <zsoc> For instance, the newest version of Mongo on OpenShift is 2.4, as ridiculous as it sounds.
[21:30:06] <Lonesoldier728> it is a mongo error not mongoose
[22:06:00] <zsoc> alternatively, realize why the vast majority of that stuff is based on people trying to use mongo like it's sql
[22:06:24] <kurushiyama> WarAndGeese: Most of the people complaining did the following: NOT reading the docs.
[22:06:38] <WarAndGeese> zsoc: I'm trying to figure out how credible that stuff is though, if I can use mongo for what I want then I will use it, it would be a big deal for me to have to switch
[22:06:40] <kurushiyama> WarAndGeese: Here is what happens when you set a writeConcern > 1
[22:06:55] <joannac> Lonesoldier728: does it work in the mongo shell, connected to the same mongod?
[22:07:15] <WarAndGeese> Or not just with it being credible, but if it's relevant and if I can get around any issues for my project
[22:07:19] <kurushiyama> WarAndGeese: A write operation will only return when the data has been applied to at least the number of writeConcern Hosts
[22:07:26] <Lonesoldier728> querying it directly? joannac hm let me try
[22:07:42] <zsoc> Realistically, no one would use a db if it randomly lost any amount of data when properly used.
[22:09:33] <Lonesoldier728> The version is this - MongoDB shell version: 3.0.4
[22:09:44] <WarAndGeese> zsoc: Yeah that's what I was thinking, so I was confused by all flak it was getting, unless I misunderstood what it was for and if it was strictly for cases where you don't need every point, like if you're aggregating a huge amount of information and no individual piece mattered. It's just that I know it's used for more than that and it's throwing me off.
[22:09:46] <joannac> Lonesoldier728: please type what I asked you to type
[22:09:58] <joannac> Lonesoldier728: that is not the output you get from db.version()
[22:10:06] <Lonesoldier728> the indexes here http://pastebin.com/BBHN3X9W
[22:10:11] <joannac> Lonesoldier728: that is your shell version, which is not the same
[22:10:32] <zsoc> WarAndGeese, people complain because they embed documents without understanding when they shouldn't be embedding documents and then 10 million rows later they run into an issue.
[22:10:49] <kurushiyama> WarAndGeese: Even with a write concern = 1, an operation returns only when the write made it to the primary. There are _rare_ edge cases (namely when a write made it to the primary, but none of the secondaries, and a failover happens while this is still true) in which data is rolled back. It is not lost, but manual intervention is needed. With a writeConcern of > 1, you are on a pretty safe side; with a write concern of majority and a replica set size (standard nodes) of 5, it is virtually impossible to lose data.
[22:11:25] <WarAndGeese> kurushiyama: That makes sense, so then if the writeconcern is high and if I have multiple servers then data shouldn't even get lost, like it would have to coincidentally get lost on multiple instances, am I understanding it right?
[22:11:33] <joannac> Lonesoldier728: okay, so you have 2 problems. You're on 2.4 which has different text search syntax, and it doesn't look like you have a text index :)
[22:11:58] <kurushiyama> WarAndGeese: you would need to lose all instances on which the write is done _simultaneously_
[22:12:33] <kurushiyama> WarAndGeese: The probability of a fire destroying the datacenter is probably much higher.
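The majority-acknowledgment point can be put in a toy model: with `w: "majority"` on a 5-node replica set, a write only acknowledges once 3 nodes have it, so losing acknowledged data requires losing a majority of nodes at once. This is only an illustration of the counting, not the real replication protocol.

```javascript
// Toy model of write-concern acknowledgment on a 5-node replica set.
const nodes = 5;
const majority = Math.floor(nodes / 2) + 1; // 3 of 5

// A write is acknowledged once enough nodes have applied it.
function acknowledged(nodesWithWrite, w) {
  const needed = w === "majority" ? majority : w;
  return nodesWithWrite >= needed;
}

const primaryOnly = acknowledged(1, "majority"); // not yet safe
const threeNodes  = acknowledged(3, "majority"); // survives loss of any 2 nodes
```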
[22:12:34] <Lonesoldier728> Yeah the text search index seems to be missing how do I add it? I was doing this in mongoose... Tag.index({name: 'text'});
[22:13:02] <Lonesoldier728> For the query - this should work then - db.tags.find("text", { search: "tech" } ); but yeah the text is undefined - which leads to the index problem
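To my understanding, the two syntaxes in play here are: on 2.4, text search is a collection command (`db.tags.runCommand("text", {search: ...})`, after enabling `textSearchEnabled` and creating a text index), while on 2.6+ it is the `$text`/`$search` query operator, which is why `$search` produced "invalid operator" on the 2.4 server. Sketched as plain objects:

```javascript
// 2.4-era form: a command, invoked via db.tags.runCommand(name, params).
const legacyCommand = ["text", { search: "tech" }];

// 2.6+ form: a query operator, passed to db.tags.find(...).
const modernQuery = { $text: { $search: "tech" } };
```

Both forms require a text index (e.g. on `name`) to exist on the collection.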
[22:13:30] <zsoc> index stuffs can only be defined on collection creation, iirc
[22:13:35] <WarAndGeese> So when a document is added or updated or deleted, does it try to make that change in multiple servers, and then if it fails on one or two, does it synchronize after?
[22:14:02] <WarAndGeese> I'm still learning so bear with me if I have noobish questions
[22:14:31] <joannac> Lonesoldier728: did you do that?
[22:14:37] <kurushiyama> WarAndGeese: It is done on one server (the primary) and then put into an operation log (oplog for short) which is pulled (in the sense of a "tail -f" you know from linux) by the secondaries.
[22:15:23] <Lonesoldier728> nope - but is there a way I can upgrade the mongodb version before I continue with 2.4 - or is it because I am using a sandbox on heroku that I probably will not have that kind of permission
[22:15:27] <kurushiyama> WarAndGeese: nuts and bolts aside, but that's mainly how it works.
[22:15:42] <WarAndGeese> The other thing is with ACID compliance, people are saying you can't make changes to multiple documents safely. Like if I have multiple accounts with points in them, and I want to subtract 5 points from one account and add 5 points to another account, is that done reliably as long as I follow proper procedure, or is mongodb just not for that use case?
[22:15:57] <anamok> I have a mongo script with an aggregate. I want to see everything but the result is truncated to 20 lines. I run it with "mongo < script.js". How to have the full output?
[22:15:57] <joannac> Lonesoldier728: no idea. try it and see
[22:16:17] <kurushiyama> WarAndGeese: That is correct. The mistake here is the assumption that it should work like an RDBMS. Let me dig sth up for you.
[22:16:22] <joannac> anamok: what do you mean, truncated?
[22:16:43] <anamok> it says "Type "it" for more", but I'm not in interactive mode
[22:18:19] <kurushiyama> WarAndGeese: tl;dr : With proper data modeling, you can achieve _a lot_. Since changes to documents are atomic, you can basically record the change and do an aggregation afterwards, when you need the value for a given point in time
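The "record the change atomically, aggregate afterwards" pattern can be sketched in memory: each transfer is one atomic ledger insert, and a balance at any point in time is an aggregation over the ledger. Account names and timestamps are invented.

```javascript
// Ledger of atomic change records; each entry is one document insert.
const ledger = [
  { at: 1, account: "A", delta: -5 },
  { at: 1, account: "B", delta: +5 },
  { at: 2, account: "A", delta: -3 },
  { at: 2, account: "C", delta: +3 },
];

// In-memory equivalent of a $match + $group aggregation:
//   {$group: {_id: "$account", total: {$sum: "$delta"}}}
function balance(account, upToTime) {
  return ledger
    .filter(e => e.account === account && e.at <= upToTime)
    .reduce((sum, e) => sum + e.delta, 0);
}

const atTime1 = balance("A", 1); // -5
const atTime2 = balance("A", 2); // -8
```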
[22:18:37] <WarAndGeese> Thanks kurushiyama, I've come across that thread before but wasn't sure what to trust as I was still learning
[22:19:17] <joannac> anamok: um what? are you talking about a find() ?
[22:19:27] <joannac> oh, you're not talking about 2.4
[22:19:45] <kurushiyama> WarAndGeese: Disclaimer: I am the author of the answer I linked – so it is safe to call me biased ;)
[22:20:15] <anamok> @joannac, I have a db.coll.aggregate([...]) in the script
[22:20:18] <WarAndGeese> Haha, I didn't notice, I think it makes sense though
[22:24:42] <anamok> @joannac, var cursor = ...aggregate(...); cursor.forEach(function (doc) { print(doc); }); displays [object BSON] for every doc
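The fix anamok found is not shown in the log, but the likely culprit is that the shell's `print()` stringifies the object ("[object BSON]"), while `printjson(doc)` / `tojson(doc)` render the whole document. Node has no `printjson`, so this sketch uses `JSON.stringify` to show the same difference.

```javascript
// Stringifying an object vs. serializing it.
const doc = { _id: 1, total: 42 };

const plain = `${doc}`;            // default toString, like the shell's print()
const full  = JSON.stringify(doc); // the whole document, like printjson()
```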
[22:25:57] <Lonesoldier728> thanks joannac at least I am in the right direction now, though I do not think I can make use of it - Error: db.upgradeCheckAllDBs() can only be run from the admin database, and db.adminCommand( { setParameter : 1, textSearchEnabled : true } ) gives me an unauthorized message
[22:26:52] <kurushiyama> WarAndGeese: Anything else that worries you?
[22:27:26] <Lonesoldier728> Am I doing that correctly joannac - the way I am enabling textSearch
[22:29:28] <anamok> @kurushiyama, thanks, works now
[22:30:25] <WarAndGeese> I don't think so. My thought process was that I'd have to find a way around the ACID-compliance-related complaints, and the solution I was thinking was like the answer you had on stackexchange. But then other people were saying that you can lose data, and if that happened then I'd be in trouble (e.g. I can have a 'transaction' record that records taking 5 points from user A and adding 5 points to user B, but then if data loss is a thing and I lose that document I'm screwed). If the data loss thing was mostly due to a misunderstanding of write concerns and how to properly store redundant data, and if I can reliably/safely get past that by just following proper procedure, then I shouldn't lose important data and therefore I should be okay, at least I think.
[22:31:46] <kurushiyama> WarAndGeese: Your analysis seems to be about right. Just take care with the data modeling. If in doubt, ask here ;) May I ask which language you will be using for the project?
[22:32:14] <WarAndGeese> I'm also thinking about stuff like saving extra records than I need to, e.g. add a document for attempting to take 5 points from user A and add 5 points to user B, and then add another document when it succeeds, and then run tests to make sure things add up
[22:33:03] <WarAndGeese> kurushiyama: I'm using the Meteor framework, which is all javascript. The only database they support is Mongo, so I assumed it can do everything I need, and I was thrown off reading all the articles claiming it's not reliable or whatever.
[22:33:22] <kurushiyama> WarAndGeese: That is called "2 phase commit". For just pushing points from A to B, it might be using a sledgehammer to crack a nut, but that depends on your use case.
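The two-phase commit kurushiyama names can be compressed into a sketch: a transfer document walks through explicit states, so a crash at any point leaves enough information to resume or roll back. The states follow the classic MongoDB two-phase-commit tutorial pattern but are heavily simplified here, with in-memory objects standing in for collections.

```javascript
// Simplified 2-phase commit: initial -> pending -> applied -> done.
const accounts = {
  A: { points: 10, pending: [] },
  B: { points: 0,  pending: [] },
};
const tx = { _id: "t1", from: "A", to: "B", amount: 5, state: "initial" };

// Phase 1: mark the transaction pending, then apply it to both sides,
// recording the tx id on each account so the step is never repeated.
tx.state = "pending";
accounts[tx.from].points -= tx.amount; accounts[tx.from].pending.push(tx._id);
accounts[tx.to].points   += tx.amount; accounts[tx.to].pending.push(tx._id);

// Phase 2: clear the markers and finish.
tx.state = "applied";
accounts[tx.from].pending = accounts[tx.from].pending.filter(id => id !== tx._id);
accounts[tx.to].pending   = accounts[tx.to].pending.filter(id => id !== tx._id);
tx.state = "done";
```

A recovery job would scan for transactions stuck in "pending" or "applied" and either finish or undo them; that part is omitted here.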
[22:36:43] <WarAndGeese> What if I eventually want to transfer money in the same way? Can it be done safely as long as I do it right? Basically I want to start one thing and if I get any users and if Mongo isn't the right choice then I can start rewriting it in another framework (if I have to), but if Mongodb can do everything safely (as long as I follow procedure) then I wouldn't have to switch. But I don't want to redo a whole application before even trying to see if users like it.
[22:37:25] <kurushiyama> WarAndGeese: Aside from the fact that some of those articles simply seem to be clickbait, most of them display a shocking lack of knowledge about the persistence layer they freely chose. The best advice I can give you and as trivial as it may sound: Read the docs. Read how replication works. Read how sharding works. And yes, you can do such things in MongoDB, if you do it right.
[22:38:34] <WarAndGeese> kurushiyama: Cool, this is reassuring. I have been reading the docs but I will keep reading :)
[22:39:43] <joannac> Lonesoldier728: "unauthorised" suggests you need to db.auth(...) first?
[22:39:47] <kurushiyama> WarAndGeese: The example in the answer is obviously a bit oversimplified (as you might have guessed already), since it does not cover some edge cases. But with a 2 phase commit, you'd be on the safe side.
[22:42:12] <Lonesoldier728> yeah joannac - that is not the problem, it is that I am using a sandbox version on compose.io and I apparently do not have authorization for that
[22:42:21] <Lonesoldier728> even with db.auth(my credentials)
[22:42:39] <kurushiyama> WarAndGeese: You are welcome. To explain those articles a bit more: Quite some of them are from a time when the default writeConcern was quite loose. So people were going with the default one (albeit not matching their durability needs) and complained without investigating the issue any further.
[22:48:44] <joannac> Lonesoldier728: oh. well then, i guess you have to upgrade MongoDB then?
[22:49:01] <joannac> text search is on by default in 2.6+ I think
[22:49:08] <Lonesoldier728> apparently cannot do that either
[22:49:20] <Lonesoldier728> let me see if I switch over to mongolab maybe things will be better
[22:55:04] <Lonesoldier728> ok looks like I am moving to mongoLab - they offer version 3.0.9
[23:04:17] <Lonesoldier728> joannac - I am going to refresh everything so before I start adding all the tags - are you familiar with mongoose and if so is this how I would index it? Tag.index({name: 'text'});