[02:07:53] <a|3xxx> i mean it returned ok but it didn't solve my problem
[02:08:23] <LouisT> remember, you have to start mongod with auth disabled, then connect with "mongo", then run: db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1});
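For reference, the upgrade sequence LouisT describes looks roughly like this (dbpath and port are placeholders):

    # start mongod temporarily with auth disabled
    mongod --dbpath /data/db --port 27017
    # from another terminal, connect with the shell and upgrade the auth schema
    mongo --port 27017
    > db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1})
    # then restart mongod with auth re-enabled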
[05:09:45] <deanclkclk_> I have 2 mongod instances running on separate VMs
[05:09:58] <deanclkclk_> one of my mongod instances has data
[05:10:14] <deanclkclk_> how can I do a dump and restore it on the second db?
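One common approach, sketched with placeholder hostnames: dump everything from the first instance with mongodump, then load the dump into the second with mongorestore.

    # dump all databases from the first VM
    mongodump --host vm1.example.com --port 27017 --out /tmp/dump
    # restore the dump into the mongod on the second VM
    mongorestore --host vm2.example.com --port 27017 /tmp/dump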
[08:02:54] <navneet894> Hello, I am Navneet Mittal from IIT Jodhpur. I know C++ and want to contribute to MongoDB. I am new to open source and need guidance.
[08:10:55] <Bilge> What do you want to contribute? Aids?
[08:16:02] <navneet894> i just want to get started with open source contribution. what can i contribute?
[12:16:41] <mn3monic> hello, about db.collection.insert({'sample_key': 'sample_value'}), how do I specify that 'sample_key' must be unique without querying at every single insertion to check if it's already in the db?
[12:22:28] <Derick> mn3monic: you can set a unique key on sample_key
[12:23:08] <brammator> from collection {name, id, lastupdate}, and a list of names plus a timestamp. Could I get three results: "names in collection and up to date; names in collection but lastupdate < timestamp; names not in collection at all" in one request?
[12:23:59] <brammator> Or should I make two requests and build the third list in my script?
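As far as the classic query API goes, one request can't hand back all three buckets; a common pattern is one $in query plus client-side set arithmetic, roughly like this in the shell (collection and variable names are placeholders):

    // names (array of strings) and ts (timestamp) come from the application
    var found = db.coll.find({name: {$in: names}}, {name: 1, lastupdate: 1}).toArray()
    var fresh = found.filter(function(d) { return d.lastupdate >= ts })  // in collection, up to date
    var stale = found.filter(function(d) { return d.lastupdate < ts })   // in collection, outdated
    var foundNames = found.map(function(d) { return d.name })
    var missing = names.filter(function(n) { return foundNames.indexOf(n) < 0 })  // not in collection at all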
[12:29:20] <dawik> mn3monic: i believe if you try to insert something with the same key, the insert is rejected with a duplicate key error (once the unique index exists)
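Concretely, the unique index Derick mentions is created like this; once it exists, a second insert with the same sample_key is refused (collection name is a placeholder):

    db.collection.createIndex({sample_key: 1}, {unique: true})
    db.collection.insert({sample_key: 'sample_value'})  // ok
    db.collection.insert({sample_key: 'sample_value'})  // fails: E11000 duplicate key error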
[15:11:16] <brammator> I have to cache some API requests. What's better: a compound key {host, path, params} or a simple key with (host, path, params) converted to a string?
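Either shape works; a sketch of both, assuming a hypothetical cache collection. The compound form keeps the parts individually queryable, while the string form gives one opaque lookup key:

    // compound key: a unique index across the three parts
    db.cache.createIndex({host: 1, path: 1, params: 1}, {unique: true})
    db.cache.insert({host: 'api.example.com', path: '/v1/x', params: 'a=1', body: '...'})
    // string key: the three parts concatenated into _id
    db.cache.insert({_id: 'api.example.com|/v1/x|a=1', body: '...'})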
[15:47:07] <DarkLinkXXXX> Can mongoimport work with just any json file, or does it require something mongo-specific?
[15:48:31] <kali> DarkLinkXXXX: by default, it expects exactly one JSON object per line, and will happily load it whatever the content is
[15:48:48] <kali> DarkLinkXXXX: there is an option to import a json array of objects instead
[15:49:41] <kali> DarkLinkXXXX: that said, do not expect miracles for non-json types like dates and binary
[15:50:08] <DarkLinkXXXX> Yeah... I figured as much.
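For reference, the two mongoimport modes kali describes (db, collection, and file names are placeholders):

    # default mode: one JSON document per line of the file
    mongoimport --db test --collection docs --file data.json
    # for a file containing a single JSON array of objects
    mongoimport --db test --collection docs --file data.json --jsonArray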
[15:50:45] <DeveloperDude> Hello everyone! I have a question that is probably very common, but I've found several answers and I'm not sure what is the proper way of doing this.
[15:50:53] <DeveloperDude> The thing is that I have a users collection and an events collection; coming from a relational background I was storing user ids inside the events collection, but there are no joins in MongoDB.
[15:51:00] <DeveloperDude> So there are 3 different solutions
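The referencing approach DeveloperDude describes usually amounts to a manual two-query "join", along these lines (field names are illustrative):

    // events reference users by _id
    var user = db.users.findOne({name: 'alice'})
    db.events.insert({type: 'signup', user_id: user._id})
    // later, resolve the reference with a second query
    var ev = db.events.findOne({type: 'signup'})
    var eventUser = db.users.findOne({_id: ev.user_id})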
[17:24:16] <whomp> how can i speed up my mongoimports? for example, i was thinking maybe i could convert the json to bson and import it or something
[22:00:35] <sudormf> Question: if I have 20 tables in a NoSQL database and one database node that is getting overloaded, so I add 3 more nodes for 4 total, will my db scale automatically to use these 4 nodes, or will I have to do manual configuration to use them?
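For what it's worth, MongoDB does not spread an existing standalone node across new machines automatically: the new nodes have to be configured either as replica-set members (full copies, for redundancy and read scaling) or as shards in a sharded cluster, and sharding must be enabled explicitly per collection. A minimal sketch, run against a mongos (names are placeholders):

    sh.enableSharding("mydb")
    // pick a shard key per collection; the balancer then spreads chunks across shards
    sh.shardCollection("mydb.events", {user_id: 1})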
[22:04:06] <jumpman> hey, i have a quick best practice question. i'm going for performance in a reasonably large database
[22:06:33] <jumpman> ...anyways, i'm trying to store a 2d plane of 'solar systems' each with 'planets' and each planet with a 'puzzle'. but i want to have ~15,000 solar systems with 5-10 planets each.
[22:06:55] <jumpman> i'm planning on storing the 2d plane in chunks - each with a 'solar system' location to be drawn and a position and id inside
[22:07:38] <jumpman> each id then points into a separate collection of 'solar systems', each document including the id and its planets
[22:07:47] <toothrot> nobody answers because 5 minutes passed?
[22:08:13] <jumpman> here's where the question comes in: would it be faster to store all level data in one collection with 'puzzle id' so that the 'solar systems' call is smaller
[22:08:30] <jumpman> or better to store the puzzles in the planets and just have larger units in starsystems
[22:10:06] <jumpman> basically i'm trying to decide between one collection containing ~1,000,000 puzzles and one collection containing all of the ~15,000 solar systems with the same data from the first collection
[22:10:25] <sudormf> probably the latter so there's less data to search through
[22:20:35] <tornado_terran> im trying to build an analytics tool based on Go and MongoDB
[22:21:25] <tornado_terran> im not familiar with mongodb specifics; which is better, using the aggregation framework or fetching a lot of rows with a few fields?
[22:22:22] <tornado_terran> i would like to get an average/median value. I can use the aggregation framework, or in my case just fetch all rows from some time bucket with fields like firstActionAt, lastActionAt
[22:23:20] <sudormf> i dont have any idea, i'm new myself
[22:23:32] <sudormf> i asked a q 15 mins ago and no one answered me
[22:24:08] <tornado_terran> im trying to build an analytics tool based on Go and MongoDB. im not familiar with mongodb specifics; which is better, using the aggregation framework or fetching a lot of rows with a few fields? i would like to get an average/median value. I can use the aggregation framework, or in my case just fetch all rows from some time bucket with fields like firstActionAt, lastActionAt
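A sketch of the aggregation route for the average over a time bucket, using the field names tornado_terran mentions (collection name and dates are placeholders). There is no built-in median operator here, so a true median still means pulling the bucket's values back and computing it in the application:

    db.actions.aggregate([
      // keep only documents whose session started inside the bucket
      {$match: {firstActionAt: {$gte: ISODate('2015-04-01'), $lt: ISODate('2015-04-02')}}},
      // average session duration in milliseconds across the bucket
      {$group: {_id: null, avgDuration: {$avg: {$subtract: ['$lastActionAt', '$firstActionAt']}}}}
    ])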