[08:41:16] <ShekharReddy> hey guys, how can i start mongodb on ubuntu 14.04?
[08:43:39] <ShekharReddy> i have started the service using sudo service mongod start
[08:43:59] <ShekharReddy> but i am unable to get the console to use the db inside it
[08:44:09] <ShekharReddy> i am a little new to using mongo
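A minimal sketch of the usual sequence on Ubuntu 14.04; "mydb" is a placeholder database name:

    sudo service mongod start    # start the server (as already done above)
    mongo                        # open the interactive shell (the "console")
    use mydb                     # then, inside the shell, switch to your database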
[09:09:36] <chris|> the documentation about backing up a secondary using a filesystem snapshot says I "may skip" calling fsyncLock on it if journaling is enabled. Should I read that as "you may skip, but it's recommended not to", or is it safe to do so? Also, if I do not lock the secondary, should I freeze it?
[09:21:42] <Derick> chris|: it's still recommended to do fsyncLock
[09:22:15] <Derick> it'd be even better to do it from a hidden node...
[09:25:56] <chris|> yeah, but that's not an option atm
[09:26:17] <chris|> quick followup: why is this important? the docs say "When calling db.fsyncLock(), ensure that the connection is kept open to allow a subsequent call."
[09:28:16] <Derick> because the lock is released when you close the connection I think
[09:28:26] <Derick> (to prevent random locks having been set on the DB)
[09:30:21] <chris|> that would be strange, as the docs also say: Closing the connection may make it difficult to release the lock. ;)
[10:44:07] <joannac> after fsyncLock(), writes are queued, and reads (i.e. auth checks) will queue behind the queued writes
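A minimal pymongo sketch of the lock/snapshot/unlock sequence being discussed; the host name is a placeholder, and it assumes the same client stays open for the whole window (per the doc quote above) and a server recent enough to support the fsyncUnlock command:

    from pymongo import MongoClient

    # connect directly to the secondary being snapshotted (host is a placeholder)
    client = MongoClient("secondary.example.com", 27017)

    # flush pending writes and block new ones; keep this client open until
    # the unlock, since closing the connection can make releasing the lock hard
    client.admin.command("fsync", lock=True)
    try:
        pass  # take the filesystem snapshot here, outside of MongoDB
    finally:
        # release the lock from the same connection
        client.admin.command("fsyncUnlock")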
[12:32:44] <miracee> Hello, we are organising a database dev room at FrOSCon (Germany) and I am looking for speakers ... we want to cover multiple systems.
[12:41:33] <zzookk> hello guys. tell me pls: are "find().sort({"thisfieldname":-1}).limit(1)" and find().max() the same construction if i want to get the highest value?
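For what it's worth, cursor.max() sets an index bound rather than returning the largest value, so the sort/limit form is the usual one. A hedged pymongo equivalent; database, collection, and field names are placeholders:

    from pymongo import MongoClient, DESCENDING

    client = MongoClient()  # assumes a local mongod
    coll = client["test"]["items"]  # placeholder names

    # highest value of "thisfieldname": sort descending, take one document;
    # with an index on the field this reads a single index entry
    doc = coll.find_one(sort=[("thisfieldname", DESCENDING)])
    print(doc["thisfieldname"] if doc else None)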
[14:06:17] <KostyaSha> can mongo use multiple cores during index build?
[14:56:14] <cppking> hello guys, I got a question: if a replSet's master (A) goes down, then one secondary (B) will become master. if node A has a higher priority than any other node, what happens when A starts 10 hours later and rejoins the replSet? will 10 hours of data be lost??
[15:06:30] <integer`> cppking, a single node can't become primary
[15:07:18] <KostyaSha> cppking, i may be mistaken, but first it does an election based on votes, and only then does priority have influence
[15:38:00] <cppking> KostyaSha: Do you understand me? My replSet has 3 nodes (A, B, C); C is an arbiter and A has a higher priority than B. if master A goes down, B will become master. if I start A 10 hours later, will the data produced in those 10 hours be lost??
[15:40:08] <cppking> because when A starts it will become master again, and then the data is inconsistent between A and B, so data will be lost?
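As I understand it, B's 10 hours of writes are not lost: when A restarts it must first catch up from B's oplog (rolling back any writes that only ever existed on A) before its higher priority makes it eligible to become primary again. A hedged sketch of the 3-member configuration cppking describes; host names are placeholders:

    from pymongo import MongoClient

    # connect directly to the member that will seed the set (hosts are placeholders)
    client = MongoClient("a.example.com", 27017)

    # A gets the higher priority, C is an arbiter, matching the A/B/C example
    config = {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "a.example.com:27017", "priority": 2},
            {"_id": 1, "host": "b.example.com:27017", "priority": 1},
            {"_id": 2, "host": "c.example.com:27017", "arbiterOnly": True},
        ],
    }
    client.admin.command("replSetInitiate", config)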
[16:49:40] <troll_king> when i save data it shows mongodb.DB.connect is not a function
[16:50:09] <troll_king> console.log shows [Function] when i test the save function
[16:50:28] <troll_king> any advice or suggestion is appreciated
[16:55:58] <StephenLynx> i'd recommend you to use mongodb
[17:20:45] <renlo> This might be a stupid question: at my work we have a Python web application. This web application fetches data from Mongo, then serializes the data into JSON. Thing is, the number of documents that are serialized can be ~20k. When Python serializes the data, the server goes to 100% CPU usage, and it takes a number of seconds (5-10). Is there a way to get Mongo to output JSON directly?
[17:21:21] <renlo> or is that just from using a Python client which is automagically turning BSON documents into some Python dict, and then it's getting turned into JSON documents again?
[17:27:07] <renlo> is there a place which is actually active for mongo related questions?
[17:27:15] <renlo> every time I'm in here it's a ghost town
[17:31:32] <renlo> cheeser: so it's safe to say I am probably first converting BSON to a Python object / dict, then converting the Python object / dict to JSON?
[17:31:39] <cheeser> now, those are probably not *large* docs but large and small are fairly subjective
[17:32:00] <StephenLynx> your driver is doing the first part and your application the second one.
[17:32:04] <cheeser> renlo: yes. the driver is already doing the BSON document -> python dict for you
[17:32:24] <renlo> okay thanks guys, I'll see if I can a) convert the BSON straight to JSON with the Python driver, and if that doesn't work b) look into using node
[17:32:45] <StephenLynx> c) do it using any tools mongo offers, like mongoexport
[17:33:18] <StephenLynx> I think that would be the fastest to obtain a json string from the data.
[17:33:26] <renlo> it's a Django web app, how would you do that? Spawn a subprocess which runs the mongoexport command?
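That's one way to do it. A hedged sketch of shelling out to mongoexport from Python; the database, collection, and query are placeholders, and it assumes mongoexport is on the PATH:

    import subprocess

    # mongoexport writes one JSON document per line to stdout; database,
    # collection, and query below are placeholders
    proc = subprocess.run(
        [
            "mongoexport",
            "--db", "mydb",
            "--collection", "mycoll",
            "--query", '{"status": "active"}',
        ],
        check=True,
        capture_output=True,
        text=True,
    )
    json_lines = proc.stdout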
[17:41:50] <cheeser> i'm not super familiar with the docs but I'm not seeing anything.
[17:42:20] <cheeser> e.g., the java driver exposes a type called BsonDocument which still has the documents in their BSON form and not, say, a Map
[17:42:51] <cheeser> so there'd be no conversion of types from their BSON types to native Java types. Given this, you can encode them to JSON without double decoding.
[17:43:01] <cheeser> in the pymongo docs, i'm not seeing such a type.
[17:43:54] <renlo> I found this: https://api.mongodb.com/python/current/api/bson/json_util.html , I think if I pass it the pymongo query (cursor?) then it will do it without converting it to a dict first, though I am not 100% sure about that
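A hedged sketch of using that module; database, collection, and query are placeholders. Note that, as cheeser relays below, the driver still decodes each document to a dict first:

    from bson import json_util
    from pymongo import MongoClient

    client = MongoClient()  # assumes a local mongod
    coll = client["mydb"]["mycoll"]  # placeholder names

    # json_util.dumps handles BSON types (ObjectId, datetime, ...) that the
    # stdlib json module rejects; the cursor is drained into a list first
    payload = json_util.dumps(list(coll.find({"status": "active"})))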
[17:53:42] <cheeser> talking to jesse, it sounds like that probably does the python dict step, too.
[17:54:09] <cheeser> he said they're considering a "BsonJsonCodec" in the next quarter or two but they're still in the discussion phase.
[17:54:44] <cheeser> he said to email him at jesse@mongodb.com and introduce yourself. he said they're definitely eyeballing this use case and this could help them move forward.
[17:56:05] <cheeser> also, run this in your code: pymongo.has_c()
[17:56:31] <cheeser> if that returns false, the C extensions aren't being used and you're running the pure python bson code which is much slower than the C libs
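A minimal sketch of that check, plus the matching one in the bson package:

    import bson
    import pymongo

    # both should print True when the C extensions were compiled in;
    # the pure-python fallbacks are much slower
    print(pymongo.has_c())
    print(bson.has_c())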
[17:58:05] <renlo> ah, that returned True, thanks again
[17:58:53] <cheeser> oh, that's kind of a disappointment because if it had returned False you could've had a huge win just by recompiling with the C extensions. :)
[18:00:41] <cheeser> it is. if you like Go, at least.
[18:00:53] <cheeser> it's pretty much your only option in Go
[18:02:33] <Forbidd3n> Hey everyone. Quick question. Is it best to store all elements in one data set or have multiple datasets linked by referential ID?
[18:03:11] <Forbidd3n> By data set I mean collection
[18:03:46] <cheeser> you should avoid using references as the whole point of mongodb is to store self-contained documents.
[18:04:06] <cheeser> each external reference is another round trip to the database to fetch that document.
[18:04:11] <Forbidd3n> ok, so it is best to have sub/sub/sub objects
[18:10:04] <Forbidd3n> cheeser: for example- {"schools":{"Bears":{"classes":{"Math":{"days":["M","T","W"]},"Science":{"days":["M","W"]},"English":{"days":["T","W"]}}
[18:10:41] <Forbidd3n> I know that isn't correct format, but my question is really the depth of a collection, is it relevant?
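For reference, the sketch above with its braces balanced, written as a Python dict; everything lives in one deeply nested document:

    # Forbidd3n's example as a single embedded document
    school_doc = {
        "schools": {
            "Bears": {
                "classes": {
                    "Math": {"days": ["M", "T", "W"]},
                    "Science": {"days": ["M", "W"]},
                    "English": {"days": ["T", "W"]},
                }
            }
        }
    }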
[18:12:13] <cheeser> exactly how to model your documents is hard to say outside the context of what's in them and how they're going to be used and queried.
[18:12:39] <cheeser> i can point you to docs but other than that, it's difficult to get too exact
[18:13:49] <Forbidd3n> cheeser: I understand. I will be querying different levels using the Slim framework API and updating multiple levels
[18:18:32] <Forbidd3n> cheeser: http://blog.mongodb.org/post/87200945828/6-rules-of-thumb-for-mongodb-schema-design-part-1 : here it says if you are going to have One-to-Squillions then you should reference by ID from another collection
[18:19:36] <Forbidd3n> so static data I think I can put in one collection, and constantly changing data in its own collection, so it can be updated by ID, removed, or added more easily than filtering through other data.
[18:20:18] <StephenLynx> Forbidd3n, in that example I wouldn't nest it so deep.
[18:20:28] <StephenLynx> i'd make a separate collection before that.
[18:21:00] <Forbidd3n> StephenLynx: that is what I thought as well.
[18:39:15] <Forbidd3n> StephenLynx: if a sub item in the collection needs to be the reference to the other collection, then I would need the sub item to be its own collection as well, correct?
[18:40:05] <StephenLynx> not necessarily, but that would be the most practical way to do it.
[18:40:26] <StephenLynx> keep in mind mongodb doesn't implement references.
[18:40:33] <StephenLynx> so either way you are implementing this reference in application code.
[18:40:44] <Forbidd3n> StephenLynx: correct. I would have to store the id of the referenced collections
[18:40:48] <StephenLynx> be it a document on a collection or a sub-document on another.
[18:49:28] <Forbidd3n> so in the CruiseLine collection I would store Ships, which would be IDs from the ship collection, and in the ship collection I would store Schedule, which would be each schedule for the ship - am I on the right track here?
[18:50:20] <StephenLynx> dunno, i'd need a sketch of sorts for the model
[18:50:36] <Forbidd3n> StephenLynx: ok will create a simple schema to show
[18:59:02] <Forbidd3n> StephenLynx: something like this - http://pastebin.com/sZSCkjRP
[18:59:29] <Forbidd3n> the first set would be the line collection and the second the ship collection
[18:59:44] <Forbidd3n> there will be a third for schedules
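A hedged sketch of that three-collection layout with application-level references; all names and fields are guesses, not the actual pastebin contents:

    from pymongo import MongoClient

    client = MongoClient()  # assumes a local mongod
    db = client["cruises"]  # placeholder names throughout

    # ships are referenced from the line by _id; schedules reference the ship;
    # mongodb won't enforce or resolve these references itself
    ship_id = db.ships.insert_one({"name": "Sea Queen"}).inserted_id
    db.lines.insert_one({"name": "Example Line", "ship_ids": [ship_id]})
    db.schedules.insert_one({"ship_id": ship_id, "departs": "2016-07-01"})

    # resolving a reference is an extra round trip, as cheeser notes above
    line = db.lines.find_one({"name": "Example Line"})
    ships = list(db.ships.find({"_id": {"$in": line["ship_ids"]}}))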
[19:04:27] <renlo> what are the fastest Mongo drivers? I think cheeser mentioned that the Java driver was faster than mongoexport or something? Anyone have any experience with the different drivers?
[19:04:31] <Forbidd3n> StephenLynx: in your example where are you linking the threads to the board?
[19:05:39] <cheeser> java, c#, go, i believe are the top 3, renlo
[19:06:25] <Forbidd3n> StephenLynx: bottom of the thread or bottom of the page?
[19:07:06] <cheeser> there's one benchmark that showed php outperforming all others but given php's ... nature I don't think anyone besides Derick and Jeremy trust that one
[19:08:11] <Forbidd3n> StephenLynx: what section? I understand what you mean by laying it out, which I can do, but I'm trying to figure out how to link the collections and wanted to see how you are doing it
[21:18:14] <ebarault> well, i'm having random "MongoError: not authorized on XXX to execute command {YYY}" with mongo 3.2, since very recently
[21:18:19] <gaboesquivel> see what I'm trying to do with 'cell_options.environment' there
[21:18:41] <ebarault> does it ring a bell to anyone?
[21:19:51] <gaboesquivel> I only need to filter the result set based on the values contained in that field.
[21:20:53] <gaboesquivel> for instance all documents where cell_options.environment contains either {value: 'Water' } or {value: 'Gym'}
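A hedged guess at that query, assuming cell_options.environment is an array of {value: ...} sub-documents; database and collection names are placeholders:

    from pymongo import MongoClient

    client = MongoClient()  # assumes a local mongod
    coll = client["mydb"]["cells"]  # placeholder names

    # documents whose cell_options.environment array contains a sub-document
    # with value "Water" or "Gym"
    query = {
        "cell_options.environment": {
            "$elemMatch": {"value": {"$in": ["Water", "Gym"]}}
        }
    }
    matches = list(coll.find(query))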
[22:07:13] <poz2k4444> Hi guys, can somebody help me with a problem I have with mongo-connector and elasticsearch?
[22:32:12] <hyperboreean> hey guys, any patterns for how to iterate really big collections? I have one with almost 900M documents and it is close to impossible to go over it, even without any filtering
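One common pattern: page through the collection in _id order so no single cursor has to survive the whole scan. A hedged sketch; collection name and batch size are placeholders:

    from pymongo import MongoClient

    client = MongoClient()  # assumes a local mongod
    coll = client["mydb"]["huge"]  # placeholder names

    last_id = None
    while True:
        # each page is a fresh, index-bounded query on _id, so a dropped
        # cursor or a restart only costs one batch
        spec = {"_id": {"$gt": last_id}} if last_id is not None else {}
        batch = list(coll.find(spec).sort("_id", 1).limit(10000))
        if not batch:
            break
        for doc in batch:
            pass  # process doc here
        last_id = batch[-1]["_id"]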
[22:46:51] <gaboesquivel> thanks in advance if anyone can help me out with that query, http://stackoverflow.com/questions/37667901/how-to-search-for-documents-containing-objects-with-specific-values-in-a-nested