[08:03:34] <xarkes> Hello, I am having an issue with mongodb. I am trying to insert a document which contains many documents, here is a sample: http://pastebin.com/HdmqX6Sj
[08:03:58] <xarkes> And then I get a 'key too large to index, failing' which sounds wrong to me because no 'key' element seems too big
[08:04:20] <xarkes> I would have said perhaps the value field contains too many documents, but ... not the key. Anyway, what am I supposed to do?
[08:16:26] <xarkes> Okay according to this: http://www.blogjava.net/czihong/articles/370841.html "Documents which fields have values (key size in index terminology) greater than this size can not be indexed."
[08:16:32] <xarkes> Which is why I'm getting this error.
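The limit being hit is MongoDB's maximum index key length (1024 bytes in versions of this era): any indexed field whose value is longer than that rejects the insert with "key too large to index, failing". A minimal mongo-shell sketch reproducing it, with a hypothetical collection and field name (exact error text varies by version):

    // index a field, then insert a value well over 1024 bytes
    db.samples.createIndex({ payload: 1 })
    db.samples.insert({ payload: new Array(2000).join("x") })
    // => write error containing "key too large to index, failing"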
[09:06:14] <KekSi> i'm having trouble bootstrapping a replicaset in docker containers: i wrote a script that is executed when the (initial primary) container is launched
[09:07:14] <KekSi> it fires up the instance with replSet parameters and --fork, then runs the bootstrap (which calls rs.initiate setting up the other containers)
[09:08:36] <KekSi> i then want to restart it without --fork, but since i specified the only other instance with priority 0 (it's just a read/backup replica that can never become primary)
[09:09:39] <KekSi> a "regular" db.adminCommand({shutdown:1,timeoutSecs:30}) tells me there are no electable secondaries caught up, and using force:true fails: DBClientCursor::init call() failed; Error: error doing query: failed
[09:10:48] <KekSi> does anyone have an idea for doing this another way? (maybe not starting with --fork and restarting, but instead doing the bootstrap/init for the rs in some other automated way?)
[09:19:08] <KekSi> i'm especially curious why forceful shutdown doesn't work in a scripted way but does when i do it manually
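For reference, a minimal sketch of that bootstrap and shutdown in the mongo shell, with hypothetical container hostnames (mongo-primary, mongo-backup):

    // initiate the set with the backup member at priority 0 so it can never become primary
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo-primary:27017" },
        { _id: 1, host: "mongo-backup:27017", priority: 0 }
      ]
    })
    // with no electable secondary caught up, only a forced shutdown goes through;
    // the shell's connection drops when it does, which is often all the
    // DBClientCursor::init error means
    db.adminCommand({ shutdown: 1, force: true })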
[09:40:07] <Keplair> I'm having some trouble with MongoDB. My cluster itself is OK and works fine, but the mongos service for the routers is pretty flaky; the init script doesn't really work for me. I tried to copy the same procedure as the mongod service, but something is broken and I don't know why. I'm on CentOS 7.2 with MongoDB 3.2
[09:40:46] <Keplair> i always need to start the mongos instance manually
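For comparison, a rough sketch of a mongos systemd unit on CentOS 7; the paths, user, and config-server string below are assumptions, not taken from the setup above:

    # /etc/systemd/system/mongos.service (hypothetical values)
    [Unit]
    Description=MongoDB shard router
    After=network.target

    [Service]
    User=mongod
    ExecStart=/usr/bin/mongos --configdb cfgRS/cfg1:27019,cfg2:27019,cfg3:27019 --port 27017
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

After a systemctl daemon-reload, systemctl enable mongos followed by systemctl start mongos should behave like the mongod unit does.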
[12:09:56] <jayjo> From a conceptual perspective, which is better for loading data into mongodb: one very large file containing many json documents, or many small files each containing a single json document?
[12:14:48] <kurushiyama> Storing JSON as a file in MongoDB makes little to no sense.
[12:17:13] <kurushiyama> jayjo Store the individual JSONs as documents. Problem solved.
[12:18:47] <jayjo> Sorry I wasn't clear. The JSON document is about 150 bytes on average, and doesn't deviate much from this. I have files that are json documents separated by newlines. Should I concat them to one giant file to use mongoimport or make many different files with single jsons?
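Either layout works with mongoimport, since it reads newline-delimited JSON by default; a hedged sketch with made-up database, collection, and file names:

    # one concatenated file
    mongoimport --db mydb --collection events --file all.json
    # or many small files in a loop
    for f in parts/*.json; do
      mongoimport --db mydb --collection events --file "$f"
    done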
[13:51:51] <StephenLynx> not to mention when the data you are serving is not EXACTLY the data you are storing.
[13:52:00] <StephenLynx> and you have to do some processing.
[13:54:39] <kurushiyama> StephenLynx Well, calculating such data is easy enough imho, and actually fast enough at runtime. I am not sure what you mean by exactly the same. JSON neither guarantees the order of top-level fields nor should one rely on it. But maybe I am just not getting the problem.
[13:55:17] <kurushiyama> StephenLynx Storing them in GridFS would be the option, then. And not as a standard doc, I guess.
[13:57:47] <kurushiyama> StephenLynx Which comes with other intricacies. Queryable as per JSON fields? meh. I'd rather store the metadata on insert or update (findAndModify comes to mind) than have data that would be totally queryable sitting opaque as an encoded []byte.
[13:58:54] <StephenLynx> which seems to be reinventing the wheel.
[13:59:23] <StephenLynx> you already have this convention across mongodb drivers.
[14:00:43] <cheeser> all drivers should have that functionality now. https://github.com/mongodb/specifications/blob/master/source/gridfs/gridfs-spec.rst
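As a sketch of what that spec looks like from a driver, here is the Node.js GridFSBucket upload path with queryable metadata attached; the connection string, database, bucket, and file names are illustrative:

    // Node.js driver sketch; names are illustrative
    const { MongoClient, GridFSBucket } = require('mongodb');
    const fs = require('fs');

    MongoClient.connect('mongodb://localhost:27017', (err, client) => {
      if (err) throw err;
      const db = client.db('mydb');
      const bucket = new GridFSBucket(db, { bucketName: 'payloads' });
      fs.createReadStream('payload.json')
        .pipe(bucket.openUploadStream('payload.json', { metadata: { source: 'import' } }))
        .on('finish', () => client.close());
    });

The metadata ends up on the payloads.files document, so it stays queryable even though the payload itself is stored as opaque chunks.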
[14:26:49] <Industrial> How do I query all documents that share the same 'deviceID', where two or more documents have that deviceID?
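One way to get those duplicates is a small aggregation, sketched here with an assumed collection name:

    // group by deviceID, keep only deviceIDs that appear in 2+ documents
    db.readings.aggregate([
      { $group: { _id: "$deviceID", count: { $sum: 1 }, ids: { $push: "$_id" } } },
      { $match: { count: { $gte: 2 } } }
    ])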
[14:52:41] <aeonFlux> I have a large json file (1m+ entries) that i'm parsing (checking if an ID exists in mongo, if not, insert the data). when i first start the script it parses 20-50 per second. after an hour that drops to 5-10 and after another hour it's barely 1 per second. looks like it starts to slow once mongo hits the full 16gb of ram in the server. thoughts on
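A common cause of that curve is the existence check running unindexed, so every lookup becomes a scan once the working set outgrows RAM. A hedged sketch of an indexed upsert approach, with assumed collection and field names:

    // unique index so the existence check is an index lookup, not a scan
    db.entries.createIndex({ externalId: 1 }, { unique: true })
    // insert-if-missing in one round trip instead of find-then-insert
    db.entries.update(
      { externalId: doc.externalId },
      { $setOnInsert: doc },
      { upsert: true }
    )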
[17:19:55] <cheeser> well, whatever that script says when running mongod, copy-and-paste that line into your shell
[17:22:45] <Ben_1> cheeser: now I can't start the daemon anymore: Failed to unlink socket file /tmp/mongodb-27017.sock errno:1 Operation not permitted
[17:24:13] <cheeser> is the other mongod still running?
[21:01:57] <StephenLynx> not only is it unnecessary, but I've never seen one that wasn't bad.
[21:03:30] <jayjo> Compass is pretty good for exploring data. It definitely depends on the use case. And I do think it's the best I've found, and it's by MongoDB, Inc.
[21:03:34] <cheeser> 1. it's not unnecessary. 2. you've never used it so how can you judge it? it's ok to not have, or at least not voice, an opinion on everything you come across
[21:03:54] <cheeser> it's been quite popular and very useful with those that have actually used it.
[21:04:12] <StephenLynx> I don't trust popularity, though.
[21:04:16] <StephenLynx> PHP is really popular too.