#mongodb logs for Monday the 27th of June, 2016

[02:24:29] <Jonno_FTW> in pymongo, how can I print the query it uses?
[07:42:51] <sumi> hello
[08:03:34] <xarkes> Hello, I am having an issue with mongodb. I am trying to insert a document which contains many documents, here is a sample: http://pastebin.com/HdmqX6Sj
[08:03:58] <xarkes> And then I get a 'key too large to index, failing' error, which sounds false to me because no 'key' element seems too big
[08:04:20] <xarkes> I would have said perhaps the value field contains too many documents, but... not the key. anyway, what am I supposed to do?
[08:16:26] <xarkes> Okay according to this: http://www.blogjava.net/czihong/articles/370841.html "Documents which fields have values (key size in index terminology) greater than this size can not be indexed."
[08:16:32] <xarkes> Which is why I'm getting this error.
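The limit in question is MongoDB's 1024-byte index key limit: any indexed field whose value exceeds it triggers this error on insert. A minimal mongo-shell reproduction, where the collection and field names are assumptions:

    // 'samples.payload' is a hypothetical indexed field
    db.samples.createIndex({ payload: 1 })
    // a ~2 KB string blows past the ~1 KB index key limit and
    // fails with "key too large to index, failing"
    db.samples.insert({ payload: new Array(2049).join("x") })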
[09:06:14] <KekSi> i'm having trouble bootstrapping a replicaset in docker containers: i wrote a script that is executed when the (initial primary) container is launched
[09:07:14] <KekSi> it fires up the instance with replSet parameters and --fork, then runs the bootstrap (which calls rs.initiate setting up the other containers)
[09:08:36] <KekSi> i then want to restart it without --fork, but i specified the only other instance as priority 0 (being just a read/backup replica that can never become primary)
[09:09:39] <KekSi> so a "regular" db.adminCommand({shutdown:1,timeoutSecs:30}) tells me there are no electable secondaries caught up, and using force:true fails: DBClientCursor::init call() failed; Error: error doing query: failed
[09:10:48] <KekSi> does anyone have an idea of doing this another way? (maybe without starting with --fork and restarting but instead maybe doing the bootstrap/init for the rs in a different automated way?)
[09:19:08] <KekSi> i'm especially curious why forceful shutdown doesn't work in a scripted way but does when i do it manually
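A sketch of the initiation step KekSi describes, run once on the intended primary; the replica set name and hostnames are assumptions:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo-primary:27017" },
        // priority 0: a read/backup member that can never become primary
        { _id: 1, host: "mongo-backup:27017", priority: 0 }
      ]
    })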
[09:31:56] <Keplair> Yo!
[09:35:13] <Keplair> Someone's here ?
[09:35:55] <Derick> plenty of people
[09:36:06] <Keplair> :o
[09:36:08] <Derick> Keplair: on IRC, it's best to say "hi!" and then just ask your question
[09:36:16] <Keplair> Okay sorry
[09:40:07] <Keplair> I have some trouble with MongoDB. Currently my cluster is ok and works fine, but the mongos service for the routers is pretty crappy, the init script doesn't really work for me. I tried to copy the same procedure as the mongod service but I don't know why it's still broken. I'm on CentOS 7.2 with MongoDB 3.2
[09:40:46] <Keplair> i always need to start the mongos instance manually
[09:57:47] <Keplair> :(
[10:23:23] <kurushiyama> Keplair I use the mongodb user's crontab for starting mongos @reboot ;)
[10:23:49] <kurushiyama> Keplair sudo -u mongodb crontab -e
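The crontab entry behind that trick might look like the following; the mongos binary and config paths are assumptions:

    # in the mongodb user's crontab: start mongos once at every boot
    @reboot /usr/bin/mongos --config /etc/mongos.conf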
[11:22:54] <Keplair> thx kurushiyama ! i will try
[12:09:56] <jayjo> From a conceptual perspective, which is better for uploading data into mongodb... a single very large file of multiple json documents, or many small files each with a single json document?
[12:14:22] <kurushiyama> jayjo Neither.
[12:14:48] <kurushiyama> Storing JSON as a file in MongoDB makes little to no sense.
[12:17:13] <kurushiyama> jayjo Store the individual JSONs as documents. Problem solved.
[12:18:47] <jayjo> Sorry I wasn't clear. The JSON document is about 150 bytes on average, and doesn't deviate much from this. I have files that are json documents separated by newlines. Should I concat them to one giant file to use mongoimport or make many different files with single jsons?
[12:23:20] <kurushiyama> jayjo Again. Neither nor.
[12:23:31] <kurushiyama> Simply save them as docs ;)
[12:23:55] <kurushiyama> jayjo If you need to access one, you only have to do a match and return it.
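For the mechanics of the load itself: mongoimport reads newline-delimited JSON natively, so jayjo's files can be imported as they are, one document per line. The database, collection, and file names here are assumptions:

    # each line of events.json becomes one document in mydb.events
    mongoimport --db mydb --collection events --file events.json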
[13:48:56] <StephenLynx> actually
[13:49:10] <StephenLynx> storing json as a file makes sense if you have to serve the json file directly.
[13:49:16] <StephenLynx> plus there is file metadata.
[13:49:33] <StephenLynx> how large it is, its path.
[13:49:38] <StephenLynx> checksum.
[13:49:47] <StephenLynx> last time it was modified.
[13:50:30] <StephenLynx> lots of data that would take time to figure out at runtime.
[13:50:52] <StephenLynx> jayjo, kurushiyama
[13:51:51] <StephenLynx> not to mention when the data you are serving is not EXACTLY the data you are storing.
[13:52:00] <StephenLynx> and you have to do some processing.
[13:54:39] <kurushiyama> StephenLynx Well, calculating such data is easy enough imho, and actually fast enough at runtime. I am not sure what you mean by "exactly" the same. JSON neither guarantees the order of top-level fields, nor should one rely on it. But maybe I am just not getting the problem.
[13:55:17] <kurushiyama> StephenLynx Storing them in GridFS would be the option, then. And not as a standard doc, I guess.
[13:55:38] <StephenLynx> yes, as gridfs files.
[13:57:47] <kurushiyama> StephenLynx Which comes with other intricacies. Queryable as per JSON fields? meh. I'd rather store the metadata on insert or update (findAndModify comes to mind) than have data which is totally queryable opaqued as an encoded []byte.
[13:58:54] <StephenLynx> which seems to be reinventing the wheel.
[13:59:23] <StephenLynx> you already have this convention across mongodb drivers.
[13:59:27] <StephenLynx> to abstract files.
[13:59:49] <StephenLynx> some even let you handle them exactly the same way you handle actual files.
[13:59:52] <StephenLynx> like the node.js driver.
[14:00:00] <StephenLynx> which provides streams.
[14:00:43] <cheeser> all drivers should have that functionality now. https://github.com/mongodb/specifications/blob/master/source/gridfs/gridfs-spec.rst
[14:01:37] <StephenLynx> interesting.
[14:01:51] <StephenLynx> I didn't know streaming was part of the gridfs specs.
[14:02:18] <cheeser> the work of the last, say, 6 months or so.
[14:02:27] <cheeser> it's still evolving a bit but mostly in the finer details.
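A minimal sketch of the streaming GridFS API being discussed, using the Node.js driver's GridFSBucket; the connection string and file names are assumptions:

    var mongodb = require('mongodb');
    var fs = require('fs');

    mongodb.MongoClient.connect('mongodb://localhost:27017/test', function (err, db) {
      if (err) throw err;
      var bucket = new mongodb.GridFSBucket(db);
      // stream a local file into GridFS...
      fs.createReadStream('./payload.json')
        .pipe(bucket.openUploadStream('payload.json'))
        .on('finish', function () {
          // ...then stream it back out, handled like an actual file
          bucket.openDownloadStreamByName('payload.json')
            .pipe(fs.createWriteStream('./payload-copy.json'))
            .on('finish', function () { db.close(); });
        });
    });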
[14:26:05] <Industrial> Hi!
[14:26:49] <Industrial> How do I query all documents that share the same 'deviceID', i.e. where two or more documents have that deviceID?
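One way to express that query is an aggregation that groups on deviceID and keeps only groups with two or more members; the collection name is an assumption:

    db.readings.aggregate([
      // count documents per deviceID, remembering which _ids belong to each
      { $group: { _id: "$deviceID", count: { $sum: 1 }, ids: { $push: "$_id" } } },
      // keep only deviceIDs that occur in two or more documents
      { $match: { count: { $gte: 2 } } }
    ])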
[14:52:41] <aeonFlux> I have a large json file (1m+ entries) that i'm parsing (checking if an ID exists in mongo, if not, insert the data). when i first start the script it parses 20-50 per second. after an hour that drops to 5-10, and after another hour it's barely 1 per second. it looks like it starts to slow once mongo hits the full 16gb of ram in the server. thoughts on how to remedy this?
[15:02:33] <direwolf> hey, i have a field which is an array of subdocuments and i want to query over it, how can i do it?
[15:02:48] <direwolf> {
        "_id" : ObjectId("571657f80d261af10a1a274e"),
        "name" : "HEAVEN",
        "user_name" : "chindi",
        "song_id" : [ "56cc6edfe3b7a2270fea90dd" ],
        "shared" : true,
        "rating_count" : 4,
        "rating" : 10,
        "users" : [
            { "user_name" : "pankaj", "rating" : "2" },
            { "user_name" : "pankaj", "rating" : "5" }
        ]
    }
[15:04:00] <cheeser> direwolf: use a pastebin please
[15:04:02] <Derick> direwolf: please use a pastebin
[15:04:32] <direwolf> okk
[15:05:22] <direwolf> http://pastebin.com/rsXLCMhn
[15:05:33] <direwolf> i have a document like this
[15:06:37] <direwolf> db.playlists.find({"_id":ObjectId("571657f80d261af10a1a274e"),users:{"user_name":"pankaj","rating":"2" } }).pretty()
[15:07:06] <direwolf> this query works fine but my rating field can be any number between 1 and 5
[15:07:27] <direwolf> when i write a regex it does not print the above document
[15:07:42] <direwolf> Derick: how can i do it
[15:09:18] <direwolf> cheeser: plz can u look through it ?
[15:12:43] <Ben_1> hi
[15:13:00] <Ben_1> installed mongoDB in a fedora 23 vm via dnf and tried to start it
[15:13:10] <cheeser> direwolf: https://docs.mongodb.com/manual/tutorial/query-documents/#query-on-arrays
[15:13:21] <Ben_1> but it fails with the following message mongod.service: Control process exited, code=exited status=100
[15:14:15] <Ben_1> Insufficient free space for journal files, Please make at least 3379MB available in /var/lib/mongodb/journal or use --smallfiles
[15:14:24] <Ben_1> but I have 5gb available on that partition
[15:15:05] <Ben_1> does someone have an idea what could be wrong?
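If shrinking the journal is acceptable, the --smallfiles flag from the error message maps to this mongod.conf setting on 3.2 (MMAPv1 storage engine only; that Ben_1 is on MMAPv1 is an assumption):

    # /etc/mongod.conf: preallocate smaller data and journal files
    storage:
      mmapv1:
        smallFiles: true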
[15:16:25] <direwolf> cheeser: dot notation means i can go for users.user_name ?
[15:17:22] <direwolf> cheeser: sry i am kind of new in mongo db
[15:19:26] <direwolf> cheeser: thnks i got it
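Applied to direwolf's document, the $elemMatch form from that page matches a single users element on both fields at once; since rating is stored as a string, a lexicographic range over the single-digit strings "1" to "5" covers it:

    db.playlists.find({
      _id: ObjectId("571657f80d261af10a1a274e"),
      // match one array element on both conditions together
      users: { $elemMatch: { user_name: "pankaj", rating: { $gte: "1", $lte: "5" } } }
    }).pretty()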
[15:23:27] <aeonFlux> any thoughts on mongo slow down after an hour or two of use?
[15:26:54] <cheeser> check indexes
[15:38:04] <aeonFlux> hmmm
[17:04:53] <Ben_1> is there a log file other than mongod.log in /var/log/?
[17:05:08] <Ben_1> I get an exception while using bulk write but there is no detail message
[17:05:28] <Ben_1> I don't know what's wrong and in mongod.log there is no log message for this exception
[17:07:45] <cheeser> sharded? mongos?
[17:08:14] <Ben_1> plain mongodb instance without sharding
[17:09:14] <Ben_1> cheeser: or is mongod.log the only log file for mongo daemon?
[17:09:18] <cheeser> if there's nothing in the logs, you might try starting mongod manually and watch the shell window where you start it.
[17:09:26] <Ben_1> mh ok thanks :)
[17:16:07] <Ben_1> exception in initAndListen: 29 Data directory /data/db not found., terminating
[17:16:26] <Ben_1> cheeser: does that mean starting mongod manually won't use the default configuration?
[17:18:31] <cheeser> you'd pass it the same mongod.conf your startup scripts do
[17:18:58] <Ben_1> my startup script is systemctl :P
[17:19:05] <Ben_1> sudo service mongod start
[17:19:07] <Ben_1> that's it
[17:19:11] <cheeser> which uses a script
[17:19:31] <Ben_1> yah but that script is using a default value
[17:19:44] <Ben_1> and I don't know which :p
[17:19:55] <cheeser> well, whatever that script says when running mongod, c-n-p that line to your shell
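On a stock CentOS 7 package install, that line usually amounts to the following; the paths are the package defaults, assumed for this box:

    # run mongod by hand with the same config the service uses
    sudo -u mongod /usr/bin/mongod --config /etc/mongod.conf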
[17:22:45] <Ben_1> cheeser: now I can not start the daemon anymore: Failed to unlink socket file /tmp/mongodb-27017.sock errno:1 Operation not permitted
[17:24:13] <cheeser> is the other mongod still running?
[17:24:54] <Ben_1> no
[17:25:52] <cheeser> then you probably just have an errant socket file that can be deleted.
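Clearing the stale socket is one command, assuming the default port:

    sudo rm /tmp/mongodb-27017.sock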
[17:26:31] <Ben_1> mh ok I try it
[17:29:58] <Ben_1> cheeser: mongod tries to unlink a file that does not exist anymore
[17:30:06] <Ben_1> so I can not delete it
[17:30:20] <cheeser> que?
[17:30:53] <cheeser> if it can't unlink the file it might be permissions.
[17:31:31] <Ben_1> I run it with sudo so it should be no permission problem
[17:36:20] <Ben_1> cheeser: ok now it works, don't know what it was
[17:38:25] <Ben_1> do you know where the mongod.service is located?
[17:38:34] <cheeser> i don't
[17:39:20] <Ben_1> ah ok init.d directory
[19:19:38] <jayjo> does anyone use compass?
[19:19:55] <jayjo> MongoDB Compass ?
[19:37:20] <kurushiyama> jayjo Only briefly.
[19:47:17] <jayjo> It seems like a nice tool - I made a few test connections and can't delete them though. Probably stored in a config somewhere?
[19:55:32] <troydm> is there a way to insert a document if it doesn't exist or update it if it's there, just based on one function
[19:55:45] <troydm> something like a simple key/value store put operation
[19:56:02] <troydm> or I should write my own operations?
[19:58:05] <troydm> nvm, found it
[19:58:07] <troydm> safe
[19:58:09] <troydm> *save
[20:49:31] <StephenLynx> whats compass
[20:51:08] <StephenLynx> seems to be cancer though
[20:52:17] <cheeser> you don't even know what it is but you're calling it cancer?
[20:53:14] <jayjo> cheeser: do you happen to know how to remove old connections?
[20:54:14] <cheeser> to do what now?
[20:54:47] <jayjo> sorry talking about Compass still... created test connections but can't remove them
[20:55:29] <cheeser> it's built on node so it's whatever semantics that driver uses.
[20:56:01] <cheeser> what connections are we talking about here?
[20:56:08] <cheeser> if you want the connections closed, why not close Compass?
[20:56:35] <jayjo> in the favorites or recent connections?
[20:56:55] <cheeser> where are you seeing these connections?
[20:57:21] <cheeser> if you run mongostat are you seeing runaway connections? or are you just seeing them listed in a "recent connections" menu?
[20:58:28] <jayjo> Yes, the recent connections. I was just trying to get organized - I have a lot of these past connections
[20:58:42] <StephenLynx> cheeser, I went to their site, actually.
[20:58:54] <cheeser> and what's the problem exactly? just that they're listed there?
[20:59:11] <cheeser> StephenLynx: https://www.mongodb.com/products/compass ?
[20:59:20] <StephenLynx> yes
[20:59:40] <jayjo> yea just that they're listed. Sounds trivial now
[21:00:04] <cheeser> how can that possibly be considered "cancer?"
[21:01:33] <StephenLynx> GUI for mongo.
[21:01:57] <StephenLynx> not only is it unnecessary, but I've never seen one that wasn't bad.
[21:03:30] <jayjo> Compass is pretty good for exploring data. definitely depends on the use. And I do think this is the best I've found, and it's by mongodb, inc
[21:03:34] <cheeser> 1. it's not unnecessary. 2. you've never used it so how can you judge it? it's ok to not have, or at least not voice, an opinion on everything you come across
[21:03:54] <cheeser> it's been quite popular and very useful with those that have actually used it.
[21:04:12] <StephenLynx> I don't trust popularity, though.
[21:04:16] <StephenLynx> PHP is really popular too.
[21:04:18] <StephenLynx> so is windows.
[21:04:20] <StephenLynx> jquery
[21:04:25] <StephenLynx> the list goes on and on.
[21:04:40] <cheeser> you don't trust anything and so devalue your opinion
[21:04:46] <StephenLynx> I trust linux :v
[21:05:01] <cheeser> some people..
[21:05:43] <Derick> yeah