PMXBOT Log file Viewer


#mongodb logs for Tuesday the 13th of March, 2018

[05:37:39] <groslalo> hi. I am running mongodb version (3.4.10) and using the official C# driver (2.5.0) for operations against it. I am having an issue with a capped collection where I am doing bulk-write-inserts. When the collection is nearly full and I add enough data for it to overflow, the excess data does not show up in the collection. In terms of the response returned by the driver, it does say that all inserts were made (number of inserted count
[05:39:24] <groslalo> I have enabled verbose logging but cannot see any info on where the storage engine (wiredtiger) is writing those entries. So, how do I troubleshoot the situation on the server side? Would anyone have any guidelines or tips?
[05:48:49] <groslalo> is this channel being read?
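[Editor's note] One thing worth ruling out first: what groslalo describes is the documented behavior of capped collections. When a capped collection is full, the server makes room by deleting the oldest documents, so inserts are acknowledged as successful while older data silently disappears. A minimal sketch of that eviction semantics in plain JavaScript (a simplified model, not the driver — real capped collections cap total bytes, not document count):

```javascript
// Simplified model of a capped collection: fixed capacity, FIFO eviction.
class CappedCollection {
  constructor(maxDocs) {
    this.maxDocs = maxDocs;
    this.docs = [];
  }
  insertMany(docs) {
    for (const doc of docs) {
      this.docs.push(doc);
      // Make room by evicting the oldest documents, as mongod does.
      while (this.docs.length > this.maxDocs) this.docs.shift();
    }
    // ...yet the insert still reports full success.
    return { acknowledged: true, insertedCount: docs.length };
  }
}

const coll = new CappedCollection(3);
const res = coll.insertMany([{ _id: 1 }, { _id: 2 }, { _id: 3 }, { _id: 4 }]);
console.log(res.insertedCount);          // 4 inserts acknowledged
console.log(coll.docs.map(d => d._id));  // but only [ 2, 3, 4 ] remain
```

If the *newest* documents are the ones missing, that is not explained by eviction and would need the server-side logs groslalo asked about.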
[07:56:45] <himanshu> Hi, I am using mongoose and have mongodb running locally. Now I have moved this database to a third party, so I have changed the connection URL. But my server still needs the local mongodb service to be running. How can I stop that? Please help
[08:03:01] <KekSi> that sounds like you're opening multiple connection pools and one of them is pointing at a local instance
[08:03:19] <KekSi> so either your code or the driver is shit :x
[10:41:10] <doublehp> hello; I need help to debug a service. mongod is a database for unifi (another service). The only error message I get is http://paste.debian.net/1014459/ . I know the DB could be corrupted due to "out of disk space"; I do have a backup, but I don't know which files I should restore, because I don't know where the DB is stored
[10:41:17] <doublehp> and the issue could be in a file other than the DB
[10:41:59] <Derick> doublehp: you need to check the MongoDB log file - usually in /var/log/mongodb
[10:43:45] <doublehp> Derick: http://paste.debian.net/1014460/
[10:49:14] <Derick> doublehp: have you run out of diskspace or inodes?
[10:49:37] <doublehp> << I know the DB could be corrupted due to "out of disk space" >>
[10:49:47] <Derick> that's not what I asked
[10:50:00] <doublehp> I did, yesterday night
[10:50:10] <Derick> the log doesn't indicate there is corruption
[10:50:19] <Derick> it says it can't write to a file
[10:50:27] <Derick> so, are you still out of diskspace? (or inodes)?
[10:50:40] <doublehp> no
[10:51:27] <Derick> can you prove that (for both)? → pastebin please
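[Editor's note] The two checks Derick is asking for fit in one paste. A sketch, assuming the data directory from this conversation (`/var/lib/mongodb`):

```shell
# Free disk space on the filesystem holding the data directory
df -h /var/lib/mongodb
# Free inodes on the same filesystem -- a full inode table makes file
# creation fail even when `df -h` still shows free space
df -i /var/lib/mongodb
```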
[10:53:24] <doublehp> http://paste.debian.net/1014462/ /var/lib/mongodb/journal is physically stored on /mnt/Leon_Host_Big
[10:54:47] <Derick> I'd remove the files in /var/lib/mongodb/journal
[10:54:49] <doublehp> /var/lib/mongodb -> /mnt/Leon_Host_Big/leon-03/_var_lib_mongodb
[10:56:38] <doublehp> all files ? j._0 and prealloc* ?
[10:56:57] <doublehp> backup on way ...
[10:57:21] <Derick> the j and prealloc ones, yes
[10:57:47] <doublehp> 2018-03-13T11:57:16.981+0100 I JOURNAL [initandlisten] info preallocateIsFaster couldn't run due to: couldn't open file /var/lib/mongodb/journal/tempLatencyTest for writing errno:9 Bad file descriptor; returning false
[10:57:48] <doublehp> 2018-03-13T11:57:17.038+0100 I STORAGE [initandlisten] exception in initAndListen: 13516 couldn't open file /var/lib/mongodb/journal/j._0 for writing errno:9 Bad file descriptor, terminating
[10:58:20] <Derick> does the mongodb process have permissions to write to /var/lib/mongodb/journal ?
[10:58:47] <doublehp> the last debian paste includes a touch-rm test to prove it
[10:58:57] <Derick> but as which user?
[10:59:01] <doublehp> ah, the process, which uid ?
[10:59:26] <doublehp> http://paste.debian.net/1014463/
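[Editor's note] The touch-rm test only proves writability for the user who ran it. To check it for the user the daemon actually runs as (usually `mongodb` on Debian — an assumption here, verify with `ps` first), something like:

```shell
# Which user is mongod actually running as?
ps -o user= -C mongod
# Who owns the data and journal directories mongod is failing to write to?
ls -ldn /var/lib/mongodb /var/lib/mongodb/journal
# Repeat the touch-rm test as that user (service user name assumed):
sudo -u mongodb touch /var/lib/mongodb/journal/write-test \
  && sudo -u mongodb rm /var/lib/mongodb/journal/write-test
```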
[10:59:58] <doublehp> I used to have locking problems with NFS; so, I have moved to sshfs, and my lock tests passed (I have published about it 2y ago somewhere)
[11:01:30] <Derick> sorry, /var/lib/mongodb is on a mounted sshfs?
[11:02:43] <doublehp> yes
[11:02:51] <doublehp> and it used to work
[11:03:11] <Derick> could you perhaps remount it?
[11:03:21] <Derick> I am suspecting that to be the problem, but have no evidence
[11:03:50] <doublehp> http://forums.debian.net/viewtopic.php?f=5&t=130379 http://forums.debian.net/viewtopic.php?uid=198372&f=5&t=130412&start=0
[11:05:08] <doublehp> http://paste.debian.net/1014464/ the test was proven to be effective in the second link just above
[11:05:19] <doublehp> I mean, on NFS, the test fails
[11:07:18] <Derick> right - that's why i asked whether there is something in the system log about this, and whether you can remount your sshfs mount
[11:07:43] <doublehp> i have rebooted the machine recently; during reboot, remount went fine
[11:08:09] <Derick> but that was before you got your out-of-disk issue I presume?
[11:08:09] <doublehp> i have rebooted 1h ago, after freeing disk space
[11:08:15] <Derick> OK
[11:08:39] <Derick> Only thing I can now think of is to try the system's log - to see why this goes wrong
[11:08:59] <doublehp> a tip: mongod.lock is 30 min old
[11:09:09] <doublehp> and the file sticks around, even after service failure
[11:09:22] <doublehp> even after the process died
[11:10:06] <doublehp> system logs just say service does not work
[11:10:13] <Derick> dmesg ?
[11:10:37] <doublehp> nfs
[11:10:43] <doublehp> http://paste.debian.net/1014465/
[11:11:23] <doublehp> dmesg contains nothing fresh (the last line is at uptime 194s; the system has been up for 79 min)
[11:11:57] <Derick> I'm out of ideas then :-/
[11:12:15] <Derick> try stopping the mongodb service, remove the lock file, and start it again?
[11:14:07] <doublehp> trying to move the folder back to local physical disk ...
[11:15:34] <doublehp> interesting; I had forgotten this DB was so ... grrrr http://paste.debian.net/1014466/
[11:16:01] <Derick> 2018-03-13T12:14:59.281+0100 E JOURNAL [initandlisten] Insufficient free space for journal files
[11:16:10] <Derick> I think that was the problem all along then
[11:16:58] <doublehp> remove the journal, same error: info preallocateIsFaster couldn't run due to: couldn't open file /var/lib/mongodb/journal/tempLatencyTest for writing errno:9 Bad file descriptor; returning false
[11:17:46] <doublehp> the space issue is when I try using the local physical disk; that issue is why I had moved to NFS/SSHFS
[11:18:07] <Derick> ah
[11:18:45] <Derick> doublehp: I'd reach out to the mongodb-user google group now then. I'm at a loss
[11:19:14] <doublehp> what is the DB file ?
[11:19:40] <Derick> everything in /var/lib/mongodb, without the ../journal bit
[11:19:58] <doublehp> local.0 and local.ns ?
[11:20:08] <Derick> that's some of them
[11:20:12] <Derick> there should be more
[11:20:28] <doublehp> no, just those two files
[11:20:47] <Derick> that's odd - has your app been using the "local" database?
[11:20:52] <doublehp> but they are not duplicated in my backup; I wonder why rsync was unable to copy them, and didn't report the issue
[11:21:09] <Derick> you really shouldn't be using the "local" database, as that's for system usage
[11:21:14] <doublehp> the app is Unifi ... no clue what mess they make
[11:21:23] <Derick> I don't know Unifi either
[11:25:17] <doublehp> http://paste.debian.net/1014468/
[11:26:46] <Derick> you've now started it with an empty db dir, so it will default to the WiredTiger storage engine
[11:28:00] <Derick> it's odd, it's like your system declines the opening/creation of files
[11:28:32] <doublehp> http://paste.debian.net/1014469/ and this is on local physical storage ... with only 250M free ... so, having 3G available is not as critical as it claims ...
[11:29:20] <Derick> I don't think it's the freespace, but something else, but I really don't know what.
[11:29:46] <Derick> I have a meeting now. I suggest you email the mongodb-user google groups list with these findings, unless you have a support contract that is.
[11:29:52] <doublehp> shit, I have found a clue ...
[11:30:00] <Derick> do tell...
[11:30:02] <doublehp> thanks for trying,
[11:30:16] <doublehp> i have to check many things, but ... owner issue
[11:30:32] <doublehp> files not owned by the right userid
[11:31:08] <doublehp> http://paste.debian.net/1014463/ was a trick
[11:31:23] <Derick> trick?
[11:31:46] <Derick> that folder also contains no real data btw...
[11:31:56] <doublehp> the output is different if I ls -l /var/lib/mongodb/ and the target directory ...
[11:38:19] <Derick> got it to work now though?
[11:40:22] <doublehp> no; go to your meeting
[11:40:28] <doublehp> but it's obviously a storage issue
[11:41:01] <Derick> good luck!
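[Editor's note] The ownership clue doublehp found has a standard remedy on Debian, where the packaged service runs as user `mongodb` (an assumption — confirm the actual user and service name first, e.g. with the `ps`/`ls` checks above):

```shell
systemctl stop mongodb             # or: service mongodb stop
# Give the data directory (and everything under it) back to the service user
chown -R mongodb:mongodb /var/lib/mongodb
# Remove the stale lock left behind by the crashed process
rm -f /var/lib/mongodb/mongod.lock
systemctl start mongodb
```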
[12:24:30] <Forlorn> Hi, how may I findOneAndUpdate a document field from 'hello' to ['hello']?
[12:24:55] <Forlorn> do I need to findOne, first, and then findOneAndUpdate?
[12:35:51] <Robin___> hello :) how can I store _id in a variable? this wont work for some reason: https://pastebin.com/5kQ0FGrc thx
[12:36:30] <Derick> Forlorn: I'm not sure if you can update that that way so easily
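[Editor's note] On the server version in this log (3.4) Forlorn's fallback is indeed read-then-write; since MongoDB 4.2 an update can instead take an aggregation pipeline that reads the field's current value, e.g. `db.coll.findOneAndUpdate(filter, [{ $set: { field: ["$field"] } }])`. The client-side transform itself is trivial — a sketch, with a hypothetical field value:

```javascript
// Turn a scalar field value into a one-element array, client-side.
// On MongoDB < 4.2: findOne, transform with this, then update with $set.
function wrapInArray(value) {
  return Array.isArray(value) ? value : [value]; // idempotent: safe to re-run
}

console.log(wrapInArray('hello'));   // [ 'hello' ]
console.log(wrapInArray(['hello'])); // already an array: unchanged
```

Making the transform idempotent matters here, because a retried read-modify-write must not double-wrap the value.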
[12:36:58] <Derick> Robin___: recentUser is a cursor, not an array
[12:37:19] <Derick> you need to add .toArray() behind limit()
[12:37:29] <Derick> Robin___: see https://docs.mongodb.com/manual/reference/method/cursor.toArray/
[12:37:31] <Robin___> Derick: thanks. Ill try it
[12:44:54] <Robin___> works great :)
[12:45:27] <Derick> yay
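[Editor's note] The fix Derick describes, sketched against a stand-in cursor (not the real driver object, and a made-up document) to show why indexing into the cursor fails but indexing into the array works:

```javascript
// find()/sort()/limit() return a cursor: an object you iterate, not an array.
// Minimal stand-in for the driver's cursor to illustrate the difference:
function fakeCursor(docs) {
  return {
    limit(n) { return fakeCursor(docs.slice(0, n)); },
    toArray() { return Promise.resolve(docs); },
  };
}

const cursor = fakeCursor([{ _id: 'abc123', username: 'robin' }]).limit(1);
console.log(cursor[0]); // undefined -- a cursor is not indexable

cursor.toArray().then(users => {
  const id = users[0]._id; // now it's an array of documents
  console.log(id);         // abc123
});
```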
[13:14:42] <Robin___> one last question. how can I update to the current date? I thought this would work: db.login.update({ "username": "testuser"}, {$inc: {"login_amounts": 1}}, {$set: {"last_login": new Date()}}); The documentation mentions something about $currentDate, but that one is working as a field?
[13:17:11] <Derick> hm
[13:17:14] <Derick> that should work
[13:17:19] <Derick> let me check $currentDate
[13:18:05] <Robin___> when inserting a new user I set "last_login" to null as default. but after running the above query it's still "null" :)
[13:18:07] <Derick> ah
[13:18:09] <Derick> you can do:
[13:18:40] <Derick> db.login.update({ "username": "testuser"}, {$inc: {"login_amounts": 1}}, { $currentDate: { last_login: true } } );
[13:18:49] <Derick> no
[13:18:56] <Derick> wrong syntax :)
[13:19:11] <Derick> db.login.update({ "username": "testuser"}, {$inc: {"login_amounts": 1}, $currentDate: { last_login: true } } );
[13:19:23] <Derick> you did that wrong with your $set too
[13:19:35] <Robin___> ohh, an extra { i got there
[13:19:44] <Derick> { $inc : .., $set: .. }
[13:19:46] <Derick> and not:
[13:19:51] <Derick> { $inc : ..} , { $set: .. }
[13:20:02] <Derick> but using $currentDate is probably better
[13:21:04] <Robin___> thanks. appreciate it. it works great
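[Editor's note] The bug generalizes: `update(filter, update, options)` takes *one* update document, so every operator (`$inc`, `$set`, `$currentDate`, ...) must be a key of that single second argument; an operator passed as the third argument lands in the options object and is silently ignored. A tiny simulation of that semantics (not the real server logic, only the operators from this conversation):

```javascript
// Mimics how update(filter, update, options) treats its arguments:
// only the second argument is applied as an update document.
function applyUpdate(doc, update) {
  const out = { ...doc };
  for (const [op, fields] of Object.entries(update)) {
    for (const [field, arg] of Object.entries(fields)) {
      if (op === '$inc') out[field] = (out[field] || 0) + arg;
      if (op === '$set') out[field] = arg;
      if (op === '$currentDate') out[field] = new Date();
    }
  }
  return out;
}

const user = { username: 'testuser', login_amounts: 4, last_login: null };

// Wrong: $currentDate passed as a third "options" argument -- never applied.
const broken = applyUpdate(user, { $inc: { login_amounts: 1 } }
               /* , { $currentDate: { last_login: true } } <- ignored! */);
console.log(broken.last_login); // still null

// Right: both operators in the single update document.
const fixed = applyUpdate(user, {
  $inc: { login_amounts: 1 },
  $currentDate: { last_login: true },
});
console.log(fixed.login_amounts);              // 5
console.log(fixed.last_login instanceof Date); // true
```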
[16:29:57] <doublehp> Derick: I give up. I disabled all services; I am fed up with having to maintain two daemons with 5 databases that break so easily and consume huge amounts of CPU, RAM and disk, just to configure a WIFI AP ... so I disabled the services, and if the AP stops working, I'll throw it away
[16:30:20] <Derick> you need a database for a WIFI AP?!
[16:30:45] <doublehp> not one: FIVE
[16:30:52] <Derick> which 5?
[16:31:34] <doublehp> there are two mongodb daemons (on different ports and different config), with two DBs each
[16:31:41] <doublehp> plus a 5th DB inside unifi itself
[16:31:47] <doublehp> plus a java stack consuming RAM
[16:32:28] <doublehp> a classic installation on Windows requires 8 to 40GB of disk; I fought to use only 800M
[16:57:34] <wavenator> Hi everyone. Recently we've had a problem with our deployed mongodb server. Suddenly it just stopped accepting new connections. The last lines from the log were just slow find/update commands. Any already-connected client can still talk with the server and run commands. Does anyone have any clue what the problem could be?
[17:06:41] <Robin___> I have been trying for an hour to see if a specific person (by _id) liked a post (responds.likes) but cant seem to find the right query. Is it possible with find()? :D https://pastebin.com/ByZd5Z5q
[17:07:29] <Robin___> mongodb is fun but tough sometimes ;D
[17:08:09] <Robin___> I also looked at $elemMatch but it didnt seem to be the right one
[17:08:13] <Derick> db.posts.find({ _id: ObjectId("5aa7d6a161cd9b7a418a002a"), "responds.likes.user_id": ObjectId("5aa3d8ac89e773323dced3bc")})
[17:08:16] <Derick> is the query
[17:08:50] <Robin___> damn I was so close :p
[17:10:34] <Robin___> yes it works, thx
[17:10:57] <Derick> no prob
[17:11:10] <Derick> the "." also goes into every array element
[17:11:24] <Derick> and you had an extra { .. } that meant nothing around "responds..."
[17:12:35] <Robin___> i see. is the structure of the data OK by the way? I'm completely new to NoSQL, it's just for learning :]
[17:12:48] <Derick> I saw no immediate red-flags
[17:13:04] <Robin___> thank you
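[Editor's note] What makes Derick's query work is the rule he states at 17:11: a dotted path like `"responds.likes.user_id"` fans out over every element of any array it crosses, and matches if *any* element matches. A plain-JavaScript equivalent of that matching rule (simplified, and the post document's shape is a guess, since the pastebin content isn't in the log):

```javascript
// Simplified semantics of matching a dotted path: descend the path, and
// fan out over array elements, succeeding if ANY element matches.
function pathMatches(value, path, target) {
  if (Array.isArray(value)) {
    return value.some(el => pathMatches(el, path, target));
  }
  if (path.length === 0) return value === target;
  if (value === null || typeof value !== 'object') return false;
  const [head, ...rest] = path;
  return pathMatches(value[head], rest, target);
}

// Hypothetical document shape for this thread:
const post = {
  _id: 'post1',
  responds: { likes: [{ user_id: 'u1' }, { user_id: 'u2' }] },
};

console.log(pathMatches(post, ['responds', 'likes', 'user_id'], 'u2')); // true
console.log(pathMatches(post, ['responds', 'likes', 'user_id'], 'u9')); // false
```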
[17:57:01] <doublehp> Derick: https://dba.stackexchange.com/questions/186478/mongodb-refuse-to-start-operation-not-permitted mongodb no longer supports ... most kinds of mount methods. It had not been updated in 1 or 2 years; the new version is incompatible with my setup; there are workarounds, but I don't understand how to use them
[17:57:48] <doublehp> Derick: in short, even if sshfs is a mount point, it still stinks for mongodb (and NFS sucks even more for other DB services)
[18:28:42] <darshu> Hi guys
[18:30:04] <darshu> I'm new to mongodb, so please be gentle; my queries may be silly, but be patient.
[18:31:02] <darshu> 1. I need to connect to mongodb from nodejs using mongoose, how can I?
[18:38:51] <darshu> 2. whenever I try to connect / POST data to the db, it gives me this error.
[18:38:58] <darshu> (node:23662) DeprecationWarning: Mongoose: mpromise (mongoose's default promise library) is deprecated, plug in your own promise library instead: http://mongoosejs.com/docs/promises.html