tested with: ext3 and ext4 (does not work for XFS)
MY RECOMMENDATION: WHO CARES IF THE NAS DOES AN AUTOMATIC REBOOT ON SUNDAY AT 3 o’CLOCK IN THE MORNING AND CHECKS 2-3TB OF EXT3 FILESYSTEM? NO ONE!
RELIABILITY SHOULD BE THE TOP PRIORITY OF ANY FILESYSTEM.
SPEED CAN COME SECOND, UNLESS YOU DO NOT CARE ABOUT DATA LOSS. (temporary storage… but believe me… even temporary storage contains important files that users will be very angry about if lost)
I believe regular automatic filesystem checks (also of the root filesystem) are a pretty good idea.
So what you SHOULD do is:
1. use ext3 until 2020, then switch to ext4 or btrfs or whatever has proven itself reliable on the market for 10 years. (see the article “the perfect filesystem”)
2. have backups that report via mail whether the backup worked or not.
3. let your filesystem be checked when people don’t need it – Sunday night – once a month.
so the idea is: set the partition to be checked – and cron a reboot at a time when there is probably no usage of your server or service.
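a minimal sketch of such a schedule as a root crontab entry (the “first Sunday of the month, 03:00” window and the wall message are assumptions – adjust to your own quiet hours):

```shell
# /etc/crontab style entry (note: % must be escaped as \% inside crontabs)
# fires at 03:00 every Sunday, but only actually reboots when the
# day of month is <= 7, i.e. on the first Sunday of the month
0 3 * * 0  root  [ "$(date +\%d)" -le 7 ] && /sbin/shutdown -r now "monthly fsck reboot"
```

combined with a tune2fs mount-count or interval setting, this reboot then doubles as the trigger for the periodic fsck.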
works with ext3 and ext4, but not with the XFS filesystem
tune2fs allows the system administrator to adjust various tunable filesystem parameters on Linux ext2, ext3, or ext4 filesystems.
You should strongly consider the consequences of disabling mount-count-dependent checking entirely.
Bad disk drives, cables, memory, and kernel bugs could all corrupt a filesystem without marking the filesystem dirty or in error. If you are using journaling on your filesystem, your filesystem will never be marked dirty, so it will not normally be checked.
A filesystem error detected by the kernel will still force an fsck on the next reboot, but it may already be too late to prevent data loss at that point. (straight from: tune2fs.man.txt)
It is strongly recommended that either -c (mount-count-dependent) or -i (time-dependent) checking be enabled to force periodic full e2fsck(8) checking of the filesystem.
Failure to do so may lead to filesystem corruption (due to bad disks, cables, memory, or kernel bugs) going unnoticed, ultimately resulting in data loss or corruption.
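for example (a sketch against a throwaway image file so no real disk is touched; on a real system the target would be a device such as /dev/sda1):

```shell
# create a small scratch ext4 image just for demonstration
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1M count=8 status=none
mkfs.ext4 -F -q /tmp/fsck-demo.img

# enable both styles of periodic checking:
# -c 10  -> full e2fsck after every 10 mounts
# -i 30d -> full e2fsck at the latest after 30 days
tune2fs -c 10 -i 30d /tmp/fsck-demo.img

# verify the new settings
tune2fs -l /tmp/fsck-demo.img | egrep -i "mount count|check interval"
```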
if you use something other than ext3/ext4 (e.g. XFS), see the section further below.
# first of all get a good overview of one's harddisk layout, this alias can help
alias harddisks='lsblk -o NAME,MAJ:MIN,RM,SIZE,RO,FSTYPE,MOUNTPOINT,UUID'

harddisks
# sample output
NAME                          MAJ:MIN RM   SIZE RO FSTYPE      MOUNTPOINT UUID
sda                             8:0    0 232.9G  0
├─sda1                          8:1    0   243M  0 ext2        /boot      3b92560a-e8a5-4530-b2cc-2bd712f891e0
├─sda2                          8:2    0     1K  0
└─sda5                          8:5    0 232.7G  0 crypto_LUKS            479d6f50-a072-4929-9b8d-c4bcf5cdb027
  └─sda5_crypt                254:0    0 232.7G  0 LVM2_member            xV6KqW-Yti8-DyuT-OGH0-BG4m-aM1E-bjGQWF
    ├─DebianLaptop--vg-root   254:1    0   225G  0 ext4        /          67e8a185-6e1d-4dcf-89cf-d4ba28c1886a
    └─DebianLaptop--vg-swap_1 254:2    0   7.7G  0 swap        [SWAP]     c9617063-e9a0-42c2-bff9-6e1a4bb4d083

fdisk -l /dev/sda
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf3f75746

Device     Boot  Start       End   Sectors   Size Id Type
/dev/sda1  *      2048    499711    497664   243M 83 Linux
/dev/sda2       501758 488396799 487895042 232.7G  5 Extended
/dev/sda5       501760 488396799 487895040 232.7G 83 Linux

# list all volume groups
vgdisplay -s
  "DebianLaptop-vg" 232.64 GiB [232.64 GiB used / 0 free]

# list all logical volumes
lvdisplay
  --- Logical volume ---
  LV Path                /dev/DebianLaptop-vg/root
  LV Name                root
  VG Name                DebianLaptop-vg
  LV UUID                wmc9qF-dNF4-osAb-YnzF-kCFL-BOe1-axVEMJ
  LV Write Access        read/write
  LV Creation host, time DebianLaptop, 2019-08-01 19:45:22 +0200
  LV Status              available
  # open                 1
  LV Size                224.95 GiB
  Current LE             57586
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Path                /dev/DebianLaptop-vg/swap_1
  LV Name                swap_1
  VG Name                DebianLaptop-vg
  LV UUID                86WSR6-AtOC-0A45-fN7G-Ucpp-bZtf-HVB5ka
  LV Write Access        read/write
  LV Creation host, time DebianLaptop, 2019-08-01 19:45:22 +0200
  LV Status              available
  # open                 2
  LV Size                7.70 GiB
  Current LE             1970
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2

# so one got: one harddisk
# with 3 partitions
# lvm2 and crypto_LUKS are used
# /dev/sda1 is the ext2 boot partition (would not hurt to schedule fsck for this too)
# /dev/sda2 is just the Extended container that holds sda5
# /dev/sda5 is an encrypted volume holding the volume group (DebianLaptop-vg) with two logical volumes (root and swap_1)

# note: the touch /forcefsck method seems to no longer work with ext4(?)
# (on reboot, the file /forcefsck is deleted
# but tune2fs reports no fsck was done?)
tune2fs -l /dev/DebianLaptop-vg/root | egrep -i "mount count|Check interval|Last|Next"
Last mounted on:          /
Last mount time:          Mon Sep  9 08:16:34 2019
Last write time:          Mon Sep  9 08:16:34 2019
Mount count:              122
Maximum mount count:      -1   <- automatic fsck is disabled
Last checked:             Thu Aug  1 19:45:32 2019
Check interval:           0 ()

# let's set it up to check on every reboot
tune2fs -C 2 -c 1 /dev/DebianLaptop-vg/root
tune2fs 1.43.4 (31-Jan-2017)
Setting maximal mount count to 1
Setting current mount count to 2

# reboot and test
shutdown -r now

# what one should see on the screen is the message:
# "DebianLaptop--vg-root has been mounted 1 times without being checked, check forced."
# followed by a progress bar...

# now the output of tune2fs looks much different
# and confirms a fsck was made on this ext4 partition on reboot
tune2fs -l /dev/DebianLaptop-vg/root | egrep -i "mount count|Check interval|Last|Next"
Last mounted on:          /
Last mount time:          Mon Sep  9 10:08:03 2019
Last write time:          Mon Sep  9 10:08:00 2019
Mount count:              1
Maximum mount count:      1
Last checked:             Mon Sep  9 10:08:00 2019
Check interval:           0 (<none>)

# if one is not using LVM2:
tune2fs -C 2 -c 1 /dev/sda1; # check filesystem on every boot
tune2fs -c 10 -i 30 /dev/sda1; # check sda1 every 10 mounts or after 30 days
you can check that a filesystem check was performed with:
tune2fs -l /dev/sda1 | egrep -i "mount count|Check interval|Last|Next"
Last mounted on:          /
Last mount time:          Tue Jul  4 10:27:35 2017
Last write time:          Tue Jul  4 10:06:15 2017
Mount count:              19
Maximum mount count:      10
Last checked:             Tue Jun 27 11:05:21 2017
Check interval:           2592000 (1 month)
Next check after:         Thu Jul 27 11:05:21 2017
this does not seem to reliably force a filesystem check….
force filesystem check
this can only be done on unmounted partitions.
so you would have to boot from a separate partition or medium in order to unmount (umount) the / root partition.
this is not even possible in runlevel 1 (init 1 – maintenance mode), because the root filesystem stays mounted there.
fsck -y -v -f /dev/sda1
e2fsck 1.43.4 (31-Jan-2017)
/dev/sda1 is mounted.
e2fsck: Cannot continue, aborting.
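on systemd-based distros, one workaround is to request the check for the next boot via kernel parameters instead (these are the documented systemd-fsck options; the GRUB key bindings mentioned are the usual defaults):

```shell
# at the GRUB menu: press 'e' on the boot entry, append to the line
# starting with "linux", then boot once with Ctrl+x:
#
#   fsck.mode=force fsck.repair=yes
#
# fsck.mode=force -> systemd-fsck checks the filesystem even if it is marked clean
# fsck.repair=yes -> repair problems automatically instead of dropping to a shell
```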
what about xfs?
it seems one has to boot from a live distro (.iso or USB image).
If you have an Oracle Linux Premier Support account and encounter a problem mounting an XFS file system, send a copy of the
file to Oracle Support and wait for advice.
If you cannot mount an XFS file system, you can use the xfs_repair -n command to check its consistency. Usually, you would only run this command on the device file of an unmounted file system that you believe has a problem. The xfs_repair -n command displays output to indicate changes that would be made to the file system in the case where it would need to complete a repair operation, but will not modify the file system directly.
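a sketch of such a dry run (/dev/sdb1 is a placeholder for your actual XFS partition):

```shell
# read-only consistency check of an unmounted XFS filesystem
umount /dev/sdb1          # the filesystem must not be mounted
xfs_repair -n /dev/sdb1   # -n = no modify mode, only report problems
echo $?                   # exit status 0 = clean, 1 = corruption was detected
```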
If you can mount the file system and you do not have a suitable backup, you can use xfsdump to attempt to back up the existing file system data. However, the command might fail if the file system’s metadata has become too corrupted.
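a sketch of such a rescue backup (all paths and labels here are placeholders; -l 0 requests a full, level-0 dump):

```shell
# mount the suspect filesystem, then dump it to a file on a healthy disk
mount /dev/sdb1 /mnt/suspect
xfsdump -l 0 -L "rescue-dump" -M "media0" \
        -f /backup/suspect-fs.xfsdump /mnt/suspect
```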
You can use the xfs_repair command to attempt to repair an XFS file system specified by its device file. The command replays the journal log to fix any inconsistencies that might have resulted from the file system not being cleanly unmounted. Unless the file system has an inconsistency, it is usually not necessary to use the command, as the journal is replayed every time that you mount an XFS file system.
# xfs_repair <device>
If the journal log has become corrupted, you can reset the log by specifying the -L option to xfs_repair.
Resetting the log can leave the file system in an inconsistent state, resulting in data loss and data corruption. Unless you are experienced in debugging and repairing XFS file systems using xfs_db, it is recommended that you instead recreate the file system and restore its contents from a backup.
If you cannot mount the file system or you do not have a suitable backup, running xfs_repair is the only viable option unless you are experienced in using xfs_db.
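the typical repair sequence then looks like this (again, /dev/sdb1 is a placeholder):

```shell
umount /dev/sdb1       # xfs_repair refuses to run on mounted filesystems
xfs_repair /dev/sdb1   # replays the log, then fixes any inconsistencies

# ONLY if xfs_repair aborts because the log itself is corrupt, and you
# accept the possible data loss described above:
# xfs_repair -L /dev/sdb1   # zeroes (resets) the log before repairing
```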
xfs_db provides an internal command set that allows you to debug and repair an XFS file system manually. The commands allow you to perform scans on the file system, and to navigate and display its data structures. If you specify the -x option to enable expert mode, you can modify the data structures.
# xfs_db [-x] <device>
For more information, see the manual pages and the help command within xfs_db.
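a harmless first xfs_db session (read-only thanks to -r, so nothing can be modified; /dev/sdb1 is a placeholder):

```shell
# print the fields of the primary superblock of an XFS filesystem
xfs_db -r -c "sb 0" -c "print" /dev/sdb1

# or interactively:
# xfs_db -r /dev/sdb1
# xfs_db> help
```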