
Improving the reliability of file system monitoring tools


Gabriel Krisman Bertazi
March 14, 2022

A fact of life, one that almost every computer user has to face at some point, is that file systems fail. Whether for an unknown reason, usually explained to managers as alpha particles flying around the data center, or for a more mundane (and far more likely) cause, a software bug, users don't enjoy losing their data. This is why file system developers put a huge effort not only into testing their code, but also into developing tools to recover volumes when they fail. In fact, all persistent file systems deployed in production are accompanied by check and repair tools, usually exposed through the fsck front-end. Some even go a step further with online repair tools.

fsck, the file system check and repair tool, is usually run by an administrator when they suspect the volume is corrupted, sometimes following a failed mount command. It is also run at boot time every few boots on almost every distro, through the systemd-fsck service or equivalent logic.

Indeed, fsck is quite effective at recovering a wide range of file systems from errors, but it sometimes requires taking the file system offline and either walking through the disk to check for errors, or poking the superblock for an error status. It is not the right tool to monitor the health of a file system in real time, raising alarms and sirens when a problem is detected.

This kind of real-time monitoring is quite important for ensuring data consistency and availability in data centers. In fact, it is essential that administrators or recovery daemons be notified as soon as an error occurs, so that they can start emergency recovery procedures, like kickstarting a backup, rebuilding a RAID array, replacing a disk, or maybe just running fsck. And once one needs to watch over a large fleet, as in a cloud provider with hundreds of machines, a reliable monitoring tool becomes essential.

The problem is that Linux didn't really expose a good interface to notify applications when a file system error happened. There wasn't much beyond the error code returned to the application that issued the failed operation, which says little about the cause of the error, nor is it useful to a health monitoring application. Therefore, existing monitoring tools either watched the kernel log, which is a risky business since old messages can be overwritten as the log wraps, or queried file-system-specific sysfs files that record the last error. Both approaches are polling mechanisms, subject to missing messages and thus losing notifications.

This is why we worked on a new mechanism for closely monitoring volumes and notifying recovery tools and sysadmins in real time when an error occurs. The feature, merged in kernel 5.16, won't prevent failures from happening, but it helps reduce their impact by guaranteeing that any listener application receives the message. A monitoring application can then reliably report it to system administrators and forward the detailed error information to whoever is unlucky enough to be tasked with fixing it.

The new mechanism leverages the fanotify interface by adding a new event type, FAN_FS_ERROR, which is issued by the file system code itself whenever an error is detected. By leveraging fanotify, the event is tracked on a dedicated queue for each listener, so it won't get overwritten by further errors. We also made sure that there is always enough memory to report it, even under low-memory conditions.
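As a rough sketch of how a listener might be set up (error handling abbreviated, the watched mount point taken from the command line, and kernel headers from 5.16 or later assumed):

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/fanotify.h>

int main(int argc, char **argv)
{
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return EXIT_FAILURE;
	}

	/* FAN_FS_ERROR events identify objects by file handle, so the
	 * notification group must be created with FAN_REPORT_FID. */
	fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FID, O_RDONLY);
	if (fd < 0) {
		perror("fanotify_init");
		return EXIT_FAILURE;
	}

	/* Errors concern the volume as a whole, so FAN_FS_ERROR is
	 * requested with a filesystem-wide mark. */
	if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM,
			  FAN_FS_ERROR, AT_FDCWD, argv[1]) < 0) {
		perror("fanotify_mark");
		return EXIT_FAILURE;
	}

	printf("Watching %s for file system errors...\n", argv[1]);
	/* read(fd, ...) would now block until an error is reported;
	 * see the parsing sketch below. */
	pause();
	return EXIT_SUCCESS;
}
```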

The kernel documentation explains how to receive and interpret a FAN_FS_ERROR event. There is also an example tracer implementation in the kernel tree.
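For illustration, here is a sketch of the corresponding read loop under the same assumptions. The handle_events() helper name is ours, not part of any API, and real code would also decode the accompanying FAN_EVENT_INFO_TYPE_FID record to identify the affected object:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/fanotify.h>

/* Drain pending events from the fanotify group descriptor 'fd' and
 * print the error record attached to each FAN_FS_ERROR event. */
static void handle_events(int fd)
{
	char buf[4096];
	ssize_t len;

	while ((len = read(fd, buf, sizeof(buf))) > 0) {
		struct fanotify_event_metadata *event;

		for (event = (struct fanotify_event_metadata *)buf;
		     FAN_EVENT_OK(event, len);
		     event = FAN_EVENT_NEXT(event, len)) {
			char *end = (char *)event + event->event_len;
			struct fanotify_event_info_header *info =
				(struct fanotify_event_info_header *)
					((char *)event + event->metadata_len);

			if (!(event->mask & FAN_FS_ERROR))
				continue;

			/* Variable-length info records (error, fid)
			 * follow the fixed-size metadata header. */
			for (; (char *)info < end;
			     info = (struct fanotify_event_info_header *)
					((char *)info + info->len)) {
				struct fanotify_event_info_error *err;

				if (info->info_type != FAN_EVENT_INFO_TYPE_ERROR)
					continue;

				err = (struct fanotify_event_info_error *)info;
				printf("file system error: errno=%d (%u occurrence(s))\n",
				       err->error, err->error_count);
			}
		}
	}
}
```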

The feature, which is already in the upstream Linux kernel, will soon pop up in distribution kernels and be taken up by distros around the globe. Soon enough, we will have better file system error monitoring tools in data centers, and also on our Linux desktops.
