
Improving the reliability of file system monitoring tools

Gabriel Krisman Bertazi
March 14, 2022

A fact of life, one that almost every computer user has to face at some point, is that file systems fail. Whether the cause is unknown, usually explained to managers as alpha particles flying around the data center, or something more mundane (and far more likely) such as a software bug, users don't enjoy losing their data. This is why file system developers put a huge effort not only into testing their code, but also into developing tools to recover volumes when they fail. In fact, every persistent file system deployed in production is accompanied by check and repair tools, usually exposed through the fsck front-end. Some even go a step further with online repair tools.

fsck, the file system check and repair tool, is usually run by an administrator when they suspect the volume is corrupted, sometimes following a failed mount command. It is also run every few boots in almost every distro, through the systemd-fsck service or equivalent logic.

Indeed, fsck is quite efficient at recovering from errors on several file systems, but it sometimes requires taking the file system offline and either walking the whole disk to check for errors or poking the superblock for an error status. It is not the right tool to monitor the health of a file system in real time, raising alarms and sirens when a problem is detected.

This kind of real-time monitoring is quite important for ensuring data consistency and availability in data centers. In fact, it is essential that administrators or recovery daemons be notified as soon as an error occurs, so that they can start emergency recovery procedures, like kickstarting a backup, rebuilding a RAID array, replacing a disk, or maybe just running fsck. And once one needs to watch over a large fleet, as a cloud provider with hundreds of machines does, a reliable monitoring tool becomes indispensable.

The problem is that Linux didn't really expose a good interface to notify applications when a file system error happened. There wasn't much beyond the error code returned to the application that issued the failed operation, which says little about the cause of the error and is of no use to a health monitoring application. Therefore, existing monitoring tools either watched the kernel log, which is a risky business since older messages can be overwritten when the log buffer wraps, or polled file-system-specific sysfs files that record the last error. Both are polling mechanisms and can miss messages, causing notifications to be lost.

This is why we worked on a new mechanism for closely monitoring volumes and notifying recovery tools and sysadmins in real time that an error occurred. The feature, merged in kernel 5.16, won't prevent failures from happening, but it will help reduce the effects of such errors by guaranteeing that any listening application receives the message. A monitoring application can then reliably report it to system administrators and forward the detailed error information to whoever is unlucky enough to be tasked with fixing it.

The new mechanism leverages the fanotify interface by adding a new event type, FAN_FS_ERROR, which is issued by the file system code itself whenever an error is detected. By building on fanotify, the event is tracked on a dedicated event queue for the listener and won't get overwritten by subsequent errors. We also made sure that there is always enough memory to report it, even under low-memory conditions.
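
For illustration, here is a minimal sketch of the subscription side, assuming a hypothetical mount point of /mnt/data and recent kernel headers. FAN_FS_ERROR requires a notification-class group created with FAN_REPORT_FID and a mark covering the whole file system (FAN_MARK_FILESYSTEM):

    #include <err.h>
    #include <fcntl.h>
    #include <sys/fanotify.h>

    int main(void)
    {
        int fd;

        /* FAN_FS_ERROR requires a FAN_CLASS_NOTIF group that reports
         * file handles, hence FAN_REPORT_FID. */
        fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FID, O_RDONLY);
        if (fd < 0)
            err(1, "fanotify_init");

        /* Watch the entire file system backing /mnt/data (a
         * hypothetical mount point) for error events. */
        if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM,
                          FAN_FS_ERROR, AT_FDCWD, "/mnt/data"))
            err(1, "fanotify_mark");

        /* ... read(2) from fd and parse the events, as sketched below ... */
        return 0;
    }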

The kernel documentation explains how to receive and interpret a FAN_FS_ERROR event. There is also an example tracer implementation in the kernel tree.
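
As a rough sketch of the receiving side (loosely modeled on the sample tracer, with the hypothetical helper name handle_events), each event read from the fanotify file descriptor is a fanotify_event_metadata structure followed by information records; the record of type FAN_EVENT_INFO_TYPE_ERROR carries the error code and the number of errors observed since the last event was read:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/fanotify.h>

    /* Read pending events from 'fd' (the fanotify group created earlier)
     * and print any file system error records found. */
    static void handle_events(int fd)
    {
        char buf[4096];
        ssize_t len = read(fd, buf, sizeof(buf));
        struct fanotify_event_metadata *event;

        /* If read() failed, len is negative and the loop is skipped. */
        for (event = (struct fanotify_event_metadata *)buf;
             FAN_EVENT_OK(event, len);
             event = FAN_EVENT_NEXT(event, len)) {
            /* Information records follow the fixed-size metadata. */
            char *info = (char *)event + event->metadata_len;
            char *end = (char *)event + event->event_len;

            while (info < end) {
                struct fanotify_event_info_header *hdr =
                    (struct fanotify_event_info_header *)info;

                if (hdr->info_type == FAN_EVENT_INFO_TYPE_ERROR) {
                    struct fanotify_event_info_error *error =
                        (struct fanotify_event_info_error *)hdr;

                    printf("fs error %d, seen %u time(s)\n",
                           error->error, error->error_count);
                }
                info += hdr->len;
            }
        }
    }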

The feature, which is already in the upstream Linux kernel, will soon show up in distribution kernels around the globe. Soon enough, we will have better file system error monitoring tools in data centers, and also on our Linux desktops.
