August 07, 2020
Software testing is fundamental to ensure the quality and longevity of a project, as it helps avoid regressions. But a test suite that is hard to use may scare contributors away. In Weston, we have a test suite that is easy to understand and contribute to, which is great. It runs automatically in GitLab CI, but there are many cases where users may want to test changes locally before pushing commits. Until now, this was a very easy task: `ninja test` was all the user had to run.
Recently we have added support for DRM-backend tests. But if users simply run `ninja test`, the DRM-backend tests will get skipped. For most use cases this is not a big deal, but if the changes touch Weston's DRM-backend, it is a good idea to exercise it. In this blog post, we are going to learn how to run Weston tests locally to validate changes in the DRM-backend. In other words, how to run the test suite without skipping the DRM-backend tests.
Note: At the time of this writing, there is only a smoke test for the DRM-backend. But the plan is to add useful tests for it in the future.
Before going into details about the setup, let's explain some key concepts. You can skip this section if you are already familiar with Weston, VKMS and virtme.
Weston is the reference implementation of a Wayland compositor. It is a minimal and fast compositor written in C. Wayland is the communication protocol used by a Wayland compositor and its clients. It is a simpler and easier-to-maintain replacement for X. There are many reasons why X is being replaced, but we will not explore them in this blog post.
In the Linux kernel we have the Direct Rendering Manager (DRM) subsystem. It is part of the graphics stack and is responsible for managing the graphics cards and providing services to the graphics drivers. Some of its main functions are memory management, job execution (there may be multiple applications making use of the GPUs) and also handling the displays that are connected to the graphics cards.
To handle the connected displays, the DRM subsystem has to gather information about their modes and then allow userspace to select which mode to use. A mode is basically the screen resolution, color depth and refresh rate at which the display is going to operate. Userspace can also create a framebuffer from which content is scanned out to a display. This is known as Kernel Mode Setting (KMS).
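As a rough illustration of what a mode carries, the pixel clock follows directly from the full timings (including blanking) and the refresh rate. The numbers below are the standard CEA timing for 1920x1080@60; this is only a sketch to make the concept concrete, not something Weston or the kernel asks you to compute:

```shell
# Illustration only: a mode's pixel clock is htotal * vtotal * vrefresh.
# 2200x1125 are the full timings (visible area plus blanking) of the
# standard 1920x1080@60 CEA mode.
htotal=2200
vtotal=1125
vrefresh=60
pixel_clock_khz=$(( htotal * vtotal * vrefresh / 1000 ))
echo "pixel clock: ${pixel_clock_khz} kHz"   # 148500 kHz, i.e. 148.5 MHz
```

This is the same 148.5 MHz value you will find in the kernel's mode tables for 1080p60.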
Now let's suppose that Weston wants to create some tests to ensure that it is doing its part to make the kernel perform KMS correctly. A display is obviously needed, as there are no modes available if no display is connected to the graphics card. So, knowing that GitLab CI machines are headless (no displays connected), how can we run these types of tests on the CI?
Virtual KMS (VKMS) joins the party. In short, VKMS is a KMS driver that pretends a display is connected to the machine, providing modes to userspace and also performing the other tasks that a KMS driver should. This way, userspace can make use of KMS on headless machines. As it is not real hardware, it also allows us to expand the testing possibilities.
Note: VKMS is a recent project, so there may be features that one would expect from a KMS driver that are not implemented yet. In other words, it does not substitute for a real graphics card yet. So when you modify a feature in Weston's DRM-backend, you should test it using your real graphics card as well.
Before explaining what virtme is, we have to explain what QEMU is, as virtme is based on it. QEMU is software capable of emulating machines. It dynamically translates instructions from the guest to the host.
The guest is the operating system that runs in the virtual machine (as an application of the host operating system). Its instructions are translated so they can be understood by the host.
The host is the operating system that runs the virtual machine. It is not necessarily a system running on real hardware, as we can have layers of virtualization. But in the end, real hardware is required to run the instructions.
In our case, both the guest and host systems are x86-based. So you may wonder: why is QEMU useful in this situation? Why do we need virtualization at all?
In my daily routine I have to modify and compile the kernel. It would be awful to reboot my machine every time just to do some quick tests. Also, if something went wrong, I would be locked out of my workstation. These are just a few examples of the advantages of using a virtual machine, but there are others.
virtme is a set of tools to run a virtualized Linux kernel. It can be seen as a wrapper around QEMU. We could use QEMU directly, but virtme is easier to set up and it can use the host's rootfs, so we do not have to maintain another disk image, install/update packages, etc. Another advantage is that we do not need to create a shared folder between host and guest, as they use the same rootfs.
Now let's explain how to locally run Weston tests without skipping DRM-backend tests. We are going to use virtme and VKMS.
First of all, we need a Linux kernel image. In my specific case, sometimes I have to modify VKMS as well, so I'm going to show you how I do it. This is also what happens in Weston's GitLab CI.
virtme is a flexible tool. If you don't want to compile the kernel, it has a mechanism to use your host system's kernel image. I recommend running `virtme-run --help` in case you want to explore other possibilities.
So we are going to clone Linus Torvalds' Git tree and change our working directory to it:
$(host) cd /home/<your-user>
$(host) git clone --depth=1 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$(host) cd linux
Now we are going to compile the kernel. But before running `make`, let's set up a few things. First, we want to modify the `.config` file to create a kernel image suitable for our virtual machine. To enable virtualization for x86_64 machines, run the following; it will automatically set the needed parameters in the `.config` file for you:
$(host) make x86_64_defconfig
$(host) make kvm_guest.config
I usually have two cards in my setup: VKMS and QEMU's graphics card. This way I can run tests on both. VKMS is the way to go in CI, but we cannot actually display images with it. So I use QEMU's graphics card when I want to see the content. For instance, let's say we have a Weston DRM-backend test that shows a red surface on the screen. When I want to make sure that the surface is actually being shown, I run the tests with this card. Otherwise, I stick to VKMS.
In order to enable VKMS and QEMU's graphics card, add the following to `.config` (if not already set):
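A plausible fragment would be the one below. Note my assumption here: QEMU's standard VGA is driven by the `bochs` DRM driver, so pick the driver matching whichever QEMU display device you actually use:

```
CONFIG_DRM=y
# Virtual KMS, the headless-friendly driver used in CI
CONFIG_DRM_VKMS=y
# Driver for QEMU's standard VGA (assumption: -vga std / bochs-display)
CONFIG_DRM_BOCHS=y
```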
We also want to install `ccache` (compiler cache). It is useful to speed up build times for the kernel. The first time you compile, it will be slow anyway, but subsequent compilations will be way faster. The command to install it depends on your distribution, so I'm omitting it. Now let's compile the kernel using 4 cores and ccache:
$(host) make CC="ccache gcc" -j4
We have just created a Linux kernel image, stored at `arch/x86/boot/bzImage`.
virtme has some dependencies: QEMU, Python 3 and, depending on your setup, busybox. In our case busybox is not needed and, as Python 3 is probably already installed on your machine, you only have to install QEMU.
In order to install virtme, download its source code, change the working directory to it and run the install command:
$(host) cd /home/<your-user>
$(host) git clone https://github.com/ezequielgarcia/virtme
$(host) cd virtme
$(host) sudo python setup.py install
Note: In this fork we have a patch that adds a `--script-dir` command line option. With it, we can run the scripts in a given folder when virtme starts. Upstream also has an option to achieve the same result: `--script-exec`. The problem is that, if we use it, the program calls a function which is not complete yet. I tried to find a way to run it anyway, but it was becoming too hacky, so I decided to simply use this fork.
Now we can boot the guest using the kernel image we have built:
$(host) virtme-run --kimg /home/<your-user>/linux/arch/x86/boot/bzImage --rw --graphics --qemu-opts -m 4G -smp 2
`--kimg` points to the kernel image that we have created.
`--rw` makes virtme mount the host's rootfs with read and write permissions.
`--graphics` is necessary if we want to use QEMU's graphics card.
`--qemu-opts` lets us pass any option available in QEMU. Here we set `-m 4G` to give it 4GB of RAM and `-smp 2` to let it run on 2 cores.
If everything went well, you should now be in the guest system. Run the following command and you should see the DRM device nodes:
$(guest) ls /dev/dri
You can also run the following commands to retrieve information about the devices. This is useful to find out which node corresponds to each device (VKMS or QEMU's graphics card):
$(guest) udevadm info /dev/dri/card0
$(guest) udevadm info /dev/dri/card1
ATTENTION: Host and guest share the same rootfs and we have started virtme with `--rw`, so be careful not to mess with the host's files.
Now we are in the guest system. First of all, let's create the runtime directory and change its permissions. Then we set some environment variables that are necessary in order to run the tests. Change `card0` to the one you want to use to run DRM-backend tests:
$(guest) mkdir -p /tmp/tests
$(guest) chmod 0700 /tmp/tests
$(guest) export XDG_RUNTIME_DIR=/tmp/tests
$(guest) export WESTON_TEST_SUITE_DRM_DEVICE=card0
Now we are ready to run Weston tests. Change the working directory to Weston's directory. Finally, run the tests:
$(guest) cd /home/<your-user>/weston
$(guest) meson build
$(guest) cd build
$(guest) ninja test
You can see the detailed log in `/home/<your-user>/weston/build/meson-logs/testlog.txt`. Also, if a test fails, Weston will print its log to stdout. In case we want to change the card on which the DRM-backend tests run, we can simply run `export WESTON_TEST_SUITE_DRM_DEVICE=cardN` and then `ninja test` again.
Note: To leave virtme, press Ctrl+A followed by X.
Instead of running virtme and then typing a couple of commands inside it, there is a better approach: we can point virtme to a folder with scripts that it should run at startup. That is what we do in GitLab CI, and it is also useful locally. The folder that contains the scripts is `/home/<your-user>/weston/.gitlab-ci/virtme-scripts`, so we run the tests like this:
$(host) cd /home/<your-user>/weston
$(host) meson build
$(host) cd build
$(host) virtme-run --rw --pwd --kimg /home/<your-user>/linux/arch/x86/boot/bzImage --script-dir ../.gitlab-ci/virtme-scripts --qemu-opts -m 4G -smp 2
Note: The script `/home/<your-user>/weston/.gitlab-ci/virtme-scripts/run-weston-tests.sh` is set to use `card0`. As we are not starting virtme with `--graphics`, the only graphics card available is VKMS, so we can be sure it is `card0`. In order to make the script work with QEMU's graphics card, edit the script to use the correct card number and start virtme with `--graphics`. If both VKMS and QEMU's graphics card are enabled, they are not guaranteed to be `card0` and `card1` respectively, as that depends on DRM driver initialization order, which is usually not deterministic. Fixing this is out of the scope of this blog post, so users are responsible for selecting the correct card number.
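Since the numbering is not stable, one workaround is to resolve the card node from the driver name instead of hardcoding it. The sketch below assumes the usual sysfs layout, where `/sys/class/drm/cardN/device/driver` is a symlink named after the bound driver; `find_vkms_card` is a hypothetical helper of mine, not part of Weston or virtme:

```shell
# Hypothetical helper: print the name (cardN) of the DRM node whose
# device is bound to the vkms driver. Assumes the usual sysfs layout
# /sys/class/drm/cardN/device/driver.
find_vkms_card() {
    sysfs="${1:-/sys/class/drm}"
    for card in "$sysfs"/card*; do
        # Skip nodes without a bound driver symlink.
        [ -h "$card/device/driver" ] || continue
        drv=$(basename "$(readlink "$card/device/driver")")
        if [ "$drv" = "vkms" ]; then
            basename "$card"
            return 0
        fi
    done
    return 1
}
```

With that, `export WESTON_TEST_SUITE_DRM_DEVICE=$(find_vkms_card)` selects VKMS regardless of initialization order.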
If you want to learn more, check the patch that introduced the use of the script in Weston's GitLab CI.
VKMS is a recent but impactful project, as it plays an important role and can help compositor developers avoid regressions. Also, we have seen that virtme is a flexible tool for scripting and how it helped us run the DRM-backend tests in Weston's GitLab CI. And that's all for today! I hope you can use this knowledge to enhance the test suite of your own project.
If you want to learn more about virtme, QEMU and virtualization, I recommend you the following readings: