
Pushing testing laboratory performance limits by benchmarking LAVA - Part 1


Paweł Wieczorek
August 10, 2023

Collabora's main testing laboratory has grown to automate testing on over 150 devices of about 30 different types. The lab receives job submissions from several CI systems, e.g. KernelCI, MesaCI, and Apertis QA.

Automating validation work is certainly convenient for developers, but how much further can the underlying software, LAVA, scale in terms of supported devices and submission systems? It is crucial to predict possible bottlenecks well before they appear. Once they are identified, what can be done to push these boundaries even further, and how can we track the improvements?

Previous efforts

Although they are not included in any automated system, upstream benchmarks are available that track the CPU and memory activity of several LAVA subsystems. They were designed for an older version of LAVA, though, so they will need an update before they can be included in the current CI pipelines. You can learn more about them in the "Testing a large testing software" talk (starting at 20:10).
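As a rough illustration of what such benchmarks measure, here is a minimal sketch that samples the CPU and memory usage of a single process with the psutil library. The PID, duration, and interval are arbitrary assumptions for the example; the upstream benchmarks are more elaborate than this.

```python
import sys
import time

import psutil  # third-party dependency: pip install psutil


def sample_process(pid: int, duration: float = 10.0, interval: float = 0.5):
    """Periodically sample CPU usage and resident memory of one process."""
    proc = psutil.Process(pid)
    samples = []
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        cpu = proc.cpu_percent(interval=interval)  # % CPU averaged over `interval`
        rss = proc.memory_info().rss               # resident set size, in bytes
        samples.append((cpu, rss))
    return samples


if __name__ == "__main__":
    # Pass the PID of e.g. a lava-server worker process on the command line.
    for cpu, rss in sample_process(int(sys.argv[1])):
        print(f"cpu={cpu:5.1f}%  rss={rss / 1024 / 1024:7.1f} MiB")
```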

In Collabora's lab, a very basic setup for testing LAVA performance was initially in use: a docker-compose instance with an imported production database dump, the Apache benchmark tool (ab) for mocking traffic to the server, and pgAdmin for easier PostgreSQL analysis.
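For a self-contained picture of the traffic-mocking part, the snippet below does in Python what a simple ab run does: it hammers one endpoint and reports latency statistics. The URL and the token are placeholders, not values from our lab.

```python
import statistics
import time

import requests  # third-party dependency: pip install requests

# Placeholders: point these at your own instance and API token.
URL = "https://lava.example.com/api/v0.2/jobs/"
HEADERS = {"Authorization": "Token <your-token>"}


def hammer(url: str, n: int = 100) -> list[float]:
    """Issue n sequential GET requests, similar in spirit to
    `ab -n 100 <url>`, and record each latency in seconds."""
    latencies = []
    with requests.Session() as session:
        for _ in range(n):
            start = time.perf_counter()
            session.get(url, headers=HEADERS, timeout=30).raise_for_status()
            latencies.append(time.perf_counter() - start)
    return latencies


if __name__ == "__main__":
    results = hammer(URL)
    print(f"mean={statistics.mean(results) * 1000:.1f} ms  "
          f"p95={statistics.quantiles(results, n=20)[18] * 1000:.1f} ms")
```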

[Figure: PostgreSQL analysis environment]

Running benchmarks in this setup helped pinpoint performance issues and patch them: from optimizing branching logic to introducing lazy pagination in the LAVA REST API.
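To give an idea of the kind of change involved, here is a minimal sketch of lazy pagination for a Django REST Framework API: it skips the COUNT(*) query that a classic limit/offset paginator runs on every request, and instead fetches one extra row to learn whether a next page exists. This illustrates the general idea under those assumptions; it is not the actual LAVA patch.

```python
from rest_framework.pagination import LimitOffsetPagination
from rest_framework.response import Response


class LazyLimitOffsetPagination(LimitOffsetPagination):
    """Limit/offset pagination without a COUNT(*) over the whole table."""

    def paginate_queryset(self, queryset, request, view=None):
        self.limit = self.get_limit(request)
        if self.limit is None:
            return None
        self.offset = self.get_offset(request)
        self.request = request
        # Fetching limit + 1 rows is enough to know whether a next page
        # exists -- much cheaper than counting millions of job records.
        page = list(queryset[self.offset:self.offset + self.limit + 1])
        self.has_next = len(page) > self.limit
        return page[:self.limit]

    def get_paginated_response(self, data):
        return Response({"has_next": self.has_next, "results": data})
```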

With further performance-focused development in mind, it was necessary to reduce the possibility of regressions, preferably by running per-patch benchmarks. This environment, however, was not very comfortable for prolonged use with multiple patches to verify, so some changes to the CI pipeline seemed necessary. The main components remained the same: loading test data onto the server, generating traffic, and a way to compare results between test runs.
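The comparison step can be as small as a script along these lines, which compares mean latencies from two benchmark runs and fails when the candidate is slower. The JSON results format and the 5% threshold are assumptions made for the example.

```python
import json
import statistics
import sys

THRESHOLD = 1.05  # flag regressions worse than 5%; an arbitrary choice


def mean_latency(path: str) -> float:
    """Each results file is assumed to hold a JSON list of latencies in seconds."""
    with open(path) as f:
        return statistics.mean(json.load(f))


if __name__ == "__main__":
    baseline, candidate = mean_latency(sys.argv[1]), mean_latency(sys.argv[2])
    ratio = candidate / baseline
    print(f"baseline={baseline * 1000:.1f} ms  candidate={candidate * 1000:.1f} ms  "
          f"({ratio - 1:+.1%} vs baseline)")
    # A non-zero exit status lets a CI job fail on a performance regression.
    sys.exit(1 if ratio > THRESHOLD else 0)
```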

Downstream CI pipeline

As benchmarks are not part of any project's CI yet, enabling them in the Collabora downstream CI pipeline could prove useful. This CI pipeline creates an environment very similar to the one described above.

Although this pipeline could easily be extended with a benchmark step, previous efforts showed that it would not solve the main concern: guarding upcoming feature patches against performance regressions. It would still be helpful to ensure that features developed by Collabora continuously improve LAVA's performance, but let's aim a bit higher.

Upstream CI pipeline

The main difference between the downstream and upstream CI pipelines is that the upstream one doesn't spin up an actual LAVA instance. The whole testing procedure is performed using separately built CI images, which are later used as the environment for running in-tree tests, analyzing source code, and building packages and documentation.

[Figure: Jobs using CI images]

If someone needs to provide a specifically crafted environment (e.g. a pre-generated database to create a specific load on the server), an additional ci-image is needed. This way the whole testing procedure can be much quicker, as it excludes the database generation time (and even the import time of a pre-generated database!).
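To make the idea concrete, pre-generating such a database could look roughly like the sketch below, which fills a PostgreSQL instance with synthetic job records before it is dumped into the CI image. The connection settings and the fake_jobs table are purely hypothetical; LAVA's real schema comes from its Django models.

```python
import random

import psycopg2  # third-party dependency: pip install psycopg2-binary

# Placeholders for the throwaway database that gets baked into the CI image.
conn = psycopg2.connect(dbname="lavaserver", user="lavaserver",
                        password="secret", host="localhost")

with conn, conn.cursor() as cur:
    # Hypothetical table standing in for LAVA's job tables.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fake_jobs (
            id serial PRIMARY KEY,
            state text NOT NULL,
            device_type text NOT NULL,
            definition text NOT NULL
        )
    """)
    rows = [
        (random.choice(["SUBMITTED", "RUNNING", "FINISHED"]),
         f"device-type-{random.randrange(30)}",
         "device_type: qemu\njob_name: benchmark filler\n")
        for _ in range(100_000)
    ]
    cur.executemany(
        "INSERT INTO fake_jobs (state, device_type, definition) VALUES (%s, %s, %s)",
        rows,
    )
conn.close()
# The populated database can then be exported with pg_dump and imported while
# building the CI image, so later pipeline runs skip generation entirely.
```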

Going forward

In the next parts, the following topics will be discussed:

  • How to plug into upstream CI 
  • How to provide a server load that would create a 1:1 database mock (that behaves just like a production one)
