
GStreamer CI support for embedded devices


Omar Akkila
June 11, 2018

GStreamer is a popular open-source pipeline-based multimedia framework that has been in development since 2001. That’s 17 years of constant development, triaging, bug fixes, feature additions, packaging, and testing. Since adopting a Jenkins-based Continuous Integration (CI) setup in August 2013, GStreamer and its dependencies have been built multiple times a day, with each commit. Prior to that, the multimedia project used a build bot hosted by Collabora and Fluendo. At the time of this writing, GStreamer is built for the Linux (Fedora & Debian), macOS, Windows, Android, and iOS platforms. Embedded devices are a very popular deployment target for GStreamer, yet they are not covered by the current CI setup. This has meant additional manpower, effort, and testing outside of the automated tests to validate every release of GStreamer on embedded boards. To rectify this, a goal was devised: integrate embedded devices into the CI.

Now, this meant more than just emulating embedded targets and building GStreamer for them. The desire is to test on physical boards with as much automation as possible. This is where the Linaro Automated Validation Architecture (LAVA) steps into play. LAVA is a continuous integration automation system, similar to Jenkins, oriented towards testing on physical and virtual hardware. Tests can range anywhere from simple boot testing to system-level testing. The plan is for the GStreamer CI to interface with LAVA and run the gst-validate test suite on devices.

Architecturally, LAVA operates through a master-worker relationship. The master houses the web interface, the database of devices, and the scheduler. The worker receives messages from the master and dispatches all operations and procedures to the Devices Under Test (DUT). At Collabora, we host a LAVA instance with a master and maintain a lab of physical devices connected to a LAVA worker in our Cambridge office. For the preliminary iteration of embedded support, the aim is to introduce a Raspberry Pi to the GStreamer CI, with Collabora’s infrastructure used as a playground for testing and research. The Raspberry Pi is both popular and, due to its design, presents the more complex use case of requiring special builds of GStreamer components. Conveniently, one of the devices integrated with our worker is a Raspberry Pi 2 Model B - hereafter referred to as ‘RPi’.

GStreamer CI already makes use of Jenkins’ pipeline capabilities for builds. A single master job triggers multiple downstream jobs, such as validation and builds for different architectures. Our embedded project can be broken down into different phases, which map to different stages in a pipeline. Visually, this looks like:

[Image: Continuous Integration (CI) pipeline]

In terms of jobs we have:

  1. Build GStreamer for the RPi and upload artifact

  2. Submit and run LAVA test job

  3. Wait for publishable results from LAVA

In the first job, GStreamer is cross-built for the RPi using its build system, Cerbero, running inside a Docker container for reproducibility. For this, we require a sysroot, a config file for Cerbero, and a Docker image. The sysroot was generated using Debos, a tool for creating Debian-based OS images. Through a simple YAML file, our sysroot contains all the minimal dependencies, tools, and setup required at build time. Next, the Cerbero config file lists the architecture and platform being targeted, the path to the sysroot and its toolchain, output directories, and the option of including variants.
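
As a rough sketch, a minimal Debos recipe for such a sysroot could look like the following; the package selection and output filename are illustrative, not the exact file used in this pipeline:

architecture: armhf

actions:
  # Bootstrap a minimal Raspbian-based root filesystem
  - action: debootstrap
    suite: stretch
    components:
      - main
    mirror: http://archive.raspbian.org/raspbian
    variant: minbase

  # Install build-time dependencies (an illustrative selection)
  - action: apt
    description: Install minimal build dependencies
    packages:
      - build-essential
      - libglib2.0-dev

  # Pack the result for use as a sysroot during the cross-build
  - action: pack
    file: raspbian-sysroot.tar.gz
    compression: gz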

Variants are what Cerbero uses to configure and customize what gets included in or excluded from the resulting build. An rpi variant was introduced to assist in modifying the build recipes of two particular GStreamer components: gst-plugins-base and gst-omx. As mentioned earlier, the RPi requires special configuration for GStreamer to operate properly. The device contains a Broadcom System on Chip (SoC) and Broadcom firmware that applications must interface with in order to use components like the GPU (VideoCore IV). This includes Broadcom implementations of APIs such as EGL and OpenGL ES. Both gst-plugins-base and gst-omx link against these libraries, so they have to be configured to use the Broadcom implementations if we are to have proper support.

Finally, Cerbero must be bootstrapped with the necessary build tools and dependencies. To save some time, we install them in our Docker image along with our sysroot and config file. With that, GStreamer can be cross-built for the RPi and the resulting build artifact uploaded to an image artifact store. As this is a pipeline, the Jenkins job finishes by triggering another Jenkins job to cover the latter parts.

The second job submits a job to LAVA and waits until results are received after LAVA runs gst-validate. When writing a test for LAVA, we often provide two YAML files: one describes the setup of the job itself, known as the job definition; the other describes the actual test that will be run on specified devices, known as the test definition. The job definition contains information on how long the job should run, timeouts, and the list of devices the job can run on, but most importantly, we are required to provide three actions: the deploy, boot, and test actions. If all you need is simple boot testing, the test action can be omitted. The deploy action specifies the method of deployment for the images that the device will use and from which it will boot. The boot action specifies the boot method, boot commands, and login information, if applicable. The test action provides the locations of the repositories containing the test definitions to be run. In addition, we can pass arguments as environment variables to our tests. Our job definition arranges the deployment, over TFTP, of images of the Raspberry Pi kernel, device tree blob, and a rootfs to boot using U-Boot and NFS. It looks something like this:

actions:
- deploy:
    timeout:
      minutes: 60
    to: tftp
    kernel:
      url: https://your.domain.com/zImage
      type: zimage
    modules:
      url: https://your.domain.com/modules.tar.xz
      compression: xz
    dtb:
      url: https://your.domain.com/bcm2709-rpi-2-b.dtb
    nfsrootfs:
      url: https://your.domain.com/rootfs.tar.gz
      compression: gz
    os: debian

- boot:
    method: u-boot
    commands: nfs
    prompts: [ 'root@rpi2b:~#', '# ' ]
    timeout:
      minutes: 30

- test:
    timeout:
      minutes: 180
    definitions:
      - repository: https://your.git.host/lava-test-definitions
        from: git
        path: gst-validate/gst-validate.yaml
        name: gst-validate
        params:
          CALLBACK_URL: {{callback_url}}

For Jenkins to submit the LAVA job definition, we use a tool developed at Collabora called LQA (LAVA QA). LQA simplifies the process of interacting with running or completed LAVA jobs, and can submit new jobs. A nice feature is that LQA supports Jinja2 templating when submitting jobs, which helps make our job definitions more flexible. We use this feature to include a callback URL that our job definition passes on into our test definition. The callback URL comes from the Webhook Step Jenkins plugin. This plugin creates a URL with a token that we can use to push data through a POST request, along with a wait step that, you guessed it, waits to receive data on that URL. The job expects an xUnit file to be pushed, as Jenkins has support for reading JUnit test results. gst-validate is run by calling gst-validate-launcher, which has support for generating an xUnit report. With the report, we gain the benefit of having the results parsed and displayed in a visually readable format. Jenkins can also keep track of test failures between builds, which helps us understand if there are any regressions. When translated into a Jenkinsfile, we simply write:

hook = registerWebhook()

// Submit
sh "lqa -c ${LQA_CONFIG} --log-file ${LOG} submit -v -t callback_url:${hook.getURL()} gst-validate.yaml"

// Wait
data = waitForWebhook hook

// Store Results
sh "mkdir -p results"
writeFile(file: "results/gst-validate-testsuites.xml", text: "${data}")

// Parse Results
junit "**/results/*.xml"

Moving on to the LAVA side, after submission, the specified images are deployed to our hooked-up RPi, which boots them and then runs our test definition. The test definition starts by installing any dependencies necessary for our test to execute and then moves on to running a fetch-and-install script. This script fetches the latest build artifact, built in the first Jenkins job, from our image artifact store and installs it onto the device. In addition, it also fetches the packaged media assets used in the default validation testsuite, so as to avoid fetching them during the test. After fetching and installing dependencies, we verify the install and move on to running gst-validate. When the test run is complete, we check for our generated xUnit file and push it to the waiting Jenkins job. As stated earlier, the report can be parsed by Jenkins, where we can neatly see the results from all tests and detect any regressions.
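
For illustration, such a test definition could be shaped like the sketch below. The helper script name and report filename are hypothetical, but the structure follows LAVA's test definition format, and CALLBACK_URL is the parameter passed in from the job definition above:

metadata:
  format: Lava-Test Test Definition 1.0
  name: gst-validate
  description: Run the gst-validate test suite on the device

install:
  deps:
    - curl

run:
  steps:
    # Fetch the latest build artifact and the packaged media assets,
    # then install them on the device (hypothetical helper script)
    - ./gst-validate/fetch-and-install.sh
    # Run the suite and generate an xUnit report
    - gst-validate-launcher --xunit-file results.xml
    # Push the report to the Jenkins job waiting on the callback URL
    - curl -X POST --data-binary @results.xml "$CALLBACK_URL"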

Presently, the GStreamer project is in the midst of a migration from Jenkins to GitLab CI. In the meantime, the pipeline still requires much-needed polishing and testing. There are a few areas for improvement in efficiency, speed, and correctness:

  • Improve fault tolerance at different stages of the pipeline. In particular, if something goes wrong during the LAVA test before gst-validate-launcher can complete, the Jenkins job will currently wait forever.

  • Improve GStreamer build times. The build job runs for an average of one hour and fifteen minutes.

  • Get all tests to pass on the RPi. Currently, between fifteen and twenty percent of tests fail.

With more work, a robust pipeline can be ported to GitLab CI and merged, launching the process of adding more embedded devices.
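
Purely as an illustration of what that port might look like, the same stages could map onto a GitLab CI pipeline along these lines; every image name, path, and command below is hypothetical rather than a working configuration:

stages:
  - build
  - test

build-rpi:
  stage: build
  # Hypothetical image with Cerbero bootstrapped and the sysroot baked in
  image: registry.example.com/crossbuild-gst-rpi
  script:
    - ./cerbero-uninstalled -c cross-rpi.cbc package gstreamer-1.0
  artifacts:
    paths:
      - output/

lava-test:
  stage: test
  script:
    # Submit the LAVA job; waiting for results would need a different
    # mechanism than the Jenkins webhook step used above
    - lqa submit -v -t callback_url:$CALLBACK_URL gst-validate.yaml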


Visit Omar's blog.

Comments (11)

  1. Case Williams:
    Sep 18, 2018 at 09:03 AM

    Hello, this looks great, is it possible to obtain the build artifacts? I see the Jenkins CI is open but there's no link to the resulting RPi builds.


    1. Olivier Crête:
      Sep 18, 2018 at 05:09 PM

      Right now, we're not running this regularly, as we're planning to move GStreamer over to GitLab and GitLab-CI, but once it's really deployed, we definitely plan to make the artifacts available.


      1. Case Williams:
        Sep 18, 2018 at 06:03 PM

        That's great, really looking forward to it! Would it be possible to please open-source the Dockerfile and cerbero config that you used to do the build? I would really like to get a working build and I've spent countless hours now trying to get it working. It would be hugely appreciated, even if it's just a link to a gist or something.


        1. Omar Akkila:
          Sep 18, 2018 at 08:16 PM

          Hey there! This repo contains the Dockerfile and cerbero config I used to crossbuild for the Raspberry Pi: https://gitlab.collabora.com/rakko/crossbuild-gst-rpi

          Feel free to modify to suit your needs. If you need any help, let us know!


          1. Case Williams:
            Sep 19, 2018 at 01:28 PM

            That's amazing, thanks, I will try it now.


          2. Case Williams:
            Sep 19, 2018 at 06:10 PM

            I'm getting a failure trying to build sysroot:

            Running /debos --artifactdir /build /build/raspbian-sysroot.yaml
            2018/09/19 18:05:16 ==== debootstrap ====
            2018/09/19 18:05:17 Debootstrap | I: Retrieving InRelease
            2018/09/19 18:05:18 Debootstrap | I: Checking Release signature
            2018/09/19 18:05:18 Debootstrap | E: Release signed by unknown key (key id 9165938D90FDDD2E)
            2018/09/19 18:05:18 debootstrap.log | gpgv: Signature made Wed Sep 19 16:35:47 2018 UTC
            2018/09/19 18:05:18 debootstrap.log | gpgv: using RSA key 9165938D90FDDD2E
            2018/09/19 18:05:18 debootstrap.log | gpgv: Can't check signature: No public key
            2018/09/19 18:05:18 Action `debootstrap` failed at stage Run, error: exit status 1

            This is in a virtual machine running Debian Stretch, with Docker and KVM installed.

            Can't figure out the issue - any clues? Or even better is there a sysroot that I can download somewhere?

            Thanks!


        2. Case Williams:
          Sep 20, 2018 at 05:33 PM

          I've been trying to figure out why I keep running into the unknown key problem when building the sysroot with KVM. Is there a workaround? Perhaps I could download the sysroot from somewhere or use the resin-io/raspbian base image in docker? I'm not sure... where did you build it and how did that work out? I am trying with a VM in the cloud (stretch). Any help appreciated :)


  2. Case Williams:
    Oct 11, 2018 at 09:53 AM

    Hi all, any idea if anyone can help with my issue I described above? Perhaps just a clue? Thanks!


    1. Omar Akkila:
      Oct 11, 2018 at 04:56 PM

      Apologies for not addressing your earlier comments! I'll need to have a closer look at what the issue is, but for now, to get around the gpg check, you can modify the debootstrap action in the YAML file so that it does not perform the check. Basically, if you've cloned the repo, you can modify raspbian-sysroot.yaml and change the debootstrap action to look like this:

      - action: debootstrap
        suite: stretch
        components:
          - main
        mirror: http://archive.raspbian.org/raspbian
        variant: minbase
        check-gpg: false

      Let me know if you run into other issues and I'll also try to come back with a proper fix for this once I have a better look.


  3. Case Williams:
    Nov 29, 2018 at 11:01 AM

    Hi Omar, I successfully created the sysroot using your 1-line change, thanks!

    I found that the patch no longer applies, and several things need to be changed. I can't create a PR to your repo as it's not on GitHub or another public system. e.g.:

    # this new syntax
    self.append_env('CPPFLAGS', flags)

    #and then needed to add (to the `Recipe`):
    self.configure_options = ''

    The `bootstrap --build-tools-only` command succeeded. The next command (`package -t gstreamer-1.0 -o output/`) failed, however, with:

    Recipe 'glib' failed at the build step 'compile'
    Select an action to proceed:
    [0] Enter the shell
    [1] Rebuild the recipe from scratch
    [2] Rebuild starting from the failed step
    [3] Skip recipe
    [4] Abort

    I didn't know how to continue and fix that, so am stuck there.

    Do you have a set of compiled artifacts for Raspberry Pi? Either just the binaries or the actual .deb files? I am pretty desperate to get my hands on an updated build. I was hoping to upload them to packagecloud (who have given me a free open-source account to do so), so that I can use them in my IOT side-project.

    One thought is that the master branch is now broken and I should be using the same code version that you build against, but I have no idea how to tell Cerbero to select a specific Git revision.

    Please, if you have any built Raspberry Pi artifacts, upload them so I can give them a try :)

    Otherwise, perhaps put the repo on Github so we can collaborate on it? We could link a free build system e.g. Circle 2.0 to the repo, and then work on getting it stable?

    Thanks!


    1. Olivier Crête:
      Dec 12, 2018 at 06:57 PM

      Hi Case,

      Our intention is to merge this into the gst-ci at gitlab.freedesktop.org (so in the official CI for GStreamer), which will create artifacts that you can re-use. Can you maybe attach the log somewhere? You can probably create an issue on gitlab.freedesktop.org/gstreamer/gstreamer for this.




