Bifrost meets GNOME: Onward & upward to zero graphics blobs

Alyssa Rosenzweig
June 05, 2020


In our last blog update for Panfrost, the free and open-source graphics driver for modern Mali GPUs, we announced initial support for the Bifrost architecture. We have since extended this support to all major features of OpenGL ES 2.0 and even some features of desktop OpenGL 2.1. With only free software, a Mali G31 chip can now run Wayland compositors with zero-copy graphics, including GNOME 3. We can run every scene in glmark2-es2, 3D games like Neverball are playable, and video players like mpv and Kodi run with hardware-accelerated rendering. The screenshots below are from a Mali G31 board running Panfrost.

All of the above is included in upstream Mesa with no out-of-tree patches required; the upcoming Bifrost support is enabled via the PAN_MESA_DEBUG=bifrost environment variable.

Screenshots: GNOME Shell and Neverball running on Panfrost

New opcodes

Bringing up these new applications required implementing many new floating-point arithmetic opcodes, including comparisons, selections, and additional type conversions. Further, I’ve added initial support for integer arithmetic and bitwise operations, used to implement integer types directly as well as booleans. While there are a number of arithmetic logic unit (ALU) opcodes required, this is not an obstacle on architectures with regular instruction encodings.

Unfortunately, Bifrost is not a regular architecture: to conserve space, it has dozens of distinct instruction encodings. Adding opcodes to the compiler is still routine but requires quite a bit more code. Worse, the duplication can be error-prone, so as soon as I add a new opcode, I add comprehensive tests against the real hardware, iterating through different combinations of operand size and modifiers to exercise all the packing special cases.

The upshot is that the testing coverage eliminates entire classes of compiler bugs which tend to plague new drivers, allowing our open source Bifrost driver to flourish despite such a quirky architecture.
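The shape of such an exhaustive packing test can be sketched as follows. Everything here is a hypothetical stand-in: the toy pack/unpack pair plays the role of one of Bifrost's many instruction encodings, and the struct and field names are illustrative, not Mesa's actual code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum op_size { OP_SIZE_16 = 16, OP_SIZE_32 = 32 };

struct alu_instr {
    unsigned opcode;
    enum op_size size;
    bool saturate, absolute, negate; /* output/input modifiers */
};

/* Toy encoding: opcode in the high bits, size and modifiers below. */
uint32_t pack_alu(const struct alu_instr *ins)
{
    return ((uint32_t)ins->opcode << 8) |
           ((ins->size == OP_SIZE_16) << 4) |
           (ins->saturate << 3) | (ins->absolute << 2) | (ins->negate << 1);
}

struct alu_instr unpack_alu(uint32_t bits)
{
    return (struct alu_instr) {
        .opcode = bits >> 8,
        .size = (bits & (1u << 4)) ? OP_SIZE_16 : OP_SIZE_32,
        .saturate = bits & (1u << 3),
        .absolute = bits & (1u << 2),
        .negate = bits & (1u << 1),
    };
}

/* Iterate every combination of operand size and modifier, checking that
 * each instruction survives an encode/decode round trip. */
bool roundtrip_all(unsigned opcode)
{
    enum op_size sizes[] = { OP_SIZE_16, OP_SIZE_32 };

    for (unsigned s = 0; s < 2; ++s)
    for (unsigned sat = 0; sat < 2; ++sat)
    for (unsigned ab = 0; ab < 2; ++ab)
    for (unsigned neg = 0; neg < 2; ++neg) {
        struct alu_instr ins = {
            .opcode = opcode, .size = sizes[s],
            .saturate = sat, .absolute = ab, .negate = neg,
        };
        struct alu_instr back = unpack_alu(pack_alu(&ins));

        if (back.opcode != ins.opcode || back.size != ins.size ||
            back.saturate != ins.saturate ||
            back.absolute != ins.absolute || back.negate != ins.negate)
            return false;
    }
    return true;
}
```

The real tests additionally compare the packed bits against encodings observed from the hardware, which is what catches the special cases a pure round trip would miss.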

Beyond new ALU opcodes, I extended the texture support to enable simple texture operations from vertex shaders, a pattern occurring in glmark2’s terrain scene. Mali GPUs use slightly different encodings for fragment and vertex texture operations, since fragment shaders can automatically compute the level-of-detail parameter based on neighboring fragments, whereas there is no notion of neighboring fragments in vertex shaders.

Finally, I added initial control flow support (branching) support for if/else statements and loops. As Bifrost is a Single Instruction, Multiple Thread (SIMT) architecture in which multiple threads run the same shader in lockstep, branching is a complicated affair if threads diverge. Most of the complexity is handled in hardware, but just enough seeps through that the branching implementation ends up a hair more complicated than that of Midgard. Still, it’s enough for glmark2’s loop scene, and there’s always room for improvement.

A simpler IR

Of course, Bifrost progress is no obstacle to improving our Midgard support. Inspired by the lessons learned designing the Bifrost Intermediate Representation as previously blogged, I revisited our Midgard Intermediate Representation as well. The focus was twofold:

  • Simplify to enable faster, more effective optimizations in fewer lines of code.

  • Generalize the IR to support non-32-bit operation.

To do so, I implemented generic helpers for inferring instruction modifiers like saturation. Consider a shader that squares a variable and saturates it to the range [0, 1].

X = clamp(X * X, 0.0, 1.0);

In NIR, Mesa’s common intermediate representation used across drivers, this line might look like the following, using NIR’s fsat opcode to clamp to [0, 1]:

ssa_10 = fmul ssa_9, ssa_9
ssa_11 = fsat ssa_10

Our hardware has native support for saturating the results of floating-point instructions. There are a few approaches to take advantage of this. One is to use NIR’s builtin saturation handling, as Midgard’s compiler used to. A NIR pass can fuse the fsat instruction into the multiply, producing the NIR:

ssa_10 = fmul.sat ssa_9, ssa_9

Then our backend compiler can use the .sat flag directly. While this is an easy approach, it is inflexible, since the hardware might support modifiers that NIR does not express. For instance, Mali GPUs have a positive-clamp modifier (.pos) which performs max(x, 0.0) on the result for free. If we wrote X = max(X * X, 0.0), NIR could give us code using a dedicated fclamp_positive instruction:

ssa_10 = fmul ssa_9, ssa_9
ssa_11 = fclamp_positive ssa_10

However, it could not fuse the modifier in without substantial changes affecting common code. A second approach would be to compile this to two instructions in the backend IR, and run a propagation pass over that IR to fuse them together:

10 = fmul 9, 9                   →    10 = fmul.pos 9, 9
11 = fclamp_positive 10

However, there’s a third option unifying both cases and simplifying the compiler: inferring the modifiers generically while translating NIR into our backend IR. This enables us to use architecture-specific modifiers, like .pos, while still having the original NIR available for efficient handling. This approach enabled us to replace hundreds of lines of optimizations for floating-point modifiers and bitwise inverses, while optimizing new patterns that the original design could not, promising savings in code complexity and performance improvements. Since it’s generic, it allows us to optimize not just Midgard programs, but soon Bifrost modifiers as well.
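The core of that inference step can be sketched in C. The enums and names below are hypothetical, not Mesa's: the idea is simply that if an ALU result's only consumer is a clamp the hardware offers as a free output modifier (.sat for clamp(x, 0, 1), .pos for max(x, 0)), the modifier is folded into the producing instruction instead of emitting a separate op.

```c
#include <stdbool.h>

enum consumer_op { OP_FSAT, OP_FMAX_ZERO, OP_OTHER };
enum outmod { OUTMOD_NONE, OUTMOD_SAT, OUTMOD_POS };

struct def_use {
    enum consumer_op consumer;  /* the instruction reading the value */
    unsigned num_other_uses;    /* uses besides that consumer */
};

enum outmod infer_outmod(const struct def_use *u)
{
    /* Folding is only safe when the clamp is the value's sole consumer;
     * otherwise other readers would see an already-clamped result. */
    if (u->num_other_uses > 0)
        return OUTMOD_NONE;

    switch (u->consumer) {
    case OP_FSAT:      return OUTMOD_SAT;  /* clamp(x, 0.0, 1.0) */
    case OP_FMAX_ZERO: return OUTMOD_POS;  /* max(x, 0.0) */
    default:           return OUTMOD_NONE;
    }
}
```

Because the check runs while translating out of NIR, the backend never has to pattern-match its own IR after the fact, which is where the code-size savings come from.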

Midgard FP16

With a simpler compiler, I was able to add 16-bit support to the Midgard compiler to reduce register pressure and improve thread count (occupancy) due to the architecture’s register sharing mechanism. As previously blogged, our Bifrost compiler is built to support this from day 1, and through the lessons learned there, I was able to backport the improvements to Midgard.

To prepare, I added types into the IR to avoid compiler passes requiring type inference, a complex and error-prone pursuit. Once type sizes were preserved cleanly, I added additional support to the Midgard compiler’s packing routines to handle some outstanding details of 16-bit instructions. Midgard is significantly simpler to pack than Bifrost; whereas 16-bit and 32-bit instructions on Bifrost involve separate instructions with dramatically differing opcodes and formats, Midgard has a one-size-fits-most approach which – despite its inherent limitations – is refreshing. Miscellaneous fixes were needed across the compiler; nevertheless, the simplified IR lived up to its design and is now able to support 16-bit operations.
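As a rough illustration of the typed-IR idea, with hypothetical field names rather than Midgard's real IR: each instruction records its source and destination bit sizes explicitly, so later passes read them instead of re-deriving types from context.

```c
#include <stdbool.h>

/* Every instruction carries explicit bit sizes; no pass has to walk the
 * program to infer whether an operand is 16-bit or 32-bit. */
struct mir_instr {
    unsigned opcode;
    unsigned dest_size;    /* bit size of the destination: 16 or 32 */
    unsigned src_size[2];  /* bit size of each source */
};

/* A size conversion is visible locally whenever sizes disagree, so the
 * packing routines can consult this instead of guessing. */
bool is_size_converting(const struct mir_instr *ins, unsigned src)
{
    return ins->src_size[src] != ins->dest_size;
}
```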

The bulk of the code required for FP16 has now landed in upstream Mesa but is disabled by default pending further testing. Nevertheless, for the adventurous among you, you can set PAN_MESA_DEBUG=fp16 on a recent build of master. Beware: here be dragons.

Colour masks

Stepping away from the compiler, an interesting improvement is the new handling of draws with colour masked out. A typical draw in OpenGL that does not use blending or colour masks might look like:

glColorMask(true, true, true, true);
glDrawArrays(GL_TRIANGLES, 0, 15);

Since blending is disabled and all colour channels (RGBA) are written simultaneously, this draw does not need to read from the colour buffer (tilebuffer). But what if the draw does not write to any colour channels?

glColorMask(false, false, false, false);
glDrawArrays(GL_TRIANGLES, 0, 15);

Naively, the GPU would need to read the previous colour and write it back immediately - but that’s wasteful. Instead, we can detect the case where no colour is written, and elide all access to the colour buffer, skipping both the read and the write.

Could we skip the draw entirely? If there are no side effects, we can, but applications typically mask out colour while also unmasking the depth buffer, which is independent of the colour computation. Midgard has a solution.

Even if depth/stencil updates are required, as long as the shader only computes colour and has no side effects, there’s no reason to run it. While Bifrost does not appear to support this, Midgard allows the driver to specify a draw with no fragment shader, saving not only the colour buffer read/write but also shader execution.
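The elision decisions described in this section can be sketched as follows; the struct and helpers are illustrative stand-ins, not Panfrost's actual state tracking.

```c
#include <stdbool.h>

struct draw_state {
    unsigned colour_writemask;     /* RGBA write-enable bits, 0x0..0xF */
    bool blend_enabled;
    bool shader_has_side_effects;  /* e.g. image stores or atomics */
    bool depth_stencil_write;
};

/* The tilebuffer only needs to be read back when blending or a partial
 * write mask forces a read-modify-write of the existing colour. */
bool needs_colour_read(const struct draw_state *s)
{
    return s->blend_enabled ||
           (s->colour_writemask != 0x0 && s->colour_writemask != 0xF);
}

/* With a zero mask, the colour write can be elided as well. */
bool needs_colour_write(const struct draw_state *s)
{
    return s->colour_writemask != 0x0;
}

/* If the shader contributes nothing observable besides colour, it need
 * not run at all (Midgard accepts a draw with no fragment shader). */
bool can_skip_shader(const struct draw_state *s)
{
    return s->colour_writemask == 0x0 && !s->shader_has_side_effects;
}

/* The whole draw can be dropped only when nothing at all is written. */
bool can_skip_draw(const struct draw_state *s)
{
    return can_skip_shader(s) && !s->depth_stencil_write;
}
```

Note how the common colour-masked case keeps its depth/stencil write: the shader is skipped but the draw itself still runs.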

Community contributions

In addition to our work on Midgard performance, community Panfrost hacker Icecream95 has been improving the Midgard stack nonstop.

Since our last blog post, they contributed a major bug fix for handling discard instructions. For background, OpenGL conceptually first runs the fragment shader for each pixel on the screen and then performs depth testing. In practice, modern hardware attempts to perform depth tests before running the shader, known as “early-z” testing, in order to avoid needlessly executing the shader for occluded pixels.

However, games use discard, an OpenGL directive allowing shaders to eliminate fragments, which can interfere with optimizations like early-z. The driver is responsible for detecting these situations, disabling these optimizations, and enabling standards-compliant fallback paths including “late-z” testing. After Icecream95 investigated issues with Panfrost’s handling of depth testing in the presence of discard instructions, they were able to fix rendering bugs in many games including SuperTuxKart, OpenMW, and RVGL.
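As a sketch of that driver decision, under the simplifying assumption that only discard and shader depth writes matter (real drivers track more conditions):

```c
#include <stdbool.h>

struct frag_shader_info {
    bool uses_discard;  /* shader may kill fragments */
    bool writes_depth;  /* shader writes gl_FragDepth */
};

/* Depth testing may run before the fragment shader only when the shader
 * cannot change which fragments survive or what depth they store; a
 * discard or shader depth write forces the late-z fallback. */
bool can_use_early_z(const struct frag_shader_info *fs)
{
    return !fs->uses_discard && !fs->writes_depth;
}
```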

On the performance front, in the past they have significantly optimized Panfrost’s tiling routines and Mesa’s min/max index calculation, and added support for ASTC and ETC compressed textures.

Some Panfrost (Mali T760) screenshots of games improved by Icecream95’s patches:

Hats off to a great community contributor!

Performance counters

One final area that we’ve been working on is exposing Mali’s performance counters to userspace in Panfrost, allowing us to identify bottlenecks in the driver, and other developers to identify bottlenecks in their applications running on Panfrost. For about a year, we have had experimental support for passing the raw counters from kernelspace. Collaborans Antonio Caggiano and Rohan Garg, in conjunction with Icecream95 and other contributors, have been working on integrating these counters with Perfetto to enable high-level analysis with an elegant, free software user interface.

Perfetto with Panfrost on Mali T760

Looking ahead

In the past 3 months since we began work on Bifrost, fellow Collaboran Tomeu Vizoso and I have progressed from stubbing out the new compiler and command stream in March to running real programs by May. Driven by a reverse-engineering effort in tandem with the free software community, we are confident that against proprietary blobs and downstream hacks, open-source software will prevail.

Looking to the future, we plan to improve Bifrost’s coverage of OpenGL ES 2.0 to support more 3D games, now that the basic accelerated desktop is working. We also plan to improve Bifrost compiler performance, in order to approach the proprietary stack’s performance as we did for Midgard. Most of all, we’d like to build a community around the driver, with software freedom and an open-first approach as core values.

It worked for Freedreno, Etnaviv, and Lima. It worked for Panfrost on Midgard. And I’m confident it will work again on Bifrost.

Happy hacking.

Comments (26)

  1. deuteragenie:
    Jun 05, 2020 at 08:46 PM

    Congratulations to you and everybody who is contributing to this effort!

    Question: is there a plan to improve the scheduler in Panfrost to be "state-of-the-art"? As scheduling is both an art and a science, is there a way to make the scheduling architecture pluggable, or at least easily replaceable, so that different approaches could be tried?


    1. Alyssa Rosenzweig:
      Jun 05, 2020 at 09:05 PM

      Both the instruction scheduler in Mesa and the job scheduler in the kernel are prime candidates for optimization, and we're always looking to improve them. Thank you for reading!


  2. Maor:
    Jun 06, 2020 at 08:38 AM

    Alyssa, you and the rest of the team are doing an amazing job. Thank you!

    I have a question, but I'm not an expert in this, so the answer might be obvious.
    I saw some videos of people running Linux on an ARM chip (the RK3399, for example), and the video-playing capability was pretty bad. Does Panfrost have support for hardware-accelerated video encoding/decoding?


    1. sre:
      Jun 08, 2020 at 04:54 PM

      Hi Maor,

      In ARM SoCs, hardware acceleration for video decoding/encoding is usually not performed by the GPU. Instead, there are separate hardware blocks (IP cores) just for this task, and the Rockchip RK3399 is no exception. The kernel's staging area has drivers available: CONFIG_VIDEO_ROCKCHIP_VDEC for the VP9/H264/H265 codecs and CONFIG_VIDEO_HANTRO_ROCKCHIP for MPG2/VP8/H264 (the RK3399 has two different IP cores). Note that the drivers are still WIP. While not covered by a dedicated blog post so far, you can find some news about those drivers in our kernel blog posts.

      -- Sebastian


      1. Maor:
        Jun 08, 2020 at 08:37 PM

        Hi Sebastian,

        Thank you very much for the detailed explanation!


  3. jimmij:
    Jun 07, 2020 at 04:28 PM

    Congratulations! Thank you for making such a tremendous contribution to the free and open source community.


  4. LP:
    Jun 08, 2020 at 09:46 AM

    Great progress!

    What machine(s) are you testing/running this on?
    ASUS C101/C201? Is there anything with better specs available (in laptop or tablet form factor)?


    1. Alyssa Rosenzweig:
      Jun 08, 2020 at 07:47 PM

      Personally, I use a Samsung Chromebook Plus for Midgard (Mali T860) development, and the screenshots for Bifrost are from an ODROID GO Advance. Other developers like using RK3399-based single-board computers.


      1. Alexander Stein:
        Jun 10, 2020 at 10:15 PM

        This sounds really great. So you used a G31 as the Bifrost GPU. I would like to try/test and maybe even hack myself on a G52 (ODROID-N2). AFAICS the current mainline kernel support in Panfrost is only Mali-Txxx. What did you have to change in order to use the Panfrost kernel driver on the G31? Could you please share this, or do you even have a public repository?


        1. Alyssa Rosenzweig:
          Jun 12, 2020 at 02:36 PM


          While Midgard and Bifrost have drastically different instruction sets requiring separate compilers, the interface exposed to the kernel is quite similar, so we've largely been able to reuse the already mainlined code with just a few Bifrost-specific patches (https://gitlab.freedesktop.org/tomeu/linux/-/commits/panfrost-odroid-n2/ and https://gitlab.freedesktop.org/tomeu/linux/-/commits/panfrost-go-advance are WIP branches). Unfortunately, the Mali G52 on Amlogic boards still needs a few more magic kernel bits to work (hence my focus on the G31 used in Rockchip), but ironing out those bugs so you can use it on your board is a top priority: stay tuned!

          Thank you for reading.



  5. Michal Lazo:
    Jun 15, 2020 at 03:17 PM

    I have an Odroid C4 SBC
    and I have an Armbian build of Ubuntu 20.04 with
    Mesa master (https://launchpad.net/~oibaf/+archive/ubuntu/graphics-drivers).
    I also added PAN_MESA_DEBUG=bifrost to /etc/environment
    and it looks like the Ubuntu desktop is working.
    There are some glitches with Ubuntu Settings (GTK)
    but glmark2-es-wayland is working.

    Nice job!!!


      1. Michal Lazo:
        Jun 16, 2020 at 07:47 AM

        Is there any chance to stabilize the G52?
        GNOME is "running", but when I start glmark it crashes in one benchmark.
        In my experience with the Odroid C4 (with Mali G31),
        Ubuntu 20.04 GNOME runs fine.
        I think it will need some optimization :)
        Nice work!


        1. Alyssa Rosenzweig:
          Jun 17, 2020 at 01:21 AM

          Our current focus has been Mali G31, but improvements for G52 are in the pipes, as of course are optimizations!


  6. Eric:
    Jun 24, 2020 at 02:58 AM

    I've used the blob driver directly with DRM to get 3D-accelerated drawing without a windowing system.
    Is it possible to do this with Bifrost?


    1. Alyssa Rosenzweig:
      Jun 29, 2020 at 04:41 PM

      Yes, via DRM/GBM. In fact, this is how the compositors themselves (Weston, for instance) are accelerated.


      1. Eric H:
        Jun 29, 2020 at 05:51 PM

        That's great. Now I need to figure out why the old code that uses DRM fails under the new driver.


        1. Michal Lazo:
          Jun 30, 2020 at 07:28 AM

          I think it will be the same as with a lot of other software.
          I fixed mutter for Lima and Panfrost.
          A lot of software doesn't expect the GPU device to be first in /dev/dri/
          and the VPU second.


  7. Andy:
    Jul 01, 2020 at 05:15 PM

    Alyssa, thank you so much for your hard work and dedication!!!

    I have a 'X96 Max+' TV box (Amlogic S905X3 with G31) and yesterday I was able to get Panfrost running on it for the first time using Armbian with kernel 5.7.6 and Gnome desktop backed by Wayland!

    'glmark2-es2-wayland' is working well on the box but the desktop gets more and more blurry over time, especially text.

    One question:
    Do I have to start Supertuxkart and Neverball from terminal with some special commands? When I try to run them using the respective icons they crash and send me back to login.

    Another question:
    Is it already possible to activate Panfrost (on G31) using a lighter desktop environment like Xfce (which is the default of Armbian) or Lxde? I did not manage to do so. 'glxinfo -B' always shows that llvmpipe is doing the rendering.

    Thanks again for your brilliant work!


    1. Alyssa Rosenzweig:
      Jul 02, 2020 at 05:25 PM

      Thank you for reading. Mali G31 support, while making fast progress, still has some bugs to work out. For Neverball, try PAN_MESA_DEBUG=bifrost neverball. Stay tuned for Supertuxkart and X11 support!


      1. Andy:
        Jul 03, 2020 at 08:38 AM

        Thanks for your reply.
        I am excited like a small kid to finally see Panfrost working on Bifrost GPUs, and am very much looking forward to every small evolution of the driver - running Supertuxkart and X would be a dream :-)

        I hope I don't sound too impatient. You are doing an amazing job!

        I would really like to give back something myself but I am not that good at programming. Could only do some testing and give feedback if that would help.


  8. hackan:
    Jul 09, 2020 at 12:24 AM

    First of all - Awesome work!

    Not sure if this is the right place for this, but I'll give it a go anyway:

    I got a Pinebook Pro which is running GNOME on Manjaro, and I cannot change the screen color temperature. I can flip on "night light" (or whatever the feature is called in English), but it doesn't have any effect on the actual screen temperature. Does this have something to do with the Panfrost driver?


    1. Alyssa Rosenzweig:
      Jul 09, 2020 at 03:06 PM

      Thank you! That sounds like a display driver issue; I don't think it's related to Panfrost.


