
GStreamer 1.20: Embedded & WebRTC lead the way

Olivier Crête
February 18, 2022

Made available earlier this month, GStreamer 1.20 is the fruitful result of 17 months of hard work from the entire community. Over 250 developers contributed code to make this release happen, and once again, Collabora had more contributors than any other organization.

Our work focused on the two areas in which we believe GStreamer shines the brightest: embedded systems, and network streaming, in particular WebRTC. Below is a summary of the impact our team of engineers had on this latest release.

As usual, you can also learn more about the enhancements done by the rest of the community by looking at the project's 1.20 release notes.

Contributions related to embedded systems

GStreamer is already the pre-eminent media framework for embedded systems, and this is an area where Collabora has been very active over the last release cycle. Here are some of the improvements that we've made.

After many years of effort by Guillaume, Nicolas, Stéphane, and Aaron, we finally landed support for sub-frame decoding. This makes it possible to start decoding a video frame before it has been fully received from the network, if the decoder supports it. We've implemented this for JPEG 2000 with OpenJPEG, for H.264 with FFmpeg, as well as in gst-omx when using the Allegro extensions present on the Xilinx Zynq UltraScale+ MPSoC EV processors.

In partnership with Huawei, we also improved the GStreamer build system to make it possible to create a library containing only the specific parts of GStreamer used by a particular application or a set of applications. Take a look at this blog post to learn more.

Nicolas added support for the stateless Linux (V4L2) MPEG-2 and VP9 decoder APIs and contributed to enhancing the VP9 parser. The stateless Linux H.264 decoder also gained support for interlaced video streams, though only for slice-based decoders, since no driver in the mainline Linux kernel supports interlacing for frame-based decoders. Nicolas also added support for a rendering delay, which allows multiple frames to be queued in a stateless decoder and improves throughput at the cost of higher latency; he implemented this for the MPEG-2 Video, VP8, and VP9 decoders. He also added HEVC decoding support to the new "va" plug-in, which uses the new GStreamer common decoder implementation to support VA-API-based decoders.

Nicolas also implemented videocodectestsink: a small element that computes the checksum of incoming frames to compare them against a known good reference. This is useful for creating tests that ensure there are no regressions in decoder implementations. He also added the necessary code in GStreamer to react to resolution changes in a Video4Linux source. This is primarily relevant if the source is, for example, an HDMI input.
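Conceptually, the check an element like videocodectestsink performs can be sketched in a few lines of Python. This is a simplified stand-in, not the element's actual code: the real checksum algorithm and frame handling are GStreamer internals, and the frame data here is toy bytes.

```python
import hashlib

def frame_checksum(frame_bytes: bytes) -> str:
    """Checksum of one decoded frame's raw pixel data."""
    return hashlib.sha256(frame_bytes).hexdigest()

def verify_decoder_output(decoded_frames, reference_checksums) -> bool:
    """Compare every decoded frame against a known-good reference list."""
    if len(decoded_frames) != len(reference_checksums):
        return False
    return all(
        frame_checksum(frame) == ref
        for frame, ref in zip(decoded_frames, reference_checksums)
    )

# Toy data standing in for decoded video frames.
frames = [bytes([i]) * 16 for i in range(3)]
reference = [frame_checksum(f) for f in frames]

print(verify_decoder_output(frames, reference))              # True
print(verify_decoder_output([b"\x00" * 16] * 3, reference))  # False: regression
```

A test built this way fails as soon as a decoder change produces even a single different pixel, which is exactly the property you want for regression testing.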

WebRTC and network streaming

For much of the framework’s history, GStreamer’s principal focus has been on streaming media over a network. This is an area in which we've also made several contributions over this cycle.

We've contributed many improvements to GStreamer's WebRTC stack, which is one of the most complete and flexible independent implementations of the WebRTC protocols. I added support for explicit notification of the end of candidates, so that failing connections can be recognized faster. I reworked the WebRTC library API to make it thread-safe by hiding all state behind properties. I also added support for "priority" on media streams: setting the various priorities now adds the correct DSCP markings, making it possible for network administrators to prioritize the traffic accordingly. Finally, I significantly improved the WebRTC statistics, exposing most of the statistics that already existed somewhere in the GStreamer RTP stack, particularly those coming from the RTP jitter buffer, through the convenient WebRTC API.

Jakub implemented an RTP header extension that makes it possible to send colorspace information with each frame; this enables GStreamer to share Dynamic HDR content over RTP. The extension we implemented is compatible with the proposal from Google's libwebrtc team.

The basic specification for sending Opus over RTP only supports mono and stereo. The Google libwebrtc team created an extension called "multiopus" that bundles multiple stereo Opus streams together to serve more than two channels; Jakub implemented this in GStreamer's Opus RTP payloader and depayloader.

We've implemented RFC 6464, an RTP header extension allowing a client to send the server the relative level (volume) of the audio in each packet; this lets the server prioritize clients who are speaking over others without having to decode all the audio.

We've also added support for iSAC, a legacy audio codec that Google open-sourced in libwebrtc a couple of years ago. We've added a plug-in that wraps the reference implementation of the codec, along with an RTP payloader and depayloader so that GStreamer can send and receive iSAC-encoded audio over RTP.
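The RFC 6464 payload itself is just a single byte: the most significant bit carries a voice-activity flag, and the remaining seven bits carry the level, expressed as 0–127 in -dBov (0 is loudest, 127 quietest). A minimal Python sketch of packing and parsing that byte (illustrating the wire format only, not GStreamer's implementation):

```python
def pack_audio_level(level_dbov: int, voice_activity: bool) -> bytes:
    """Pack the RFC 6464 one-byte header-extension payload.

    level_dbov: audio level as a positive value in -dBov,
    0 (loudest) to 127 (quietest).
    """
    if not 0 <= level_dbov <= 127:
        raise ValueError("level must be in 0..127 (-dBov)")
    # MSB = voice-activity flag, low 7 bits = level.
    return bytes([(0x80 if voice_activity else 0x00) | level_dbov])

def unpack_audio_level(payload: bytes):
    """Return (level_dbov, voice_activity) from the one-byte payload."""
    b = payload[0]
    return b & 0x7F, bool(b & 0x80)

packed = pack_audio_level(30, voice_activity=True)
print(packed.hex())                # 9e
print(unpack_audio_level(packed))  # (30, True)
```

Because the level travels in the header extension, a server can read it without touching the (possibly encrypted or expensive-to-decode) audio payload.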

As part of the Hwangsaeul project sponsored by SK Telecom, we've improved the SRT support. Raghavendra added support for authentication, while Jakub added a way for the application to be notified of broken connections and added more options to the URI in a way that is compatible with the SRT demo application.

Other improvements

As GStreamer is an incredibly flexible cross-platform framework, we've also made several improvements that fall outside the two main categories above.

Nicolas implemented support for decoding alpha channels in WebM videos. This is a bit special, as the alpha channel is carried as a second video stream. He also added support for decoding those streams using hardware-accelerated decoders, such as V4L2-based decoders.

Aaron added the first element specifically for machine learning to the core GStreamer plug-in collection. It uses the ONNX library to do object detection; we hope to add more elements using the ONNX library in the future.

Aaron also helped Rabindra Harlalka from NICE to contribute upstream elements that can encrypt a stream using AES encryption. This applies AES in CBC mode to the incoming stream, using a key provided by the application.
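To illustrate what CBC mode does with a stream, here is a toy Python sketch of the chaining: each plaintext block is XORed with the previous ciphertext block before being encrypted, so identical plaintext blocks produce different ciphertext. Note the "block cipher" here is a plain XOR with the key, purely so the example stays self-contained; the actual elements use real AES.

```python
BLOCK = 16  # AES block size in bytes

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for AES: XOR with the key (NOT secure, illustration only).
    return bytes(b ^ k for b, k in zip(block, key))

toy_block_decrypt = toy_block_encrypt  # XOR is its own inverse

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    out, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        # Chain: XOR plaintext block with previous ciphertext block (or IV).
        mixed = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(mixed, key)
        out += prev
    return out

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        block = toy_block_decrypt(ciphertext[i:i + BLOCK], key)
        out += bytes(b ^ c for b, c in zip(block, prev))
        prev = ciphertext[i:i + BLOCK]
    return out

key = bytes(range(16))
iv = bytes(16)
msg = b"sixteen byte blk" * 2  # two identical plaintext blocks

ct = cbc_encrypt(msg, key, iv)
print(cbc_decrypt(ct, key, iv) == msg)  # True: round trip works
print(ct[:16] != ct[16:])               # True: identical blocks differ in ct
```

The chaining is what distinguishes CBC from naive per-block (ECB) encryption, where repeated input blocks would leak as repeated output blocks.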

Xavier again made numerous improvements to the Meson build system; in particular, he replaced GStreamer's custom pkg-config file generator with one he contributed to Meson itself. This ensures that the generated pkg-config files match the libraries that are in the build system.

I added a "stats" property to the identity element; this makes it easier to instrument a pipeline to get statistics for monitoring. I added support for the newer "constrained high" and "progressive high" H.264 profiles to the various GStreamer elements where those are relevant. Both profiles are subsets of the existing High profile.
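For reference, the "subset of High" relationship comes from how these profiles are signaled in the H.264 specification: both reuse the High profile's profile_idc (100) and merely set additional constraint-set flags. The mapping below is my own summary of the spec, not GStreamer code.

```python
# profile_idc plus constraint-set flags, per ITU-T H.264 Annex A
# (assumed summary: both newer profiles reuse High's profile_idc 100).
PROFILES = {
    "high":             {"profile_idc": 100, "constraint_flags": set()},
    "progressive-high": {"profile_idc": 100, "constraint_flags": {4}},
    "constrained-high": {"profile_idc": 100, "constraint_flags": {4, 5}},
}

def is_subset_of_high(name: str) -> bool:
    """A stream in these profiles is also a valid High-profile stream,
    since extra constraint flags only restrict the allowed tool set."""
    return PROFILES[name]["profile_idc"] == PROFILES["high"]["profile_idc"]

print(is_subset_of_high("progressive-high"))  # True
print(is_subset_of_high("constrained-high"))  # True
```

In practice this means any decoder capable of High can decode these streams; signaling the narrower profile simply lets simpler decoders accept them too.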

Jakub improved the d3d11desktopdup plug-in, which captures the Windows desktop to Direct3D 11 textures. He implemented support for following dynamic resolution changes of the desktop, as well as for capturing Windows User Account Control (UAC) prompts.

I improved the GstAudioAggregator base class, used by elements such as audiomixer and audiointerleave; it now emits a QoS message telling the application whenever it drops incoming buffers because they are late.

Stéphane fixed the MXF and Matroska demuxers to seek precisely to a frame; this makes it possible to use them as a source for video editing.

Xavier spent quite some time helping with the merge of the GStreamer repositories into a single one. This was an effort of the whole community, simplifying our CI system and generally making life easier for GStreamer developers.

As usual, we have also contributed a large number of bug fixes across the board, but we won’t list them all out here.

Looking ahead

Our team of engineers already has a number of contributions planned for the next release. These include a rework of the MPEG PS demuxer for more accurate seeking, improvements to the Wayland support like a GTK3 sink that can take advantage of Wayland's support for hardware video overlays, and support for DRM modifiers to enable higher performance zero-copy between hardware decoders and display.

If you are ready to explore GStreamer 1.20, or have any questions about how to take advantage of its exciting new features to get the maximum performance from your hardware, please do not hesitate to contact us. Collabora's multimedia team is always available to help you leverage or implement the latest GStreamer feature releases.


Comments (2)

  1. Jay:
    Feb 19, 2022 at 12:00 AM

    Much appreciated updates. May I also suggest av1 support?


    1. Olivier Crête:
      Feb 19, 2022 at 10:14 PM

      GStreamer has had pretty complete AV1 support for a while!



