
ipcpipeline: Splitting a GStreamer pipeline into multiple processes


George Kiagiadakis
November 17, 2017


Earlier this year I worked on a GStreamer plugin called “ipcpipeline”. This plugin provides elements that make it possible to interconnect GStreamer pipelines that run in different processes. In this blog post I am going to explain how this plugin works and why you might want to use it in your application.

Why ipcpipeline?

In GStreamer, pipelines are meant to be built and run inside a single process. Normally one wouldn’t even think about involving multiple processes for a single pipeline. You can (and should) involve multiple threads, of course, to do parallel processing; this is easily done using the queue element. But if threads already give you parallelism, why would you want to involve multiple processes as well?

Splitting part of a pipeline into a different process is useful when one or more elements need to be isolated for security reasons. Imagine the case where you have an application that uses a hardware video decoder and therefore has device access privileges. Also imagine that in the same pipeline you have elements that download and parse video content directly from a network server, like most Video On Demand applications would do. Although I don’t mean to say that GStreamer is not secure, it can be a good idea to think ahead and make it as hard as possible for an attacker to take advantage of potential security flaws. In theory, someone could exploit a bug in the container parser by sending it crafted data from a fake server and then take control of other things by exploiting those device access privileges, or cause a system crash. ipcpipeline can help to prevent that.

How does it work?

In the – oversimplified – diagram below we can see what the media pipeline of a video player would look like with GStreamer:

[Diagram: a typical video player pipeline running in a single process]

With ipcpipeline, this pipeline can be split into two processes, like this:

[Diagram: the same pipeline split into two processes, with ipcpipelinesink ending the first and ipcpipelinesrc starting the second]

As you can see, the split mainly involves two elements: ipcpipelinesink, which serves as the sink for the first pipeline, and ipcpipelinesrc, which serves as the source for the second pipeline. These two elements internally talk to each other through a unix pipe or socket, transferring buffers, events, queries and messages over it, thus linking the two pipelines together.
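To make this more concrete, here is a minimal sketch of the master side in C, modeled on the upstream ipcpipeline examples. The fdin/fdout property names and the socketpair() setup follow those examples; filesrc merely stands in for whatever feeds the split point, and error handling is omitted:

/* Master process sketch: create a socketpair and hand one end to
 * ipcpipelinesink; the other end is inherited by the slave process. */
#include <gst/gst.h>
#include <sys/socket.h>

static void
setup_master (int *slave_fd)
{
  int fds[2];
  GstElement *pipeline, *src, *ipcsink;

  socketpair (AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, fds);
  *slave_fd = fds[1];  /* hand this end to the slave, e.g. across fork() */

  pipeline = gst_pipeline_new (NULL);
  src = gst_element_factory_make ("filesrc", NULL);
  ipcsink = gst_element_factory_make ("ipcpipelinesink", NULL);
  g_object_set (ipcsink, "fdin", fds[0], "fdout", fds[0], NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, ipcsink, NULL);
  gst_element_link (src, ipcsink);

  /* Changing the master's state also drives the slave's state. */
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
}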

At first sight, this mechanism may not look very special. You might be wondering what the difference is between ipcpipeline and some other existing mechanism, like a pair of fdsink/fdsrc or udpsink/udpsrc, or RTP. What is special about these elements is that the two pipelines behave as if they were a single pipeline, with the elements of the second one being part of a GstBin in the first one:

[Diagram: the slave pipeline's elements appearing as the contents of a GstBin, represented by ipcpipelinesink, inside the master pipeline]

The diagram above illustrates how you can think of a pipeline that uses the ipcpipeline mechanism. As you can see, ipcpipelinesink behaves as a GstBin that contains the whole remote pipeline. This practically means that whenever you change the state of ipcpipelinesink, the remote pipeline’s state changes as well. It also means that all messages, events and queries that make sense are forwarded from one pipeline to the other, trying to implement as closely as possible the behavior that a GstBin would have.
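One practical consequence of this forwarding is that a single bus watch on the master pipeline also receives the messages that originate in the slave process. A short sketch, with on_message standing in for whatever handler your application already uses:

/* Errors, EOS and other messages from the slave's elements are
 * forwarded by ipcpipelinesink and show up on the master's bus. */
static gboolean
on_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  /* handle GST_MESSAGE_ERROR, GST_MESSAGE_EOS, ... as usual */
  return TRUE;
}

/* ... after building the master pipeline: */
GstBus *bus = gst_element_get_bus (pipeline);
gst_bus_add_watch (bus, on_message, NULL);
gst_object_unref (bus);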

In practice, this design allows you to modify an existing application to use this split-pipeline mechanism without changing the pipeline control logic or implementing your own IPC for controlling the second pipeline. It is all integrated in the mechanism already.

ipcpipeline follows a master-slave design. The pipeline that controls the state changes of the other pipeline is called the “master”, while the other one is called the “slave”. In the above example, the pipeline that contains the ipcpipelinesink element is the “master”, while the other one is the “slave”. At the time of writing, the opposite setup is not implemented, so it is always the downstream part of the pipeline that can be slaved, and ipcpipelinesink always sits on the “master” side.
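For completeness, here is the matching slave-side sketch. It assumes the plugin's ipcslavepipeline element, which the upstream examples use as the slave's top-level pipeline so that remote state changes can reach it; a real pipeline would normally have a decoder between ipcpipelinesrc and the sink:

/* Slave process sketch: note the absence of any set_state() call;
 * the master drives this pipeline's state over the socket. */
static void
run_slave (int fd)
{
  GstElement *pipeline, *ipcsrc, *sink;

  pipeline = gst_element_factory_make ("ipcslavepipeline", NULL);
  ipcsrc = gst_element_factory_make ("ipcpipelinesrc", NULL);
  sink = gst_element_factory_make ("autovideosink", NULL);
  g_object_set (ipcsrc, "fdin", fd, "fdout", fd, NULL);

  gst_bin_add_many (GST_BIN (pipeline), ipcsrc, sink, NULL);
  gst_element_link (ipcsrc, sink);

  /* Just run a main loop and let the master control everything. */
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
}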

While there can be only one “master” pipeline, it is possible to have multiple “slave” ones. This allows you, for example, to split an audio decoder and a video decoder into different processes:

[Diagram: one master pipeline feeding two slave pipelines, with the audio and video decoders in separate processes]

It is also possible to have multiple ipcpipelinesink elements connect to the same slave pipeline. In this case, the slave pipeline will follow whichever of the states it gets from the two ipcpipelinesinks is closest to PLAYING. Also, messages from the slave pipeline will only be forwarded through one of the two ipcpipelinesinks, so you will not notice any duplicate messages. The behavior should be exactly the same as in the split-slaves scenario.

[Diagram: two ipcpipelinesink elements in the master connecting to a single slave pipeline]

Where is the code?

ipcpipeline is part of the GStreamer bad plugins set (gst-plugins-bad). Documentation is included with the code, and there are also some examples that you can try out to get familiar with it. Happy hacking!

 


Comments (33)

  1. jmz:
    Mar 14, 2018 at 06:56 AM

    Thank you for introducing and explaining ipcpipeline. I am interested in the last scenario. In Process 1, how do you create a slave pipeline that actually consists of two pipelines (one ending in an audio sink and the other in a video sink)? Do the audio pipeline and the video pipeline run in separate main loops? Does Process 1 actually consist of two slave pipelines?


    1. gkiagia:
      Mar 14, 2018 at 01:48 PM

      Hi,
      In process 1, as it is shown in the image, there is only one slave pipeline, with one main loop. What happens in that pipeline is that we add two sources and link them to their respective downstream elements all the way to the sinks. All the elements are still in one slave pipeline, but in two separate chains that don't link to each other.
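
      (A hypothetical sketch of that layout, with the decoders omitted for brevity and vfd/afd standing for the two sockets that come from the master:)

      /* One slave pipeline, two chains that never link to each other. */
      static void
      run_av_slave (int vfd, int afd)
      {
        GstElement *pipeline = gst_element_factory_make ("ipcslavepipeline", NULL);
        GstElement *vsrc = gst_element_factory_make ("ipcpipelinesrc", NULL);
        GstElement *vsink = gst_element_factory_make ("autovideosink", NULL);
        GstElement *asrc = gst_element_factory_make ("ipcpipelinesrc", NULL);
        GstElement *asink = gst_element_factory_make ("autoaudiosink", NULL);

        g_object_set (vsrc, "fdin", vfd, "fdout", vfd, NULL);
        g_object_set (asrc, "fdin", afd, "fdout", afd, NULL);

        gst_bin_add_many (GST_BIN (pipeline), vsrc, vsink, asrc, asink, NULL);
        gst_element_link (vsrc, vsink);   /* video chain */
        gst_element_link (asrc, asink);   /* audio chain */

        g_main_loop_run (g_main_loop_new (NULL, FALSE));
      }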


  2. jakob:
    Aug 17, 2018 at 04:20 PM

    Nice plugin! I was wondering how difficult it would be to have the src part of the plugin be the master? Is this already implemented? How difficult would it be to add?


    1. gkiagia:
      Aug 20, 2018 at 03:46 PM

      This is unfortunately not implemented. It is not hard to do, but it's a considerable amount of work. Do you have a specific use case? I would be interested to discuss it.


  3. gcasmer:
    Dec 18, 2018 at 12:25 PM

    GStreamer states a number of reasons why a plugin might be put into the 'bad' classification. Do you know which reason applies to your plugin? Is it that they don't like IPC for GStreamer, the structure of the code, documentation, or some stability issue? Or maybe some other reason? I have an interest in this code and just want to make sure I know what I am getting into if I start using it. I am happy to help solve any of the issues and contribute to the project.


    1. gkiagia:
      Dec 18, 2018 at 03:43 PM

      Hi,

      The plugin is in the "bad" repository simply because it is something relatively new and not widely used. It also lacks features, like the ability to do the source part of the pipeline in a slave, and most importantly, it lacks reliable automatic unit tests (the tests are implemented, but they keep showing race conditions on the CI servers...).

      The structure of the code and the idea are both sane. I think that with a little more testing, adoption and better unit tests, this has the potential to become a good plugin.


  4. bell:
    Mar 24, 2020 at 01:38 AM

    I'm wondering about the difference between ipcpipelinesrc/sink and shmsrc/sink. The pipeline with ipcpipelinesrc is totally a slave and needs to get its state changes from the sink side too, right?


    1. gkiagia:
      Mar 24, 2020 at 11:48 AM

      Yes, the major difference is that in ipcpipeline, the pipeline that has the ipcpipelinesrc is completely controlled by the one that has the ipcpipelinesink. State changes, events, etc. are all propagated, making the pipelines behave like one. With shmsink/shmsrc you only transfer buffers, and the control of each pipeline is independent.


  5. Theo:
    Jul 14, 2020 at 10:11 AM

    Very good plugin!
    I however have some issues using it with live sources. The output lags/stutters, and it seems to lose a lot of frames.
    The only solution I found is to set the "sync" property of the last sink to FALSE.
    I don't have this issue with videotestsrc, only with plugins that use real cameras.
    Do you know where the issue could come from?
    I'm using ipcpipeline1.c as a reference.


  6. Ali:
    Feb 08, 2021 at 05:28 PM

    Just wondering: if memory is allocated from a specific bufferpool/allocator, will the pipeline work or not?
    Because when the buffer is unreffed, we do not have any bufferpool info!


    1. gkiagia:
      Feb 10, 2021 at 07:50 AM

      Buffers from a specific allocator should work, but only up to the process boundary. When the buffer leaves one process to enter the other one, it will be copied, losing allocator-specific properties.

      In the future we could also implement zero-copy for ipcpipeline, using buffers that can be shared with a file descriptor, such as memfd or dmabuf. In this scenario, the kernel keeps track of the buffer reference so that it is not lost even when the corresponding GstBuffer / GstMemory in one process is completely unrefed and returns to the pool.


      1. Ali:
        Feb 10, 2021 at 07:34 PM

        Thanks!
        That makes sense. Looking forward to this feature.


      2. Ali:
        Feb 10, 2021 at 07:58 PM

        Since we can't send the bufferpool (and custom allocator) details along with the buffer, hooking the buffer's free function after unref back up to the bufferpool/allocator (even an fd-based one) looks challenging.


      3. Linh Nguyen:
        Sep 23, 2022 at 09:59 AM

        Really looking forward to this feature.
        I'm using the NVIDIA Jetson platform, where the video buffer is created from a pool and is a DMA buffer. I need to split some heavy and unstable processing into separate processes, so that the main video pipeline is more stable.
        Anyway, great work.


        1. George Kiagiadakis:
          Sep 23, 2022 at 02:13 PM

          Hi,

          I'm glad that you like ipcpipeline. However, note that during all these years there have been more developments in this domain and nowadays I would recommend that you also take a look at PipeWire, which can serve as a multimedia bus in your system, allowing buffers to move from any process to any other process, and which has support for DMABuf zero-copy as well.

          Last week I gave a very relevant talk at the Embedded Linux Conference Europe, which I recommend you watch. The video is not available yet, but will be in about 3-4 weeks. Keep an eye on our social media accounts to get notified as soon as it is available.


          1. Linh Nguyen:
            Oct 12, 2022 at 09:07 AM

            Thank you so much for pointing out PipeWire. It looks promising and very interesting.

            Found your video from 11 months ago about PipeWire, but I am also looking for the new video from the Embedded Linux 2022 conference :).

            Cheers.


            1. George Kiagiadakis:
              Oct 13, 2022 at 07:41 AM

              Hi, the video of this talk was actually released a few days ago. Here it is: https://www.youtube.com/watch?v=fOCwsV4soik


  7. Laurent:
    May 17, 2021 at 09:12 AM

    How is this different from RidgeRun's gstreamer plugin called https://github.com/RidgeRun/gst-interpipe and the accompanying gstreamer daemon https://github.com/RidgeRun/gstd-1.x ?

    Are they solving the same need?
    Is there an equivalent community effort to have what RidgeRun has developed?


    1. George Kiagiadakis:
      May 17, 2021 at 11:47 AM

      Hi,

      There are multiple similar plugins for interconnecting GStreamer pipelines; however, they all differ in purpose and capabilities.
      The interpipe plugin is meant to link multiple GStreamer pipelines within the same process, while ipcpipeline crosses the process boundary. Something closer to interpipe would be the inter and gstproxy plugins.

      There's a useful cheat sheet with all the pipeline sharing/splitting plugins here: https://github.com/matthew1000/gstreamer-cheat-sheet/blob/master/sharing_and_splitting_pipelines.md
      Allow me to also add PipeWire to this list. It is not a GStreamer plugin, or specific to GStreamer in any way, but it is also an efficient multimedia IPC mechanism for the case where you want to cross the process boundary, and it can be combined with GStreamer in a complex multi-process application architecture.

      Regarding gstd, this is something different. It's a process like gst-launch that allows you to send control commands to it over the network in order to affect the state of the pipeline.

      I hope this helps to clear things up a bit...


      1. Laurent Denoue:
        May 17, 2021 at 03:53 PM

        Thanks George for the link to the cheat sheet.
        I read the article linked in that cheat sheet, http://blog.nirbheek.in/2018/02/decoupling-gstreamer-pipelines.html,
        and it looks like GstInterpipe from RidgeRun might be comparable to gst-proxy then?
        I guess gst-proxy is more modern and is part of GStreamer... Do you know of good examples built with gst-proxy?
        For gst-interpipe, RidgeRun seems to use it in their GStreamer daemon (gstd).

        Laurent


  8. karthck:
    Aug 11, 2021 at 10:15 AM

    Can we construct a pipeline across two laptops?


    1. George Kiagiadakis:
      Aug 11, 2021 at 05:37 PM

      ipcpipeline is not intended for splitting pipelines across different machines. It may work with a few hacks, but it's probably best to go for another solution, like using GStreamer's RTP, UDP or TCP elements.


  9. Draden:
    Feb 20, 2023 at 05:08 PM

    Hello,

    I have two pipelines linked by shmsink and shmsrc in two separate Docker containers.
    It works fine, but I am losing messages from the master pipeline that I want to retrieve in the slave pipeline. Can the ipcpipelinesrc/sink plugins work in two separate containers? I can't seem to share the socketpair between the two containers.


    1. George Kiagiadakis:
      Feb 21, 2023 at 03:15 PM

      Hi,

      We have not tested this before with Docker, but I believe it should be possible to create the two sockets externally and pass them to the two containers. IIRC, they do not necessarily need to be created with socketpair(), so you can do the same thing that you are doing with shmsink/src.

      Note that if you use ipcpipeline, then the master will be in control of both pipelines and the messages from the slave will be forwarded to the master, but not vice versa. This may not be what you want.


  10. Anand Sivaram:
    Jun 28, 2023 at 12:04 PM

    Can we share shm between multiple processes?
    One producer application using shmsink and more than one consumer application using shmsrc.
    Context:
    Producer process: there is a camera with v4l2src and pulsesrc. After encoding H264 and AAC, the outputs are written to two shared memories using shmsink, say /tmp/shm_audio and /tmp/shm_video.
    Consumer1 process: two shmsrc elements, for audio and video, then mux to an MP4 file.
    Consumer2 process: on the same shared memories, two shmsrc elements, then create an MPEG TS for streaming.


    1. George Kiagiadakis:
      Jul 07, 2023 at 10:18 PM

      Yes, you can use the shmsink & shmsrc elements to share memory with other GStreamer processes and pipe the encoded media in the way you describe. There may be some caveats, but the idea is sane and the functionality is available. This has nothing whatsoever to do with ipcpipeline, though. The ipcpipeline elements are different from shmsink/shmsrc and they serve a different purpose. Matthew's cheat sheet is useful for understanding the differences here: https://github.com/matthew1000/gstreamer-cheat-sheet/blob/master/sharing_and_splitting_pipelines.md


      1. Anand Sivaram:
        Jul 12, 2023 at 08:50 AM

        Whenever I push raw I420 video frames through shmsink/shmsrc, it works as explained by Matthew.
        But when I tried shmsink and shmsrc with H.264 RTP packets, it gives an error.


      2. Anand Sivaram:
        Jul 12, 2023 at 08:51 AM

        This is the pipeline I used.
        gst-launch-1.0 videotestsrc pattern=0 ! capsfilter caps=video/x-raw,format=I420,width=640,height=360,framerate=30/1 ! videoscale ! videorate ! videoconvert ! timeoverlay ! \
        x264enc key-int-max=30 ! capsfilter caps=video/x-h264,stream-format=byte-stream ! rtph264pay pt=96 ! \
        shmsink socket-path=/tmp/gstshm sync=true wait-for-connection=false

        gst-launch-1.0 shmsrc socket-path=/tmp/gstshm ! \
        capsfilter caps=application/x-rtp, media=(string)video, clock-rate=(int)90000 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink


        1. Olivier Crête:
          Jul 12, 2023 at 02:47 PM

          The RTP elements need a segment in the time format; the easiest way to get one is to add the rtpjitterbuffer element, which will do the conversion for you.

          gst-launch-1.0 shmsrc socket-path=/tmp/gstshm ! 'application/x-rtp, media=(string)video, clock-rate=(int)90000' ! rtpjitterbuffer ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink


          1. Anand Sivaram:
            Jul 12, 2023 at 03:06 PM

            Thank you very much for the quick response. Recently I found shmsrc's do-timestamp=true, but it was printing some warnings; the rtpjitterbuffer element looks like the real answer.


  11. Sergio Rodriguez:
    Jun 29, 2023 at 11:22 AM

    Can we get an example of how you would connect two sockets from different programs?

    I am trying to use this plugin connecting two sockets via AF_UNIX (and I have tried AF_INET sockets too), and it seems that the GStreamer plugin does not recognize those sockets to send the pipeline data over them.
    It is strange, because I can send messages over the socket from the server and the client receives them, and vice versa.


    1. George Kiagiadakis:
      Jul 07, 2023 at 10:31 PM

      In the upstream examples [https://gitlab.freedesktop.org/gstreamer/gstreamer/-/tree/main/subprojects/gst-plugins-bad/tests/examples/ipcpipeline] we use an AF_UNIX socketpair. If you want to create the two ends of the socket separately, you can do this by binding the unix socket to the filesystem and connecting to it from the other side. It's just regular AF_UNIX socket programming. Take a look at these examples and see if you can match them to what you are doing in your code.
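
      (For illustration, a minimal sketch of that approach with standard AF_UNIX calls; the socket path is arbitrary and error handling is omitted:)

      #include <sys/socket.h>
      #include <sys/un.h>
      #include <string.h>
      #include <unistd.h>

      /* Process A: bind the socket to a filesystem path and accept. */
      static int
      make_server_fd (const char *path)
      {
        int srv = socket (AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy (addr.sun_path, path, sizeof (addr.sun_path) - 1);
        unlink (path);
        bind (srv, (struct sockaddr *) &addr, sizeof (addr));
        listen (srv, 1);
        return accept (srv, NULL, NULL);  /* pass this fd to ipcpipelinesink */
      }

      /* Process B: connect to the same path. */
      static int
      make_client_fd (const char *path)
      {
        int fd = socket (AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy (addr.sun_path, path, sizeof (addr.sun_path) - 1);
        connect (fd, (struct sockaddr *) &addr, sizeof (addr));
        return fd;  /* pass this fd to ipcpipelinesrc */
      }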


