ipcpipeline: Splitting a GStreamer pipeline into multiple processes

George Kiagiadakis
November 17, 2017

Earlier this year I worked on a GStreamer plugin called “ipcpipeline”. This plugin provides elements that make it possible to interconnect GStreamer pipelines running in different processes. In this blog post I am going to explain how this plugin works and why you might want to use it in your application.

Why ipcpipeline?

In GStreamer, pipelines are meant to be built and run inside a single process. Normally one wouldn’t even think about involving multiple processes for a single pipeline. You can (and should) involve multiple threads, of course, which is easily done using the queue element, in order to do parallel processing. But since you can involve multiple threads, why would you want to involve multiple processes as well?

Splitting part of a pipeline into a separate process is useful when one or more elements need to be isolated for security reasons. Imagine the case where you have an application that uses a hardware video decoder and therefore has device access privileges. Also imagine that in the same pipeline you have elements that download and parse video content directly from a network server, as most Video On Demand applications do. Although I don’t mean to say that GStreamer is not secure, it can be a good idea to think ahead and make it as hard as possible for an attacker to take advantage of potential security flaws. In theory, someone could exploit a bug in the container parser by sending it crafted data from a fake server, and then take control of other things by abusing those device access privileges, or cause a system crash. ipcpipeline can help prevent that.

How does it work?

In the – oversimplified – diagram below, we can see how the media pipeline of a video player would look with GStreamer:


With ipcpipeline, this pipeline can be split into two processes, like this:


As you can see, the split mainly involves two elements: ipcpipelinesink, which serves as the sink of the first pipeline, and ipcpipelinesrc, which serves as the source of the second. These two elements internally talk to each other over a Unix pipe or socket, transferring buffers, events, queries and messages across it, thus linking the two pipelines together.

So far, this mechanism may not look very special. You might be wondering at this point what the difference is between ipcpipeline and an existing mechanism such as a pair of fdsink/fdsrc, udpsink/udpsrc, or RTP. What is special about these elements is that the two pipelines behave as if they were a single pipeline, with the elements of the second one being part of a GstBin in the first one:


The diagram above illustrates how you can think of a pipeline that uses the ipcpipeline mechanism. As you can see, ipcpipelinesink behaves as a GstBin that contains the whole remote pipeline. In practice, this means that whenever you change the state of ipcpipelinesink, the remote pipeline’s state changes as well. It also means that all messages, events and queries that make sense are forwarded from one pipeline to the other, implementing as closely as possible the behavior that a GstBin would have.

This design allows you to modify an existing application to use the split-pipeline mechanism without having to change the pipeline control logic or implement your own IPC for controlling the second pipeline. It is all integrated in the mechanism already.

ipcpipeline follows a master-slave design. The pipeline that controls the state changes of the other pipeline is called the “master”, while the other one is called the “slave”. In the above example, the pipeline that contains the ipcpipelinesink element is the “master” and the other one is the “slave”. At the time of writing, the opposite setup is not implemented: it is always the downstream part of the pipeline that can be slaved, and ipcpipelinesink is always on the “master” side.

While there can be only one “master” pipeline, it is possible to have multiple “slave” ones. This allows you, for example, to split an audio decoder and a video decoder into different processes:


It is also possible to have multiple ipcpipelinesink elements connected to the same slave pipeline. In this case, the slave pipeline will follow, out of the two states it gets from the two ipcpipelinesinks, the one that is closest to PLAYING. Also, messages from the slave pipeline will only be forwarded through one of the two ipcpipelinesinks, so you will not notice any duplicate messages. The behavior should be exactly the same as in the split-slaves scenario.


Where is the code?

ipcpipeline is part of the GStreamer “bad” plugins set (gst-plugins-bad). Documentation is included with the code, and there are also some examples that you can try out to get familiar with it. Happy hacking!


Comments (18)

  1. jmz:
    Mar 14, 2018 at 06:56 AM

    Thank you for introducing and explaining ipcpipeline. I am interested in the last scenario. In Process 1, how do you create a slave pipeline consisting of what are actually two pipelines (one ending in an audio sink and the other in a video sink)? Do the audio pipeline and video pipeline run in separate main loops? Does Process 1 actually consist of two slave pipelines?


    1. gkiagia:
      Mar 14, 2018 at 01:48 PM

      In process 1, as shown in the image, there is only one slave pipeline, with one main loop. What happens in that pipeline is that we add two sources and link them to their respective downstream elements all the way to the sinks. All the elements are still in one slave pipeline, but in two separate chains that don't link to each other.


  2. jakob:
    Aug 17, 2018 at 04:20 PM

    Nice plugin! I was wondering how difficult it would be to have the src part of the plugin be the master? Is this already implemented? How difficult would it be to add?


    1. gkiagia:
      Aug 20, 2018 at 03:46 PM

      This is unfortunately not implemented. It is not hard to do, but it's a considerable amount of work. Do you have a specific use case? I would be interested to discuss it.


  3. gcasmer:
    Dec 18, 2018 at 12:25 PM

    GStreamer states a number of reasons why a plugin might be put into the 'bad' classification. Do you know which reason applies to your plugin? Is it that they don't like IPC for GStreamer, the structure of the code, documentation, or some stability issue? Or maybe some other reason? I have an interest in this code and just want to make sure I know what I am getting into if I start using it. I am happy to help solve any of the issues and contribute to the project.


    1. gkiagia:
      Dec 18, 2018 at 03:43 PM


      The plugin is in the "bad" repository simply because it is something relatively new and not widely used. It also lacks features, like the ability to do the source part of the pipeline in a slave, and most importantly, it lacks reliable automatic unit tests (the tests are implemented, but they keep showing race conditions on the CI servers...).

      The structure of the code and the idea are both sane. I think that with a little bit more testing, adoption and better unit tests, this has the potential of becoming a good plugin.


  4. bell:
    Mar 24, 2020 at 01:38 AM

    I'm wondering about the difference between ipcpipelinesrc/sink and shmsrc/sink. The pipeline with ipcpipelinesrc is totally a slave and needs to get its state changes from the sink side too, right?


    1. gkiagia:
      Mar 24, 2020 at 11:48 AM

      Yes, the major difference is that in ipcpipeline, the pipeline that has the ipcpipelinesrc is completely controlled by the one that has the ipcpipelinesink. State changes, events, etc. are all propagated, making the pipelines behave like one. With shmsink/shmsrc you only transfer buffers, and the control of each pipeline is independent.


  5. Theo:
    Jul 14, 2020 at 10:11 AM

    Very good plugin!
    I however have some issues using it with live sources. The output lags/stutters, and it seems to lose a lot of frames.
    The only solution I found is to set the "sync" property of the last sink to FALSE.
    I don't have this issue with videotestsrc, only with plugins that use real cameras.
    Do you know where the issue could come from?
    I'm using ipcpipeline1.c as a reference.


  6. Ali:
    Feb 08, 2021 at 05:28 PM

    Just wondering: if memory is allocated from a specific bufferpool/allocator, will the pipeline work or not?
    Because when the buffer gets unreffed, we do not have any bufferpool info!


    1. gkiagia:
      Feb 10, 2021 at 07:50 AM

      Buffers from a specific allocator should work, but only up to the process boundary. When the buffer leaves one process to enter the other one, it will be copied, losing allocator-specific properties.

      In the future we could also implement zero-copy for ipcpipeline, using buffers that can be shared with a file descriptor, such as memfd or dmabuf. In this scenario, the kernel keeps track of the buffer reference so that it is not lost even when the corresponding GstBuffer / GstMemory in one process is completely unrefed and returns to the pool.


      1. Ali:
        Feb 10, 2021 at 07:34 PM

        Thanks!
        That makes sense. Looking forward to this feature.


      2. Ali:
        Feb 10, 2021 at 07:58 PM

        As we can't send the bufferpool (and custom allocator) details along with the buffer, linking the buffer's free function after unref back to the bufferpool/allocator's free function (even an fd-based one) looks challenging.


  7. Laurent:
    May 17, 2021 at 09:12 AM

    How is this different from RidgeRun's gstreamer plugin called https://github.com/RidgeRun/gst-interpipe and the accompanying gstreamer daemon https://github.com/RidgeRun/gstd-1.x ?

    Are they solving the same need?
    Is there an equivalent community effort to have what RidgeRun has developed?


    1. George Kiagiadakis:
      May 17, 2021 at 11:47 AM


      There are multiple similar plugins for interconnecting GStreamer pipelines; however, they all differ in purpose and capabilities.
      interpipe is meant to link multiple GStreamer pipelines in the same process, while ipcpipeline crosses the process boundary. Something closer to interpipe would be the inter and gstproxy plugins.

      There's a useful cheat sheet with all the pipeline sharing/splitting plugins here: https://github.com/matthew1000/gstreamer-cheat-sheet/blob/master/sharing_and_splitting_pipelines.md
      Allow me also to add PipeWire to this list. It is not a GStreamer plugin or specific to GStreamer in any way, but it is also an efficient multimedia IPC mechanism for the case where you want to cross the process boundary, and it can be combined with GStreamer in a complex multi-process application architecture.

      Regarding gstd, this is something different. It's a process like gst-launch that allows you to send control commands to it over the network in order to affect the state of the pipeline.

      I hope this helps to clear things up a bit...


      1. Laurent Denoue:
        May 17, 2021 at 03:53 PM

        Thanks George for the link to the cheat sheet.
        I read the article linked in that cheat sheet: http://blog.nirbheek.in/2018/02/decoupling-gstreamer-pipelines.html
        and it looks like GstInterpipe from RidgeRun might be comparable to gstproxy then?
        I guess gstproxy is more modern and is part of GStreamer... Do you know of good examples built with gstproxy?
        For gst-interpipe, RidgeRun seems to use it in their GStreamer daemon (gstd).



  8. karthck:
    Aug 11, 2021 at 10:15 AM

    Can we construct a pipeline across 2 laptops?


    1. George Kiagiadakis:
      Aug 11, 2021 at 05:37 PM

      ipcpipeline is not intended for splitting pipelines across different machines. It may work with a few hacks, but it's probably best to go for another solution, like using GStreamer's RTP, UDP or TCP elements.

