Aivero at GStreamer Conference 2019 in Lyon


GStreamer is the number one framework for multimedia handling and is used by a vast range of open and closed source projects and products. Remember the 2017 Nobel Prize in Physics, awarded for the detection of gravitational waves? That research was powered by GStreamer. It also happens to run on virtually every smart TV. At Aivero, we use it as one of our building blocks.

The annual GStreamer conference gathers developers from around the globe working on projects ranging from CCTV and industrial machine vision applications to smart TVs and web streaming platforms. As an open source project, GStreamer is developed by the companies that use it and coordinated by the team at Centricular Ltd. The conference is always followed by a GStreamer Hackfest, giving participants the opportunity to work on common or company-specific GStreamer issues and to exchange insights and solutions in an informal setting.

Niclas, Kasper, Andrej and Raphael in Lyon

During the conference, our CTO Raphael Dürscheid presented our recent open source work on adding support for RGB-D (depth) cameras to GStreamer, and introduced our proprietary depth video compression.

The components we open sourced allow developers to work with all video streams coming from Intel's RealSense cameras and, soon, from Microsoft's Azure Kinect camera.

We defined a simple interface that can support all types of RGB-D cameras, and we encourage developers interested in working with these cameras to consider GStreamer and the high-performance video handling it offers.

Watch Raphael’s talk here

Raphael on stage in Lyon

Niclas gave a lightning talk on how we use conan.io to build and package all our dependencies and our GStreamer elements written in Rust and C++.

Watch Niclas’ talk here

Niclas doing his lightning talk about conan.io

We have open sourced a set of GStreamer elements for opening the video streams from RGB-D cameras. Specifically, we support the Intel RealSense D400 series cameras and are close to supporting Microsoft's new Azure Kinect.

The source elements above produce our newly defined (open source) `video/rgbd` interface, often called a `caps type` in GStreamer. This interface gives element developers access to the frames from all enabled devices onboard the camera, taken at the same time. Certain types of post-processing will benefit from low-overhead access to this matched set of frames.
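To make this concrete, a `video/rgbd` caps string could look roughly like the sketch below. The field names and values here are purely illustrative assumptions, not copied from the released code; consult the gst-rgbd repository for the actual caps definition:

```
video/rgbd,
    streams=(string)"depth,infra1,infra2,colour",
    depth_format=(string)GRAY16_LE,
    depth_width=(int)1280,
    depth_height=(int)720,
    framerate=(fraction)30/1
```

The key idea is that one caps structure describes the whole matched set of per-frame streams, so downstream elements can negotiate and consume them together.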

Furthermore, we released the `rgbdmux` and `rgbddemux` elements, which respectively mux and demux our `video/rgbd` streams to and from the contained elementary streams; on the D400 series these are `depth`, `infra1`, `infra2` and `colour`.

These demuxed video streams can now be used like any other video stream inside GStreamer, giving developers access to all the powerful tools GStreamer provides.
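As a sketch of how such a pipeline might be assembled, the hypothetical `gst-launch-1.0` invocation below demuxes a RealSense `video/rgbd` stream and displays two of its elementary streams. The element name `realsensesrc` and the demuxer pad names are assumptions based on the release notes; they require the gst-rgbd plugins and a connected camera, so check the repository for the exact element properties and pad names before running it:

```shell
# Hypothetical sketch: assumes the gst-rgbd plugins are installed and a
# RealSense D400 camera is attached. Pad names on rgbddemux are assumed
# to follow the stream names (depth, colour, ...).
gst-launch-1.0 \
  realsensesrc \
  ! rgbddemux name=demux \
  demux.src_depth  ! queue ! videoconvert ! autovideosink \
  demux.src_colour ! queue ! videoconvert ! autovideosink
```

Once demuxed, each branch is an ordinary GStreamer video stream, so the usual converters, encoders and sinks apply unchanged.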

rgbdmux, rgbddemux: https://gitlab.com/aivero/public/gstreamer/gst-rgbd/-/releases#0.1.5
