October 11, 2022
Self-driving cars have the potential to change the paradigm of transportation. According to the U.S. Department of Transportation's National Motor Vehicle Crash Causation Survey, 93% of all vehicle accidents involve human error. Eliminating those accidents would be a giant leap toward safer transportation.
However, developing autonomous driving systems requires a tremendous number of training images, usually collected and labelled by hand, which is costly and error-prone. To make things worse, gathering such a vast amount of real driving imagery is challenging because unusual corner cases or peculiar weather and lighting conditions cannot be produced on demand.
Over the past years, synthetic datasets generated with 3D game engines have gained wide acceptance as a viable way to tackle this problem. Despite these advances, monitoring and validating the data generation process is often still time-consuming and challenging.
Motivated by these observations, we implemented Carlafox, an open-source web-based CARLA visualizer that takes one step towards democratizing the daunting task of dataset generation, making image synthesis and automatic ground-truth data generation more maintainable, cheaper, and more repeatable.
CARLA is an open-source 3D simulator for autonomous driving. It provides methods for spawning various pre-defined vehicle models on the map, which can be controlled via a built-in lane-following autopilot or by custom algorithms. CARLA ships with various maps that simulate environments from urban centers to countryside roads, including environmental presets for the time of day and weather conditions.
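To make this concrete, here is a minimal sketch of spawning one such vehicle through the CARLA Python API and handing it to the built-in autopilot. The host/port defaults and the weather preset are illustrative, and a CARLA server must be running for the function to succeed:

```python
def spawn_autopilot_vehicle(host="localhost", port=2000):
    # Sketch only: assumes a CARLA server is listening on host:port
    # and the `carla` Python package is installed.
    import carla

    client = carla.Client(host, port)
    client.set_timeout(10.0)
    world = client.get_world()

    # Pick a vehicle blueprint and a free spawn point on the current map.
    blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprint, spawn_point)

    # Hand control to the built-in lane-following autopilot and apply
    # one of the environmental presets mentioned above.
    vehicle.set_autopilot(True)
    world.set_weather(carla.WeatherParameters.WetCloudySunset)
    return vehicle
```

Swapping `WetCloudySunset` for any other `carla.WeatherParameters` preset is enough to re-render the same traffic scenario under different lighting.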
Though CARLA is a 3D simulator, it has no built-in visualizer for anything beyond the rendered scene itself. The Python example scripts included with CARLA use PyGame to display graphical user interfaces and do basic sensor data visualization; however, they cannot visualize 3D LiDAR data or combine multiple sensors, such as bounding boxes projected onto the camera images.
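Combining sensors this way ultimately means projecting 3D geometry into the camera frame. The following is a minimal, CARLA-free sketch of the ideal pinhole projection involved, with the focal length derived from an image width and horizontal field of view the way CARLA's camera blueprint attributes suggest (the coordinate convention here, x right / y down / z forward, is an assumption of this sketch):

```python
import math

def project_to_image(point, image_w, image_h, fov_deg):
    """Project a 3D point (x right, y down, z forward, camera frame)
    onto the image plane of an ideal pinhole camera, as needed when
    overlaying LiDAR returns or bounding boxes on camera frames."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera, not visible
    # Focal length in pixels from a horizontal field of view in degrees.
    focal = image_w / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    u = focal * x / z + image_w / 2.0
    v = focal * y / z + image_h / 2.0
    return (u, v)

# A point straight ahead lands in the image centre.
print(project_to_image((0.0, 0.0, 10.0), 800, 600, 90.0))  # → (400.0, 300.0)
```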
Carlaviz is a third-party web-based CARLA visualizer that can combine multiple data streams in a single visualization. However, its layout and data customization options are limited.
With Carlafox, we take it a step further by providing a streamlined solution to visualize both recorded and live CARLA simulations.
To visualize the CARLA simulation, we first have to understand the CARLA actors and sensor capabilities.
The CARLA simulator makes it easy to configure and place on-board sensors such as RGB cameras, depth cameras, radar, IMU, LiDAR, and semantic LiDAR, and to adjust the weather conditions and the traffic scene to reproduce specific traffic cases.
The CARLA API supports custom sensor configurations as well. For example, it makes it possible to replicate a specific LiDAR configuration used on a real car.
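Such a configuration is expressed through blueprint attributes. A sketch, using the attribute names of CARLA's `sensor.lidar.ray_cast` blueprint; the concrete values below are a hypothetical 64-channel setup, not taken from any particular real device:

```python
# Hypothetical 64-channel LiDAR configuration (illustrative values).
LIDAR_CONFIG = {
    "channels": "64",
    "range": "120.0",            # metres
    "points_per_second": "1300000",
    "rotation_frequency": "20",  # Hz; ideally matches the simulation tick
    "upper_fov": "10.0",         # degrees
    "lower_fov": "-30.0",        # degrees
}

def spawn_lidar(world, parent_vehicle):
    # Sketch only: `world` comes from an existing carla.Client connection.
    import carla

    bp = world.get_blueprint_library().find("sensor.lidar.ray_cast")
    for key, value in LIDAR_CONFIG.items():
        bp.set_attribute(key, value)
    # Mount the sensor roughly at roof height on the parent vehicle.
    mount = carla.Transform(carla.Location(x=0.0, z=2.4))
    return world.spawn_actor(bp, mount, attach_to=parent_vehicle)
```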
CARLA provides a simple API to populate the scene with what it calls Actors; that includes not only vehicles and walkers but also sensors, traffic signs, and traffic lights. In addition, instead of populating the world manually, CARLA ships with a traffic simulation that automatically creates a rich environment for training and testing various autonomous driving stacks.
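Populating a map this way can be sketched with the CARLA Traffic Manager, which steers every spawned vehicle through its autopilot (again a sketch: `client` is assumed to be an existing `carla.Client`, and the vehicle count is arbitrary):

```python
import random

def populate_world(client, n_vehicles=30):
    # Sketch only: requires the `carla` package and a running server.
    world = client.get_world()
    traffic_manager = client.get_trafficmanager()

    blueprints = list(world.get_blueprint_library().filter("vehicle.*"))
    spawn_points = world.get_map().get_spawn_points()

    vehicles = []
    for spawn_point in spawn_points[:n_vehicles]:
        # try_spawn_actor returns None if the spot is already occupied.
        actor = world.try_spawn_actor(random.choice(blueprints), spawn_point)
        if actor is not None:
            # Register the vehicle with the built-in traffic simulation.
            actor.set_autopilot(True, traffic_manager.get_port())
            vehicles.append(actor)
    return vehicles
```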
To visualize the CARLA data in Foxglove, we need to convert it to a format that Foxglove understands. Out of the box, Foxglove supports data via a running ROS1/ROS2 connection (i.e., a live simulation) or from a recorded ROS .bag file. To that end, we adapted and optimized the ROS-bridge project, which acts as a translation layer between CARLA and Foxglove and converts each CARLA sensor into a ROS message that Foxglove understands.
Next, we created a Foxglove layout that visualizes all this data. The layout features three main use cases: perception, planning, and diagnostics.
Drag and drop your own ROS bag files into Foxglove to get immediate visual insight into your CARLA data. Or connect to a live simulation – we provide a live demo environment to test the setup quickly.
Our work would not have been possible without the help of countless open-source resources. We hope our contributions to Carlafox, Foxglove, and CARLA will help others in the automotive community build the next generation of innovative technology.
If you have questions or ideas on how to visualize your data, join us on our Gitter #lounge channel or leave a comment below.