
Carlafox, an open-source web-based CARLA visualizer


Vineet Suryan
October 11, 2022

Self-driving cars have the potential to change the paradigm of transportation. According to the U.S. Department of Transportation's National Motor Vehicle Crash Causation Survey, 93% of all vehicle accidents are influenced by human error. Eliminating those accidents would be a giant leap toward a safer means of transportation.

However, developing autonomous driving systems requires a tremendous amount of training images, usually collected and labeled by human labor, which is costly and error-prone. To make matters worse, gathering such a vast amount of real driving images is challenging because unusual corner cases or peculiar weather and lighting conditions cannot be produced on demand.

Over the past few years, datasets synthesized with 3D game engines have gained wide acceptance as a viable way to tackle this problem. Despite these advances, monitoring and validating the data generation process is often still time-consuming and challenging.

Motivated by these observations, we implemented Carlafox, an open-source web-based CARLA visualizer that takes one step towards democratizing the daunting task of dataset generation, making image synthesis and automatic ground-truth data generation more maintainable, cheaper, and more repeatable.

Key takeaways

  • Datasets for computer vision machine learning are often challenging to acquire. They are typically created either by hand-labeling or via expensive measurements.
  • A virtual simulation where labels are known can generate large datasets virtually for free. Research shows that simulated and real data complement each other, and using both results in better AI models.
  • Monitoring virtual environments is crucial when creating data and testing various computer-vision solutions.
  • To solve this problem, we developed Carlafox, a web-based CARLA visualizer that combines multiple data streams, including channels for custom shapes and text, in a single visualization.

A closer look into CARLA

CARLA is a 3D open-source simulator for autonomous driving. It provides methods for spawning various predefined vehicle models onto the map, which can be controlled by a built-in lane-following autopilot or by custom algorithms. CARLA comes with various maps that simulate environments from urban centers to countryside roads, including environmental presets for the time of day and weather conditions.
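
As a minimal sketch of that workflow, the snippet below uses the CARLA Python API to spawn a vehicle, enable the autopilot, and set a weather preset; it assumes a simulator listening on localhost:2000, and the blueprint, spawn point, and preset are arbitrary choices.

    # Minimal sketch: connect to a running CARLA server, spawn a vehicle,
    # enable the built-in autopilot, and pick a weather preset.
    import random

    import carla

    client = carla.Client("localhost", 2000)  # assumed host/port
    client.set_timeout(10.0)
    world = client.get_world()

    # Pick any vehicle blueprint and a free spawn point from the current map.
    blueprint = random.choice(world.get_blueprint_library().filter("vehicle.*"))
    spawn_point = random.choice(world.get_map().get_spawn_points())
    vehicle = world.spawn_actor(blueprint, spawn_point)

    # Hand control to the lane-following autopilot and change the environment.
    vehicle.set_autopilot(True)
    world.set_weather(carla.WeatherParameters.WetCloudyNoon)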

Though CARLA is a 3D simulator, it has no built-in visualizer beyond simply viewing the scene. The Python example scripts included with CARLA use PyGame to display graphical user interfaces and do basic sensor data visualization; however, they cannot visualize 3D LIDAR data or combinations of sensors, such as bounding boxes projected onto the camera images.

Carlaviz is a third-party web-based CARLA visualizer that can combine multiple data streams in a single visualization. However, its layout and data-customization options are limited.

With Carlafox, we take it a step further by providing a streamlined solution to visualize both recorded and live CARLA simulations.

To visualize the CARLA simulation, we first have to understand the CARLA actors and sensor capabilities.

Sensors:

The CARLA simulator makes it easy to place and modify on-board sensors such as RGB cameras, depth cameras, radar, IMU, LIDAR, and semantic LIDAR, and to adjust the weather conditions and the traffic scene to reproduce specific traffic cases.

The CARLA API supports custom sensor configurations as well. For example, it makes it possible to replicate a specific LIDAR configuration used on a real car.
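
As a sketch of such a custom configuration, the snippet below attaches a LIDAR to the vehicle spawned above; the attribute values and mounting position are illustrative rather than taken from any particular real sensor.

    # Illustrative custom LIDAR configuration (values are arbitrary).
    lidar_bp = world.get_blueprint_library().find("sensor.lidar.ray_cast")
    lidar_bp.set_attribute("channels", "64")
    lidar_bp.set_attribute("range", "100.0")
    lidar_bp.set_attribute("rotation_frequency", "20")
    lidar_bp.set_attribute("points_per_second", "1300000")

    # Mount the sensor roughly on the roof of the ego vehicle spawned earlier.
    lidar_transform = carla.Transform(carla.Location(x=0.0, z=2.4))
    lidar = world.spawn_actor(lidar_bp, lidar_transform, attach_to=vehicle)

    # Every simulation tick delivers a carla.LidarMeasurement to the callback.
    lidar.listen(lambda measurement: print("LIDAR frame", measurement.frame))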

Actors -- cars and pedestrians

CARLA provides a simple API to populate the scene with what it calls actors; these include not only vehicles and walkers but also sensors, traffic signs, and traffic lights. In addition, instead of populating the world manually, CARLA ships with a traffic simulation, which comes in handy for automatically creating a rich environment in which to train and test various autonomous driving stacks.
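
To give a rough idea, the sketch below populates the map with background traffic through the Traffic Manager; the port and the number of vehicles are arbitrary choices.

    # Sketch: spawn background traffic driven by CARLA's Traffic Manager.
    traffic_manager = client.get_trafficmanager(8000)  # 8000 is the default port
    tm_port = traffic_manager.get_port()

    blueprints = world.get_blueprint_library().filter("vehicle.*")
    spawn_points = world.get_map().get_spawn_points()

    npc_vehicles = []
    for transform in spawn_points[:30]:  # arbitrary number of NPCs
        npc = world.try_spawn_actor(random.choice(blueprints), transform)
        if npc is not None:  # try_spawn_actor returns None if the spot is taken
            npc.set_autopilot(True, tm_port)
            npc_vehicles.append(npc)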

Visualizing CARLA with Foxglove Studio

To visualize CARLA data in Foxglove, we first need to convert it into a format that Foxglove understands. Out of the box, Foxglove supports data via a running ROS1/ROS2 connection (i.e., a live simulation) or from a recorded ROS .bag file. To that end, we adapted and optimized the ROS-bridge project, which acts as a translation layer between CARLA and Foxglove and converts each CARLA sensor into a ROS message that Foxglove understands (a minimal conversion sketch for the LIDAR case follows the list):

  • Camera, semantic, and depth images → sensor_msgs/CompressedImage
  • LIDAR, semantic LIDAR, and radar sensors → sensor_msgs/PointCloud2
  • Raster maps → nav_msgs/OccupancyGrid
  • Car's position → geometry_msgs/PoseStamped, sensor_msgs/NavSatFix, tf2_msgs/TFMessage
  • Additional data superimposed on camera images → foxglove_msgs/ImageMarkerArray
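
As an illustration of the kind of conversion the bridge performs, the sketch below publishes a carla.LidarMeasurement as a sensor_msgs/PointCloud2 with the ROS 1 Python API. The topic and frame names are placeholders, it assumes an initialized ROS node and the LIDAR sensor from the earlier snippet, and the actual ROS-bridge code is more involved.

    # Illustrative ROS 1 sketch: publish a carla.LidarMeasurement as a PointCloud2.
    import struct

    import rospy
    from sensor_msgs import point_cloud2
    from sensor_msgs.msg import PointCloud2, PointField
    from std_msgs.msg import Header

    pub = rospy.Publisher("/carla/ego_vehicle/lidar", PointCloud2, queue_size=10)

    FIELDS = [
        PointField(name="x", offset=0, datatype=PointField.FLOAT32, count=1),
        PointField(name="y", offset=4, datatype=PointField.FLOAT32, count=1),
        PointField(name="z", offset=8, datatype=PointField.FLOAT32, count=1),
        PointField(name="intensity", offset=12, datatype=PointField.FLOAT32, count=1),
    ]

    def lidar_callback(measurement):
        # raw_data is a flat float32 buffer of [x, y, z, intensity] per point
        # (CARLA 0.9.10 and later).
        floats = struct.unpack(f"{len(measurement.raw_data) // 4}f", measurement.raw_data)
        points = [floats[i:i + 4] for i in range(0, len(floats), 4)]
        header = Header(stamp=rospy.Time.now(), frame_id="ego_vehicle/lidar")
        pub.publish(point_cloud2.create_cloud(header, FIELDS, points))

    lidar.listen(lidar_callback)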

Next, we created a Foxglove layout that visualizes all this data. The layout features three main use cases: perception, planning, and diagnostics.

  • Planning: The most prominent part of the layout is the 3D panel, which shows the world from a bird's-eye view – with bounding box annotations, LIDAR returns, the LIDAR raster map, vehicle trajectory, etc.
  • Perception: The right part of the layout focuses on the camera data. It shows the camera feed with projected LIDAR returns and bounding box annotations on top of it, a semantic segmentation of the camera view in which every object is colored according to its class, and a depth map representation.
  • Diagnostics: The remaining part of the layout shows the velocity and acceleration of the car (see the sketch after this list).
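
As a sketch of where those diagnostics signals could come from, a hypothetical minimal publisher might expose the ego vehicle's speed and acceleration as plain Float32 topics; the topic names are made up, and the bridge's own messages carry considerably more detail.

    # Hypothetical diagnostics publisher: speed and acceleration magnitudes.
    import math

    import rospy
    from std_msgs.msg import Float32

    speed_pub = rospy.Publisher("/carla/ego_vehicle/speed", Float32, queue_size=10)
    accel_pub = rospy.Publisher("/carla/ego_vehicle/acceleration", Float32, queue_size=10)

    def publish_diagnostics(vehicle):
        v = vehicle.get_velocity()      # carla.Vector3D in m/s
        a = vehicle.get_acceleration()  # carla.Vector3D in m/s^2
        speed_pub.publish(Float32(data=math.sqrt(v.x**2 + v.y**2 + v.z**2)))
        accel_pub.publish(Float32(data=math.sqrt(a.x**2 + a.y**2 + a.z**2)))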

Drag and drop your own ROS bag files into Foxglove to get an immediate visual insight into your CARLA data. Or connect to a live simulation – we provide a live demo environment to test the setup quickly.

To see Carlafox in action, give our demo a try. The source code is readily available in our GitHub repo.

Our work would not have been possible without the help of countless open-source resources. We hope our contributions to Carlafox, Foxglove, and CARLA will help others in the automotive community build the next generation of innovative technology.

If you have questions or ideas on how to visualize your data, join us on our Gitter #lounge channel or leave a comment below.
