Carlafox, an open-source web-based CARLA visualizer

Vineet Suryan
October 11, 2022


Self-driving cars have the potential to change the paradigm of transportation. According to the U.S. Department of Transportation National Motor Vehicle Crash Causation Survey, 93% of all vehicle accidents are influenced by human error. Eliminating those accidents would be a giant leap toward a safer means of transportation.

However, developing autonomous driving systems requires a tremendous number of training images, usually collected and labelled by human labor, which is costly and error-prone. To make things worse, gathering such a vast amount of real driving imagery is challenging because unusual corner cases or peculiar weather and lighting conditions cannot be produced on demand.

Over the past years, synthesized datasets from 3D game engines have gained wide acceptance as a viable way to tackle this problem. Despite these advances, however, monitoring and validating the data generation process often remains time-consuming and challenging.

Motivated by these observations, we implemented Carlafox, an open-source web-based CARLA visualizer that takes one step towards democratizing the daunting task of dataset generation, making image synthesis and automatic ground truth data generation more maintainable, cheaper, and more repeatable.

Key takeaways

  • Datasets for computer vision machine learning are often challenging to acquire: they are typically created either by hand-labeling or via expensive measurements.
  • A virtual simulation where labels are known can generate large datasets virtually for free. Research shows that simulated and real data complement each other, and using both results in better AI models.
  • Monitoring virtual environments is crucial when creating data and testing various computer-vision solutions.
  • We developed Carlafox, a web-based CARLA visualizer that solves this problem by combining multiple data streams, including channels for custom shapes and text, in a single visualization.

A closer look into CARLA

CARLA is a 3D open-source simulator for autonomous driving. It provides methods for spawning various pre-defined vehicle models into the map, which can be controlled via a built-in lane following autopilot or by custom algorithms. CARLA comes with various maps that simulate environments from urban centers to countryside roads, including environmental presets for the time of the day and weather conditions.

Though CARLA is a 3D simulator, it has no built-in visualizer for any data beyond simply viewing the scene. The Python example scripts included with CARLA use PyGame to display graphical user interfaces and do basic sensor data visualization; however, they cannot visualize 3D LiDAR data or combinations of sensors, such as bounding boxes projected onto the camera data.

Carlaviz is a third-party web-based CARLA visualizer that can combine multiple data streams in a single visualization; however, its layout and data customization options are limited.

With Carlafox, we take it a step further by providing a streamlined solution to visualize both recorded and live CARLA simulations.

To visualize the CARLA simulation, we first have to understand CARLA's actors and sensor capabilities.


The CARLA simulator makes it easy to configure and place on-board sensors such as RGB cameras, depth cameras, radar, IMU, LiDAR, and semantic LiDAR, and to adjust weather conditions and the traffic scene to reproduce specific traffic cases.

The CARLA API supports custom sensor configurations as well, making it possible, for example, to replicate a specific LiDAR configuration used on a real car.
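As a concrete sketch, such a custom configuration can be expressed as a plain attribute map and applied through set_attribute, the call that CARLA's blueprint objects expose. The attribute names below are real options of CARLA's sensor.lidar.ray_cast blueprint; the numeric values are illustrative stand-ins for a 32-beam unit's datasheet, not measurements from any specific device, and configure_blueprint is our own helper, not a CARLA call:

```python
# Illustrative 32-beam LiDAR configuration. The keys are genuine
# sensor.lidar.ray_cast blueprint attributes; the values are
# hypothetical datasheet-style numbers (CARLA expects them as strings).
LIDAR_32_BEAM = {
    "channels": "32",
    "range": "100.0",
    "rotation_frequency": "10.0",
    "points_per_second": "600000",
    "upper_fov": "10.0",
    "lower_fov": "-30.0",
}

def configure_blueprint(blueprint, config):
    """Apply each attribute via blueprint.set_attribute(name, value).

    Works with any object exposing set_attribute, so it can be
    exercised without a running simulator.
    """
    for name, value in config.items():
        blueprint.set_attribute(name, value)
    return blueprint
```

In a live session the blueprint would come from world.get_blueprint_library().find('sensor.lidar.ray_cast'); since the helper only loops over set_attribute, the same code covers cameras, radar, and other sensor blueprints too.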

Actors -- cars and pedestrians

CARLA provides a simple API to populate the scene with what it calls Actors, which include not only vehicles and walkers but also sensors, traffic signs, and traffic lights. In addition, instead of populating the world manually, CARLA ships with a traffic simulation, which comes in handy for automatically creating a rich environment to train and test various autonomous driving stacks.
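The population step itself is mostly orchestration: pick distinct spawn points, pick vehicle blueprints, then hand each pair to the simulator (in the real API, world.try_spawn_actor followed by vehicle.set_autopilot(True)). Below is a minimal sketch of just that planning logic, kept free of the carla package so it runs standalone; plan_spawns is our own helper, not part of CARLA:

```python
import random

def plan_spawns(blueprint_ids, spawn_points, n, seed=0):
    """Pair up to n randomly chosen vehicle blueprint IDs with
    distinct spawn points, so no two vehicles share a location.

    Returns a list of (blueprint_id, spawn_point) tuples that a
    caller would feed to world.try_spawn_actor in a live session.
    """
    rng = random.Random(seed)
    picked = rng.sample(spawn_points, min(n, len(spawn_points)))
    return [(rng.choice(blueprint_ids), sp) for sp in picked]
```

Sampling spawn points without replacement mirrors what CARLA's own example scripts do to avoid spawn collisions; capping at len(spawn_points) keeps the call safe when a map has fewer spawn points than requested vehicles.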

Visualizing CARLA with Foxglove Studio

To visualize the CARLA data in Foxglove, we need to convert it to a format that Foxglove understands. Out of the box, Foxglove supports data from a running ROS 1/ROS 2 connection (i.e., a live simulation) or from a recorded ROS .bag file. To that end, we adapted and optimized the ROS-bridge project, which acts as a translation layer between CARLA and Foxglove, converting each CARLA sensor into a ROS message that Foxglove understands:

  • Camera, semantic, and depth images: sensor_msgs/CompressedImage
  • LiDAR, semantic LiDAR, and radar sensors: sensor_msgs/PointCloud2
  • Raster maps: nav_msgs/OccupancyGrid
  • Car's position: geometry_msgs/PoseStamped, sensor_msgs/NavSatFix, tf2_msgs/TFMessage
  • Additional data superimposed on camera images: foxglove_msgs/ImageMarkerArray
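To illustrate the LiDAR leg of that mapping: in recent CARLA releases (0.9.10+), each LidarMeasurement arrives as a flat buffer where every point is four little-endian 32-bit floats (x, y, z, intensity), and a sensor_msgs/PointCloud2 is essentially that same buffer plus field metadata. The sketch below models the conversion with plain dicts so it runs without a ROS install; parse_lidar and to_pointcloud2_dict are our own names, not part of the bridge:

```python
import numpy as np

def parse_lidar(raw_data: bytes) -> np.ndarray:
    """Decode a CARLA LidarMeasurement.raw_data buffer into an
    (N, 4) float32 array of x, y, z, intensity."""
    return np.frombuffer(raw_data, dtype=np.float32).reshape(-1, 4)

def to_pointcloud2_dict(points: np.ndarray, frame_id: str = "lidar") -> dict:
    """Pack points into the field layout of sensor_msgs/PointCloud2,
    modeled as a plain dict instead of a real ROS message class."""
    points = np.asarray(points, dtype=np.float32)
    return {
        "header": {"frame_id": frame_id},
        "height": 1,                     # unordered cloud: a single row
        "width": points.shape[0],
        "fields": [
            {"name": name, "offset": 4 * i, "datatype": "FLOAT32", "count": 1}
            for i, name in enumerate(("x", "y", "z", "intensity"))
        ],
        "is_bigendian": False,
        "point_step": 16,                # 4 floats x 4 bytes per point
        "row_step": 16 * points.shape[0],
        "data": points.tobytes(),
        "is_dense": True,
    }
```

The real bridge fills a genuine sensor_msgs/PointCloud2, but the byte layout is the same, which is why the conversion is essentially a copy plus metadata.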

Next, we created a Foxglove layout that visualizes all this data. The layout features three main use cases: perception, planning, and diagnostics.

  • Planning: The most prominent part of the layout is the 3D panel, which shows the world from a bird's-eye view – with bounding box annotations, LIDAR returns, the LIDAR raster map, vehicle trajectory, etc.
  • Perception: The right part of the layout focuses on visualizing the camera data. It shows the camera feed with the projected LiDAR points and bounding box annotations on top, a semantic segmentation of the camera view in which every object is colored according to its class, and a depth map representation.
  • Diagnostics: The remaining part of the layout shows the velocity and acceleration of the car.

Drag and drop your own ROS bag files into Foxglove to get immediate visual insight into your CARLA data, or connect to a live simulation – we provide a live demo environment to test the setup quickly.

To see Carlafox in action, click here to give our demo a try. You can also visit our GitHub repo for the source code.

Our work would not have been possible without countless open-source resources. We hope our contributions to Carlafox, Foxglove, and CARLA will help others in the automotive community build the next generation of innovative technology.

If you have questions or ideas on how to visualize your data, join us on our Gitter #lounge channel or leave a comment below.
