CARLA Simulator in Docker in 2023

Antoine C.
6 min read · Feb 21, 2023


CARLA is an open-source simulator for autonomous driving research that provides a realistic environment for testing and developing autonomous vehicles. In this article, we’ll be exploring how to run CARLA in Docker on Ubuntu. This is an update of a previous article, which was written for an older version of CARLA and Ubuntu 20.04.

Docker is a popular containerization technology that allows you to package an application and all of its dependencies into a single image. This image can then be run on any machine that has Docker installed, without worrying about the host operating system or environment. This makes it an ideal tool for running CARLA, as it provides a number of advantages, including:

  • Repeatability: Docker can help ensure that your CARLA environment is consistent across different machines and operating systems, making it easier to reproduce and share your experiments.
  • Portability: Docker allows you to easily move your CARLA environment between different machines or cloud platforms, without worrying about compatibility issues.
  • Isolation: Docker containers provide a level of isolation from the host system, which can help prevent conflicts with other software installed on the host machine.

In this article, we’ll walk you through the steps to get CARLA up and running in a Docker container on Ubuntu, so that you can start experimenting with autonomous driving research in a portable and repeatable way.

GPU Capabilities

CARLA is a computationally intensive application that requires significant GPU resources to run efficiently. By default, Docker containers do not have access to the host system’s GPU, which can limit the performance of CARLA simulations. To address this issue, we will be using Nvidia runtime for Docker, which provides access to the host system’s GPU within Docker containers. This will allow us to take full advantage of the GPU resources available on our system, which can significantly improve the performance of CARLA simulations. In this section, we’ll walk you through the steps to install Nvidia runtime for Docker and configure your Docker environment to use it with CARLA.

To install Nvidia runtime for Docker, follow these steps:

  • Make sure that the NVIDIA drivers are installed on your computer (you must be able to run the nvidia-smi command without errors).
  • Add the GPG keys:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  • Update and install:
sudo apt-get update && sudo apt-get install -y nvidia-docker2
  • Restart the Docker daemon:
sudo systemctl restart docker

After completing these steps, you should have Nvidia runtime installed on your system and available for use with Docker.

You can test the install with the following command:

sudo docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

Docker Compose to the rescue

Now that we have Nvidia runtime for Docker installed and configured, we can proceed to set up the Docker environment for running CARLA. We’ll use a Docker Compose file to define the services and dependencies required for running CARLA in Docker.

Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define the services, networks, and volumes required for your application in a single YAML file, making it easy to manage and deploy your Docker applications. Before proceeding with this section, ensure that you have Docker Compose installed on your system.

In the Docker Compose file, we’ll define the environment variables and volumes required to run CARLA with Nvidia runtime. We’ll also set up a few additional services, such as a separate container for running the CARLA simulator GUI.

By defining the services and dependencies in the Docker Compose file, we can easily spin up a fully configured CARLA environment with a single command, regardless of the host system. In the next section, we’ll walk you through the steps to create the Docker Compose file and launch the CARLA environment in Docker.

Here is the docker-compose file:
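The following is a minimal sketch of such a compose file. The image is the official carlasim/carla image from Docker Hub and CarlaUE4.sh is CARLA’s launch script, but the service name, exact port range, and environment variables are assumptions reconstructed from the description that follows:

```yaml
version: "3.8"

services:
  carla:
    image: carlasim/carla:0.9.14      # pin the CARLA version
    runtime: nvidia                   # requires nvidia-docker2 (installed above)
    environment:
      - DISPLAY=${DISPLAY}            # render the simulator GUI on the host's display
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix # host X11 socket for hardware-accelerated rendering
    ports:
      - "2000-2002:2000-2002"         # CARLA server ports
    command: /bin/bash ./CarlaUE4.sh -vulkan
```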

In the presented Docker Compose file, we’ve pinned the CARLA version to 0.9.14, the latest stable release as of today. This ensures that our code and setup remain stable over time, and that we won’t encounter any surprises due to unexpected updates or changes to the CARLA environment. Additionally, we’ve specified that the CARLA environment should use the Vulkan graphics engine, which is the default for the latest version of CARLA.

To make the CARLA environment available on the host system, we’ve opened the ports used by the CARLA server: ports 2000 to 2002. The Python client connects to port 2000 (the RPC port), while the adjacent ports are used for streaming sensor data.

To ensure that the container has the required graphical acceleration requirements, we’ve defined several environment variables and volumes. For instance, the DISPLAY environment variable is set to the host's display, which allows the CARLA simulator GUI to be rendered on the host system. Additionally, we've mounted the host system's X11 socket which allows the container to access the host system's graphical acceleration capabilities.

By defining these services and dependencies in the Docker Compose file, we can easily spin up a fully configured CARLA environment in Docker with a single command. In the next section, we’ll walk through the steps to launch the CARLA environment and start running simulations.

Run it!

Now that we have created the Docker Compose file with all the necessary configuration, we can simply run the following command to start the CARLA environment:

sudo docker-compose up

This will start the CARLA simulator, and make it available on the host system using the ports we opened in the Docker Compose file. The simulator GUI will be rendered on the host’s display, and you can start interacting with it using a Python client running on the host.
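Before launching a full client, it can be handy to check that the simulator is actually listening on its RPC port. The helper below is a hypothetical convenience using only the Python standard library; it is not part of the CARLA API:

```python
import socket

def carla_port_open(host="localhost", port=2000, timeout=2.0):
    """Return True if something is accepting TCP connections on the CARLA RPC port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if carla_port_open():
        print("CARLA server is up on port 2000")
    else:
        print("Nothing is listening on port 2000 yet")
```

If the check fails, give the container a minute: the simulator can take a while to start, especially on the first run while shaders compile.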

If you want to run the CARLA environment in the background, you can add the -d flag to the docker-compose up command:

sudo docker-compose up -d

This will start the environment in detached mode, which means that it will run in the background without any output to the terminal. To stop the environment, you can run the following command:

sudo docker-compose down

This will stop and remove all the containers, networks, and volumes created by the docker-compose up command. With these simple commands, you can easily manage the CARLA environment in Docker and start running simulations right away.

A Python client?

To use the CARLA Python client, you will need to have Python 3.8 or earlier installed on your system. You can check your current Python version by running the command python3 --version.

If you need to install Python 3.8, you can use the following commands:

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.8 python3.8-distutils

Once you have Python 3.8 installed, you can install the CARLA Python client using pip3. To install the client for version 0.9.14, simply run:

python3.8 -m pip install carla==0.9.14

After the installation, you can use the CARLA Python client to interact with the simulator using Python code, as shown in the following example. With the CARLA environment running in Docker and the CARLA Python client installed, you have everything you need to start experimenting with autonomous vehicle control and testing your algorithms in a realistic simulated environment.

import carla
import random

# Connect to the CARLA simulator
client = carla.Client('localhost', 2000)
client.set_timeout(2.0)

# Get the world and blueprint library
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Get the spawn points from the map
spawn_points = world.get_map().get_spawn_points()

# Spawn a Tesla Model 3 at a random spawn point
vehicle_bp = blueprint_library.filter('vehicle.tesla.model3')[0]
vehicle = world.spawn_actor(vehicle_bp, random.choice(spawn_points))

# Enable autopilot on every vehicle in the world
for v in world.get_actors().filter('*vehicle*'):
    v.set_autopilot(True)


Antoine C.

Former PhD student working on cooperative perception for vehicles, now a postdoc fellow in Japan. Technology enthusiast, (astro)photography and RF aficionado.