- Create a Python environment with python=3.8
conda create -n mvconfig python=3.8
conda activate mvconfig
- Please install PyTorch with CUDA support via
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
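To confirm that the CUDA build of PyTorch was picked up, a quick sanity check (assuming the mvconfig environment is active) is
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
On a machine with a working NVIDIA driver this should print a version ending in +cu118 and True.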
- Then, install all other dependencies via
pip install -r requirements.txt
and then install lap separately via
conda install -c conda-forge lap=0.4.0
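A quick way to confirm the lap installation succeeded (this just imports the solver and runs it on a small random cost matrix; numpy is assumed to be installed from requirements.txt):
python -c "import lap, numpy as np; print(lap.lapjv(np.random.rand(4, 4))[0])"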
Running the MVconfig code requires the CARLA simulator. We recommend using the CARLA Docker image.
- To set up Docker with GPU support, please refer to the Docker Engine installation guide and the NVIDIA Container Toolkit installation guide.
- The CARLA Docker image is available on Docker Hub and can be pulled with
docker pull carlasim/carla:0.9.14
- Verify that CARLA runs inside the Docker container with
docker run --privileged --gpus 1 --net=host -e DISPLAY=$DISPLAY carlasim/carla:0.9.14 /bin/bash ./CarlaUE4.sh
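With the container from the command above running (it uses --net=host, so the server listens on localhost:2000 by default), you can also check that the Python client can reach the simulator. This sketch assumes the carla Python client matching version 0.9.14 is available in the mvconfig environment (e.g. through requirements.txt or pip install carla==0.9.14):
python - <<'EOF'
# Minimal connectivity check for a CARLA server on localhost:2000.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)  # seconds to wait for the server
world = client.get_world()
print('Connected, current map:', world.get_map().name)
EOF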
In order to train or test the detection and tracking model, as well as the camera control module, please follow the instructions below.
- Train the model with the default configuration
CUDA_VISIBLE_DEVICES=0 python main.py -d carlax --reID --carla_gpu 0 --carla_cfg [cfg_name] --record_loss --carla_port 2000 --carla_tm_port 8000
- Train the model with one of the three human expert configurations (replace [1/2/3] with 1, 2, or 3)
CUDA_VISIBLE_DEVICES=0 python main.py -d carlax --reID --carla_gpu 0 --carla_cfg [cfg_name]_[1/2/3] --record_loss --carla_port 2000 --carla_tm_port 8000
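To run all three expert configurations back to back, a minimal shell sketch is shown below; CFG_NAME is a hypothetical placeholder for the configuration name used above, and all flags are the same as in the single-run command:
# Run the three human expert configurations sequentially (CFG_NAME is a placeholder).
CFG_NAME=your_cfg_name
for i in 1 2 3; do
    CUDA_VISIBLE_DEVICES=0 python main.py -d carlax --reID --carla_gpu 0 \
        --carla_cfg "${CFG_NAME}_${i}" --record_loss --carla_port 2000 --carla_tm_port 8000
done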
- Train the model with interactive cameras and joint training
CUDA_VISIBLE_DEVICES=0 python main.py -d carlax --reID --interactive --carla_gpu 0 --carla_cfg [cfg_name] --epochs 50 --joint_training 1 --record_loss --carla_port 2000 --carla_tm_port 8000
- Evaluate the trained model (with or without interactive cameras)
CUDA_VISIBLE_DEVICES=0 python main.py -d carlax --reID [--interactive] --carla_gpu 0 --carla_cfg [cfg_name] --carla_port 2000 --carla_tm_port 8000 --eval --resume [log_path]
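For example, evaluating a model that was trained with interactive cameras could look like the following; keep --interactive only if it was used during training, and note that the --resume argument below is a hypothetical placeholder for the log directory produced by your training run:
CUDA_VISIBLE_DEVICES=0 python main.py -d carlax --reID --interactive --carla_gpu 0 \
    --carla_cfg [cfg_name] --carla_port 2000 --carla_tm_port 8000 --eval \
    --resume path/to/your/training/log   # hypothetical log directory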