This repository contains the implementation details for "Contrastive Learning-based Place Descriptor Representation for Cross-modality Place Recognition".
Our experiments were tested on Ubuntu 20.04 with Python 3.8 and PyTorch 1.13.6.
- Build environment

```bash
conda create -n tmnet python=3.8
conda activate tmnet
pip install -r requirements.txt
```
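After installing the requirements, a quick sanity check like the one below can confirm that PyTorch imports correctly and whether a GPU is visible. This snippet is only an illustrative check and is not part of the repository:

```python
# Illustrative environment check, not part of the repository.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Report the first visible GPU, if any.
    print("GPU:", torch.cuda.get_device_name(0))
```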
We conduct image-to-point-cloud place recognition on the KITTI and KITTI-360 datasets.
- The KITTI dataset

The data used for the experiment can be downloaded from here.

Folder structure:

```
data
└── sequences
    ├── 00
    │   ├── image_2
    │   ├── velodyne
    │   └── poses.txt
    └── ...
```
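For reference, the sketch below shows one way to index a sequence organized as above. The `load_kitti_sequence` and `load_scan` helpers are hypothetical and only assume the standard KITTI odometry formats (PNG images, Nx4 float32 `.bin` scans, one flattened 3x4 pose per line in `poses.txt`); they are not the repository's own data loader:

```python
# Hypothetical helpers for indexing a KITTI sequence laid out as above.
# They assume standard KITTI odometry formats, not the repository's loader.
import os
import numpy as np

def load_kitti_sequence(root, seq="00"):
    seq_dir = os.path.join(root, "sequences", seq)
    images = sorted(
        os.path.join(seq_dir, "image_2", f)
        for f in os.listdir(os.path.join(seq_dir, "image_2"))
    )
    scans = sorted(
        os.path.join(seq_dir, "velodyne", f)
        for f in os.listdir(os.path.join(seq_dir, "velodyne"))
    )
    # poses.txt stores one flattened 3x4 pose matrix per frame.
    poses = np.loadtxt(os.path.join(seq_dir, "poses.txt")).reshape(-1, 3, 4)
    return images, scans, poses

def load_scan(path):
    # Each .bin file stores Nx4 float32 values: x, y, z, reflectance.
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```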
- The KITTI-360 dataset

The data used for the experiment can be downloaded from here.

Folder structure:

```
data
├── data_2d_raw
│   ├── 2013_05_28_drive_0002_sync
│   │   ├── image_00
│   │   └── ...
│   └── ...
├── data_3d_raw
│   ├── 2013_05_28_drive_0002_sync
│   │   ├── image_00
│   │   └── ...
│   └── ...
└── data_poses
    ├── 2013_05_28_drive_0002_sync
    │   ├── image_00
    │   └── ...
    └── ...
```
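As a quick check that the KITTI-360 data is unpacked as expected, a small sketch like the one below can verify the top-level folders for a given drive. The `check_kitti360_layout` helper and the drive id are only illustrative assumptions, not repository code:

```python
# Illustrative check of the KITTI-360 layout sketched above.
# The helper name and drive id are assumptions, not repository code.
import os

def check_kitti360_layout(root, drive="2013_05_28_drive_0002_sync"):
    expected = [
        os.path.join(root, "data_2d_raw", drive),
        os.path.join(root, "data_3d_raw", drive),
        os.path.join(root, "data_poses", drive),
    ]
    for path in expected:
        status = "ok" if os.path.isdir(path) else "missing"
        print(f"{status:>7}  {path}")

check_kitti360_layout("data")
```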
We evaluate our image-to-point-cloud place recognition on the unseen KITTI test sequence.
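For context, cross-modality place recognition is typically evaluated by retrieving, for each query image descriptor, its nearest point-cloud descriptor in the database and checking whether the retrieved place is correct. The sketch below illustrates such a recall@1 computation on pre-computed descriptors and ground-truth match indices; all inputs here are placeholders, not outputs of this repository:

```python
# Illustrative recall@1 computation for descriptor-based retrieval.
# query_desc, db_desc, and gt_index are placeholder inputs.
import numpy as np

def recall_at_1(query_desc, db_desc, gt_index):
    # L2-normalize so that dot products equal cosine similarity.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    nearest = np.argmax(q @ d.T, axis=1)        # top-1 database match per query
    return float(np.mean(nearest == gt_index))  # fraction of correct retrievals
```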
