Xiongwei Zhao1, Xieyuanli Chen2, Xu Zhu1, Xingxiang Xie3, Haojie Bai1, Congcong Wen4, Rundong Zhou5, Qihao Sun1
1Harbin Institute of Technology 2National University of Defense Technology 3Shenzhen Institute of Information Technology 4Harvard University 5Shenzhen Polytechnic University
Abstract: LiDAR place recognition (LPR) plays a vital role in autonomous navigation. However, existing LPR methods struggle to maintain robustness under adverse weather conditions such as rain, snow, and fog, where weather-induced noise and point cloud degradation impair LiDAR reliability and perception accuracy. To tackle these challenges, we propose an Iterative Task-Driven Framework (ITDNet), which integrates a LiDAR Data Restoration (LDR) module and a LiDAR Place Recognition (LPR) module through an iterative learning strategy. These modules are jointly trained end-to-end, with alternating optimization to enhance performance. The core rationale of ITDNet is to leverage the LDR module to recover the corrupted point clouds while preserving structural consistency with clean data, thereby improving LPR accuracy in adverse weather. Simultaneously, the LPR task provides feature pseudo-labels to guide the LDR module's training, aligning it more effectively with the LPR task. To achieve this, we first design a task-driven LPR loss and a reconstruction loss to jointly supervise the optimization of the LDR module. Furthermore, for the LDR module, we propose a Dual-Domain Mixer (DDM) block for frequency-spatial feature fusion and a Semantic-Aware Generator (SAG) block for semantic-guided restoration. In addition, for the LPR module, we introduce a Multi-Frequency Transformer (MFT) block and a Wavelet Pyramid NetVLAD (WPN) block to aggregate multi-scale, robust global descriptors. Finally, extensive experiments on Weather-KITTI, Boreas, and our proposed Weather-Apollo datasets demonstrate that ITDNet outperforms existing LPR methods, achieving state-of-the-art performance in adverse weather.
See INSTALL.md for the installation of dependencies and dataset preparation required to run this codebase.
After preparing the training data, you can start training ITDNet on each dataset using either single-GPU or multi-GPU settings.

Weather-KITTI:
- Distributed training (4 GPUs):
```shell
torchrun --nproc_per_node=4 train_itdnet_kitti_dis.py
```
- Single-GPU training:
```shell
python train_itdnet_kitti.py
```

Boreas:
- Distributed training (4 GPUs):
```shell
torchrun --nproc_per_node=4 train_itdnet_boreas_dis.py
```

Weather-Apollo:
- Distributed training (4 GPUs):
```shell
torchrun --nproc_per_node=4 train_itdnet_apollo_dis.py
```
Note: Single-GPU versions for Boreas and Weather-Apollo can be added similarly if needed.
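As a convenience, the choice between the single-GPU and distributed entry points above can be scripted. The sketch below is a hypothetical helper (not part of this repository) that builds the appropriate Weather-KITTI launch command from a GPU count, using the script names listed in this README:

```shell
#!/bin/sh
# Hypothetical helper: pick the ITDNet launch command for a given GPU count.
# Script names (train_itdnet_kitti_dis.py, train_itdnet_kitti.py) are from this README.
launch_cmd() {
  if [ "$1" -gt 1 ]; then
    # Multiple GPUs: use the distributed entry point via torchrun.
    echo "torchrun --nproc_per_node=$1 train_itdnet_kitti_dis.py"
  else
    # Single GPU: fall back to the plain Python script.
    echo "python train_itdnet_kitti.py"
  fi
}
```

For example, `launch_cmd 4` prints the 4-GPU torchrun command, while `launch_cmd 1` prints the single-GPU command.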
To evaluate trained models:
```shell
python test_itdnet_kitti.py
python test_itdnet_boreas.py
```

If you find our work useful in your research, please consider citing:
```
@article{zhao2025iterative,
  title={An Iterative Task-Driven Framework for Resilient LiDAR Place Recognition in Adverse Weather},
  author={Zhao, Xiongwei and Chen, Xieyuanli and Zhu, Xu and Xie, Xingxiang and Bai, Haojie and Wen, Congcong and Zhou, Rundong and Sun, Qihao},
  journal={arXiv preprint arXiv:2504.14806},
  year={2025}
}
```

Should you have any questions, please contact [email protected].
Acknowledgment: This code is based on the PromptIR, TripleMixer, and OverlapTransformer repositories.
The code and datasets are provided under the Apache-2.0 license.
