Zhengxue Wang1, Yuan Wu1, Xiang Li2, Zhiqiang Yan✉3, Jian Yang✉1
✉Corresponding author
1Nanjing University of Science and Technology
2Nankai University
3National University of Singapore
Overview of STDNet.
Please refer to 'env.yaml'.
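A typical way to set up the environment from the provided file (assuming a conda-style `env.yaml`; the environment name below is hypothetical, use whatever name `env.yaml` defines):

```shell
# Create the environment described in env.yaml and activate it
conda env create -f env.yaml
conda activate stdnet   # hypothetical name; check the 'name:' field in env.yaml
```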
All pretrained models can be found here.
All datasets can be downloaded from the following link:
Additionally, we provide a DyDToF test subset in the 'dataset' folder for quick implementation; the corresponding index file is 'data/dydtof_list/school_shot8_subset.txt'.
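A minimal sketch of reading such an index file, assuming one sample identifier per line (the paths below are illustrative, not the repo's actual layout):

```python
from pathlib import Path

def load_index(list_path):
    """Read a dataset list file, returning one entry per non-blank line."""
    lines = Path(list_path).read_text().splitlines()
    return [ln.strip() for ln in lines if ln.strip()]

# Example usage with a temporary file standing in for
# data/dydtof_list/school_shot8_subset.txt
tmp = Path("subset_demo.txt")
tmp.write_text("school/shot8/0001\nschool/shot8/0002\n\n")
samples = load_index(tmp)
print(samples)  # ['school/shot8/0001', 'school/shot8/0002']
tmp.unlink()
```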
cd STDNet
mkdir -p experiment/SRDNet_$scale$/MAE_best
python -m torch.distributed.launch --nproc_per_node 2 train.py --scale 4 --result_root 'experiment/SRDNet_$scale$' --result_root_MAE 'experiment/SRDNet_$scale$/MAE_best'
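The `--result_root_MAE .../MAE_best` argument suggests the training loop keeps the checkpoint with the lowest validation MAE. A hedged sketch of that pattern (function and variable names here are hypothetical, not the repo's actual code):

```python
# Keep the checkpoint whenever validation MAE improves; return the new best.
def update_best(best_mae, epoch_mae, save_fn):
    if epoch_mae < best_mae:
        save_fn()          # e.g. torch.save(model.state_dict(), ...)
        return epoch_mae
    return best_mae

saved = []                 # records which epochs were checkpointed
best = float("inf")
for epoch, mae in enumerate([0.42, 0.37, 0.39, 0.31]):
    best = update_best(best, mae, lambda e=epoch: saved.append(e))
print(best, saved)  # 0.31 [0, 1, 3]
```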
### TartanAir dataset
python test_TarTanAir.py --scale 4
### DyDToF dataset
python test_DyDToF.py --scale 4
### DynamicReplica dataset
python test_DynamicReplica.py --scale 4
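For reference, a minimal sketch of the standard depth-SR error metrics (MAE and RMSE) these test scripts typically report; the repo's actual evaluation may use different masking or units:

```python
import numpy as np

def depth_metrics(pred, gt, valid_mask=None):
    """Mean absolute error and root-mean-square error over valid pixels."""
    if valid_mask is None:
        valid_mask = gt > 0  # common convention: zero depth marks invalid pixels
    diff = pred[valid_mask] - gt[valid_mask]
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    return mae, rmse

# Toy 2x2 example: one invalid ground-truth pixel (0.0) is excluded
gt = np.array([[1.0, 2.0], [0.0, 4.0]])
pred = np.array([[1.5, 2.0], [3.0, 3.0]])
mae, rmse = depth_metrics(pred, gt)
print(mae, rmse)  # 0.5 and sqrt(1.25/3)
```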
Quantitative comparisons between our STDNet and previous state-of-the-art methods on the TartanAir dataset.
If you find our work helpful, please consider citing:
@article{wang2025spatiotemporal,
title={SpatioTemporal Difference Network for Video Depth Super-Resolution},
author={Wang, Zhengxue and Wu, Yuan and Li, Xiang and Yan, Zhiqiang and Yang, Jian},
journal={arXiv preprint arXiv:2508.01259},
year={2025}
}