
[ICCV 2025] CompressTracker

The official implementation of the ICCV 2025 paper CompressTracker: General Compression Framework for Efficient Transformer Object Tracking.

High Efficiency

(figure: efficiency comparison)

Compression Framework

(figure: overview of the CompressTracker framework)

News

[Jun. 25, 2025]

  • CompressTracker has been accepted to ICCV 2025!

[Oct. 16, 2024]

  • Code is available now!

[Sep. 28, 2024]

  • We released CompressTracker.

Highlights

🌟 General and Highly Efficient Compression Framework

Our CompressTracker can be applied to any transformer-based tracking model and supports arbitrary levels of compression.

Tracker       GOT-10k (AO)   LaSOT (AUC)   TrackingNet (AUC)   UAV123 (AUC)
OSTrack-384   73.7           71.1          83.9                70.7
OSTrack-256   71.0           69.1          83.1                68.3

🌟 End-to-end and Simple Training

Our CompressTracker requires only simple, end-to-end training, in contrast to the multi-stage distillation used in MixFormerV2, so its training cost is much lower than that of MixFormerV2.

🌟 Better Trade-off Between Accuracy and Speed

Our CompressTracker achieves a better trade-off between accuracy and speed.

Install the environment

Option 1: Use Anaconda (CUDA 11.7)

conda create -n compresstracker python=3.9
conda activate compresstracker
bash install.sh

Option 2: Create the conda environment from the provided YAML file (CUDA 11.7)

conda env create -f compresstracker_env.yaml
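
After either option, a quick sanity check such as the following (our own snippet, not part of the repository) confirms that PyTorch was installed with CUDA support:

# Sanity check: confirm PyTorch was installed with CUDA support (not part of the repo).
import torch

print(torch.__version__)          # expect a CUDA 11.7 build, e.g. "1.13.1+cu117"
print(torch.cuda.is_available())  # should print True on a machine with a working GPU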

Set project paths

Run the following command to set the paths for this project:

python tracking/create_default_local_file.py --workspace_dir . --data_dir ./data --save_dir ./output

After running this command, you can also modify the paths by editing these two files:

lib/train/admin/local.py  # paths about training
lib/test/evaluation/local.py  # paths about testing
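
For reference, trackers derived from OSTrack typically generate a lib/train/admin/local.py of roughly the following shape. The attribute names below follow that convention and are illustrative; check the generated file for the exact fields:

class EnvironmentSettings:
    def __init__(self):
        # Paths written by create_default_local_file.py; edit them if your layout differs.
        self.workspace_dir = './output'                # where checkpoints are saved
        self.tensorboard_dir = './output/tensorboard'  # training logs
        self.lasot_dir = './data/lasot'
        self.got10k_dir = './data/got10k/train'
        self.coco_dir = './data/coco'
        self.trackingnet_dir = './data/trackingnet'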

Data Preparation

Put the tracking datasets in ./data. It should look like this:

${PROJECT_ROOT}
 -- data
     -- lasot
         |-- airplane
         |-- basketball
         |-- bear
         ...
     -- got10k
         |-- test
         |-- train
         |-- val
     -- coco
         |-- annotations
         |-- images
     -- trackingnet
         |-- TRAIN_0
         |-- TRAIN_1
         ...
         |-- TRAIN_11
         |-- TEST
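
If you want to double-check the layout before training, a small throwaway script like the following (a hypothetical helper, not part of the repo) can verify that the expected directories exist:

import os

# Expected sub-directories per dataset, per the layout above (TRAIN_1..TRAIN_10 omitted).
EXPECTED = {
    'lasot': [],                          # per-class folders: airplane/, basketball/, ...
    'got10k': ['train', 'val', 'test'],
    'coco': ['annotations', 'images'],
    'trackingnet': ['TRAIN_0', 'TRAIN_11', 'TEST'],
}

for name, subdirs in EXPECTED.items():
    root = os.path.join('data', name)
    if not os.path.isdir(root):
        print(f'[missing]    {root}')
        continue
    missing = [d for d in subdirs if not os.path.isdir(os.path.join(root, d))]
    print(f'[incomplete] {root}: {missing}' if missing else f'[ok]         {root}')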

Training

Download the pre-trained MAE ViT-Base weights and put them under $PROJECT_ROOT$/pretrained_models. Please also download the pretrained OSTrack weights and put them under $PROJECT_ROOT$/pretrained_models.

python tracking/train.py --script compresstracker --config compresstracker_vitb_256_4 --save_dir ./output --mode multiple --nproc_per_node 8

Replace --config with the desired model config under experiments/compresstracker.

It is worth noting that CompressTracker supports any structure, any resolution, and any level of compression. We provide the code for the CompressTracker-2/3/4/6 variants from our paper, and you can easily adapt it to compress your own model.
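
For orientation, here is a toy sketch of the replacement-training idea from the paper as we read it: the pretrained teacher is split into stages, and during training each teacher stage is randomly swapped for the corresponding student stage, so each student stage learns to imitate its teacher counterpart. Class and parameter names are ours, not the repo's:

import random
import torch.nn as nn

class ReplacementTraining(nn.Module):
    """Toy stand-in: teacher/student stage lists must have equal length."""
    def __init__(self, teacher_stages, student_stages, p_replace=0.5):
        super().__init__()
        self.teacher_stages = nn.ModuleList(teacher_stages)
        self.student_stages = nn.ModuleList(student_stages)
        self.p_replace = p_replace  # probability of substituting a student stage

    def forward(self, x):
        for teacher, student in zip(self.teacher_stages, self.student_stages):
            # During training, each stage is randomly served by the student,
            # which forces the student stage to behave like its teacher stage.
            if self.training and random.random() < self.p_replace:
                x = student(x)
            else:
                x = teacher(x)
        return x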

Evaluation

Some testing examples:

  • LaSOT and other offline-evaluated benchmarks (modify --dataset accordingly)
python tracking/test.py compresstracker compresstracker_vitb_256_4 --dataset lasot --threads 64 --num_gpus 8
python tracking/analysis_results.py # modify the tracker configs and names first; see the sketch below
  • TrackingNet
python tracking/test.py compresstracker compresstracker_vitb_256_4 --dataset trackingnet --threads 64 --num_gpus 8
python lib/test/utils/transform_trackingnet.py --tracker_name compresstracker --cfg_name compresstracker_vitb_256_4
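
The edit that analysis_results.py needs is typically just registering your tracker name and config. The following sketch uses the OSTrack/pytracking-style helpers; the exact module paths in this repository may differ slightly:

from lib.test.analysis.plot_results import print_results
from lib.test.evaluation import get_dataset, trackerlist

# Register the tracker/config pair that was just evaluated.
trackers = trackerlist(name='compresstracker',
                       parameter_name='compresstracker_vitb_256_4',
                       dataset_name='lasot',
                       run_ids=None,
                       display_name='CompressTracker-4')

dataset = get_dataset('lasot')
print_results(trackers, dataset, 'lasot', merge_results=True,
              plot_types=('success', 'norm_prec', 'prec'))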

Test FLOPs and Speed

Note: The speeds reported in our paper were measured on a single NVIDIA RTX 2080 Ti GPU.

# Profiling compresstracker_vitb_256_4
python tracking/profile_model.py --script compresstracker --config compresstracker_vitb_256_4
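
If you prefer to profile a model outside the provided script, a generic measurement with the thop package looks roughly like this. DummyTracker is a placeholder for the real network, which profile_model.py builds from the config:

import time
import torch
import torch.nn as nn
from thop import profile

class DummyTracker(nn.Module):
    """Placeholder; substitute the real CompressTracker network here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, template, search):
        return self.net(search)

model = DummyTracker().eval()
template = torch.randn(1, 3, 128, 128)  # template patch (size depends on the config)
search = torch.randn(1, 3, 256, 256)    # search region for the *_256 configs

macs, params = profile(model, inputs=(template, search))
print(f'MACs: {macs / 1e9:.3f} G, Params: {params / 1e6:.3f} M')

with torch.no_grad():
    for _ in range(10):    # warm-up
        model(template, search)
    start = time.time()
    for _ in range(100):
        model(template, search)
    print(f'Speed: {100 / (time.time() - start):.1f} FPS')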

Acknowledgments

Our implementation is built on top of OSTrack. Thanks for their excellent work.

Citation

If our work is useful for your research, please consider citing:

@inproceedings{hong2025general,
  title={General compression framework for efficient transformer object tracking},
  author={Hong, Lingyi and Li, Jinglun and Zhou, Xinyu and Yan, Shilin and Guo, Pinxue and Jiang, Kaixun and Chen, Zhaoyu and Gao, Shuyong and Li, Runze and Sheng, Xingdong and others},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13427--13437},
  year={2025}
}
