TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning [website]
@inproceedings{cai2020tinytl,
  title={TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning},
  author={Cai, Han and Gan, Chuang and Zhu, Ligeng and Han, Song},
  booktitle={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

To set up the datasets, please run bash make_all_datasets.sh under the folder dataset_setup_scripts.
- Python 3.6+
- PyTorch 1.4.0+
To run transfer learning experiments, please first set up the datasets and then run tinytl_fgvc_train.py. Example launch scripts are available under the folder exp_scripts.
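The title's claim of reducing activations rather than trainable parameters rests on a simple observation: for a linear layer y = Wx + b, the bias gradient equals the upstream gradient and does not depend on the input x, while the weight gradient does. Freezing the weights and training only the biases therefore lets the backward pass discard the stored activations. A minimal pure-Python sketch of this bookkeeping (hypothetical function names, not the repo's API):

```python
def linear_forward(W, b, x, store_input=True):
    """Forward pass of y = W x + b on plain Python lists.

    When only the bias will be trained, store_input=False skips saving x,
    which is where the activation-memory saving comes from.
    """
    y = [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
         for i in range(len(b))]
    saved = x if store_input else None
    return y, saved

def bias_grad(dL_dy):
    # dL/db = dL/dy: needs no stored activation.
    return list(dL_dy)

def weight_grad(dL_dy, saved_x):
    # dL/dW[i][j] = dL/dy[i] * x[j]: requires the saved activation.
    return [[g * xj for xj in saved_x] for g in dL_dy]
```

In a deep network this saving compounds across layers, since every frozen-weight layer can free its input activation immediately after the forward pass.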
- TODO: Add system support for TinyTL
Related projects:
- MCUNet: Tiny Deep Learning on IoT Devices (NeurIPS'20, spotlight)
- Once for All: Train One Network and Specialize it for Efficient Deployment (ICLR'20)
- ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR'19)
- AutoML for Architecting Efficient and Specialized Neural Networks (IEEE Micro)
- AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18)
- HAQ: Hardware-Aware Automated Quantization (CVPR'19, oral)
