TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning [website]

@inproceedings{cai2020tinytl,
  title={TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning},
  author={Cai, Han and Gan, Chuang and Zhu, Ligeng and Han, Song},
  booktitle={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

On-Device Learning, not Just Inference

Activation is the Main Bottleneck, not Parameters
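A back-of-envelope calculation makes the activation-vs-parameter gap concrete. The layer sizes below are illustrative assumptions, not numbers from the paper: a single early convolution already stores far more activation memory for the backward pass than it holds in weights.

```python
def tensor_bytes(*dims, dtype_bytes=4):
    """Bytes needed to store a float32 tensor of the given shape."""
    n = 1
    for d in dims:
        n *= d
    return n * dtype_bytes

# Hypothetical early conv layer (sizes are illustrative, not from the paper):
# 3x3 conv, 3 -> 32 channels, on a 112x112 feature map, batch size 8.
batch, c_in, c_out, k, h, w = 8, 3, 32, 3, 112, 112

act = tensor_bytes(batch, c_out, h, w)    # output activations saved for backward
params = tensor_bytes(c_out, c_in, k, k)  # conv weights

print(f"activations: {act} B, parameters: {params} B, ratio ~{act / params:.0f}x")
```

With these toy sizes the activations are thousands of times larger than the weights, which is why reducing trainable parameters alone barely shrinks the training-memory footprint.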

Tiny Transfer Learning
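The core idea is to freeze the backbone weights and update only the memory-cheap bias terms plus a small task-specific head, so most intermediate activations never need to be stored for the backward pass. A minimal PyTorch sketch of bias-only fine-tuning follows; the toy model is an assumption for illustration and omits TinyTL's lite residual modules, which the repo adds on top of this scheme.

```python
import torch.nn as nn

# Toy backbone + classifier head (hypothetical; not the repo's architecture).
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1, bias=True),
    nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1, bias=True),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),  # task-specific head, kept fully trainable
)

# Freeze all weights; keep only biases and the head (module index 5) trainable.
for name, p in model.named_parameters():
    p.requires_grad = ("bias" in name) or name.startswith("5.")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable}/{total}")
```

Because frozen weights receive no gradient, the optimizer state and most saved activations for those layers can be dropped, which is where the memory savings come from.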

Transfer Learning Results

Combining with Batch Size 1 Training

Data Preparation

To set up the datasets, run `bash make_all_datasets.sh` in the `dataset_setup_scripts` folder.

Requirements

  • Python 3.6+
  • PyTorch 1.4.0+

How to Run Transfer Learning Experiments

To run transfer learning experiments, first set up the datasets, then run `tinytl_fgvc_train.py`. Example scripts are available in the `exp_scripts` folder.

TODO

  • Add system support for TinyTL

Related Projects

MCUNet: Tiny Deep Learning on IoT Devices (NeurIPS'20, spotlight)

Once for All: Train One Network and Specialize it for Efficient Deployment (ICLR'20)

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR'19)

AutoML for Architecting Efficient and Specialized Neural Networks (IEEE Micro)

AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18)

HAQ: Hardware-Aware Automated Quantization (CVPR'19, oral)