TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning

Han Cai, Chuang Gan, Ligeng Zhu, Song Han
MIT, MIT-IBM Watson AI Lab

Abstract

On-device learning enables edge devices to continually adapt AI models to new data, which requires a small memory footprint to fit the tight memory constraints of edge devices. Existing work addresses this problem by reducing the number of trainable parameters. However, this does not directly translate to memory savings, since the main bottleneck is the activations, not the parameters. In this work, we present Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning. TinyTL freezes the weights and learns only the bias modules, so there is no need to store the intermediate activations. To maintain the adaptation capacity, we introduce a new memory-efficient bias module, the lite residual module, which refines the feature extractor by learning small residual feature maps while adding only 3.8% memory overhead. Extensive experiments show that TinyTL significantly reduces training memory (up to 6.5x) with little accuracy loss compared to fine-tuning the full network. Compared to fine-tuning only the last layer, TinyTL provides significant accuracy improvements (up to 34.1%) with little memory overhead. Furthermore, combined with feature extractor adaptation, TinyTL provides 7.3-12.9x memory saving without sacrificing accuracy compared to fine-tuning the full Inception-V3.
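
The key observation, written out below as a minimal one-layer sketch (stated for a fully connected layer; the convolutional case is analogous), is that the weight gradient needs the stored input activation while the bias gradient does not:

    % Forward pass of one layer and the gradients needed in the backward pass.
    \[
      y = W x + b, \qquad
      \frac{\partial L}{\partial W} = \frac{\partial L}{\partial y}\, x^{\top}, \qquad
      \frac{\partial L}{\partial b} = \frac{\partial L}{\partial y}, \qquad
      \frac{\partial L}{\partial x} = W^{\top} \frac{\partial L}{\partial y}.
    \]
    % Only dL/dW depends on the input activation x. dL/db and dL/dx do not,
    % so freezing W lets the layer discard x instead of storing it for backprop.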

Adapting AI Models to New Data Collected on Edge

  • On-device learning: better privacy, lower cost, customization, life-long learning

Activation is the Main Bottleneck for On-Device Training
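
As a rough, purely illustrative calculation (the layer sizes below are assumptions, not numbers from the paper), the activations that must be stored for backpropagation can be orders of magnitude larger than the weights of the same layer:

    # Illustrative only: parameter memory vs. activation memory for a single
    # 3x3 convolution layer trained in fp32 (4 bytes per value).
    batch, c_in, c_out, h, w = 8, 64, 64, 112, 112

    param_count = c_out * c_in * 3 * 3 + c_out   # weights + biases
    act_count = batch * c_in * h * w             # input activation saved for
                                                 # the weight gradient

    print(f"parameter memory : {param_count * 4 / 1e6:.2f} MB")  # ~0.15 MB
    print(f"activation memory: {act_count * 4 / 1e6:.2f} MB")    # ~25.69 MB

Reducing the number of trainable parameters leaves the activation term untouched, which is why parameter-efficient methods alone do not shrink training memory.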

TinyTL: Memory-Efficient Transfer Learning

  • Freeze the weights and fine-tune only the biases, so intermediate activations do not need to be stored for backpropagation.
  • Add lite residual modules to increase model capacity while keeping the activation memory overhead small (see the sketch after this list).
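
A minimal PyTorch-style sketch of this recipe is shown below. The module and function names, layer widths, and reduction ratios are illustrative assumptions, not the authors' released implementation, which differs in details such as convolution types and exact width/resolution settings.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LiteResidualBlock(nn.Module):
        """Wrap a frozen backbone block with a small trainable residual branch
        that runs at reduced resolution and width (hypothetical sketch)."""
        def __init__(self, main_block, in_ch, out_ch, reduction=2, width_div=4):
            super().__init__()
            self.main = main_block                       # frozen; bias-only updates
            self.reduction = reduction
            mid = max(out_ch // width_div, 8)
            self.lite = nn.Sequential(                   # small trainable branch
                nn.Conv2d(in_ch, mid, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid, out_ch, 3, padding=1, bias=False),
            )

        def forward(self, x):
            y = self.main(x)
            z = F.avg_pool2d(x, self.reduction)          # shrink activations first
            z = self.lite(z)
            z = F.interpolate(z, size=y.shape[-2:], mode="nearest")
            return y + z                                 # refine the frozen features

    def make_tinytl_trainable(model):
        """Freeze all weights; keep biases, lite residual branches, and the
        classifier head trainable."""
        for name, p in model.named_parameters():
            p.requires_grad = (name.endswith(".bias")
                               or "lite" in name
                               or "classifier" in name)

Because the main branch's weights are frozen, autograd never needs their input activations to form a weight gradient; only the small, downsampled activations of the lite branch and the cheap bias gradients have to be kept, which is where the memory saving comes from.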

Experiment Results

Video

Citation

@inproceedings{cai2020tinytl,
  title={TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning},
  author={Cai, Han and Gan, Chuang and Zhu, Ligeng and Han, Song},
  booktitle={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Acknowledgment

We thank the MIT-IBM Watson AI Lab, NSF CAREER Award #1943349, and NSF Award #2028888 for supporting this research. We thank the MIT Satori cluster for providing computation resources.
