LSQ+ net (LSQplus net) and LSQ net
2023-01-08
Dorefa and Pact, https://github.com/ZouJiu1/Dorefa_Pact
--------------------------------------------------------------------------------------------------------------
18-01-2022: added torch.nn.Parameter .data and retrained the models
I'm not the author of the papers; this is an unofficial implementation of LSQ+ (LSQplus) and LSQ. You can find the original papers here: LSQ+ at arxiv.org/abs/2004.09576 and LSQ at arxiv.org/abs/1902.08153.
pytorch==1.8.1
You should train a 32-bit float model first, then finetune a low bit-width quantization-aware training (QAT) model by loading the trained 32-bit float checkpoint.
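The float-then-QAT handoff can be sketched as below. The model classes here are hypothetical stand-ins (the real revised ResNet-18 and its quantized counterpart live in this repo's source files); the point is that `strict=False` lets the QAT model absorb the float checkpoint while its quantizer parameters (s, beta) keep their own initialization.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the repo's revised ResNet-18 and its QAT version.
class FloatNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

class QATNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)
        # Learnable quantizer parameters (step size s, offset beta);
        # these do not exist in the 32-bit float checkpoint.
        self.s = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

float_net = FloatNet()
qat_net = QATNet()
# strict=False tolerates the quantizer parameters missing from the checkpoint;
# the shared weights are copied over for finetuning.
result = qat_net.load_state_dict(float_net.state_dict(), strict=False)
```

After loading, `result.missing_keys` lists only the quantizer parameters, confirming that every float weight was transferred.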
Training uses the CIFAR10 dataset and a revised ResNet-18 model.
lsqplus_quantize_V1.py: initializes s and beta of activation quantization as described in the LSQ+ paper (LSQ+: Improving low-bit quantization through learnable offsets and better initialization)
lsqplus_quantize_V2.py: initializes s and beta of activation quantization from min/max values
lsqquantize_V1.py: initializes s of activation quantization as described in the LSQ paper (Learned Step Size Quantization)
lsqquantize_V2.py: initializes s of activation quantization to 1
lsqplus_quantize_V2.py gives the best result on the CIFAR10 dataset.
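Whatever the initialization, all four files share the same LSQ/LSQ+ fake-quantization forward pass: snap the input to a learnable grid with step size s and (for LSQ+) offset beta, clamp it to the b-bit range, then dequantize. A minimal sketch in plain Python, assuming an unsigned activation range (LSQ is the special case beta = 0):

```python
def lsq_plus_fake_quant(x, s, beta, num_bits):
    """LSQ+ fake quantization of a single activation value.

    x_hat = clamp(round((x - beta) / s), Qn, Qp) * s + beta
    For unsigned activations Qn = 0 and Qp = 2**num_bits - 1.
    """
    Qn, Qp = 0, 2 ** num_bits - 1
    q = round((x - beta) / s)        # quantize to an integer grid index
    q = max(Qn, min(Qp, q))          # clamp into the representable range
    return q * s + beta              # dequantize back to the float domain

# 4-bit example: grid {0.0, 0.1, ..., 1.5} with offset 0
print(lsq_plus_fake_quant(0.72, s=0.1, beta=0.0, num_bits=4))  # snaps to ~0.7
print(lsq_plus_fake_quant(2.00, s=0.1, beta=0.0, num_bits=4))  # clamps to 1.5
```

During QAT, s and beta are trained with the straight-through estimator so gradients flow through the round; this sketch only shows the forward arithmetic.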
In all versions above, A denotes activation; the moving average method is used to initialize s and beta.
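One way to read the moving-average initialization: track an exponential moving average of per-batch min/max over the first calibration batches, then set s to span that range over the b-bit grid and beta to its lower edge. The function names below are mine, not the repo's, and this is a sketch of the idea rather than the repo's exact code:

```python
def ema_update(running, batch_value, momentum=0.9):
    """Exponential moving average used to smooth noisy batch statistics."""
    return momentum * running + (1.0 - momentum) * batch_value

def init_s_beta(batches, num_bits, momentum=0.9):
    """Initialize step size s and offset beta from EMA of batch min/max.

    s spreads the observed activation range across the unsigned b-bit grid;
    beta shifts the grid so its lowest level sits at the running minimum.
    """
    Qp = 2 ** num_bits - 1                    # unsigned range [0, Qp]
    running_min = min(batches[0])
    running_max = max(batches[0])
    for batch in batches[1:]:
        running_min = ema_update(running_min, min(batch), momentum)
        running_max = ema_update(running_max, max(batch), momentum)
    s = (running_max - running_min) / Qp
    beta = running_min
    return s, beta
```

After this warm-up, s and beta become learnable parameters and are refined by gradient descent during QAT.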
LEARNED STEP SIZE QUANTIZATION, arxiv.org/abs/1902.08153
LSQ+: Improving low-bit quantization through learnable offsets and better initialization, arxiv.org/abs/2004.09576
https://github.com/666DZY666/micronet
https://github.com/hustzxd/LSQuantization
https://github.com/zhutmost/lsq-net
https://github.com/Zhen-Dong/HAWQ
https://github.com/KwangHoonAn/PACT
https://github.com/Jermmy/pytorch-quantization-demo