Trainable Fixed-Point Quantization for Deep Learning Acceleration on FPGAs
CoRR (2024)
Abstract
Quantization is a crucial technique for deploying deep learning models on
resource-constrained devices, such as embedded FPGAs. Prior efforts mostly
focus on quantizing matrix multiplications, leaving other layers like BatchNorm
or shortcuts in floating-point form, even though fixed-point arithmetic is more
efficient on FPGAs. A common practice is to fine-tune a pre-trained model to
fixed-point for FPGA deployment, which can degrade accuracy.
This work presents QFX, a novel trainable fixed-point quantization approach
that automatically learns the binary-point position during model training.
Additionally, we introduce a multiplier-free quantization strategy within QFX
to minimize DSP usage. QFX is implemented as a PyTorch-based library that
efficiently emulates fixed-point arithmetic, supported by FPGA HLS, in a
differentiable manner during backpropagation. With minimal effort, models
trained with QFX can readily be deployed through HLS, producing the same
numerical results as their software counterparts. Our evaluation shows that,
compared to post-training quantization, QFX quantizes element-wise layers to
fewer bits while achieving higher accuracy on both CIFAR-10 and ImageNet.
We further demonstrate the efficacy of
multiplier-free quantization using a state-of-the-art binarized neural network
accelerator designed for an embedded FPGA (AMD Xilinx Ultra96 v2). We plan to
release QFX as open-source software.
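
The abstract's two key mechanisms, a learnable binary-point position and
multiplier-free quantization, can be illustrated with a short PyTorch sketch.
The code below is our own illustration, not the QFX library API: class names
such as LearnableFixedPoint and PowerOfTwoQuant are hypothetical, and we
assume a straight-through estimator for the non-differentiable rounding
steps, a common choice in quantization-aware training.

import torch
import torch.nn as nn


def ste_round(x: torch.Tensor) -> torch.Tensor:
    # Straight-through estimator: round in the forward pass, but let
    # gradients flow through as if rounding were the identity.
    return x + (torch.round(x) - x).detach()


class LearnableFixedPoint(nn.Module):
    """Fake-quantize to W-bit signed fixed point with a learnable,
    continuously relaxed binary-point position (hypothetical sketch)."""

    def __init__(self, total_bits: int = 8, init_frac_bits: float = 4.0):
        super().__init__()
        self.total_bits = total_bits
        # Number of fractional bits as a real-valued parameter; gradients
        # reach it through the 2**frac_bits scale factor below.
        self.frac_bits = nn.Parameter(torch.tensor(init_frac_bits))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = 2.0 ** self.frac_bits
        qmax = 2 ** (self.total_bits - 1) - 1
        qmin = -(2 ** (self.total_bits - 1))
        q = torch.clamp(ste_round(x * scale), qmin, qmax)
        return q / scale


class PowerOfTwoQuant(nn.Module):
    """Multiplier-free variant: snap magnitudes to powers of two so that
    multiplications reduce to shifts in hardware (hypothetical sketch)."""

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        eps = 1e-8  # avoid log2(0)
        exponent = ste_round(torch.log2(w.abs() + eps))
        return torch.sign(w) * (2.0 ** exponent)


# Usage: both modules are differentiable end to end.
quant = LearnableFixedPoint(total_bits=8)
x = torch.randn(16, requires_grad=True)
y = quant(x)
y.sum().backward()           # gradients reach x and quant.frac_bits
print(quant.frac_bits.grad)  # binary-point position receives a gradient

In a real flow, the learned fractional bit-width would presumably be rounded
to an integer and frozen before HLS export, so that the FPGA implementation
and the software emulation produce identical numerical results, as the
abstract claims for QFX.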