Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs
arXiv (2024)
Abstract
Large language models (LLMs) have recently achieved state-of-the-art
performance across various tasks, yet due to their large computational
requirements, they struggle to meet strict latency and power demands. Deep neural
network (DNN) quantization has traditionally addressed these limitations by
converting models to low-precision integer formats. Yet recently, alternative
formats, such as Normal Float (NF4), have been shown to consistently increase
model accuracy, albeit at the cost of increased chip area. In this work, we
first conduct a large-scale analysis of LLM weights and activations across 30
networks and conclude that most distributions follow a Student's t-distribution. We
then derive Student Float (SF4), a new format that is theoretically optimal with
respect to this distribution and improves over NF4 across modern LLMs, for
example increasing the average accuracy on LLaMA2-7B by 0.76%.
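For intuition, NF4 is built from quantiles of a normal distribution; a t-distribution-based codebook can be sketched the same way. The snippet below is a minimal illustration, not the paper's derivation: the degrees-of-freedom value `df=3.0`, the symmetric quantile spacing, and the absmax normalization are all illustrative assumptions (real NF4 additionally pins an exact zero level).

```python
import numpy as np
from scipy.stats import t as student_t

def t_quantile_codebook(bits: int = 4, df: float = 3.0) -> np.ndarray:
    """Build a 2**bits-level codebook from quantiles of a Student's
    t-distribution, normalized to [-1, 1]. This mirrors the quantile
    recipe used for NF4, swapping the normal distribution for a
    t-distribution; df=3.0 is an illustrative choice, not the value
    derived in the paper."""
    n = 2 ** bits
    # Evenly spaced probabilities, offset away from 0 and 1 so the
    # extreme quantiles stay finite (the t-distribution is unbounded).
    eps = 1.0 / (2 * n)
    probs = np.linspace(eps, 1 - eps, n)
    levels = student_t.ppf(probs, df)
    # Normalize so the largest magnitude maps to +/-1, matching the
    # absmax-scaling convention of 4-bit weight quantization.
    return levels / np.abs(levels).max()

def quantize(w: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Round each absmax-normalized weight to its nearest codebook level."""
    scale = np.abs(w).max()
    idx = np.abs(w[:, None] / scale - codebook[None, :]).argmin(axis=1)
    return codebook[idx] * scale

codebook = t_quantile_codebook()
weights = np.random.standard_t(df=3.0, size=8)
print(quantize(weights, codebook))
```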
Using this format as a high-accuracy reference, we then propose augmenting E2M1
with two variants of supernormal support for higher model accuracy. Finally, we
explore the quality and performance frontier across 11 datatypes, including
non-traditional formats like Additive-Powers-of-Two (APoT), by evaluating their
model accuracy and hardware complexity. We discover a Pareto curve composed of
INT4, E2M1, and E2M1 with supernormal support, which offers a continuous
tradeoff between model accuracy and chip area. For example, E2M1 with
supernormal support increases the accuracy of Phi-2 by up to 2.19% with 1.22%
area overhead, enabling more LLM-based applications to be run at four bits.
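For concreteness, E2M1 is a 4-bit float with 1 sign, 2 exponent, and 1 mantissa bit (exponent bias 1), giving the value set {0, ±0.5, ±1, ±1.5, ±2, ±3, ±4, ±6}. The sketch below decodes that standard value set and shows one plausible form of supernormal support, repurposing the redundant negative-zero encoding as an extra large magnitude; the specific repurposed code and its value (-8.0) are illustrative assumptions, not the paper's exact variants.

```python
def decode_e2m1(code: int, supernormal: bool = False) -> float:
    """Decode a 4-bit E2M1 value (1 sign, 2 exponent, 1 mantissa bit,
    exponent bias 1). With supernormal=True, the redundant negative-zero
    encoding (0b1000) is repurposed as an extra large magnitude -- one
    plausible reading of 'supernormal support'; the chosen value is a
    hypothetical stand-in for the paper's variants."""
    if supernormal and code == 0b1000:
        return -8.0  # repurposed encoding: hypothetical supernormal value
    sign = -1.0 if code & 0b1000 else 1.0
    exp = (code >> 1) & 0b11
    man = code & 0b1
    if exp == 0:
        return sign * man * 0.5                       # subnormals: 0.0, +/-0.5
    return sign * (1 + man * 0.5) * 2.0 ** (exp - 1)  # normals up to +/-6.0

# All 16 codes: {0.0, +/-0.5, +/-1.0, +/-1.5, +/-2.0, +/-3.0, +/-4.0, +/-6.0}
print(sorted(decode_e2m1(c) for c in range(16)))
```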