Implementing and Optimizing the Scaled Dot-Product Attention on Streaming Dataflow
arXiv (2024)
Abstract
Transformer models serve as the backbone of many state-of-the-art language
models, and most use the scaled dot-product attention (SDPA) mechanism to
capture relationships between tokens. However, the straightforward
implementation of SDPA has quadratic compute and memory complexity with respect
to the sequence length. On processor architectures such as GPUs and TPUs, there
is a robust body of prior work. However, little work has been performed on
non-processor architectures. In this work, we show how the architecture and
execution model of Streaming Dataflow Accelerators can help tackle this
challenge. We first define abstract hardware that adopts a streaming execution
model, and we implement a cycle-accurate simulator of the abstract hardware
using the Dataflow Abstract Machine simulation framework. Second, we implement
the naive SDPA algorithm on this abstract hardware and show it requires linear
(O(N)) intermediate memory. Third, we modify the naive algorithm, taking
inspiration from prior processor-oriented works, by reordering the
multiplication and division operations. Finally, we map the modified algorithm
to abstract hardware, and confirm that the implementation computes SDPA at full
throughput while only using a constant amount (O(1)) of intermediate memory.
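The reordering described above can be illustrated with a minimal NumPy sketch. This is not the paper's dataflow implementation; it is a processor-style analogue of the same idea (an online-softmax-style rewrite): the naive version materializes the full N×N score matrix, while the reordered version keeps only a running maximum, a running sum, and a running weighted accumulator per query, deferring the division by the softmax denominator to the very end.

```python
import numpy as np

def sdpa_naive(Q, K, V):
    # Naive SDPA: materializes the full (N, N) score matrix,
    # so intermediate memory grows quadratically with sequence length.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (N, N) intermediate
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def sdpa_streaming(Q, K, V):
    # Reordered SDPA: per query, keep only O(1) state (running max m,
    # running sum s, running accumulator acc), rescaling on the fly.
    # The division happens once per query, after all keys are seen.
    d = Q.shape[-1]
    out = np.zeros(Q.shape, dtype=float)
    for i, q in enumerate(Q):
        m = -np.inf           # running max of scores seen so far
        s = 0.0               # running sum of exp(score - m)
        acc = np.zeros(d)     # running sum of exp(score - m) * value
        for k, v in zip(K, V):
            score = q @ k / np.sqrt(d)
            m_new = max(m, score)
            scale = np.exp(m - m_new)   # rescale old state to new max
            w = np.exp(score - m_new)
            s = s * scale + w
            acc = acc * scale + w * v
            m = m_new
        out[i] = acc / s      # single deferred division
    return out
```

Both functions compute the same result; the streaming variant's inner loop touches each key/value pair exactly once, which is what lets it map onto a streaming execution model with constant intermediate storage per query.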