Allo: A Programming Model for Composable Accelerator Design
CoRR (2024)

Abstract
Special-purpose hardware accelerators are increasingly pivotal for sustaining
performance improvements in emerging applications, especially as the benefits
of technology scaling continue to diminish. However, designers currently lack
effective tools and methodologies to construct complex, high-performance
accelerator architectures in a productive manner. Existing high-level synthesis
(HLS) tools often require intrusive source-level changes to attain satisfactory
quality of results. Despite the introduction of several new accelerator design
languages (ADLs) aiming to enhance or replace HLS, their advantages are more
evident in relatively simple applications with a single kernel. Existing ADLs
prove less effective for realistic hierarchical designs with multiple kernels,
even if the design hierarchy is flattened.
In this paper, we introduce Allo, a composable programming model for
efficient spatial accelerator design. Allo decouples hardware customizations,
including compute, memory, communication, and data types, from the algorithm
specification, and encapsulates them as a set of customization primitives. Allo
preserves the hierarchical structure of an input program by combining
customizations from different functions in a bottom-up, type-safe manner. This
approach facilitates holistic optimizations that span across function
boundaries. We conduct comprehensive experiments on commonly-used HLS
benchmarks and several realistic deep learning models. Our evaluation shows
that Allo can outperform state-of-the-art HLS tools and ADLs on all test cases
in PolyBench. For the GPT2 model, the Allo-generated accelerator achieves 1.7x
lower inference latency than an NVIDIA A100 GPU with 5.4x higher energy
efficiency, demonstrating Allo's capability to handle large-scale designs.
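The core idea of the abstract — keeping the algorithm specification free of optimization directives while recording customization primitives on a separate schedule object that can be composed per function — can be illustrated with a small conceptual sketch. This is not Allo's actual API; the `Schedule` class, its methods, and the primitive names below are hypothetical stand-ins for the decoupling and bottom-up composition described above.

```python
def gemm(A, B):
    """Algorithm specification only -- no optimization directives inside."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C

class Schedule:
    """Hypothetical schedule object: records customization primitives
    (compute/memory/communication choices) apart from the algorithm."""
    def __init__(self, fn):
        self.fn = fn
        self.primitives = []  # e.g. [("split", "i", 8), ("pipeline", "j")]

    def split(self, axis, factor):
        self.primitives.append(("split", axis, factor))
        return self

    def pipeline(self, axis):
        self.primitives.append(("pipeline", axis))
        return self

    def compose(self, callee_schedule):
        """Bottom-up composition: merge a callee kernel's customizations
        into the caller, preserving the design hierarchy instead of
        flattening it."""
        self.primitives.extend(callee_schedule.primitives)
        return self

# Per-kernel schedules are built independently, then composed upward.
s_gemm = Schedule(gemm).split("i", 8).pipeline("j")
s_top = Schedule(lambda A, B: gemm(A, B)).compose(s_gemm)

# The algorithm's semantics are untouched by scheduling decisions.
print(gemm([[1, 2]], [[3], [4]]))  # -> [[11]]
print(s_top.primitives)
```

The point of the sketch is the separation of concerns: `gemm` stays a plain functional specification, while all hardware-oriented decisions live in schedule objects that compose across function boundaries, mirroring the bottom-up, hierarchy-preserving approach the abstract describes.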