A large-scale cross-architecture evaluation of thread-coarsening

High Performance Computing, Networking, Storage and Analysis (2013)

Cited by 91
Abstract
OpenCL has become the de-facto data parallel programming model for parallel devices in today's high-performance supercomputers. OpenCL was designed with the goal of guaranteeing program portability across hardware from different vendors. However, achieving good performance is hard, requiring manual tuning of the program and expert knowledge of each target device. In this paper we consider a data parallel compiler transformation --- thread-coarsening --- and evaluate its effects across a range of devices by developing a source-to-source OpenCL compiler based on LLVM. We thoroughly evaluate this transformation on 17 benchmarks and five platforms with different coarsening parameters giving over 43,000 different experiments. We achieve speedups over 9x on individual applications and average speedups ranging from 1.15x on the Nvidia Kepler GPU to 1.50x on the AMD Cypress GPU. Finally, we use statistical regression to analyse and explain program performance in terms of hardware-based performance counters.
Keywords
graphics processing units,multi-threading,program compilers,regression analysis,software architecture,software performance evaluation,software portability,AMD Cypress GPU,LLVM,Nvidia Kepler GPU,data parallel compiler transformation,de-facto data parallel programming model,hardware-based performance counters,high-performance supercomputers,large-scale cross-architecture evaluation,program performance,program portability,source-to-source OpenCL compiler,statistical regression,thread-coarsening parameters,GPU,OpenCL,Regression trees,Thread coarsening