Compilation Techniques for Graph Algorithms on GPUs

arXiv (2020)

Abstract
The performance of graph programs depends highly on the algorithm, the size and structure of the input graphs, and the features of the underlying hardware. No single set of optimizations or single hardware platform works well across all settings. To achieve high performance, the programmer must carefully select which set of optimizations to use and which hardware platform to target. However, when switching between CPUs and GPUs, the programmer currently must re-implement the entire algorithm with different optimizations in order to achieve high performance. We propose the GG GPU compiler, a new graph processing framework that achieves high performance on both CPUs and GPUs using the same algorithm specification. GG significantly expands the optimization space of GPU graph processing frameworks with a novel GPU scheduling language and compiler that enable combining load balancing, edge traversal direction, active vertex set creation, active vertex set processing ordering, and kernel fusion optimizations. GG also introduces two performance optimizations, Edge-based Thread Warps CTAs (ETWC) load balancing and EdgeBlocking, to expand the optimization space for GPUs. ETWC improves load balance by dynamically partitioning the edges of each vertex into blocks that are assigned to threads, warps, and CTAs for execution. EdgeBlocking improves the locality of the program by reordering the edges and restricting random memory accesses to fit within the L2 cache. We evaluate GG on 5 algorithms and 9 input graphs on both Pascal and Volta generation NVIDIA GPUs, and show that it achieves up to 5.11x speedup over state-of-the-art GPU graph processing frameworks and is the fastest on 62 out of the 90 experiments.
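The abstract describes EdgeBlocking only at a high level. As a rough, hypothetical illustration of the idea (reordering edges so that the randomly accessed destination-vertex data for one group of edges fits in an L2-sized window), the following C++ preprocessing sketch buckets edges by destination-vertex range. The names and heuristic here (Edge, block_edges_by_destination, l2_bytes, bytes_per_vertex) are assumptions for illustration, not GG's actual data structures or API.

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical EdgeBlocking-style preprocessing sketch: group edges into blocks
// by destination-vertex range so that the destination data touched while
// processing one block fits within an L2-cache-sized window.
struct Edge { uint32_t src, dst; };

std::vector<std::vector<Edge>> block_edges_by_destination(
        const std::vector<Edge>& edges,
        uint32_t num_vertices,
        std::size_t l2_bytes,
        std::size_t bytes_per_vertex) {
    // Number of destination vertices whose per-vertex data fits in L2 at once.
    uint32_t verts_per_block =
        static_cast<uint32_t>(l2_bytes / bytes_per_vertex);
    if (verts_per_block == 0) verts_per_block = 1;
    uint32_t num_blocks = (num_vertices + verts_per_block - 1) / verts_per_block;

    std::vector<std::vector<Edge>> blocks(num_blocks);
    for (const Edge& e : edges) {
        // Bucket each edge by the range its destination falls into.
        blocks[e.dst / verts_per_block].push_back(e);
    }
    return blocks;
}

Each returned block would then be processed as a separate pass, so random reads and writes to destination data stay within one L2-sized range; in GG this transformation is presumably applied by the compiler during code generation rather than as a standalone library call like the one sketched above.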
Keywords
graph algorithms, compilation techniques, GPUs