Locality-Aware PMI Usage for Efficient MPI Startup

IEEE International Conference on Computer and Communications (2018)

Abstract
In this paper, we examine the usage of the Process Management Interface (PMI) during MPI_Init, specifically how PMI is used to exchange address information between peer processes in an MPI job. As node and core counts continue to increase in HPC systems, so does the amount of address data processes need to exchange. We show that by applying well-established locality-awareness techniques, we can significantly reduce the time spent in MPI_Init. We first present the use of shared memory to reduce the total amount of information retrieved from PMI. Next, by combining shared memory with a minimally connected set of processes, we further reduce the dependence on PMI and employ the HPC fabric to transfer the bulk of address data. Our approach is low impact, relying on functionality already provided by MPI libraries and process managers instead of new APIs and capabilities.
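To give a rough sense of why locality awareness helps, the sketch below simulates the number of PMI "Get" operations needed for the address exchange under three strategies: the naive all-to-all fetch, a shared-memory scheme in which one leader per node fetches remote addresses on behalf of its node-local peers, and a minimally connected scheme in which each leader fetches only one neighbor's address from PMI and the remaining data travels over the fabric. This is an illustrative cost model only, not the paper's implementation; the function names and the assumption of one leader per node are ours.

```python
def naive_gets(nodes: int, ppn: int) -> int:
    """Every process fetches every peer's address directly from PMI."""
    p = nodes * ppn
    return p * (p - 1)

def shm_leader_gets(nodes: int, ppn: int) -> int:
    """One leader per node fetches all off-node addresses once from PMI,
    then shares them with its node-local peers via shared memory."""
    p = nodes * ppn
    return nodes * (p - ppn)

def ring_leader_gets(nodes: int, ppn: int) -> int:
    """Minimally connected leaders: each leader fetches only one neighbor's
    address from PMI; the bulk of the data is exchanged over the fabric."""
    return nodes

if __name__ == "__main__":
    nodes, ppn = 4, 8  # 4 nodes x 8 processes per node = 32 processes
    print("naive:      ", naive_gets(nodes, ppn))
    print("shm leader: ", shm_leader_gets(nodes, ppn))
    print("ring leader:", ring_leader_gets(nodes, ppn))
```

Even at this small scale (32 processes), the PMI traffic drops from quadratic in the job size to linear in the node count, which is the trend the abstract's MPI_Init timings exploit.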
Keywords
MPI message passing, process management, PMI, HPC, high-performance computing