Modeling I/O performance variability in high-performance computing systems using mixture distributions

Journal of Parallel and Distributed Computing (2020)

Abstract
Performance variability is an important characteristic of high-performance computing (HPC) systems. HPC performance variability is often complex because its sources interact and are distributed throughout the system stack. For example, the variability of I/O throughput can be affected by factors such as CPU frequency, the number of I/O threads, file size, and record size. In this paper, we focus on the variability of I/O throughput across multiple executions of a benchmark program. For a given system configuration, the distribution of throughputs from run to run is of interest. We conduct large-scale experiments and collect a massive amount of data to study the distribution of I/O throughput under tens of thousands of system configurations. Although normality is often assumed in the literature, our statistical analysis reveals that the performance variability is not normally distributed under most system configurations. Instead, multimodal distributions are common. We propose the use of mixture distributions to describe this multimodal behavior, considering various underlying parametric distributions such as the normal, gamma, and Weibull. We apply an expectation–maximization (EM) algorithm for parameter estimation and use the Bayesian information criterion (BIC) for parametric model selection. We also illustrate how to use the estimated mixture distribution to calculate the number of runs needed for future experiments on variability analysis. The paper provides a useful tool set for studying the behavior of performance variability.
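The abstract's pipeline (fit a mixture by EM, then pick the number of components by BIC) can be sketched in a few lines. Below is a minimal, illustrative 1-D Gaussian-mixture EM with BIC-based model selection on synthetic bimodal "throughput" data; the data, function names, and initialization scheme are assumptions for the sketch, not the paper's actual implementation, which also covers gamma and Weibull component distributions.

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200):
    """Fit a k-component 1-D Gaussian mixture by EM.
    Returns (weights, means, variances, log-likelihood)."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities, shape (n, k)
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, variances
        nk = resp.sum(axis=0)
        w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    dens = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
    return w, mu, var, np.log(dens.sum(axis=1)).sum()

def bic(loglik, n_params, n):
    """Bayesian information criterion; lower is better."""
    return n_params * np.log(n) - 2 * loglik

# Synthetic bimodal sample standing in for run-to-run throughputs (MB/s)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(200, 10, 300), rng.normal(260, 10, 300)])
for k in (1, 2):
    *_, ll = fit_gmm_1d(x, k)
    # a k-component 1-D Gaussian mixture has 3k - 1 free parameters
    print(f"k={k}: BIC = {bic(ll, 3 * k - 1, len(x)):.1f}")
```

On clearly bimodal data such as this, the two-component model attains a lower BIC than the single normal, mirroring the paper's finding that a single normal distribution is a poor fit under most configurations.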
Keywords
I/O variability, Performance analysis, Mixture model, Uncertainty quantification, EM algorithm