SoK: The Faults in our Graph Benchmarks

arXiv (2024)

Abstract
Graph-structured data is prevalent in domains such as social networks, financial transactions, brain networks, and protein interactions. As a result, the research community has produced new databases and analytics engines to process such data. Unfortunately, there is not yet widespread benchmark standardization in graph processing, and the heterogeneity of evaluations found in the literature can lead researchers astray. Evaluations frequently ignore datasets' statistical idiosyncrasies, which significantly affect system performance. Scalability studies often use datasets that fit easily in memory on a modest desktop. Some studies rely on synthetic graph generators, but these generators produce graphs with unnatural characteristics that also affect performance, yielding misleading results. Currently, the community has no consistent and principled way to compare systems and to guide developers in selecting the system best suited to their application. We provide three different systematizations of benchmarking practices. First, we present a 12-year literature review of graph processing benchmarking, including a summary of the prevalence of specific datasets and benchmarks used in these papers. Second, we demonstrate the impact of two statistical properties of datasets that drastically affect benchmark performance. We show how different assignments of IDs to vertices, called vertex orderings, dramatically alter benchmark performance due to the caching behavior they induce. We also show the impact of zero-degree vertices on the runtime of benchmarks such as breadth-first search and single-source shortest path. We show that these issues can cause performance to change by as much as 38% on several popular graph processing systems. Finally, we suggest best practices to account for these issues when evaluating graph systems.
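
To make the two statistical effects concrete, below is a minimal, hypothetical Python sketch (not code from the paper). The relabel helper applies a vertex ordering, i.e. a permutation of integer IDs: the graph is unchanged, but neighbor lists land in different memory locations, which is how orderings alter cache behavior in array-based (e.g. CSR) engines. The BFS allocates per-vertex state sized by |V|, so zero-degree vertices add allocation and initialization cost even though no traversal ever reaches them. All names here (degree_order, relabel, bfs) are illustrative, not from any benchmarked system.

    from collections import deque

    def degree_order(adj):
        # One possible vertex ordering: hubs get the smallest IDs, so their
        # frequently touched neighbor lists cluster together in memory.
        ranked = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
        return {old: new for new, old in enumerate(ranked)}

    def relabel(adj, new_id):
        # Same graph, different IDs: only the layout (and caching) changes.
        return {new_id[u]: [new_id[v] for v in nbrs] for u, nbrs in adj.items()}

    def bfs(num_vertices, adj, source):
        # Per-vertex state is sized by |V|, so zero-degree vertices still pay
        # allocation and initialization costs despite being unreachable.
        dist = [-1] * num_vertices
        dist[source] = 0
        frontier = deque([source])
        while frontier:
            u = frontier.popleft()
            for v in adj.get(u, ()):
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    frontier.append(v)
        return dist

    # A 4-vertex path plus one million isolated vertices: the edge set is
    # tiny, but the BFS cost is dominated by the zero-degree padding.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(bfs(4 + 1_000_000, adj, 0)[:4])   # [0, 1, 2, 3]
    print(relabel(adj, degree_order(adj)))  # same path graph, new ordering

In a real system the same experiment would compare wall-clock time and cache misses across orderings and across dataset variants with and without zero-degree vertices; the sketch only shows where those costs enter the algorithm.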