Taming the Killer Microsecond.

MICRO 2018

Abstract
Modern applications require access to vast datasets at low latencies. Emerging memory technologies can enable faster access to significantly larger volumes of data than what is possible today. However, these memory technologies have a significant caveat: their random access latency falls in a range that cannot be effectively hidden using current hardware and software latency-hiding techniques---namely, the microsecond range. Finding the root cause of this "Killer Microsecond" problem is the subject of this work. Our goal is to answer the critical question of why existing hardware and software cannot hide microsecond-level latencies, and whether drastic changes to existing platforms are necessary to use microsecond-latency devices effectively. We use an FPGA-based microsecond-latency device emulator, a carefully crafted microbenchmark, and three open-source data-intensive applications to show that existing systems are indeed incapable of effectively hiding such latencies. However, after uncovering the root causes of the problem, we show that simple changes to existing systems are sufficient to support microsecond-latency devices. In particular, we show that by replacing on-demand memory accesses with prefetch requests followed by fast user-mode context switches (to increase access-level parallelism) and by enlarging the hardware queues that track in-flight accesses (to accommodate many parallel accesses), conventional architectures can effectively hide microsecond-level latencies and approach the performance of DRAM-based implementations of the same applications. In other words, we show that successful usage of microsecond-level devices is not predicated on drastically new hardware and software architectures.
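The core idea described in the abstract, replacing blocking on-demand accesses with prefetch requests and switching to other ready work while each access is in flight, can be illustrated with a small sketch. The C fragment below is not from the paper: it stands in for fast user-mode context switches with simple task interleaving over independent lookups, and the BATCH constant, node_t type, and sum_interleaved function are hypothetical names chosen for illustration only.

/* Sketch: hide access latency by keeping many independent lookups in flight.
 * Each "task" issues a prefetch for its next pointer and then yields to the
 * next task instead of stalling on an on-demand access. BATCH stands in for
 * the number of in-flight accesses the hardware queues would have to track. */
#include <stddef.h>
#include <stdio.h>

#define BATCH 64   /* hypothetical number of concurrent lookups kept in flight */

typedef struct node { struct node *next; long value; } node_t;

/* Traverse BATCH independent linked lists in an interleaved fashion. */
long sum_interleaved(node_t *heads[BATCH]) {
    node_t *cur[BATCH];
    long total = 0;
    int live = 0;

    for (int i = 0; i < BATCH; i++) {
        cur[i] = heads[i];
        if (cur[i]) { __builtin_prefetch(cur[i]); live++; }
    }
    while (live > 0) {
        for (int i = 0; i < BATCH; i++) {
            if (!cur[i]) continue;
            /* By the time control returns to task i, its prefetch has had
             * roughly BATCH other "context switches" worth of time to
             * complete, so this access is less likely to stall. */
            total += cur[i]->value;
            cur[i] = cur[i]->next;
            if (cur[i]) __builtin_prefetch(cur[i]);
            else live--;
        }
    }
    return total;
}

int main(void) {
    enum { LEN = 1024 };
    static node_t pool[BATCH][LEN];
    node_t *heads[BATCH];

    /* Build BATCH small lists so the sketch runs standalone. */
    for (int i = 0; i < BATCH; i++) {
        for (int j = 0; j < LEN; j++) {
            pool[i][j].value = j;
            pool[i][j].next = (j + 1 < LEN) ? &pool[i][j + 1] : NULL;
        }
        heads[i] = &pool[i][0];
    }
    printf("sum = %ld\n", sum_interleaved(heads));
    return 0;
}

In the paper's setting the "yield" is a fast user-mode context switch to another runnable software thread rather than a fixed round-robin loop, but the effect sketched here is the same: enough independent accesses stay in flight to cover microsecond-scale latency.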
Keywords
FPGA, data-intensive applications, emerging storage, killer microseconds