A Penalty Aware Memory Allocation Scheme for Key-Value Cache

ICPP 2015

Citations: 12 | Views: 60
Abstract
Key-value caches, represented by Memcached, play a critical role in data centers. Their efficacy can significantly affect users' perceived service time and the load on back-end systems. A central issue in managing an in-memory cache is memory allocation: how the limited space is distributed for storing key-value items of various sizes. When the cache is full, the allocation question becomes how to conduct replacement operations on items of different sizes. To address this issue effectively, a practitioner must simultaneously consider three factors: access locality, item size, and miss penalty. Existing designs consider only one or both of the first two factors and pay little attention to miss penalty. This inadequacy can substantially compromise cache space utilization and request service time. In this paper we propose a Penalty Aware Memory Allocation scheme (PAMA) that takes all three factors into account. While the three factors cannot be directly compared in a quantitative manner, PAMA uses their impacts on service time to determine where a unit of memory space should be allocated or deallocated. The impacts are quantified as the decrease (or increase) in service time if a unit of space is allocated (or deallocated). PAMA efficiently tracks access patterns and memory usage, and speculatively evaluates these impacts to enable penalty-aware memory allocation for KV caches. Our evaluation with real-world Memcached workload traces demonstrates that PAMA significantly reduces request service time compared to other representative KV cache management schemes.
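The abstract does not spell out PAMA's mechanism, but its core decision rule, comparing the service-time impact of giving one unit of memory to a size class against the impact of taking one unit away, can be sketched as follows. This is a minimal illustration in Python under assumed inputs: the names SizeClass, marginal_hits_gain, marginal_hits_loss, and miss_penalty are hypothetical stand-ins for the per-class statistics PAMA would track, not the authors' actual implementation.

```python
# Sketch: penalty-weighted reassignment of one memory unit between size classes.
# Assumption: per-class marginal hit-rate estimates and miss penalties are given.
from dataclasses import dataclass


@dataclass
class SizeClass:
    name: str
    marginal_hits_gain: float  # estimated extra hits/sec from one more unit
    marginal_hits_loss: float  # estimated hits/sec lost if one unit is removed
    miss_penalty: float        # average service-time cost of one miss (seconds)

    def gain(self) -> float:
        # Service time saved per second if this class receives one unit.
        return self.marginal_hits_gain * self.miss_penalty

    def loss(self) -> float:
        # Service time added per second if this class gives up one unit.
        return self.marginal_hits_loss * self.miss_penalty


def rebalance_one_unit(classes: list[SizeClass]) -> tuple[str, str] | None:
    """Move one unit from the class with the smallest penalty-weighted loss
    to the class with the largest penalty-weighted gain, if the move pays off."""
    donor = min(classes, key=SizeClass.loss)
    receiver = max(classes, key=SizeClass.gain)
    if receiver is donor or receiver.gain() <= donor.loss():
        return None  # no single move reduces total service time
    return donor.name, receiver.name


classes = [
    SizeClass("64B",  marginal_hits_gain=120.0, marginal_hits_loss=90.0, miss_penalty=0.002),
    SizeClass("1KB",  marginal_hits_gain=20.0,  marginal_hits_loss=5.0,  miss_penalty=0.010),
    SizeClass("16KB", marginal_hits_gain=8.0,   marginal_hits_loss=2.0,  miss_penalty=0.050),
]
print(rebalance_one_unit(classes))  # ('1KB', '16KB') under these made-up numbers
```

Note how weighting by miss penalty changes the outcome: a pure hit-rate policy would favor the small, hot 64B class, whereas here the 16KB class wins the unit because each of its misses is far more expensive to serve.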
Keywords
Key-value Cache, Replacement Algorithm, Locality