Multi-resource fair allocation for consolidated flash-based caching systems

Proceedings of the 23rd ACM/IFIP International Middleware Conference (2022)

Abstract
Using a flash-based layer to serve the caching and buffering needs of multiple workloads has become common practice. In such settings, resource demands will sometimes inevitably exceed available capacity. "Fair" resource allocation offers a systematic way of partitioning resources across competing workloads during such periods of scarcity. Existing work offers fair allocation strategies only for a single resource (capacity or bandwidth) within a flash device in isolation. However, since multiple critical, mutually correlated resources must be partitioned within a flash device, fair allocation of a single resource may waste other resources or degrade workload performance. To this end, we make a case for multi-resource fair allocation solutions for flash-based caches that consolidate multiple workloads. Furthermore, we argue that device lifetime, which depends on the behavior of the running workloads, should also be treated as a first-class resource on par with capacity and bandwidth. Specifically, we build upon existing ideas related to dominant resource fairness (DRF) to devise flash-specific multi-resource fair algorithms: (i) nDRF, which jointly allocates capacity and bandwidth while taking their non-linear relationship into account; (ii) ℓDRF, which additionally considers lifetime in its allocation; and (iii) several variants of these. Our experimental evaluation offers important findings: (i) both nDRF and ℓDRF achieve superior performance fairness compared to state-of-the-art techniques that partition capacity in isolation; (ii) ℓDRF additionally improves device "wear" behavior; and (iii) our algorithms, combined with reasonable demand prediction, work well in online settings with workload dynamism and uncertainty.
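For context, the abstract's nDRF and ℓDRF build on classic dominant resource fairness (DRF), which equalizes each workload's "dominant share" — its largest per-resource fraction of total capacity. The sketch below illustrates plain DRF via progressive filling; it is not the paper's algorithm, and the resource names and demand vectors are illustrative assumptions only.

```python
def drf_allocate(capacity, demands):
    """Classic DRF by progressive filling: repeatedly grant one task to
    the workload with the smallest dominant share, until no workload's
    next task fits. `capacity` maps resource -> total amount; `demands`
    maps workload -> per-task demand vector. Returns tasks per workload."""
    used = {r: 0.0 for r in capacity}
    tasks = {w: 0 for w in demands}

    def dominant_share(w):
        # Largest fraction of any single resource this workload holds.
        return max(tasks[w] * demands[w][r] / capacity[r] for r in capacity)

    while True:
        # Workloads whose next task still fits in the remaining capacity.
        fitting = [
            w for w in demands
            if all(used[r] + demands[w][r] <= capacity[r] for r in capacity)
        ]
        if not fitting:
            break
        w = min(fitting, key=dominant_share)
        for r in capacity:
            used[r] += demands[w][r]
        tasks[w] += 1
    return tasks


# Hypothetical two-resource example (e.g. cache capacity in GB and
# bandwidth in MB/s -- numbers are made up for illustration):
cap = {"capacity": 9, "bandwidth": 18}
dem = {"A": {"capacity": 1, "bandwidth": 4},
       "B": {"capacity": 3, "bandwidth": 1}}
print(drf_allocate(cap, dem))  # -> {'A': 3, 'B': 2}
```

The paper's contribution, per the abstract, is adapting this idea to flash: nDRF accounts for the non-linear coupling between capacity and bandwidth, and ℓDRF adds device lifetime as a third first-class resource.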
Keywords
solid-state drives, resource allocation, flash device lifetime