HarmonyBatch: Batching multi-SLO DNN Inference with Heterogeneous Serverless Functions
arXiv (2024)
Abstract
Deep Neural Network (DNN) inference on serverless functions is gaining
prominence due to its potential for substantial budget savings. Existing works
on serverless DNN inference solely optimize batching requests from one
application with a single Service Level Objective (SLO) on CPU functions.
However, production serverless DNN inference traces indicate that the request
arrival rate of applications is surprisingly low, which inevitably causes a
long batching time and SLO violations. Hence, there is an urgent need for
batching multiple DNN inference requests with diverse SLOs (i.e., multi-SLO DNN
inference) in serverless platforms. Moreover, the potential performance and
cost benefits of deploying heterogeneous (i.e., CPU and GPU) functions for DNN
inference have received scant attention.
In this paper, we present HarmonyBatch, a cost-efficient resource
provisioning framework designed to achieve predictable performance for
multi-SLO DNN inference with heterogeneous serverless functions. Specifically,
we construct an analytical performance and cost model of DNN inference on both
CPU and GPU functions, by explicitly considering the GPU time-slicing
scheduling mechanism and request arrival rate distribution. Based on such a
model, we devise a two-stage merging strategy in HarmonyBatch to judiciously
batch the multi-SLO DNN inference requests into application groups. It aims to
minimize the budget of function provisioning for each application group while
guaranteeing diverse performance SLOs of inference applications. We have
implemented a prototype of HarmonyBatch on Alibaba Cloud Function Compute.
Extensive prototype experiments with representative DNN inference workloads
demonstrate that HarmonyBatch can provide predictable performance to serverless
DNN inference workloads while reducing the monetary cost by up to 82.9%
compared to state-of-the-art methods.
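The performance model sketched in the abstract explicitly folds in the request arrival rate distribution. As a minimal illustration of why that matters (not the paper's actual formulation, which additionally captures the GPU time-slicing scheduling mechanism), assume a group's requests arrive as a Poisson process with aggregate rate \lambda. The oldest request in a batch of size b waits for b - 1 further arrivals, so its expected batching delay, and the resulting constraint imposed by the group's tightest SLO T_min, are

\[
\mathbb{E}[W_{\mathrm{batch}}] = \frac{b-1}{\lambda},
\qquad
\frac{b-1}{\lambda} + \ell(b) \le T_{\min},
\]

where \ell(b) denotes the batch inference latency on the provisioned function. Low arrival rates make the (b-1)/\lambda term dominate, which is exactly why naive batching of a single sparse workload leads to long batching times and SLO violations.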
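To make the two-stage merging idea concrete, here is a deliberately simplified Python sketch, not HarmonyBatch's actual algorithm or cost model: it sorts applications by SLO, greedily merges adjacent groups whenever the merged group is cheaper to serve, and then picks the cheaper of a CPU or GPU function per group. All prices, inference times, application names, and the deterministic-arrival wait approximation are illustrative assumptions.

import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class App:
    name: str
    rate: float  # mean request arrival rate (requests/s)
    slo: float   # end-to-end latency SLO (s)

# Illustrative prices and per-batch inference times (assumed, not measured).
PRICE = {"cpu": 1.0e-4, "gpu": 6.0e-4}  # $ per billed function-second
INFER = {"cpu": 0.30, "gpu": 0.06}      # batch inference latency (s)

def best_config(apps: List[App], func: str) -> Optional[Tuple[int, float]]:
    """Largest feasible batch size and its cost rate ($/s) on one function type.

    Approximation: at aggregate rate R, the oldest request in a batch of
    size b waits about (b - 1) / R, so feasibility requires
    (b - 1) / R + infer <= the tightest SLO in the group.
    """
    rate = sum(a.rate for a in apps)
    tightest = min(a.slo for a in apps)
    infer = INFER[func]
    if infer > tightest:
        return None  # even a batch of one request misses the tightest SLO
    b = int((tightest - infer) * rate) + 1
    cost = (rate / b) * infer * PRICE[func]  # invocations/s * billed time * price
    return b, cost

def cheapest(apps: List[App]) -> float:
    """Cost rate of the cheapest feasible function type for a group."""
    costs = []
    for func in ("cpu", "gpu"):
        cfg = best_config(apps, func)
        if cfg is not None:
            costs.append(cfg[1])
    return min(costs, default=math.inf)

def provision(apps: List[App]):
    """Stage 1: merge SLO-adjacent groups while merging lowers total cost.
    Stage 2: pick the cheaper function type (CPU vs. GPU) per final group."""
    groups = [[a] for a in sorted(apps, key=lambda a: a.slo)]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups) - 1):
            pair = groups[i] + groups[i + 1]
            if cheapest(pair) < cheapest(groups[i]) + cheapest(groups[i + 1]):
                groups[i:i + 2] = [pair]  # merging is cheaper: adopt the pair
                merged = True
                break
    plan = []
    for g in groups:  # assumes every final group has a feasible config
        options = [(f, best_config(g, f)) for f in ("cpu", "gpu")]
        func, (b, cost) = min((o for o in options if o[1] is not None),
                              key=lambda o: o[1][1])
        plan.append(([a.name for a in g], func, b, cost))
    return plan

if __name__ == "__main__":
    apps = [App("a1", rate=2.0, slo=0.5),
            App("a2", rate=1.0, slo=1.0),
            App("a3", rate=0.5, slo=4.0)]
    for names, func, b, cost in provision(apps):
        print(f"group={names} func={func} batch={b} cost={cost:.2e} $/s")

On these toy inputs the sketch merges a1 and a2 onto a shared GPU function (their combined arrival rate makes a batch of two feasible under the 0.5 s SLO), while a3's loose SLO and low rate are served more cheaply by a separate CPU function; merging a3 in would drag the group SLO down to 0.5 s and raise the total cost, so the greedy stage declines.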