Tenplex: Dynamic Parallelism for Deep Learning using Parallelizable Tensor Collections
arXiv (2023)
Abstract
Deep learning (DL) jobs use multi-dimensional parallelism, i.e. combining
data, model, and pipeline parallelism, to use large GPU clusters efficiently.
Long-running jobs may experience changes to their GPU allocation: (i) resource
elasticity during training adds or removes GPUs; (ii) hardware maintenance may
require redeployment on different GPUs; and (iii) GPU failures force jobs to
run with fewer devices. Current DL frameworks tie jobs to a set of GPUs and
thus lack support for these scenarios. In particular, they cannot change the
multi-dimensional parallelism of an already-running job in an efficient and
model-independent way.
We describe Tenplex, a state management library for DL systems that enables
jobs to change their parallelism dynamically after the GPU allocation is
updated at runtime. Tenplex achieves this through a new abstraction, a
parallelizable tensor collection (PTC), which externalizes the job state during
training. After a GPU change, Tenplex uses the PTC to transform the job state:
the PTC repartitions the dataset state under data parallelism and exposes it to
DL workers through a virtual file system; and the PTC obtains the model state
as partitioned checkpoints and transforms them to reflect the new
parallelization configuration. For efficiency, Tenplex executes PTC
transformations in parallel with minimal data movement between workers. Our
experiments show that Tenplex enables DL jobs to support dynamic parallelization
with low overhead.
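The checkpoint transformation described above can be pictured as merging the model-state shards saved under the old parallelism configuration and re-slicing them for the new one. The following is a minimal, hypothetical Python sketch of that logical effect for a tensor partitioned along one dimension; the function name `repartition` and the use of NumPy are illustrative assumptions rather than Tenplex's actual API, and the real system avoids materializing full tensors by moving only the data each worker needs.

```python
# Hypothetical sketch of repartitioning a tensor's checkpoint shards
# when the degree of parallelism changes (not Tenplex's actual API).
import numpy as np

def repartition(shards, new_degree, axis=0):
    """Merge shards saved under the old configuration, then re-split
    the tensor along `axis` for `new_degree` workers."""
    full = np.concatenate(shards, axis=axis)            # reassemble full tensor
    return np.array_split(full, new_degree, axis=axis)  # shards for new config

# Example: shrink a layer's weight from 4 GPUs to 2 GPUs.
old_shards = [np.ones((2, 8)) * i for i in range(4)]    # 4 shards of shape (2, 8)
new_shards = repartition(old_shards, new_degree=2)
assert [s.shape for s in new_shards] == [(4, 8), (4, 8)]
```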