Toward Inference-optimal Mixture-of-Expert Large Language Models
arXiv (2024)
Abstract
Mixture-of-Expert (MoE) based large language models (LLMs), such as the
recent Mixtral and DeepSeek-MoE, have shown great promise in scaling model size
without suffering the quadratic growth in training cost of dense transformers.
Like dense models, training MoEs requires answering the same question: given a
training budget, what is the optimal allocation between model size and number
of training tokens? We study the scaling law of MoE-based LLMs, characterizing
the relations among model performance, model size, dataset size, and the expert
degree (number of experts). Echoing previous research on MoE in other contexts,
we observe diminishing returns from increasing the number of experts. Taken
alone, this seems to suggest scaling the number of experts until saturation,
since the training cost would remain roughly constant; such a model, however,
is problematic to serve at inference time. We therefore propose to amend the
MoE scaling law by introducing inference efficiency as a second metric
alongside validation loss. We find that MoEs with a few (4/8) experts are the
most serving-efficient solution at a given performance level, but cost
2.5-3.5x more to train. On the other hand, training a
(16/32) expert MoE much smaller (70-85%) than the loss-optimal solution, but
with a larger training dataset, is a promising setup under a training budget.
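
The scaling-law study described above fits loss as a function of model size, dataset size, and expert count. As a purely illustrative sketch, a Chinchilla-style parametric form extended with an expert term might look as follows; the symbols a, b, alpha, beta and the function g(E) are hypothetical placeholders, not the paper's fitted law or coefficients.

```latex
% Illustrative scaling-law form only; the coefficients and the way the
% expert degree E enters are assumptions, not the paper's fitted law.
% N: model size, D: training tokens, E: number of experts.
\[
  L(N, D, E) \;=\; \frac{a}{N^{\alpha}} \;+\; \frac{b}{D^{\beta}}
  \;+\; g(E) \;+\; L_{\infty}
\]
% Here g(E) would shrink with diminishing returns as E grows, matching the
% saturation the abstract describes; L_inf is the irreducible loss.
```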
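The tension between roughly constant training cost and growing serving cost can be made concrete with the standard back-of-the-envelope FLOP rules (training ~ 6 x active parameters x tokens, inference ~ 2 x active parameters per token). The sketch below is a hypothetical illustration of that accounting; the configurations, parameter counts, and token counts are made up and are not the paper's experimental settings.

```python
# Back-of-the-envelope sketch of the training-vs-serving trade-off.
# With top-k routing, only k experts run per token, so the compute-relevant
# "active" parameter count barely changes as the expert count E grows,
# while the total (memory-resident) parameter count grows linearly with E.
# All numbers below are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class MoEConfig:
    name: str
    shared_params: float   # attention, embeddings, etc., used by every token
    expert_params: float   # parameters of a single expert FFN
    num_experts: int       # E: experts per MoE layer
    top_k: int             # experts activated per token
    tokens: float          # D: training tokens

    @property
    def total_params(self) -> float:
        # Serving footprint: every expert must stay resident in memory.
        return self.shared_params + self.num_experts * self.expert_params

    @property
    def active_params(self) -> float:
        # Compute per token: shared parts plus only the routed experts.
        return self.shared_params + self.top_k * self.expert_params

    @property
    def train_flops(self) -> float:
        return 6 * self.active_params * self.tokens

    @property
    def infer_flops_per_token(self) -> float:
        return 2 * self.active_params


configs = [
    # Hypothetical few-expert model: small total footprint, cheap to serve.
    MoEConfig("4-expert", 2e9, 1e9, num_experts=4, top_k=2, tokens=2e12),
    # Hypothetical many-expert model with the same active size: identical
    # training FLOPs, but an 8x larger expert footprint to keep in memory.
    MoEConfig("32-expert", 2e9, 1e9, num_experts=32, top_k=2, tokens=2e12),
]

for c in configs:
    print(f"{c.name}: total={c.total_params:.1e} params, "
          f"active={c.active_params:.1e} params, "
          f"train={c.train_flops:.2e} FLOPs, "
          f"serve={c.infer_flops_per_token:.2e} FLOPs/token")
```

Note that FLOPs alone understate serving cost: the many-expert model's much larger total parameter footprint also raises memory and bandwidth demands, which is what makes scaling experts to saturation "problematic during inference time" even though per-token compute is unchanged.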