Stacking as Accelerated Gradient Descent
CoRR (2024)
Abstract
Stacking, a heuristic technique for training deep residual networks by
progressively increasing the number of layers and initializing new layers by
copying parameters from older layers, has proven quite successful in improving
the efficiency of training deep neural networks. In this paper, we propose a
theoretical explanation for the efficacy of stacking: viz., stacking implements
a form of Nesterov's accelerated gradient descent. The theory also covers
simpler models such as the additive ensembles constructed in boosting methods,
and provides an explanation for a similar widely-used practical heuristic for
initializing the new classifier in each round of boosting. We also prove that
for certain deep linear residual networks, stacking does provide accelerated
training, via a new potential function analysis of Nesterov's accelerated
gradient method that allows errors in updates. We conduct proof-of-concept
experiments to validate our theory as well.
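The stacking heuristic described above — growing a residual network one layer at a time and initializing each new layer as a copy of the current top layer — can be sketched as follows. This is a minimal illustration on a toy deep linear residual network (the setting the paper analyzes theoretically); the function names and the 4-dimensional setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def stack_init(layers):
    """Grow the network by one layer, initializing the new layer as a
    copy of the current top layer (the stacking heuristic)."""
    return layers + [layers[-1].copy()]

def forward(x, layers):
    """Deep linear residual network: each layer applies x -> x + W @ x."""
    for W in layers:
        x = x + W @ x
    return x

rng = np.random.default_rng(0)
layers = [0.01 * rng.standard_normal((4, 4))]  # start with one layer

# Progressively grow from 1 to 4 layers via stacking.
for _ in range(3):
    layers = stack_init(layers)

x = rng.standard_normal(4)
y = forward(x, layers)
print(len(layers), y.shape)
```

In practice each growth step would be followed by further training of all layers; the copy initialization is what the paper connects to the momentum term in Nesterov's accelerated gradient method.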