High dimensional analysis reveals conservative sharpening and a stochastic edge of stability
arXiv (2024)
Abstract
Recent empirical and theoretical work has shown that the dynamics of the
large eigenvalues of the training loss Hessian have some remarkably robust
features across models and datasets in the full-batch regime. There is often an
early period of progressive sharpening where the large eigenvalues increase,
followed by stabilization at a predictable value known as the edge of
stability. Previous work showed that in the stochastic setting, the eigenvalues
increase more slowly, a phenomenon we call conservative sharpening. We provide
a theoretical analysis of a simple high-dimensional model which shows the
origin of this slowdown. We also show that an alternative stochastic edge of
stability arises at small batch sizes, one that is sensitive to the trace of
the Neural Tangent Kernel rather than to the large Hessian eigenvalues.
We conduct an experimental study that highlights the qualitative differences
from the full-batch phenomenology and suggests that controlling the stochastic
edge of stability can aid optimization.
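
Background (from prior work, not this abstract): the full-batch edge of stability of Cohen et al. (2021) follows from the stability of gradient descent on a quadratic. With step size \eta and curvature \lambda, the update \theta_{t+1} = (1 - \eta\lambda)\,\theta_t is stable only while |1 - \eta\lambda| < 1, i.e. \lambda < 2/\eta, and progressive sharpening empirically drives the top Hessian eigenvalue to hover near that threshold:

    \lambda_{\max}\big(\nabla^2 L(\theta_t)\big) \approx \frac{2}{\eta}

The stochastic edge of stability described above is instead governed by the trace of the empirical Neural Tangent Kernel, which for a scalar-output network f satisfies tr(NTK) = \sum_i \|\nabla_\theta f(x_i)\|^2. Below is a minimal sketch of how both diagnostics could be monitored during training; it is not the paper's code, and the toy model and helper names are hypothetical:

import torch

# Toy setup: a small scalar-output network on random data (hypothetical,
# only to make the diagnostics below runnable).
torch.manual_seed(0)
n, d = 32, 10
X, y = torch.randn(n, d), torch.randn(n)
model = torch.nn.Sequential(
    torch.nn.Linear(d, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
params = list(model.parameters())

def loss():
    return ((model(X).squeeze(-1) - y) ** 2).mean()

def hvp(v):
    # Hessian-vector product of the training loss via double backprop.
    g = torch.autograd.grad(loss(), params, create_graph=True)
    gv = sum((gi * vi).sum() for gi, vi in zip(g, v))
    return torch.autograd.grad(gv, params)

def top_hessian_eigenvalue(iters=50):
    # Power iteration on the Hessian: the quantity that stabilizes at the
    # full-batch edge of stability.
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        hv = hvp(v)
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]
    hv = hvp(v)
    return sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient

def ntk_trace():
    # tr(NTK) = sum_i ||grad_theta f(x_i)||^2: the quantity the abstract
    # says governs the stochastic edge of stability at small batch size.
    total = 0.0
    for i in range(n):
        g = torch.autograd.grad(model(X[i:i + 1]).squeeze(), params)
        total += sum((gi ** 2).sum() for gi in g).item()
    return total

print("lambda_max(Hessian) ~", top_hessian_eigenvalue())
print("tr(NTK)             ~", ntk_trace())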