Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach
CoRR (2024)
Abstract
In the long-tailed recognition field, the Decoupled Training paradigm has
demonstrated remarkable capabilities among various methods. This paradigm
decouples the training process into separate representation learning and
classifier re-training. Previous works have attempted to improve both stages
simultaneously, making it difficult to isolate the effect of classifier
re-training. Furthermore, recent empirical studies have demonstrated that
simple regularization can yield strong feature representations, emphasizing the
need to reassess existing classifier re-training methods. In this study, we
revisit classifier re-training methods based on a unified feature
representation and re-evaluate their performances. We propose a new metric
called Logits Magnitude as a superior measure of model performance, replacing
the commonly used Weight Norm. However, since it is hard to directly optimize
the new metric during training, we introduce a suitable approximate invariant
called Regularized Standard Deviation. Based on the two newly proposed metrics,
we prove that reducing the absolute value of Logits Magnitude when it is nearly
balanced can effectively decrease errors and disturbances during training,
leading to better model performance. Motivated by these findings, we develop a
simple logits retargeting approach (LORT) without the requirement of prior
knowledge of the number of samples per class. LORT divides the original one-hot
label into small true label probabilities and large negative label
probabilities distributed across each class. Our method achieves
state-of-the-art performance on various imbalanced datasets, including
CIFAR100-LT, ImageNet-LT, and iNaturalist2018.
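The abstract does not state the exact retargeting formula, but the description of splitting a one-hot label into a small true-label probability plus negative-label probability spread over every class resembles heavy label smoothing. Below is a minimal PyTorch sketch under that assumption; the function names and the `true_prob` parameter are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lort_targets(labels: torch.Tensor, num_classes: int, true_prob: float = 0.1) -> torch.Tensor:
    """Build soft targets: the true class keeps a small extra probability (true_prob),
    and the remaining mass is distributed uniformly across all classes.
    This is an assumed reading of the abstract, not the paper's exact formula."""
    neg_prob = (1.0 - true_prob) / num_classes
    targets = torch.full((labels.size(0), num_classes), neg_prob, device=labels.device)
    targets[torch.arange(labels.size(0), device=labels.device), labels] += true_prob
    return targets  # each row sums to 1

def lort_loss(logits: torch.Tensor, labels: torch.Tensor, true_prob: float = 0.1) -> torch.Tensor:
    """Cross-entropy against the retargeted soft labels (no per-class sample counts needed)."""
    targets = lort_targets(labels, logits.size(1), true_prob)
    return torch.sum(-targets * F.log_softmax(logits, dim=1), dim=1).mean()

# Usage example (hypothetical shapes): 8 samples, 100 classes
logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
print(lort_loss(logits, labels).item())
```

Note that, consistent with the abstract, this sketch requires no prior knowledge of the number of samples per class; the only hyperparameter is the (small) probability assigned to the true label.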