A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation
CoRR (2024)
Abstract
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its
remarkable zero-shot capacity. Recent research has focused on developing
efficient fine-tuning methods, such as prompt learning and adapters, to enhance
CLIP's performance on downstream tasks. However, these methods still require
additional training time and computational resources, which is undesirable for
resource-limited devices. In this paper, we revisit a classical algorithm,
Gaussian Discriminant Analysis (GDA), and apply it to downstream classification
with CLIP. GDA assumes that the features of each class follow Gaussian
distributions with identical covariance. By Bayes' formula, the classifier can
then be expressed in terms of the class means and the shared covariance, both
of which can be estimated from the data without any training. To integrate
knowledge from both the visual and textual modalities, we ensemble the GDA
classifier with CLIP's original zero-shot classifier. Extensive experiments on
17 datasets show that our method surpasses or matches state-of-the-art methods
on few-shot classification, imbalanced learning, and out-of-distribution
generalization. In addition, we extend our method to base-to-new generalization
and unsupervised learning, again demonstrating its superiority over competing
approaches. Our code is publicly available at .
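
The classifier the abstract describes reduces to a linear rule: under a shared covariance Σ, Bayes' formula gives per-class weights w_c = Σ⁻¹μ_c and bias b_c = log p(c) − ½ μ_cᵀΣ⁻¹μ_c. Below is a minimal NumPy sketch of that rule and of ensembling it with CLIP's zero-shot logits. The function names, the `shrinkage` regularizer, and the ensembling weight `alpha` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def gda_classifier(feats, labels, num_classes, shrinkage=1e-4):
    """Build a training-free GDA classifier from labeled features.

    feats: (N, D) image features; labels: (N,) integer class ids.
    Returns (W, b) of a linear classifier with rows w_c = Sigma^{-1} mu_c
    and b_c = log p(c) - 0.5 * mu_c^T Sigma^{-1} mu_c.
    """
    n, d = feats.shape
    mus = np.stack([feats[labels == c].mean(0) for c in range(num_classes)])
    # Pooled within-class covariance; shrinkage (an assumed choice here)
    # keeps it invertible when there are few shots per class.
    centered = feats - mus[labels]
    sigma = centered.T @ centered / n + shrinkage * np.eye(d)
    sigma_inv = np.linalg.inv(sigma)
    priors = np.bincount(labels, minlength=num_classes) / n
    W = mus @ sigma_inv                              # (C, D)
    b = np.log(priors) - 0.5 * np.einsum("cd,cd->c", W, mus)
    return W, b

def ensemble_logits(image_feats, text_feats, W, b, alpha=1.0):
    """Combine CLIP zero-shot logits with GDA logits; alpha is hypothetical."""
    zero_shot = image_feats @ text_feats.T           # cosine-similarity logits
    gda = image_feats @ W.T + b
    return zero_shot + alpha * gda
```

Estimating only the C class means and one shared D×D covariance is what keeps the method training-free: both are closed-form statistics of the support set, so no gradient steps or extra parameters are needed.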
Keywords
CLIP, training-free adaptation