Retrieval Enhanced Zero-Shot Video Captioning
arXiv (2024)
Abstract
Despite the significant progress of fully-supervised video captioning,
zero-shot methods remain much less explored. In this paper, we propose to take
advantage of existing pre-trained large-scale vision and language models to
directly generate captions with test-time adaptation. Specifically, we bridge
video and text using three key models: a general video understanding model
XCLIP, a general image understanding model CLIP, and a text generation model
GPT-2, due to their source-code availability. The main challenge is how to
enable the text generation model to be sufficiently aware of the content in a
given video so as to generate corresponding captions. To address this problem,
we propose using learnable tokens as a communication medium between frozen
GPT-2, frozen XCLIP, and frozen CLIP. Unlike the conventional approach of
training such tokens on training data, we update them using pseudo-targets
derived from the inference data, under several carefully crafted loss
functions that enable the tokens to absorb video information tailored to
GPT-2. This procedure completes in just a few iterations (we use 16
iterations in the experiments) and does not require ground truth data.
Extensive experimental results on three widely used datasets, MSR-VTT, MSVD,
and VATEX, show 4% improvements compared to the existing state-of-the-art
methods.
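
For concreteness, below is a minimal sketch in Python (PyTorch and Hugging Face Transformers) of the test-time token-update idea the abstract describes. It is an illustration under stated assumptions, not the authors' implementation: the video feature video_feat is a random stand-in for the frozen XCLIP embedding, the pseudo-target is picked from a tiny hypothetical candidate list by CLIP text similarity, and a single language-modeling loss replaces the paper's several carefully crafted loss functions.

import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizerFast, GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen backbones: GPT-2 generates text; CLIP scores text against the video.
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
clip_tok = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
for p in list(gpt2.parameters()) + list(clip.parameters()):
    p.requires_grad_(False)

# Learnable tokens: soft prompt embeddings prepended to GPT-2's input;
# they are the only parameters updated at test time.
n_soft = 8  # assumed token count, not taken from the paper
soft = (0.02 * torch.randn(1, n_soft, gpt2.config.n_embd, device=device)).requires_grad_()
opt = torch.optim.AdamW([soft], lr=1e-2)

# Placeholder for the frozen video embedding (XCLIP in the paper); random
# here so the sketch runs without actual video input.
video_feat = F.normalize(torch.randn(1, clip.config.projection_dim, device=device), dim=-1)

# Choose a pseudo-target: the candidate sentence whose CLIP text embedding
# best matches the video feature (the paper's construction is more elaborate).
candidates = ["a man is playing a guitar", "a dog runs on the beach",
              "people are cooking in a kitchen"]
with torch.no_grad():
    enc = clip_tok(candidates, padding=True, return_tensors="pt").to(device)
    txt_feat = F.normalize(clip.get_text_features(**enc), dim=-1)
    pseudo_target = candidates[(txt_feat @ video_feat.T).argmax().item()]

# Test-time adaptation: 16 gradient steps (the abstract's iteration count)
# on the frozen GPT-2's cross-entropy over the pseudo-target, with gradients
# flowing only into the soft tokens.
tgt_ids = gpt2_tok(pseudo_target, return_tensors="pt").input_ids.to(device)
tgt_emb = gpt2.transformer.wte(tgt_ids)
labels = torch.cat([torch.full((1, n_soft), -100, device=device), tgt_ids], dim=1)
for _ in range(16):
    opt.zero_grad()
    loss = gpt2(inputs_embeds=torch.cat([soft, tgt_emb], dim=1), labels=labels).loss
    loss.backward()
    opt.step()

# Decode a caption conditioned only on the adapted tokens (requires a recent
# transformers version for generate() with inputs_embeds).
with torch.no_grad():
    out = gpt2.generate(inputs_embeds=soft, max_new_tokens=20, do_sample=False,
                        pad_token_id=gpt2_tok.eos_token_id)
print(gpt2_tok.decode(out[0], skip_special_tokens=True))

Because both backbones stay frozen and only the handful of soft tokens receives gradients, each video's adaptation is cheap and needs no ground-truth captions, which is the property the abstract emphasizes.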