Boter: Bootstrapping Knowledge Selection and Question Answering for Knowledge-based VQA
arXiv (2024)
Abstract
Knowledge-based Visual Question Answering (VQA) requires models to
incorporate external knowledge to respond to questions about visual content.
Previous methods mostly follow the "retrieve and generate" paradigm. Initially,
they utilize a pre-trained retriever to fetch relevant knowledge documents,
subsequently employing them to generate answers. While these methods perform
well on this task, they have two limitations: (1)
they employ an independent retriever to acquire knowledge solely based on the
similarity between the query and knowledge embeddings, without assessing
whether the knowledge document is truly conducive to helping answer the
question; (2) they convert the image into text and then conduct retrieval and
answering in natural language space, which may not ensure comprehensive
acquisition of all image information. To address these limitations, we propose
Boter, a novel framework designed to bootstrap knowledge selection and question
answering by leveraging the robust multimodal perception capabilities of the
Multimodal Large Language Model (MLLM). The framework consists of two modules:
Selector and Answerer, both initialized from the MLLM and
parameter-efficiently finetuned in a simple cycle: the Selector finds key
knowledge in the retrieved knowledge documents, which is then used to finetune
the Answerer to predict answers; pseudo-labels for the key knowledge documents
are derived from the Answerer's predictions and weak supervision labels and
used to finetune the Selector; the cycle then repeats. Our framework
significantly enhances the performance of the baseline on the challenging
open-domain Knowledge-based VQA benchmark, OK-VQA, achieving a state-of-the-art
accuracy of 62.83%.
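
To make the Selector/Answerer cycle concrete, below is a minimal Python sketch of the alternating finetuning loop described above. Everything here is a hypothetical illustration: the object names (`selector`, `answerer`, `retriever`), their `select`/`predict`/`finetune` methods, and the exact pseudo-labeling rule are placeholders inferred from the abstract, not the paper's actual implementation.

```python
def bootstrap_cycle(selector, answerer, retriever, dataset, num_rounds=3):
    """Alternate between finetuning the Answerer on Selector-chosen
    knowledge and finetuning the Selector on pseudo-labels derived from
    the Answerer's predictions (hypothetical sketch of the cycle)."""
    for _ in range(num_rounds):
        # 1) Selector picks key documents from the retrieved candidates.
        selected = []
        for image, question, answer in dataset:
            docs = retriever.retrieve(image, question)          # top-k candidates
            key_docs = selector.select(image, question, docs)   # key knowledge
            selected.append((image, question, key_docs, answer))

        # 2) Parameter-efficiently finetune the Answerer to predict
        #    answers conditioned on the selected key knowledge.
        answerer.finetune(selected)

        # 3) Build pseudo-labels for the Selector. One plausible rule:
        #    a document is "key" if conditioning on it lets the Answerer
        #    produce the gold answer (weak supervision).
        pseudo = []
        for image, question, answer in dataset:
            docs = retriever.retrieve(image, question)
            labels = [answerer.predict(image, question, [d]) == answer
                      for d in docs]
            pseudo.append((image, question, docs, labels))

        # 4) Finetune the Selector on the pseudo-labels, then repeat.
        selector.finetune(pseudo)
```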