Addressing Topic Granularity and Hallucination in Large Language Models for Topic Modelling
arXiv (2024)
Abstract
Large language models (LLMs) with their strong zero-shot topic extraction
capabilities offer an alternative to probabilistic topic modelling and
closed-set topic classification approaches. As zero-shot topic extractors, LLMs
are expected to understand human instructions to generate relevant and
non-hallucinated topics based on the given documents. However, LLM-based topic
modelling approaches often face difficulties in generating topics with
adherence to granularity as specified in human instructions, often resulting in
many near-duplicate topics. Furthermore, methods for addressing hallucinated
topics generated by LLMs have not yet been investigated. In this paper, we
focus on addressing the issues of topic granularity and hallucinations for
better LLM-based topic modelling. To this end, we introduce a novel approach
that leverages Direct Preference Optimisation (DPO) to fine-tune open-source
LLMs, such as Mistral-7B. Our approach does not rely on traditional human
annotation to rank preferred answers but employs a reconstruction pipeline to
modify raw topics generated by LLMs, thus enabling a fast and efficient
training and inference framework. Comparative experiments show that our
fine-tuning approach not only significantly improves the LLM's capability to
produce more coherent, relevant, and precise topics, but also reduces the
number of hallucinated topics.
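The abstract describes fine-tuning with DPO on preference pairs built without human annotators: the LLM's raw topic list serves as the dispreferred answer, and a reconstruction pipeline's cleaned version (e.g. with near-duplicate or hallucinated topics removed) serves as the preferred one. As a minimal sketch of that idea — the pair format and the standard DPO objective (Rafailov et al., 2023), with the `reconstruct` step and all names being illustrative assumptions, not the paper's actual pipeline:

```python
import math

def build_preference_pair(document, raw_topics, reconstruct):
    """Turn one raw LLM topic list into a DPO training pair.

    rejected = raw LLM output; chosen = reconstructed (deduplicated,
    grounded) topic list. `reconstruct` is a hypothetical stand-in for
    the paper's reconstruction pipeline.
    """
    return {
        "prompt": document,
        "chosen": reconstruct(raw_topics),
        "rejected": raw_topics,
    }

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))
    where log-probs are sequence log-likelihoods under the policy and the
    frozen reference model."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy example: deduplicate exact repeats as a stand-in reconstruction.
dedup = lambda topics: sorted(set(topics))
pair = build_preference_pair(
    "doc text",
    ["climate policy", "climate policy", "renewable energy"],
    dedup,
)
```

When the policy matches the reference model on both answers the margin is zero and the loss is log 2; the loss falls as the policy assigns relatively more probability to the reconstructed topics than the raw ones, which is the mechanism the paper uses to steer granularity and suppress hallucinated topics.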