Less is More for Improving Automatic Evaluation of Factual Consistency
arXiv (2024)
Abstract
Assessing the factual consistency of automatically generated texts in
relation to source context is crucial for developing reliable natural language
generation applications. Recent literature proposes AlignScore which uses a
unified alignment model to evaluate factual consistency and substantially
outperforms previous methods across many benchmark tasks. In this paper, we
take a closer look at the datasets used in AlignScore and uncover an unexpected
finding: utilizing a smaller number of data points can actually improve
performance. We process the original AlignScore training dataset to remove
noise, augment it with robustness-enhancing samples, and use a subset
comprising 10% of the data to train an improved factual consistency evaluation
model, which we call LIM-RA (Less Is More for Robust AlignScore). LIM-RA demonstrates
superior performance, consistently outperforming AlignScore and other strong
baselines like ChatGPT across four benchmarks (two utilizing traditional
natural language generation datasets and two focused on large language model
outputs). Our experiments show that LIM-RA achieves the highest score on 24 of
the 33 test datasets while staying competitive on the rest, establishing new
state-of-the-art results.