Evaluating LLMs at Detecting Errors in LLM Responses
arXiv (2024)
Abstract
With Large Language Models (LLMs) being widely used across various tasks,
detecting errors in their responses is increasingly crucial. However, little
research has been conducted on error detection of LLM responses. Collecting
error annotations on LLM responses is challenging due to the subjective nature
of many NLP tasks, and thus previous research focuses on tasks of little
practical value (e.g., word sorting) or limited error types (e.g., faithfulness
in summarization). This work introduces ReaLMistake, the first error detection
benchmark consisting of objective, realistic, and diverse errors made by LLMs.
ReaLMistake contains three challenging and meaningful tasks that introduce
objectively assessable errors in four categories (reasoning correctness,
instruction-following, context-faithfulness, and parameterized knowledge),
eliciting naturally observed and diverse errors in responses of GPT-4 and Llama
2 70B annotated by experts. We use ReaLMistake to evaluate error detectors
based on 12 LLMs. Our findings show: 1) Top LLMs like GPT-4 and Claude 3 detect
errors made by LLMs at very low recall, and all LLM-based error detectors
perform much worse than humans. 2) Explanations by LLM-based error detectors
lack reliability. 3) LLM-based error detection is sensitive to small changes
in prompts but remains challenging to improve. 4) Popular approaches to
improving LLMs, including self-consistency and majority vote, do not improve
the error detection performance. Our benchmark and code are provided at
https://github.com/psunlpgroup/ReaLMistake.
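The findings above concern LLMs prompted to judge whether another LLM's response contains an error, optionally aggregating several sampled judgments by majority vote. The sketch below illustrates that general setup only; the prompt wording, model name, answer parsing, and vote count are hypothetical and are not taken from the paper or the ReaLMistake repository.

# A minimal sketch of an LLM-based binary error detector with majority vote.
# Assumes the official openai>=1.0 Python client; all prompt text is illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()

DETECTOR_PROMPT = """You are checking an LLM response for mistakes.
Task instruction:
{instruction}

Model response:
{response}

Does the response contain any error (reasoning, instruction-following,
context-faithfulness, or factual knowledge)? Answer with a single word:
"error" or "no_error"."""


def judge_once(instruction: str, response: str, model: str = "gpt-4") -> str:
    """Ask the detector model once whether the response contains an error."""
    out = client.chat.completions.create(
        model=model,
        temperature=1.0,  # sampling gives diverse judgments for the vote below
        messages=[{"role": "user",
                   "content": DETECTOR_PROMPT.format(instruction=instruction,
                                                     response=response)}],
    )
    text = out.choices[0].message.content.strip().lower()
    # Naive parsing: treat any answer containing "no" as a no-error judgment.
    return "no_error" if "no" in text else "error"


def majority_vote(instruction: str, response: str, k: int = 5) -> str:
    """Sample k independent judgments and return the majority label."""
    votes = Counter(judge_once(instruction, response) for _ in range(k))
    return votes.most_common(1)[0][0]

Per finding 4 in the abstract, aggregating judgments this way (or via self-consistency) did not improve error detection performance in the paper's experiments.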