Enhancing Trust in LLM-Generated Code Summaries with Calibrated Confidence Scores
arXiv (2024)
Abstract
A good summary can be very useful during program comprehension. While a
brief, fluent, and relevant summary is helpful, it requires significant
human effort to produce. Good summaries are often unavailable in software
projects, making maintenance more difficult. There has been a considerable
body of research into automated AI-based methods, using Large Language Models
(LLMs), to generate summaries of code; there has also been quite a bit of work
on ways to measure the performance of such summarization methods, with special
attention paid to how closely these AI-generated summaries resemble a summary a
human might have produced. Measures such as BERTScore and BLEU have been
suggested and evaluated with human-subject studies.
However, LLMs often err and generate something quite unlike what a human
might say. Given an LLM-produced code summary, is there a way to gauge whether
it is likely to be sufficiently similar to a human-produced summary, or not? In
this paper, we study this question as a calibration problem: given a summary
from an LLM, can we compute a confidence measure that is a good indication of
whether the summary is sufficiently similar to what a human would have produced
in this situation? We examine this question using several LLMs, for several
languages, and in several different settings. We suggest an approach that
provides well-calibrated predictions of the likelihood of similarity to human
summaries.
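To make the calibration framing concrete, here is a minimal sketch (not the paper's exact pipeline) of how one might evaluate such a confidence measure: each LLM summary gets a raw confidence score, the binary "success" label is whether its similarity to the human reference (e.g., BERTScore) clears a chosen threshold, and calibration is measured before and after rescaling. The 0.75 threshold, the use of Platt scaling, the bin count, and the toy numbers below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_calibration_error(conf, labels, n_bins=10):
    """Bin confidences and take the weighted average of |accuracy - mean confidence| per bin."""
    conf, labels = np.asarray(conf), np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - conf[mask].mean())
    return ece

# Toy inputs: a raw confidence per summary (e.g., derived from the model's token
# probabilities) and its BERTScore against the human-written reference summary.
raw_conf   = np.array([0.92, 0.85, 0.40, 0.73, 0.55, 0.95, 0.30, 0.60])
bert_score = np.array([0.91, 0.88, 0.52, 0.80, 0.61, 0.93, 0.48, 0.70])
similar_enough = (bert_score >= 0.75).astype(int)  # assumed similarity threshold

print("ECE (raw confidence):", expected_calibration_error(raw_conf, similar_enough))

# Rescale with Platt scaling (logistic regression on the raw scores); in practice
# this would be fit on a held-out set rather than the evaluation data itself.
platt = LogisticRegression().fit(raw_conf.reshape(-1, 1), similar_enough)
cal_conf = platt.predict_proba(raw_conf.reshape(-1, 1))[:, 1]
print("ECE (rescaled confidence):", expected_calibration_error(cal_conf, similar_enough))
```

A well-calibrated measure is one where, among summaries assigned confidence near p, roughly a fraction p are in fact sufficiently similar to the human summary; lower ECE indicates better calibration.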