A Critical Review of Large Language Model on Software Engineering: An Example from ChatGPT and Automated Program Repair
arXiv (2023)
Abstract
Large Language Models (LLMs) have been gaining increasing attention and
have demonstrated promising performance across a variety of Software Engineering
(SE) tasks, such as Automated Program Repair (APR), code summarization, and
code completion. For example, ChatGPT, the latest black-box LLM, has been
investigated by numerous recent research studies and has shown impressive
performance in various tasks. However, there exists a potential risk of data
leakage, since these LLMs are usually closed-source with unknown training
details, e.g., pre-training datasets.
In this paper, we review the bug-fixing capability of ChatGPT on a clean APR
benchmark, with several research objectives. We first introduce a new benchmark
of buggy programs and their corresponding fixes, drawn from competitive
programming problems published from 2023 onward, i.e., after the training
cutoff of ChatGPT. The results on this benchmark show that ChatGPT fixes 109
out of 151 buggy programs using the basic prompt within 35 independent rounds,
outperforming the state-of-the-art LLMs CodeT5 and PLBART by 27.5% and 62.4%
in prediction accuracy. We also investigate the impact of three types of
prompt augmentation, i.e., problem description, error feedback, and bug
localization, which together lead to 34 additional fixed bugs. Furthermore, we
discuss the interactive nature of ChatGPT and illustrate the potential of a
dialog-based repair workflow, which fixes 9 additional bugs.
Inspired by these findings, we further pinpoint various challenges and
opportunities for future SE research with such LLMs (e.g., ChatGPT). More
importantly, our work calls for more research on re-evaluating the results
obtained by existing black-box LLMs across various SE tasks, not limited to
ChatGPT on APR.
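The three prompt augmentations described above (problem description, error feedback, and bug localization) can be sketched as simple prompt construction. The function name and prompt templates below are hypothetical illustrations, not the paper's exact prompts:

```python
def build_repair_prompt(buggy_code, description=None,
                        error_feedback=None, bug_line=None):
    """Compose an APR prompt for a black-box LLM such as ChatGPT.

    The basic prompt contains only the instruction and the buggy code;
    each optional argument adds one of the three augmentations.
    """
    parts = ["Fix the bug in the following program and return the corrected code."]
    if description:
        # Augmentation 1: natural-language problem description
        parts.append(f"Problem description: {description}")
    if error_feedback:
        # Augmentation 2: error feedback (e.g., failing test or traceback)
        parts.append(f"The program fails with: {error_feedback}")
    if bug_line is not None:
        # Augmentation 3: bug localization hint
        parts.append(f"The bug is suspected at line {bug_line}.")
    parts.append("```\n" + buggy_code + "\n```")
    return "\n\n".join(parts)

# Basic prompt vs. fully augmented prompt for a toy buggy program.
basic = build_repair_prompt("print(1 + '2')")
augmented = build_repair_prompt(
    "print(1 + '2')",
    description="Print the sum of 1 and 2.",
    error_feedback="TypeError: unsupported operand type(s) for +: 'int' and 'str'",
    bug_line=1,
)
```

In the paper's setup, each resulting prompt would be sent to ChatGPT for a fixed number of independent rounds, and any returned program that passes the benchmark's tests counts as a fix.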