Unit Test Generation using Generative AI: A Comparative Performance Analysis of Autogeneration Tools
CoRR (2023)
Abstract
Generating unit tests is a crucial task in software development, demanding
substantial time and effort from programmers. The advent of Large Language
Models (LLMs) introduces a novel avenue for unit test script generation. This
research aims to experimentally investigate the effectiveness of LLMs,
specifically exemplified by ChatGPT, for generating unit test scripts for
Python programs, and how the generated test cases compare with those generated
by an existing unit test generator (Pynguin). For experiments, we consider
three types of code units: 1) Procedural scripts, 2) Function-based modular
code, and 3) Class-based code. The generated test cases are evaluated based on
criteria such as coverage, correctness, and readability. Our results show that
ChatGPT's coverage is comparable to Pynguin's, and in some cases it
outperforms Pynguin. We also find that about a third of the assertions
generated by ChatGPT in some categories were incorrect. Our results further
show minimal overlap in the statements missed by ChatGPT and Pynguin,
suggesting that combining the two tools may enhance unit test generation
performance. Finally, in our experiments, prompt engineering improved
ChatGPT's performance, achieving much higher coverage.
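To make the setup concrete, the sketch below shows the kind of function-based code unit and LLM-generated pytest script the study evaluates, plus a coverage measurement step. The module name, function, and tests are illustrative assumptions, not items from the paper's benchmark.

    # calculator.py -- a hypothetical "function-based modular code" unit,
    # standing in for the kind of module the study feeds to ChatGPT.
    def divide(a: float, b: float) -> float:
        """Return a / b, raising ValueError on division by zero."""
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    # test_calculator.py -- the style of pytest script ChatGPT typically
    # returns for such a unit (illustrative output, not from the paper).
    import pytest
    from calculator import divide

    def test_divide_returns_quotient():
        assert divide(10, 2) == 5.0

    def test_divide_by_zero_raises():
        with pytest.raises(ValueError):
            divide(1, 0)

    # Coverage, the paper's main metric, can then be measured with
    # coverage.py:  coverage run -m pytest && coverage report -m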
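For the Pynguin baseline, tests are generated from the command line against the same module. A typical invocation looks roughly like the sketch below; the paths are placeholders, and the environment variable reflects Pynguin's requirement that the user acknowledge it executes the code under test (check pynguin --help for the exact flags of the installed version).

    # Generate tests for calculator.py with Pynguin.
    export PYNGUIN_DANGER_AWARE=1
    pynguin --project-path . --module-name calculator --output-path ./tests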
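The final finding, that prompt engineering raised coverage, can be pictured as the difference between a bare request and one that names the test framework, edge cases, and coverage goal. The wording below is a hypothetical illustration, not the authors' actual prompts.

    # Hypothetical prompts illustrating the prompt-engineering effect the
    # paper reports (wording is assumed, not taken from the study).
    BASIC_PROMPT = "Write unit tests for the following Python code:\n{code}"
    ENGINEERED_PROMPT = (
        "Write pytest unit tests for the following Python code. "
        "Cover every statement and branch, including error paths and "
        "boundary values, and use pytest.raises for expected exceptions:\n"
        "{code}"
    )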