Configuration Validation with Large Language Models
arXiv (2023)
Abstract
Misconfigurations are major causes of software failures. Existing practices
rely on developer-written rules or test cases to validate configurations, which
are expensive. Machine learning (ML) for configuration validation is considered
a promising direction, but has been facing challenges such as the need for
large-scale field data and system-specific models. Recent advances in Large
Language Models (LLMs) show promise in addressing some of the long-lasting
limitations of ML-based configuration validation. We present a first analysis
on the feasibility and effectiveness of using LLMs for configuration
validation. We empirically evaluate LLMs as configuration validators by
developing a generic LLM-based configuration validation framework, named Ciri.
Ciri employs effective prompt engineering with few-shot learning based on both
valid configuration and misconfiguration data. Ciri checks outputs from LLMs
when producing results, addressing hallucination and nondeterminism of LLMs. We
evaluate Ciri's validation effectiveness on eight popular LLMs using
configuration data of ten widely deployed open-source systems. Our analysis (1)
confirms the potential of using LLMs for configuration validation, (2) explores
the design space of LLM-based validators like Ciri, and (3) reveals open challenges
such as ineffectiveness in detecting certain types of misconfigurations and
biases towards popular configuration parameters.
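The abstract describes two core ideas: building few-shot prompts from both valid configuration and misconfiguration examples, and checking LLM outputs to cope with hallucination and nondeterminism. The following is a minimal illustrative sketch of these ideas, not Ciri's actual implementation; all function names, labels, and the majority-vote strategy are assumptions for exposition.

```python
# Illustrative sketch only (not Ciri's code): assemble a few-shot validation
# prompt from labeled configuration examples, then reduce several possibly
# inconsistent LLM responses to a single verdict.

def build_prompt(shots, candidate):
    """Build a few-shot prompt from (config, label) pairs plus the
    candidate configuration to validate. Labels are hypothetical."""
    lines = ["Classify each configuration as VALID or MISCONFIGURED."]
    for config, label in shots:
        lines.append(f"Config: {config}\nAnswer: {label}")
    lines.append(f"Config: {candidate}\nAnswer:")
    return "\n\n".join(lines)

def majority_vote(responses):
    """Aggregate repeated LLM outputs (nondeterminism) and discard any
    answer outside the expected label set (a simple hallucination check)."""
    allowed = {"VALID", "MISCONFIGURED"}
    votes = [r.strip().upper() for r in responses
             if r.strip().upper() in allowed]
    if not votes:
        return "UNDECIDED"
    return max(allowed, key=votes.count)

# Example usage with made-up configuration snippets:
shots = [
    ("port=8080", "VALID"),
    ("port=99999", "MISCONFIGURED"),  # outside the valid TCP port range
]
prompt = build_prompt(shots, "timeout=-5")
verdict = majority_vote(["MISCONFIGURED", "misconfigured",
                         "I think it is fine"])  # free-form answer discarded
```

Querying the model several times and voting over the constrained answers is one common way to tame sampling nondeterminism; the paper's actual output-checking procedure may differ.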