VulLibGen: Generating Names of Vulnerability-Affected Packages via a Large Language Model
arXiv (2023)
Abstract
Security practitioners maintain vulnerability reports (e.g., GitHub Advisory)
to help developers mitigate security risks. An important task for these
databases is automatically extracting structured information mentioned in the
report, e.g., the affected software packages, to accelerate the defense of the
vulnerability ecosystem.
However, it is challenging for existing work on affected package
identification to achieve high accuracy. One reason is that all existing work
relies on relatively small models, which cannot harness the knowledge
and semantic capabilities of large language models.
To address this limitation, we propose VulLibGen, the first method to use a
large language model (LLM) for affected package identification. In contrast to
existing work, VulLibGen directly generates the name of the affected package.
To improve accuracy, VulLibGen employs supervised fine-tuning (SFT),
retrieval-augmented generation (RAG), and a local search algorithm. The local
search algorithm is a novel post-processing algorithm we introduce to reduce
hallucination in the generated package names. Our evaluation results show that
VulLibGen achieves an average accuracy of 0.806 for identifying vulnerable
packages in the four most popular ecosystems in GitHub Advisory (Java, JS,
Python, Go), while the best average accuracy in previous work is 0.721.
Additionally, VulLibGen has high value for security practice: we submitted 60
pairs to GitHub Advisory (covering all four ecosystems); 34 of them have been
accepted and merged and 20 are pending approval. Our code and dataset can be
found in the attachments.
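The abstract describes the local search step only at a high level. A minimal sketch of the underlying idea, as one might reconstruct it: snap a possibly hallucinated generated name to the closest real package name in the ecosystem's package list. The similarity measure (difflib's sequence ratio) and the function name `local_search` are assumptions for illustration, not the paper's actual algorithm.

```python
import difflib

def local_search(generated: str, known_packages: list[str]) -> str:
    """Map a possibly hallucinated package name to the closest real one.

    Illustrative sketch only: the real VulLibGen post-processing may use a
    different similarity measure and search strategy.
    """
    if generated in known_packages:
        return generated  # already a real package, keep it
    matches = difflib.get_close_matches(generated, known_packages, n=1, cutoff=0.0)
    return matches[0] if matches else generated

# Example with a hypothetical Maven-style package list:
packages = [
    "org.apache.commons:commons-text",
    "com.fasterxml.jackson.core:jackson-databind",
]
print(local_search("org.apache.common:commons-text", packages))
# → org.apache.commons:commons-text
```

Constraining the output to an existing package list guarantees the final answer is never a nonexistent package, which is the point of the hallucination-reduction step.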