SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding
CoRR (2024)
Abstract
As large language models (LLMs) become increasingly integrated into
real-world applications such as code generation and chatbot assistance,
extensive efforts have been made to align LLM behavior with human values,
including safety. Jailbreak attacks, aiming to provoke unintended and unsafe
behaviors from LLMs, remain a significant LLM safety threat. In this
paper, we aim to defend LLMs against jailbreak attacks by introducing
SafeDecoding, a safety-aware decoding strategy for LLMs to generate helpful and
harmless responses to user queries. Our insight in developing SafeDecoding is
based on the observation that, even though the probabilities of tokens
representing harmful content outweigh those of tokens representing harmless
responses, safety disclaimers still appear among the top tokens once tokens
are sorted by probability in descending order. This allows us to mitigate
jailbreak attacks
by identifying safety disclaimers and amplifying their token probabilities,
while simultaneously attenuating the probabilities of token sequences that are
aligned with the objectives of jailbreak attacks. We perform extensive
experiments on five LLMs using six state-of-the-art jailbreak attacks and four
benchmark datasets. Our results show that SafeDecoding significantly reduces
the attack success rate and harmfulness of jailbreak attacks without
compromising the helpfulness of responses to benign user queries, while also
outperforming six defense methods.
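
To make the decoding step concrete, below is a minimal PyTorch sketch of the amplify/attenuate operation the abstract describes. The token sets `safety_token_ids` and `harmful_token_ids` and the strength parameter `alpha` are illustrative assumptions; the abstract does not specify how these tokens are identified, and the paper's actual procedure may differ.

```python
# A hedged sketch, not the paper's implementation: it assumes the two
# token sets have already been identified by some upstream mechanism.
import torch
import torch.nn.functional as F


def safety_aware_next_token_probs(
    logits: torch.Tensor,    # (vocab_size,) raw next-token logits from the LLM
    safety_token_ids: list,  # tokens that begin safety disclaimers (assumed given)
    harmful_token_ids: list, # tokens aligned with the jailbreak objective (assumed given)
    alpha: float = 2.0,      # amplification/attenuation strength (hypothetical parameter)
) -> torch.Tensor:
    """Return a renormalized next-token distribution with safety-disclaimer
    tokens boosted and attack-aligned tokens suppressed."""
    adjusted = logits.clone()
    adjusted[safety_token_ids] += alpha   # amplify safety disclaimers
    adjusted[harmful_token_ids] -= alpha  # attenuate attack-aligned tokens
    return F.softmax(adjusted, dim=-1)


# Toy usage on a 10-token vocabulary: token 3 starts a disclaimer
# (e.g., "Sorry"), token 7 would continue a harmful completion.
logits = torch.randn(10)
probs = safety_aware_next_token_probs(logits, safety_token_ids=[3], harmful_token_ids=[7])
next_token = torch.argmax(probs).item()  # greedy pick from the reweighted distribution
```

Shifting a logit by ±alpha multiplies the corresponding probability by e^{±alpha} after the softmax, which is one simple way to realize the amplification and attenuation the abstract describes.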