Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering
CoRR (2024)
Abstract
Jailbreaking techniques aim to probe the boundaries of safety in large
language models (LLMs) by inducing them to generate toxic responses to
malicious queries, a significant concern within the LLM community. While
existing jailbreaking methods primarily rely on prompt engineering, altering
inputs to evade LLM safety mechanisms, they suffer from low attack success
rates and significant time overheads, rendering them inflexible. To overcome
these limitations, we propose a novel jailbreaking approach, named Jailbreaking
LLMs through Representation Engineering (JRE). Our method requires only a small
number of query pairs to extract “safety patterns” that can be used to
circumvent the target model's defenses, achieving unprecedented jailbreaking
performance. Building upon these findings, we also introduce a novel defense
framework inspired by JRE principles, which demonstrates notable effectiveness.
Extensive experimentation confirms the superior performance of the JRE attacks
and the robustness of the JRE defense framework. We hope this study contributes
to advancing the understanding of model safety issues through the lens of
representation engineering.
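The abstract describes extracting "safety patterns" from a small number of query pairs and using them to bypass a model's defenses. The paper does not give its implementation here, but the idea can be illustrated with a common representation-engineering recipe: take the mean difference between hidden states for paired malicious and benign queries as a direction, then project that direction out of the model's activations at inference time. The sketch below is a hypothetical toy in numpy; all names (`extract_safety_pattern`, `remove_pattern`) and the synthetic hidden states are assumptions, not the authors' code, and real hidden states would come from an LLM's intermediate layers.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical hidden states for "query pairs": each malicious query is
# paired with a benign counterpart. We synthesize malicious states as
# benign states shifted along an unknown "safety" direction.
benign = rng.normal(size=(4, dim))
safety_direction = rng.normal(size=dim)
safety_direction /= np.linalg.norm(safety_direction)
malicious = benign + 2.0 * safety_direction

def extract_safety_pattern(h_malicious: np.ndarray, h_benign: np.ndarray) -> np.ndarray:
    """Estimate the safety pattern as the mean activation difference
    across the paired queries (a difference-of-means direction)."""
    return (h_malicious - h_benign).mean(axis=0)

def remove_pattern(h: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Project the safety pattern out of a hidden state, the step a
    JRE-style attack would apply to the target model's activations."""
    unit = pattern / np.linalg.norm(pattern)
    return h - (h @ unit) * unit

pattern = extract_safety_pattern(malicious, benign)
edited = remove_pattern(malicious[0], pattern)

# The edited state has (near-)zero component along the recovered pattern.
residual = abs(edited @ (pattern / np.linalg.norm(pattern)))
print(residual)
```

In this toy setup the recovered `pattern` equals the planted direction exactly, so the projection drives the residual component to (numerical) zero; with real model activations the direction is only an estimate, which is why the method needs a small set of query pairs rather than a single one.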