DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models
CoRR (2024)
Abstract
In the era of generative AI, diffusion models have emerged as powerful and
widely adopted tools for content generation and editing across various data
modalities, making the study of their potential security risks necessary and
critical. Recently, pioneering works have demonstrated the vulnerability of
diffusion models to backdoor attacks, calling for in-depth analysis and
investigation of the security challenges of this popular and fundamental AI
technique.
In this paper, for the first time, we systematically explore the
detectability of the poisoned noise input to backdoored diffusion models, an
important performance metric that has received little attention in existing
works. Starting from the defender's perspective, we first analyze the
properties of the trigger patterns in existing diffusion backdoor attacks,
revealing the important role of distribution discrepancy in Trojan detection.
Based on this finding, we propose a low-cost trigger detection mechanism that
can effectively identify poisoned input noise. We then take a further step to
study the same problem from the attack side, proposing a backdoor attack
strategy that learns an unnoticeable trigger to evade our proposed detection
scheme.
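To make the distribution-discrepancy idea concrete, below is a minimal sketch of such a detector: benign input noise for a diffusion model is drawn from N(0, I), so a standard goodness-of-fit test can flag inputs whose empirical distribution deviates from the standard normal. The Kolmogorov-Smirnov test, the significance level `alpha`, and the additive patch trigger are illustrative assumptions here, not the paper's exact DisDet procedure.

```python
import numpy as np
from scipy import stats

def detect_poisoned_noise(noise, alpha=1e-3):
    """Flag noise whose empirical distribution deviates from N(0, 1).

    Illustrative sketch only: the KS test and the threshold `alpha`
    are assumptions, not the paper's exact detection procedure.
    """
    flat = np.asarray(noise, dtype=np.float64).ravel()
    # One-sample Kolmogorov-Smirnov test against the standard normal CDF.
    _statistic, p_value = stats.kstest(flat, "norm")
    # Benign noise drawn from N(0, I) yields a large p-value; a trigger
    # pattern added to the noise shifts mass away from N(0, 1).
    return p_value < alpha  # True => likely poisoned


# Usage: benign Gaussian noise vs. noise with a hypothetical additive trigger.
rng = np.random.default_rng(0)
benign = rng.standard_normal((3, 32, 32))
poisoned = benign.copy()
poisoned[:, :16, :16] += 2.0  # hypothetical patch trigger
print(detect_poisoned_noise(benign))    # expected: False
print(detect_poisoned_noise(poisoned))  # expected: True
```

Such a test is low-cost in the sense the abstract describes: it operates only on the input noise, with no access to model weights or generated outputs.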
Empirical evaluations across various diffusion models and datasets
demonstrate the effectiveness of both the proposed trigger detection and the
detection-evading attack strategy. For trigger detection, our distribution
discrepancy-based solution achieves a 100% detection rate for the Trojan
triggers used in existing works. For evading trigger detection, our stealthy
trigger design approach performs end-to-end learning to make the distribution
of the poisoned noise input approach that of benign noise, enabling a nearly
100% detection pass rate while preserving high attack and benign performance
for the backdoored diffusion models.
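As a rough illustration of the detection-evading idea, one could regularize a learnable trigger so that the poisoned noise distribution stays close to the benign N(0, I). The moment-matching penalty below is an assumed stand-in for the paper's end-to-end objective; in a real attack it would be balanced against the backdoor training loss (omitted here), which keeps the trigger from collapsing to zero.

```python
import torch

def stealthiness_penalty(trigger, batch_size=64):
    """Penalize a learnable trigger for pushing poisoned noise away from N(0, I).

    Illustrative moment-matching sketch, not the paper's exact objective.
    """
    eps = torch.randn(batch_size, *trigger.shape)  # benign noise ~ N(0, I)
    poisoned = eps + trigger                       # assumed additive trigger model
    # Match the first and second moments of the poisoned noise to the
    # standard normal (zero mean, unit variance).
    return poisoned.mean().pow(2) + (poisoned.var() - 1.0).pow(2)


# Usage: in a full attack this penalty would be added to the backdoor
# training loss; alone, it simply drives the trigger toward zero.
trigger = torch.zeros(3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([trigger], lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    loss = stealthiness_penalty(trigger)  # + backdoor attack loss in practice
    loss.backward()
    optimizer.step()
```

The design intuition matches the abstract's claim: a trigger whose poisoned noise is statistically indistinguishable from N(0, I) passes distribution-based tests like the one sketched above.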