Adversarial Attacks on Scene Graph Generation.

IEEE Trans. Inf. Forensics Secur. (2024)

Abstract
Scene graph generation (SGG) effectively improves semantic understanding of the visual world. However, recent research has focused on enhancing SGG in non-adversarial settings, leaving the adversarial robustness of SGG models largely unexplored. To bridge this gap, we perform adversarial attacks on two typical SGG tasks: Scene Graph Detection (SGDet) and Scene Graph Classification (SGCls). Specifically, we first propose a bounding box relabeling method to reconstruct reasonable attack targets for SGCls, resolving the inconsistency between the specified bounding boxes and the scene graphs selected as attack targets. We then introduce a two-step weighted attack that removes the predicted objects and relational triples that hurt attack performance, significantly increasing the success rate of adversarial attacks on both SGG tasks. Extensive experiments demonstrate the effectiveness of our methods on five popular SGG models and four adversarial attacks. The PyTorch implementation is available in the open-source GitHub project https://github.com/Dlut-lab-zmn/SGG_Attack.
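For illustration only, below is a minimal PGD-style sketch of a weighted targeted attack in PyTorch. The sgg_model interface (a callable returning per-triple classification logits) and the triple_weights mask are hypothetical placeholders; the mask merely mimics the idea of down-weighting or removing triples that hurt attack performance and is not the paper's actual implementation (see the linked repository for that).

import torch
import torch.nn.functional as F

def weighted_targeted_pgd(sgg_model, images, target_labels, triple_weights,
                          eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb `images` toward the target triple labels, weighting each
    triple's loss (a weight of zero effectively removes that triple)."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = sgg_model(adv)  # assumed shape: (num_triples, num_classes)
        per_triple = F.cross_entropy(logits, target_labels, reduction="none")
        loss = (triple_weights * per_triple).sum()  # weighted targeted loss
        grad, = torch.autograd.grad(loss, adv)
        # Targeted attack: step against the gradient to approach the targets.
        adv = adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around the clean images and valid range.
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv.detach()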
Keywords
Scene graph generation, adversarial attack, bounding box relabeling, two-step weighted attack