On the Limitations of Targeted Adversarial Evasion Attacks Against Deep Learning Enabled Modulation Recognition

Proceedings of the ACM Workshop on Wireless Security and Machine Learning (2019)

Cited by 54 | Viewed 296
Abstract
Wireless communications have greatly benefited in recent years from advances in machine learning. A new subfield, commonly termed Radio Frequency Machine Learning (RFML), has emerged and has demonstrated the application of Deep Neural Networks to multiple spectrum sensing tasks, such as modulation recognition and specific emitter identification. Yet, recent research in the RF domain has shown that these models are vulnerable to over-the-air adversarial evasion attacks, which seek to cause minimal harm to the underlying transmission to a cooperative receiver while greatly lowering the performance of spectrum sensing tasks performed by an eavesdropper. While prior work has focused on untargeted evasion, which simply degrades classification accuracy, this paper focuses on targeted evasion attacks, which aim to masquerade as a specific signal of interest. The current work examines how a Convolutional Neural Network (CNN) based Automatic Modulation Classification (AMC) model breaks down in the presence of an adversary with direct access to its inputs. Specifically, the current work uses the adversarial perturbation power needed to change the classification from a specific source modulation to a specific target modulation as a proxy for the model's estimation of their similarity, and compares this with the known hierarchy of these human-engineered modulations. The findings conclude that the reference model breaks down in an intuitive way, which has implications for progress towards hardening RFML models.
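The methodology described above can be sketched in code. The snippet below is a minimal, illustrative toy only: it substitutes a random linear softmax classifier for the paper's CNN AMC model, uses a simple iterative targeted gradient-sign attack (the paper's exact attack is not reproduced here), and computes the perturbation-to-signal power ratio that the paper uses as a similarity proxy between a source and target modulation. All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in classifier: a linear model over flattened IQ
# samples. The paper's model is a CNN; this toy model only illustrates
# the attack procedure and the perturbation-power metric.
n_features, n_classes = 64, 4
W = rng.normal(size=(n_classes, n_features))

def logits(x):
    return W @ x

def targeted_perturbation(x, target, step=1e-3, max_iters=5000):
    """Iteratively nudge x toward `target` along the sign of the
    gradient of the target-logit margin (a simple targeted
    gradient-sign attack on the linear toy model)."""
    delta = np.zeros_like(x)
    for _ in range(max_iters):
        z = logits(x + delta)
        current = int(z.argmax())
        if current == target:
            break
        # Gradient of (target logit - current top logit) w.r.t. input;
        # for a linear model this is just a difference of weight rows.
        grad = W[target] - W[current]
        delta += step * np.sign(grad)
    return delta

x = rng.normal(size=n_features)            # stand-in for a received signal
src = int(logits(x).argmax())              # "source modulation" class
tgt = (src + 1) % n_classes                # chosen "target modulation"
delta = targeted_perturbation(x, tgt)

# Perturbation-to-signal power ratio (dB): the perturbation power needed
# to force the source -> target misclassification, used as a proxy for
# how similar the model considers the two classes.
psr_db = 10 * np.log10(np.sum(delta**2) / np.sum(x**2))
assert int(logits(x + delta).argmax()) == tgt
```

Repeating this over all ordered (source, target) pairs and tabulating `psr_db` would yield the pairwise similarity structure that the paper compares against the known hierarchy of human-engineered modulations.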
Keywords
adversarial signal processing, cognitive radio security, machine learning, modulation classification