Prototypical Knowledge Distillation for Noise Robust Keyword Spotting

IEEE Signal Processing Letters (2022)

Cited 1 | Viewed 2
Abstract
Keyword Spotting (KWS) is an essential component of contemporary audio-based deep learning systems and should be of minimal design when the system operates in streaming, on-device environments. In our previous work, we presented robust feature extraction with a single-layer dynamic convolution model. In this letter, we extend that study to multiple layers of operation and propose a robust Knowledge Distillation (KD) learning method. Based on the distribution between class centroids and embedding vectors, we compute three distinct distance metrics for the KD training and feature-extraction processes. The results indicate that our KD method achieves KWS performance comparable to state-of-the-art models, but at lower computational cost. Furthermore, the proposed method is more robust in noisy environments than conventional KD methods.
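The abstract describes computing distances between class centroids (prototypes) and embedding vectors, in the spirit of prototypical learning. The letter does not name its three distance metrics here, so the following is only a minimal NumPy sketch of the general idea: build per-class centroids from labeled embeddings, then score embeddings against the centroids with two common distances (Euclidean and cosine), both chosen for illustration and not claimed to be the paper's metrics.

```python
import numpy as np

def class_centroids(embeddings, labels, num_classes):
    """Mean embedding (prototype) per class, prototypical-network style."""
    dim = embeddings.shape[1]
    centroids = np.zeros((num_classes, dim))
    for c in range(num_classes):
        centroids[c] = embeddings[labels == c].mean(axis=0)
    return centroids

def euclidean_dist(x, centroids):
    """Euclidean distance from each embedding to each class centroid."""
    return np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)

def cosine_dist(x, centroids):
    """Cosine distance (1 - cosine similarity) to each centroid."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    cn = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return 1.0 - xn @ cn.T

# Toy example: four 2-D embeddings from two keyword classes.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lab = np.array([0, 0, 1, 1])
protos = class_centroids(emb, lab, num_classes=2)
dists = euclidean_dist(emb, protos)
pred = dists.argmin(axis=1)  # nearest-centroid classification
```

In a KD setting, distances like these (between student embeddings and teacher-derived centroids) could serve as the soft targets or auxiliary losses; the exact formulation used in the letter is not specified in this abstract.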
Keywords
Keyword spotting, knowledge distillation, prototypical learning.