Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP)(2015)

Abstract
Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural network toolkit and completed experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark. We improve the state-of-the-art by 0.5% in the entertainment domain and by 6.7% in the movies domain.
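The Elman and Jordan architectures compared in the paper differ in what is fed back at each time step: Elman recurrence feeds the previous hidden state back into the hidden layer, while Jordan recurrence feeds back the previous output. A minimal numpy sketch of the Elman forward pass is given below; it is an illustration of the general recurrence, not the authors' Theano implementation, and all dimensions and weight scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class ElmanRNN:
    """Sketch of an Elman RNN tagger: one slot-label distribution per input word.
    (A Jordan variant would feed the previous output y_{t-1}, rather than the
    previous hidden state h_{t-1}, back into the hidden layer.)"""

    def __init__(self, n_in, n_hidden, n_out):
        scale = 0.1  # illustrative init scale, not from the paper
        self.Wx = rng.normal(0.0, scale, (n_hidden, n_in))
        self.Wh = rng.normal(0.0, scale, (n_hidden, n_hidden))
        self.Wy = rng.normal(0.0, scale, (n_out, n_hidden))
        self.bh = np.zeros(n_hidden)
        self.by = np.zeros(n_out)

    def forward(self, xs):
        """xs: sequence of input word vectors (e.g. word embeddings)."""
        h = np.zeros(self.Wh.shape[0])
        outputs = []
        for x in xs:
            # Elman recurrence: previous hidden state h_{t-1} feeds back
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.bh)
            # per-word slot-label distribution
            outputs.append(softmax(self.Wy @ h + self.by))
        return outputs
```

In a slot-filling setting each `x` would be a word embedding and each output a distribution over slot labels, trained with cross-entropy; the sketch above shows only the forward recurrence.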
Keywords
recurrent neural nets,speech recognition,elman architecture,jordan architecture,theano neural network toolkit,airline travel information system,recurrent neural network,semantic slot filling,spoken language understanding,recurrent neural network (rnn),slot filling,spoken language understanding (slu),word embedding