LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild

2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)

Citations: 169
Abstract
Large-scale datasets have repeatedly proven their fundamental importance in several research fields, especially for early progress in emerging topics. In this paper, we focus on the problem of visual speech recognition, also known as lip-reading, which has received increasing interest in recent years. We present a naturally-distributed large-scale benchmark for lip-reading in the wild, named LRW-1000, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. To the best of our knowledge, it is currently the largest word-level lip-reading dataset and also the only public large-scale Mandarin lip-reading dataset. The dataset aims to cover a "natural" variability over different speech modes and imaging conditions so as to incorporate the challenges encountered in practical applications. The benchmark shows large variation in several aspects, including the number of samples per class, video resolution, lighting conditions, and speaker attributes such as pose, age, gender, and make-up. Besides providing a detailed description of the dataset and its collection pipeline, we evaluate several popular lip-reading methods and perform a thorough analysis of the results from several aspects. The results demonstrate both the consistency and the challenges of our dataset, which may open up new promising directions for future work.
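As a rough illustration of the kind of word-level lip-reading baseline typically evaluated on benchmarks like LRW-1000, the PyTorch sketch below shows a common pipeline: a 3D-convolution front-end over a grayscale mouth-crop clip, a per-frame spatial CNN, a recurrent temporal back-end, and a 1,000-way classifier (one class per Mandarin word). This is not the authors' implementation; the layer sizes, input shape (40 frames of 88x88 crops), and class name WordLipReader are illustrative assumptions.

```python
# Minimal sketch of a typical word-level lip-reading classifier (assumed
# architecture and sizes, not the paper's exact model).
import torch
import torch.nn as nn

class WordLipReader(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # Spatio-temporal front-end: (B, 1, T, H, W) -> (B, 64, T, H/4, W/4)
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Per-frame spatial trunk (stand-in for the ResNet used in common baselines)
        self.trunk = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal back-end over the per-frame features
        self.gru = nn.GRU(256, 256, num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 256, num_classes)

    def forward(self, x):  # x: (B, 1, T, H, W) grayscale mouth-region clip
        f = self.frontend(x)                       # (B, 64, T, H', W')
        b, c, t, h, w = f.shape
        f = f.transpose(1, 2).reshape(b * t, c, h, w)
        f = self.trunk(f).reshape(b, t, -1)        # (B, T, 256) per-frame features
        seq, _ = self.gru(f)                       # (B, T, 512)
        return self.head(seq.mean(dim=1))          # temporal average -> (B, num_classes)

if __name__ == "__main__":
    clips = torch.randn(2, 1, 40, 88, 88)  # 2 clips, 40 frames, 88x88 crops (assumed)
    print(WordLipReader()(clips).shape)    # torch.Size([2, 1000])
```

Variants of this pipeline differ mainly in the spatial trunk (e.g., deeper residual networks) and the temporal back-end (recurrent vs. temporal-convolutional), which is the kind of comparison the paper's evaluation covers.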
Keywords
natural variability,visual speech recognition,speaker make-up,speaker gender,speaker age,speaker pose,speaker attributes,lighting conditions,video resolution,imaging conditions,Chinese characters,Mandarin word syllables,naturally-distributed large-scale benchmark,lip-reading methods,Mandarin lip-reading dataset,LRW-1000,speech modes