Temporal Saliency Adaptation in Egocentric Videos.

arXiv: Computer Vision and Pattern Recognition (2018)

Abstract
This work adapts a deep neural model for image saliency prediction to the temporal domain of egocentric video. We compute the saliency map for each video frame, first with an off-the-shelf model trained on static images, and second by adding convolutional or conv-LSTM layers trained with a dataset for video saliency prediction. We study each configuration on EgoMon, a new dataset of seven egocentric videos recorded by three subjects in both free-viewing and task-driven setups. Our results indicate that the temporal adaptation is beneficial when the viewer is not moving and observes the scene from a narrow field of view. Encouraged by this observation, we compute and publish the saliency maps for the EPIC Kitchens dataset, in which viewers are cooking.
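
A minimal sketch of the temporal-adaptation idea the abstract describes: per-frame features from a frozen image-saliency backbone are refined by a convolutional LSTM before a saliency map is read out. This is an illustration under assumptions, not the paper's exact architecture; the names `ConvLSTMCell` and `TemporalSaliencyHead`, the channel sizes, and the readout are all hypothetical, and the real model's choice of backbone and trained layers may differ.

```python
# Hypothetical sketch (PyTorch): a conv-LSTM temporal layer over per-frame
# saliency features. Only the temporal head would be trained on video data,
# mirroring the abstract's "adding convolutional or conv-LSTM layers".
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell operating on 2D feature maps."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # One convolution produces the four gate pre-activations at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update cell memory
        h = o * torch.tanh(c)           # emit hidden feature map
        return h, c

class TemporalSaliencyHead(nn.Module):
    """Refines per-frame features with a conv-LSTM, then predicts a map."""
    def __init__(self, feat_ch=64, hid_ch=64):
        super().__init__()
        self.cell = ConvLSTMCell(feat_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, 1, kernel_size=1)

    def forward(self, feats):            # feats: (batch, time, ch, H, W)
        b, t, _, hgt, wid = feats.shape
        h = feats.new_zeros(b, self.cell.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        maps = []
        for step in range(t):            # recurrence over video frames
            h, c = self.cell(feats[:, step], (h, c))
            maps.append(torch.sigmoid(self.readout(h)))
        return torch.stack(maps, dim=1)  # (batch, time, 1, H, W)

# Usage: in practice, feats would come from a frozen image-saliency network.
frames = torch.randn(2, 8, 64, 32, 32)       # dummy per-frame feature maps
print(TemporalSaliencyHead()(frames).shape)  # torch.Size([2, 8, 1, 32, 32])
```

The purely convolutional variant mentioned in the abstract would replace the recurrent cell with plain 2D (or 3D) convolutions over stacked frames, trading the carried hidden state for a fixed temporal window.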