Pruning ConvNets Online for Efficient Specialist Models

2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Cited by 17
Abstract
Convolutional neural networks (CNNs) excel in various computer vision tasks but are extremely computationally intensive and power hungry to run on mobile and embedded devices. Recent pruning techniques can reduce the computation and memory requirements of CNNs, but a costly retraining step is needed to restore the classification accuracy of the pruned model. In this paper, we present evidence that when only a subset of the classes needs to be classified, we can prune a model and achieve reasonable classification accuracy without retraining. The resulting specialist model requires less energy and time to run than the original full model. To compensate for the pruning, we take advantage of the redundancy among filters and class-specific features. We show that even simple methods, such as replacing a pruned channel with the mean of the remaining channels or with its most correlated channel, can boost the accuracy of the pruned model to reasonable levels.
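The compensation idea described in the abstract can be illustrated with a minimal sketch. The code below is not from the paper; it is an assumed NumPy illustration in which `build_replacement_map`, `compensate`, `calib_acts`, and `pruned_idx` are hypothetical names. It shows the two simple strategies the abstract mentions: filling a pruned channel with the mean of the kept channels, or copying the kept channel that correlates most with it (with correlations estimated offline from full-model activations on calibration data).

```python
import numpy as np

def build_replacement_map(calib_acts, pruned_idx):
    """Hypothetical sketch: given full-model calibration activations of shape
    (N, C, H, W), map each pruned channel to its most correlated kept channel."""
    N, C = calib_acts.shape[:2]
    flat = calib_acts.transpose(1, 0, 2, 3).reshape(C, -1)  # (C, N*H*W)
    corr = np.corrcoef(flat)                                 # C x C channel correlations
    kept = [c for c in range(C) if c not in set(pruned_idx)]
    return {c: max(kept, key=lambda k: corr[c, k]) for c in pruned_idx}

def compensate(feature_map, pruned_idx, repl_map=None):
    """Fill pruned channels of a (C, H, W) feature map at inference time,
    either with the mean of the kept channels (repl_map is None) or by
    copying the most correlated kept channel from the precomputed mapping."""
    C = feature_map.shape[0]
    kept = [c for c in range(C) if c not in set(pruned_idx)]
    out = feature_map.copy()
    if repl_map is None:                      # mean-replacement variant
        out[list(pruned_idx)] = feature_map[kept].mean(axis=0)
    else:                                     # most-correlated-channel variant
        for c in pruned_idx:
            out[c] = feature_map[repl_map[c]]
    return out
```

Under this sketch, the correlation-based mapping is computed once from the unpruned model, so no retraining is required; at inference the specialist model only copies or averages already-computed channels.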
Keywords
online ConvNets pruning,convolutional neural networks,computer vision related tasks,mobile devices,embedded devices,classification accuracy,pruned model,class-specific features,filters,specialist models