Accelerating Convolutional Neural Networks by Exploiting the Sparsity of Output Activation

IEEE Transactions on Parallel and Distributed Systems (2023)

Abstract
Deep Convolutional Neural Networks (CNNs) are among the most widely used families of machine learning methods and have had a transformative effect on a wide range of applications. Previous studies have made great strides in accelerating CNNs, but they target only the input-side sparsity of activations and weights; they cannot eliminate the computations behind output zeros, because many zeros in the output are not directly caused by zero-valued positions in the input data. In this paper, we exploit output activation sparsity to reduce the execution time and energy consumption of CNNs. First, we propose an effective prediction method that leverages output activation sparsity: it predicts the polarity of convolutional-layer output activations using a singular value decomposition (SVD) based approach, and then uses the predicted negative values to skip invalid computations. Second, we design an accelerator that exploits this sparsity to accelerate CNN inference. Each processing element (PE) is equipped with a prediction unit and a non-zero-value detection unit to remove invalid computation blocks, and we propose an instruction-bypass technique that further exploits weight sparsity. An efficient dataflow-graph mapping approach and pipelined execution ensure high utilization of the computational resources. Experiments show that our approach achieves up to 1.63× speedup and 55.30% energy reduction compared with dense networks, with a slight loss of accuracy. Compared with Eyeriss, our accelerator achieves on average 1.31× performance improvement and 54% energy reduction; it also matches the performance of SnaPEA with better energy efficiency.
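The abstract's core idea of SVD-based polarity prediction can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the per-patch formulation, and the rank-1 default are assumptions for exposition. The point is that a cheap low-rank estimate of the filter response can predict the sign of an output activation, and a predicted-negative output can be skipped outright because ReLU would zero it anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_factors(w, rank):
    """Factor a (k, k) filter into rank-`rank` components via SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]  # shapes (k, r) and (r, k)

def predict_and_compute(patch, w, rank=1):
    """Return ReLU(<patch, w>) for one input patch, skipping the exact
    dot product whenever the low-rank prediction is negative."""
    a, b = low_rank_factors(w, rank)
    # Rank-r estimate of <patch, w>, computed with far fewer multiplies
    # than the full k*k inner product when rank is small.
    approx = float(np.sum((patch @ b.T) * a))
    if approx < 0:
        return 0.0, True              # predicted negative: ReLU yields 0, skip
    exact = float(np.sum(patch * w))  # full computation only when needed
    return max(exact, 0.0), False

# Example data (hypothetical): one 3x3 filter and one input patch.
w = rng.standard_normal((3, 3))
patch = rng.standard_normal((3, 3))
out, skipped = predict_and_compute(patch, w, rank=1)
```

With a low rank the prediction is approximate, which is why the paper reports a slight accuracy loss; at full rank the estimate is exact and the skip decision is always correct.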
Keywords
Accelerator, output activation, prediction, sparse convolutional neural network