Fast Deep Neural Network Based On Intelligent Dropout And Layer Skipping

2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) (2017)

Cited by 9 | Viewed 39 times
Abstract
Deep Convolutional Neural Networks (DCNNs) are a powerful tool for object and image classification. However, the training stage of such networks is highly demanding in terms of storage space and time, and their optimization remains a challenging problem. In this paper, we propose a fast DCNN based on intelligent dropout and layer skipping. The proposed approach improves the speed of the testing stage as well as the image classification accuracy. This is made possible by three key advantages. First, features are computed rapidly using the Fast Beta Wavelet Transform. Second, the proposed intelligent dropout method selects units according to their efficiency rather than at random. Third, an image can be classified using the efficient units of earlier layer(s), skipping all subsequent hidden layers and passing directly to the output layer. Our experiments were performed on the CIFAR-10 and MNIST datasets, and the obtained results are very promising.
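To make the "intelligent" (non-random) dropout idea concrete, the sketch below keeps only the units with the highest efficiency score and deterministically zeroes the rest, in contrast to standard random dropout. The abstract does not specify the paper's exact efficiency criterion, so the mean absolute activation used as the score here, and the `keep_ratio` parameter, are assumptions for illustration only.

```python
import numpy as np

def intelligent_dropout(activations, keep_ratio=0.5):
    """Sketch of efficiency-based (non-random) dropout.

    Keeps only the most "efficient" units; the per-unit score
    (mean absolute activation) is an assumed stand-in for the
    paper's unspecified efficiency criterion.
    """
    # activations: array of shape (batch, units)
    scores = np.abs(activations).mean(axis=0)      # per-unit efficiency score (assumed)
    k = max(1, int(keep_ratio * scores.size))      # number of units to keep
    keep = np.argsort(scores)[-k:]                 # indices of the k highest-scoring units
    mask = np.zeros(scores.size)
    mask[keep] = 1.0
    return activations * mask                      # zero out the low-efficiency units

# Usage: columns 1 and 3 have the largest mean |activation|, so they survive.
a = np.array([[0.1, 2.0, 0.0, 1.5],
              [0.2, 1.8, 0.1, 1.7]])
out = intelligent_dropout(a, keep_ratio=0.5)
```

Unlike random dropout, the same units are dropped on every forward pass, which is what allows the subsequent layer-skipping step to rely on a stable set of efficient units.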
Keywords
Convolutional Neural Network, Deep architecture, Intelligent dropout, Beta wavelet, Image classification