Blockout: Dynamic Model Selection for Hierarchical Deep Networks

2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Abstract
Most deep architectures for image classification--even those that are trained to classify a large number of diverse categories--learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, showing improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.
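The abstract describes Blockout as a generalization of Dropout in which block structure is parametrized so that it can be learned by back-propagation. A minimal NumPy sketch of that idea (not the authors' implementation; the dimensions, the softmax relaxation, and the `blockout_linear` helper are illustrative assumptions): each unit gets learnable soft block-assignment weights, and a connection is retained in proportion to how much its two endpoints share a block.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 8 input units, 6 output units, 3 candidate blocks.
n_in, n_out, k = 8, 6, 3

# Learnable parameters: a full weight matrix plus per-unit
# block-assignment logits for the input and output layers.
W = rng.normal(size=(n_out, n_in))
in_logits = rng.normal(size=(n_in, k))
out_logits = rng.normal(size=(n_out, k))

# Soft block assignments: a differentiable relaxation of assigning
# each unit to a hard cluster (what allows structure learning by SGD).
C_in = softmax(in_logits)    # shape (n_in, k), rows sum to 1
C_out = softmax(out_logits)  # shape (n_out, k), rows sum to 1

# Connection mask: M[i, j] = sum_k C_out[i, k] * C_in[j, k].
# It is near 1 when units i and j occupy the same block and near 0
# when they do not, so the mask carves the layer into sub-networks.
M = C_out @ C_in.T           # shape (n_out, n_in), entries in (0, 1)

def blockout_linear(x):
    # Masked forward pass; gradients w.r.t. the assignment logits
    # would let back-propagation learn the block structure itself.
    return (M * W) @ x

x = rng.normal(size=n_in)
y = blockout_linear(x)
```

With hard 0/1 assignments sampled at random, the mask degenerates to a structured form of Dropout, which is the sense in which the paper calls Blockout a generalization of it.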
Keywords
Blockout, dynamic model selection, image classification, shared image representations, regularization, model selection, architecture learning, parameter learning, Dropout, hierarchical architecture parametrization, structure learning, back-propagation, CIFAR, ImageNet, classification accuracy, regularization performance, hierarchical network structures