RACKNet: Robust Allocation of Convolutional Kernels in Neural Networks for Image Classification.

Yash Garg, K. Selçuk Candan

ICMR '19: International Conference on Multimedia Retrieval, Ottawa, ON, Canada, June 2019

Citations: 8 | Views: 31

Abstract
Despite the impressive success of convolutional neural networks when their hyper-parameters are suitably fine-tuned, the design of good network architectures remains an art-form rather than a science: while various search techniques, such as grid search, have been proposed to find effective hyper-parameter configurations, these parameters are often hand-crafted (or the bounds of the search space are provided by a user). In this paper, we argue, and experimentally show, that we can minimize the need for hand-crafting by relying on the dataset itself. In particular, we show that the dimensions, distributions, and complexities of localized features extracted from the data can inform the structure of the neural network and help better allocate limited resources (such as kernels) to the various layers of the network. To achieve this, we first present several hypotheses that link the properties of the localized image features to CNN and RCNN architectures and then, relying on these hypotheses, present the RACKNet framework, which aims to learn multiple hyper-parameters by extracting information encoded in the input datasets. Experimental evaluations of RACKNet against major benchmark datasets, such as MNIST, SVHN, CIFAR10, COIL20 and ImageNet, show that RACKNet provides significant improvements in network design and robustness to changes in the network.
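The core idea described above — using dataset-derived feature statistics to allocate a limited kernel budget across layers — can be illustrated with a minimal sketch. Note this is a hypothetical illustration, not RACKNet's actual algorithm; the function name, the proportional-allocation rule, and the complexity scores are all assumptions for demonstration:

```python
# Hypothetical sketch (NOT the RACKNet algorithm): split a fixed kernel
# budget across convolutional layers in proportion to per-layer
# feature-complexity scores estimated from the dataset.

def allocate_kernels(total_kernels, complexity_scores, min_per_layer=1):
    """Distribute `total_kernels` across layers proportionally to
    `complexity_scores`, guaranteeing each layer at least
    `min_per_layer` kernels."""
    n_layers = len(complexity_scores)
    budget = total_kernels - min_per_layer * n_layers
    total_score = sum(complexity_scores)
    alloc = [min_per_layer + int(budget * s / total_score)
             for s in complexity_scores]
    # Hand any kernels lost to integer truncation to the most
    # complex layers first.
    leftover = total_kernels - sum(alloc)
    by_complexity = sorted(range(n_layers),
                           key=lambda i: complexity_scores[i],
                           reverse=True)
    for i in by_complexity[:leftover]:
        alloc[i] += 1
    return alloc

# Example: a 128-kernel budget over three layers whose (assumed)
# complexity scores are 0.2, 0.5, and 0.3.
print(allocate_kernels(128, [0.2, 0.5, 0.3]))  # → [26, 64, 38]
```

The proportional rule here stands in for whatever mapping the paper's hypotheses establish between localized feature properties and layer capacity; the point is only that the allocation is driven by the data rather than hand-tuned per layer.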
Keywords
Convolutional neural networks, recurrent neural networks, deep learning, meta-learning, hyper-parameter optimization