Improving Deep Neural Network Performance With Kernelized Min-Max Objective

Neural Information Processing (ICONIP 2018), Part I (2018)

Abstract
In this paper, we present a novel training strategy based on a kernelized Min-Max objective that improves object recognition performance in deep neural networks (DNNs), e.g., convolutional neural networks (CNNs). Without changing any other part of the original model, the kernelized Min-Max objective combines the kernel trick with the Min-Max objective and is embedded into a high layer of the network during training. The proposed kernelized objective explicitly enforces, in a kernel space, the least compactness within each category manifold and the largest margin between different category manifolds of the learned object feature maps. With very little additional computational cost, the proposed strategy can be applied to a wide range of DNN models. Extensive experiments with a shallow convolutional neural network, a deep convolutional neural network, and a deep residual neural network on two benchmark datasets show that the proposed approach outperforms these competitive models.
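The abstract does not give the exact formulation of the objective, so the following NumPy sketch only illustrates one plausible reading of a kernelized Min-Max penalty applied to high-layer feature maps. The function name `kernelized_min_max_penalty`, the RBF kernel choice, and the `gamma` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kernelized_min_max_penalty(features, labels, gamma=1.0):
    """Hypothetical sketch of a kernelized Min-Max style penalty.

    features: (n, d) array of activations from a high layer of the network.
    labels:   (n,) integer class labels for the mini-batch.
    gamma:    RBF kernel bandwidth (illustrative choice).

    In an RBF kernel space, intra-class compactness corresponds to HIGH
    within-class kernel similarity, and inter-class margin corresponds to
    LOW between-class kernel similarity.  Minimizing
        (mean between-class similarity) - (mean within-class similarity)
    therefore pushes each category manifold to be compact while separating
    different category manifolds.
    """
    # Pairwise squared Euclidean distances, shape (n, n).
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    kernel = np.exp(-gamma * sq_dists)              # RBF (Gaussian) kernel matrix

    same = labels[:, None] == labels[None, :]       # within-class pair mask
    np.fill_diagonal(same, False)                   # ignore self-similarity
    diff = ~ (labels[:, None] == labels[None, :])   # between-class pair mask

    within = kernel[same].mean() if same.any() else 0.0
    between = kernel[diff].mean() if diff.any() else 0.0
    return between - within                         # add to the task loss with a weight
```

In a training loop, such a penalty would typically be added to the usual classification loss with a small weighting coefficient; the paper's actual embedding of the objective into the network may differ from this sketch.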