Convolutional Neural Network and Convex Optimization

Semantic Scholar (2014)

Abstract
This report shows that the performance of deep convolutional neural networks can be improved by incorporating convex optimization techniques. First, we find that the sub-models learned by dropout can be combined more effectively by solving a convex problem, and we generalize this idea to models that are not trained with dropout. Compared to traditional methods, we obtain improvements of 0.22% and 0.76% in test accuracy on the CIFAR-10 dataset. Second, we investigate the performance of different loss functions borrowed from the convex optimization community and find that the choice of loss function matters significantly. We also implement a novel loss based on the idea of a One-Versus-One SVM, which has not previously been explored in the literature. Experiments show that it gives performance comparable to the standard cross-entropy loss, without being fully tuned.
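The abstract does not spell out the convex problem used to combine dropout sub-models. A minimal sketch of one natural formulation: learn mixing weights on the probability simplex that minimize the cross-entropy of the weighted-average prediction on held-out data. The function name `combine_submodels` and the use of SciPy's SLSQP solver are illustrative assumptions, not the paper's implementation; the objective is convex in the weights, so any convex solver would do.

```python
# Sketch (assumed formulation, not the paper's exact method): combine K
# sub-model predictions with simplex weights chosen by convex optimization.
import numpy as np
from scipy.optimize import minimize

def combine_submodels(probs, labels):
    """probs: (K, N, C) class probabilities from K sub-models.
    labels: (N,) integer labels. Returns simplex weights of shape (K,)."""
    K, N, _ = probs.shape

    def neg_log_likelihood(w):
        mixed = np.tensordot(w, probs, axes=1)           # (N, C) weighted average
        return -np.mean(np.log(mixed[np.arange(N), labels] + 1e-12))

    w0 = np.full(K, 1.0 / K)                             # start from uniform weights
    res = minimize(
        neg_log_likelihood, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * K,                         # each weight in [0, 1]
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # weights sum to 1
    )
    return res.x
```

Because the negative log-likelihood of a convex combination of fixed predictions is convex in the weights, the solver reaches a global optimum, which is what makes this combination step cheap relative to retraining.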
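The exact form of the One-Versus-One SVM loss is likewise not given in the abstract. One plausible reading, sketched below under that assumption, applies a hinge penalty to every pair consisting of the true class and a competing class whose score comes within a margin of the true class's score; `ovo_hinge_loss` and its `margin` parameter are hypothetical names for illustration.

```python
# Sketch (assumed pairwise-hinge reading of a One-Versus-One SVM loss).
import numpy as np

def ovo_hinge_loss(scores, labels, margin=1.0):
    """scores: (N, C) raw class scores; labels: (N,) integer labels.
    Sums a hinge penalty over every (true class, other class) pair
    where the other class's score violates the margin."""
    N = scores.shape[0]
    true = scores[np.arange(N), labels][:, None]         # (N, 1) true-class scores
    hinge = np.maximum(0.0, margin - (true - scores))    # (N, C) pairwise margins
    hinge[np.arange(N), labels] = 0.0                    # exclude the (y, y) pair
    return hinge.sum(axis=1).mean()
```

Unlike cross-entropy, this loss is zero once every competing class is beaten by the margin, so only examples near the decision boundary contribute gradients.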