A Note on Improved Loss Bounds for Multiple Kernel Learning

CoRR(2011)

Abstract
In this paper, we correct an upper bound, presented in , on the generalisation error of classifiers learned through multiple kernel learning. The bound in  is based on Rademacher complexity and has an additive dependence on the logarithm of the number of kernels and on the margin achieved by the classifier. However, parts of the proof contain errors, which we correct in this paper. Unfortunately, the corrected result is a risk bound with a multiplicative dependence on the logarithm of the number of kernels and the margin achieved by the classifier.
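To make the additive-versus-multiplicative distinction concrete, the schematic forms below contrast the two kinds of dependence for a hypothesis built from M kernels with margin γ on n samples. These are illustrative sketches of the generic shape of such margin-based risk bounds, not the paper's actual bounds, and the constants C₁, C₂, C₃ are placeholders.

```latex
% Illustrative only: schematic margin-based risk bounds for M kernels,
% margin \gamma, and sample size n (not the bounds from the paper).

% Additive dependence: \log M and 1/\gamma enter as separate summands.
R(f) \;\lesssim\; \widehat{R}_{\gamma}(f)
      \;+\; \frac{C_1}{\gamma\sqrt{n}}
      \;+\; C_2\,\sqrt{\frac{\log M}{n}}

% Multiplicative dependence: the corrected form couples the two factors,
% so the complexity term grows like \sqrt{\log M}\,/\,\gamma.
R(f) \;\lesssim\; \widehat{R}_{\gamma}(f)
      \;+\; \frac{C_3}{\gamma}\,\sqrt{\frac{\log M}{n}}
```

A multiplicative bound is weaker: enlarging the kernel family inflates the margin term itself rather than adding a separate, independently decaying penalty.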