Kernel methods match Deep Neural Networks on TIMIT

ICASSP (2014)

Cited by 161
Abstract
Despite their theoretical appeal and grounding in tractable convex optimization techniques, kernel methods are often not the first choice for large-scale speech applications due to their significant memory requirements and computational expense. In recent years, randomized approximate feature maps have emerged as an elegant mechanism to scale up kernel methods. Still, in practice, a large number of random features is required to obtain acceptable accuracy in predictive tasks. In this paper, we develop two algorithmic schemes to address this computational bottleneck in the context of kernel ridge regression. The first scheme is a specialized distributed block coordinate descent procedure that avoids the explicit materialization of the feature space data matrix, while the second scheme gains efficiency by combining multiple weak random feature models in an ensemble learning framework. We demonstrate that these schemes enable kernel methods to match the performance of state-of-the-art Deep Neural Networks on TIMIT for speech recognition and classification tasks. In particular, we obtain the best classification error rates reported on TIMIT using kernel methods.
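
Both schemes in the abstract build on the same primitive: replacing the kernel with an explicit randomized feature map and fitting ridge regression in that space. The sketch below is a minimal illustration of that primitive (random Fourier features for an RBF kernel followed by kernel ridge regression on one-hot labels), not the paper's distributed or ensemble implementation; all function names, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
    """Map X (n_samples, d) to random Fourier features approximating
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral density of the RBF kernel is Gaussian with variance 2*gamma.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def ridge_fit(Z, Y, lam=1e-3):
    """Solve (Z^T Z + lam * I) beta = Z^T Y for the regression weights."""
    D = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ Y)

# Toy usage: one-hot ridge regression used as a classifier (argmax of scores).
X_train = np.random.randn(500, 40)           # e.g. 40-dim acoustic features
y_train = np.random.randint(0, 3, size=500)  # 3 toy classes
Y = np.eye(3)[y_train]                       # one-hot targets

Z = random_fourier_features(X_train, n_features=1000)
beta = ridge_fit(Z, Y, lam=1e-2)
pred = np.argmax(Z @ beta, axis=1)
```

In practice the number of random features needed for good accuracy is large, which is the bottleneck the paper's distributed block coordinate descent (avoiding materializing the full feature matrix) and ensemble-of-weak-models schemes are designed to address.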
Keywords
optimisation, deep neural networks, kernel ridge regression, speech recognition, kernel methods, TIMIT, ensemble learning framework, large-scale speech applications, learning (artificial intelligence), regression analysis, feature space data matrix, speech classification tasks, tractable convex optimization techniques, deep learning, random features, multiple weak random feature models, large-scale kernel machines, distributed computing, specialized distributed block coordinate descent procedure, randomized approximate feature maps, neural nets, hidden Markov models, computational modeling, kernel, neural networks, training data