Supervised non-Euclidean sparse NMF via bilevel optimization with applications to speech enhancement

Hands-free Speech Communication and Microphone Arrays (2014)

Citations: 48 | Views: 31
Abstract
Traditionally, NMF algorithms consist of two separate stages: a training stage, in which a generative model is learned, and a testing stage, in which the pre-learned model is used in a high-level task such as enhancement, separation, or classification. As an alternative, we propose a task-supervised NMF method that adapts the basis spectra learned in the first stage to improve performance on the specific task addressed in the second stage. We cast this problem as a bilevel optimization program that can be efficiently solved via stochastic gradient descent. The proposed approach is general enough to handle sparsity priors on the activations and to allow non-Euclidean data terms such as β-divergences. The framework is evaluated on single-channel speech enhancement tasks.
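The abstract does not spell out the bilevel program, so the following is a minimal sketch of one plausible formulation under assumed notation: W denotes the dictionary of basis spectra, H the activations, V the observed (noisy) spectrogram, S the clean training target, D_β the β-divergence data term, λ the sparsity weight, and ℓ_task the task-specific loss.

% Hedged sketch of a task-supervised sparse NMF bilevel program;
% all symbols are illustrative assumptions, not the authors' exact notation.
\begin{align*}
\min_{W \ge 0}\;\; & \ell_{\mathrm{task}}\bigl(S,\; W H^{\star}(W)\bigr) \\
\text{s.t.}\;\; & H^{\star}(W) \in \operatorname*{arg\,min}_{H \ge 0}\; D_{\beta}\bigl(V \,\|\, W H\bigr) + \lambda \lVert H \rVert_{1}
\end{align*}

Under this reading, the lower level reproduces standard sparse NMF inference with a fixed dictionary, while the upper level adapts W by stochastic gradient descent over training examples so that the resulting reconstruction minimizes the downstream enhancement error rather than the generative fit.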
Keywords
gradient methods, matrix decomposition, optimisation, speech enhancement, β-divergences, basis spectra, bilevel optimization program, generative model, non-Euclidean data terms, single-channel speech enhancement tasks, sparsity priors, stochastic gradient descent, supervised non-Euclidean sparse NMF, task-supervised NMF method, training stage, NMF, supervised learning, bilevel, task-specific learning, dictionaries, speech, optimization, noise