Masked Proxy Loss for Text-Independent Speaker Verification.

Interspeech 2021

Abstract
Open-set speaker recognition can be regarded as a metric learning problem, the goal of which is to maximize inter-class variance and minimize intra-class variance. Supervised metric learning can be categorized into entity-based learning and proxy-based learning (unlike the definition in [Proxyanchor], we adopt the term entity-based rather than pair-based learning to describe the data-to-data relationship; an entity refers to a real data point). Most existing metric learning objectives, such as the Contrastive, Triplet, Prototypical, and GE2E losses, belong to the former category, and their performance is either highly dependent on the sample mining strategy or restricted by insufficient label information in the mini-batch. Proxy-based losses mitigate both shortcomings; however, fine-grained connections among entities are either not leveraged or leveraged only indirectly. This paper proposes a Masked Proxy (MP) loss that directly incorporates both proxy-based and entity-based relationships. We further propose a Multinomial Masked Proxy (MMP) loss that leverages the hardness of entity-to-entity pairs. These methods are evaluated on the VoxCeleb test set and achieve state-of-the-art Equal Error Rate (EER).
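To make the abstract's idea of combining proxy-based and entity-based relations more concrete, below is a minimal, hypothetical PyTorch sketch of a masked-proxy-style loss. It is not the paper's exact MP or MMP formulation (which is defined in the paper itself); the class name `MaskedProxyLoss`, the centroid-based masking rule, and the `scale` parameter are illustrative assumptions. The sketch replaces the learned proxies of classes present in the mini-batch with the batch centroids of those classes, so that both proxy-to-entity and entity-to-entity information enter a standard softmax objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedProxyLoss(nn.Module):
    """Illustrative sketch only: one possible way to mix proxy-based and
    entity-based relations, loosely inspired by the abstract. The actual
    Masked Proxy (MP) / Multinomial Masked Proxy (MMP) losses are defined
    in the paper and may differ from this."""

    def __init__(self, num_classes: int, embed_dim: int, scale: float = 30.0):
        super().__init__()
        # One learnable proxy vector per speaker class.
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale  # assumed cosine-logit scaling factor

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # L2-normalize embeddings and proxies so dot products are cosine similarities.
        emb = F.normalize(embeddings, dim=1)
        proxies = F.normalize(self.proxies, dim=1)

        # "Mask" the proxies of classes seen in this mini-batch: use the
        # in-batch centroid of each such class instead of its stored proxy,
        # injecting entity-to-entity information into the comparison bank.
        bank = proxies.clone()
        for c in labels.unique():
            centroid = emb[labels == c].mean(dim=0)
            bank[c] = F.normalize(centroid, dim=0)

        # Scaled cosine logits against the partially masked proxy bank,
        # optimized with ordinary cross-entropy.
        logits = self.scale * emb @ bank.t()
        return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage: 8 utterance embeddings of dimension 192 from 4 speakers.
    loss_fn = MaskedProxyLoss(num_classes=100, embed_dim=192)
    x = torch.randn(8, 192)
    y = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
    print(loss_fn(x, y))
```

A hardness-weighted variant in the spirit of the MMP loss could, for example, reweight the entity-to-entity terms by their pairwise similarity before forming the logits, but the precise weighting scheme should be taken from the paper rather than from this sketch.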
Keywords
speaker recognition, deep metric learning, masked proxy, fine-grained