Deep Segment Attentive Embedding for Duration Robust Speaker Verification

2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2019

Abstract
Deep learning based speaker verification typically learns the utterance-level speaker embedding from a fixed-length local segment randomly truncated from a training utterance, while at test time the speaker is verified with the average embedding of all segments of the test utterance, which results in a critical mismatch between training and testing. This mismatch degrades the performance of speaker verification, especially when the durations of training and testing utterances differ substantially. To alleviate this issue, we propose the deep segment attentive embedding method to learn unified speaker embeddings for utterances of variable duration. Each utterance is segmented by a sliding window, and an LSTM is used to extract the embedding of each segment. Instead of using only one local segment, we use the whole utterance to learn the utterance-level embedding by applying attentive pooling to the embeddings of all segments. Moreover, a similarity loss on the segment-level embeddings is introduced to guide the segment attention toward the segments carrying more speaker-discriminative information, and it is jointly optimized with the utterance-level embedding loss. Systematic experiments on the DiDi Speaker Dataset, Tongdun, and VoxCeleb show that the proposed method significantly improves system robustness and achieves relative EER reductions of 18.3%, 50%, and 11.54%, respectively.
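To make the described pipeline concrete, the following is a minimal illustrative sketch (not the authors' released code) of segment attentive pooling over LSTM segment embeddings, written with PyTorch; the layer sizes, class name, and segment dimensions are assumptions chosen for illustration, and the segment similarity loss and the utterance-level classification loss are omitted.

```python
# Illustrative sketch only: LSTM segment embeddings pooled into one
# utterance-level embedding with learned attention weights.
# All dimensions and names below are hypothetical, not from the paper.
import torch
import torch.nn as nn


class SegmentAttentivePooling(nn.Module):
    """Embed each segment with an LSTM, then attentively pool the
    segment embeddings into a single utterance-level embedding."""

    def __init__(self, feat_dim=40, hidden_dim=256, emb_dim=128, attn_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, emb_dim)       # segment embedding
        self.attn = nn.Sequential(                       # scalar score per segment
            nn.Linear(emb_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )

    def forward(self, segments):
        # segments: (num_segments, frames_per_segment, feat_dim) for one utterance,
        # produced by a sliding window over the acoustic features.
        _, (h, _) = self.lstm(segments)                  # h: (1, num_segments, hidden_dim)
        seg_emb = self.proj(h.squeeze(0))                # (num_segments, emb_dim)
        scores = self.attn(seg_emb)                      # (num_segments, 1)
        weights = torch.softmax(scores, dim=0)           # attention over segments
        utt_emb = (weights * seg_emb).sum(dim=0)         # utterance-level embedding
        return utt_emb, seg_emb


# Example: an utterance split into 5 segments of 150 frames of 40-dim features.
model = SegmentAttentivePooling()
utt_emb, seg_emb = model(torch.randn(5, 150, 40))
print(utt_emb.shape)  # torch.Size([128])
```

In this sketch, the returned segment embeddings would feed the segment-level similarity loss while the pooled utterance embedding feeds the utterance-level loss, so that both objectives can be optimized jointly as the abstract describes.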
Keywords
deep segment attentive embedding,duration robust speaker verification,deep learning based speaker verification,fixed-length local segment,utterance-level speaker embedding,average embedding,testing utterances,unified speaker embeddings,utterance-level embedding,segment-level embeddings,segment attention,speaker discriminations,utterance-level embeddings loss,DiDi Speaker Dataset,sliding window,LSTM