Target Speaker Extraction Using Attention-Enhanced Temporal Convolutional Network

Electronics (2024)

Abstract
Recorded conversations often contain several people talking at once. While human ears can filter out unwanted sounds, overlapping speech is challenging for automatic speech recognition (ASR) systems and reduces their accuracy. To address this issue, preprocessing mechanisms such as speech separation and target speaker extraction are needed to isolate each person's speech. With the development of deep learning, the quality of separated speech has improved significantly. Our work focuses on speaker extraction, which comprises a primary speech extraction system and a secondary subsystem that supplies information about the target speaker. We adopt a temporal convolutional network (TCN) architecture as the foundation of the speech extraction model; a TCN enables convolutional neural networks (CNNs) to handle time-series modeling and can be constructed with various model lengths. Furthermore, we integrate attention enhancement into the secondary subsystem so that it provides the extraction model with comprehensive and effective target information, improving the model's ability to estimate masks. As a result, the quality of the extracted target speech is greatly enhanced by a more precise mask.
Keywords
deep learning,target speaker extraction,temporal convolutional network (TCN),convolutional neural network (CNN),automatic speech recognition (ASR)
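To make the described architecture concrete, the following is a minimal sketch, assuming PyTorch, of a mask-based speaker extractor built from dilated TCN blocks and conditioned on a target-speaker embedding through a simple attention fusion. It is not the authors' implementation; all layer sizes, module names (e.g. TCNBlock, AttentionFusion, SpeakerExtractor), and the specific fusion mechanism are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of attention-enhanced TCN speaker
# extraction; layer sizes and module names are illustrative assumptions.
import torch
import torch.nn as nn


class TCNBlock(nn.Module):
    """1-D dilated convolutional block with a residual connection."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.PReLU(),
            nn.GroupNorm(1, channels),
        )

    def forward(self, x):                       # x: (batch, channels, frames)
        return x + self.net(x)


class AttentionFusion(nn.Module):
    """Fuse the target-speaker embedding into mixture features via attention."""
    def __init__(self, channels: int, emb_dim: int):
        super().__init__()
        self.query = nn.Linear(emb_dim, channels)
        self.proj = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, feats, spk_emb):          # feats: (B, C, T), spk_emb: (B, E)
        q = self.query(spk_emb).unsqueeze(-1)   # (B, C, 1)
        attn = torch.sigmoid(q * feats)         # frame-wise attention weights
        return self.proj(feats * attn)


class SpeakerExtractor(nn.Module):
    """Mask-based extractor: a TCN stack conditioned on a speaker embedding."""
    def __init__(self, channels=128, emb_dim=64, n_blocks=4):
        super().__init__()
        self.encoder = nn.Conv1d(1, channels, kernel_size=16, stride=8)
        self.spk_encoder = nn.Sequential(        # enrollment utterance -> embedding
            nn.Conv1d(1, emb_dim, kernel_size=16, stride=8),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.fusion = AttentionFusion(channels, emb_dim)
        self.tcn = nn.Sequential(*[TCNBlock(channels, 2 ** i) for i in range(n_blocks)])
        self.mask = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8)

    def forward(self, mixture, enrollment):      # both: (batch, 1, samples)
        feats = self.encoder(mixture)
        emb = self.spk_encoder(enrollment)
        cond = self.fusion(feats, emb)           # inject target-speaker information
        est_mask = self.mask(self.tcn(cond))     # estimate the target mask
        return self.decoder(feats * est_mask)    # masked features -> waveform


if __name__ == "__main__":
    model = SpeakerExtractor()
    mix = torch.randn(2, 1, 16000)               # 1 s of 16 kHz mixture audio
    enroll = torch.randn(2, 1, 16000)             # enrollment audio of the target
    print(model(mix, enroll).shape)               # extracted target waveform
```

In this sketch the secondary subsystem is the enrollment encoder plus the attention fusion: the speaker embedding gates the mixture features frame by frame before the TCN estimates the mask, which is one simple way to realize the attention-enhanced conditioning the abstract describes.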