Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria

IEEE/ACM Transactions on Audio, Speech, and Language Processing (2023)

Abstract
Continuous Speech Separation (CSS) has been proposed to address speech overlaps during the analysis of realistic meeting-like conversations by eliminating any overlaps before further processing. CSS separates a recording of arbitrarily many speakers into a small number of overlap-free output channels, where each output channel may contain speech of multiple speakers. Often, a separation model is trained with Utterance-level Permutation Invariant Training (uPIT), which exclusively maps a speaker to an output channel, and applied in a sliding window approach called stitching. Recently, we introduced an alternative training scheme called Graph-PIT that teaches the separator to produce a speaker-shared output channel format without stitching. It can handle an arbitrary number of speakers as long as the number of overlapping speakers is never larger than the number of output channels. Models trained in this way are able to perform segment-less CSS, i.e., without stitching, and achieve comparable and often better separation quality than the conventional CSS with uPIT and stitching. In this contribution, we further investigate the Graph-PIT training scheme. We show in extended experiments that Graph-PIT also works in challenging reverberant conditions. We simplify the training schedule for Graph-PIT with the recently proposed Source Aggregated Signal-to-Distortion Ratio (SA-SDR) loss, which eliminates unfavorable properties of the previously used A-SDR loss to enable training with Graph-PIT from scratch. Furthermore, we introduce novel signal-level evaluation metrics for meeting scenarios, namely the source-aggregated scale- and convolution-invariant Signal-to-Distortion Ratio (SA-SI-SDR and SA-CI-SDR), which are generalizations of the commonly used SDR-based metrics for the CSS case.
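The source-aggregated ratio described in the abstract can be illustrated with a minimal sketch: signal and error energies are summed over all output channels *before* the log-ratio is taken, rather than computing a per-channel SDR and averaging after the log. This is a plain-Python illustration of the aggregation idea only; the scale- and convolution-invariant variants (SA-SI-SDR, SA-CI-SDR) mentioned in the abstract require additional projection steps that are omitted here.

```python
import math

def sa_sdr(references, estimates):
    """Source-Aggregated SDR sketch (in dB).

    references, estimates: lists of channel signals, each a list of
    float samples of equal length. Energies are pooled across all
    channels before the single log-ratio is computed.
    """
    signal_energy = sum(x * x for ch in references for x in ch)
    error_energy = sum(
        (y - x) ** 2
        for ch_ref, ch_est in zip(references, estimates)
        for x, y in zip(ch_ref, ch_est)
    )
    return 10.0 * math.log10(signal_energy / error_energy)
```

For example, if every estimated sample is half of the corresponding reference sample, the aggregated error energy is one quarter of the signal energy, giving `10 * log10(4) ≈ 6.02` dB regardless of how the energy is distributed across channels.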
Keywords
Continuous speech separation,source separation,Graph-PIT,dynamic programming,permutation invariant training
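The keywords point at the assignment problem underlying Graph-PIT: utterances must be mapped to a small number of output channels so that no two temporally overlapping utterances share a channel, which is feasible exactly when the number of simultaneously active speakers never exceeds the channel count. The sketch below illustrates this with a simple greedy interval coloring over given utterance boundaries; it is a hypothetical helper for intuition, not the paper's dynamic-programming training procedure.

```python
import heapq

def assign_channels(utterances, num_channels):
    """Greedily assign utterances (start, end) to output channels so
    that no two overlapping utterances share a channel.

    Returns a list of channel indices (one per utterance), or None if
    the overlap at some point exceeds num_channels.
    """
    order = sorted(range(len(utterances)), key=lambda i: utterances[i][0])
    free = list(range(num_channels))   # currently idle channels
    busy = []                          # min-heap of (end_time, channel)
    assignment = [None] * len(utterances)
    for i in order:
        start, end = utterances[i]
        # Release channels whose utterance has finished by this start time.
        while busy and busy[0][0] <= start:
            _, ch = heapq.heappop(busy)
            free.append(ch)
        if not free:
            return None                # more overlap than output channels
        ch = free.pop()
        assignment[i] = ch
        heapq.heappush(busy, (end, ch))
    return assignment
```

With two channels, three chained utterances such as `(0, 2)`, `(1, 3)`, `(2, 4)` are assignable because at most two overlap at any instant; with a single channel the same input is infeasible.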