AnnoTheia: A Semi-Automatic Annotation Toolkit for Audio-Visual Speech Technologies
CoRR (2024)
Abstract
More than 7,000 known languages are spoken around the world. However, due to
the lack of annotated resources, only a small fraction of them are currently
covered by speech technologies. Although self-supervised speech
representations, the recent collection of massive speech corpora, and the
organization of challenges have alleviated this inequality, most studies are
still benchmarked mainly on English. This situation is aggravated for tasks
that involve both the acoustic and visual speech modalities. To promote
research on low-resource languages for audio-visual speech technologies, we
present AnnoTheia, a semi-automatic annotation toolkit that detects when a
person speaks in the scene and provides the corresponding transcription. In
addition, to show the complete process of preparing AnnoTheia for a language of
interest, we also describe the adaptation of a pre-trained active speaker
detection model to Spanish, using a database not originally conceived for this
type of task. The AnnoTheia toolkit, tutorials, and pre-trained models are
available on GitHub.
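The semi-automatic workflow described above (an automatic stage proposing annotations, followed by human review) can be sketched roughly as below. This is a hypothetical illustration, not AnnoTheia's actual API: the `Proposal` type, the threshold of 0.5, and the field names are all assumptions made for the sketch.

```python
# Hypothetical sketch of a semi-automatic annotation loop (NOT AnnoTheia's
# real code): an automatic pipeline proposes (speaker, transcript) pairs per
# scene, and a human reviewer accepts or corrects each one before saving.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Proposal:
    scene_id: int
    speaker_active: bool   # stand-in for an active speaker detection output
    transcript: str        # stand-in for a speech recognition hypothesis

def propose_annotations(scenes: List[dict]) -> List[Proposal]:
    """Automatic stage: turn model scores into candidate annotations.
    The 0.5 decision threshold is an arbitrary assumption."""
    return [
        Proposal(s["id"], s["asd_score"] > 0.5, s["asr_hypothesis"])
        for s in scenes
    ]

def review(proposals: List[Proposal],
           correct: Callable[[Proposal], Proposal]) -> List[Proposal]:
    """Human stage: each candidate is inspected and possibly corrected
    (here modeled as an arbitrary correction callback)."""
    return [correct(p) for p in proposals]
```

A toy usage run, with a reviewer who accepts every proposal unchanged:

```python
scenes = [{"id": 0, "asd_score": 0.9, "asr_hypothesis": "hola"},
          {"id": 1, "asd_score": 0.1, "asr_hypothesis": ""}]
final = review(propose_annotations(scenes), correct=lambda p: p)
```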