Second Language Pronunciation Training by Ultrasound-enhanced Visual Augmented Reality.

BIBM (2021)

Abstract
Evaluations of ultrasound-enhanced pronunciation training have shown that visualizing the articulatory system as biofeedback significantly improves learners' articulation learning efficiency. In recent studies, electronic visual feedback (EVF) systems such as ultrasound imaging have been employed effectively across a range of teaching and learning contexts. Ultrasound-enhanced multimodal systems can now visualize tongue movements during speech superimposed on the user's face, and this capability has been integrated into several university-level language courses through a blended learning paradigm. However, these techniques provide only offline videos, so language learners cannot observe their tongue in real time; moreover, all previous systems require the user's head to remain fixed. This article proposes a novel ultrasound-enhanced multimodal pronunciation training system that leverages powerful artificial intelligence techniques. The main objective of this research is to combine ultrasound technology and artificial intelligence to overcome the limitations of previous systems and automatically augment the visualization of tongue movements in real time. In the proposed system, the user's head can move freely, which makes language training easier. A preliminary pedagogical evaluation revealed significant improvements in user flexibility, system interactivity, and autonomy.
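To make the real-time overlay idea concrete, the sketch below shows one plausible core step: mapping a tongue contour extracted from an ultrasound frame into face-camera coordinates using a pose estimate from probe tracking. This is a minimal illustration under assumed conventions (a 2-D rigid-plus-scale affine transform, hypothetical point values), not the paper's actual pipeline.

```python
import numpy as np

def probe_to_camera_transform(angle_rad, tx, ty, scale=1.0):
    """Build a 2x3 affine matrix mapping ultrasound-image coordinates
    into face-camera pixel coordinates. The rotation/translation/scale
    would come from the tracked probe pose (hypothetical values here)."""
    c, s = np.cos(angle_rad) * scale, np.sin(angle_rad) * scale
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def project_contour(contour_xy, affine):
    """Map an (N, 2) array of tongue-contour points through the affine
    transform, returning points in camera coordinates."""
    pts = np.hstack([contour_xy, np.ones((len(contour_xy), 1))])  # homogeneous
    return pts @ affine.T

# Hypothetical contour points extracted from one ultrasound frame.
contour = np.array([[10.0, 40.0], [20.0, 35.0], [30.0, 33.0]])
A = probe_to_camera_transform(angle_rad=0.0, tx=100.0, ty=200.0)
overlay_pts = project_contour(contour, A)
print(overlay_pts)  # contour shifted into camera coordinates for drawing
```

Per video frame, the projected points would then be drawn over the camera image of the speaker's face; because the transform is recomputed from the tracked probe pose each frame, the overlay stays aligned even as the head moves.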
Keywords
Ultrasound technology, Speech production, Real-time visualization, Machine learning, Tongue contour extraction, Ultrasound probe tracking, Automatic image segmentation