An Automatic Key-points Detection and Style Transfer based Method of Articulatory Animation Generation.

ICCAI (2023)

Cited 0 | Views 8
Abstract
This paper proposes a method for generating articulatory animation based on automatic key-point detection and style transfer. It presents a system for key-point detection and registration in an image feature space, and uses a GAN for motion transfer to automatically generate animations of English oral and tongue articulation. First, using standard articulation images of the International Phonetic Alphabet obtained from a public dataset, we train a deep-learning motion model on the movement characteristics of the articulatory organs (tongue, upper jaw, lower jaw, uvula, soft palate, etc.) and automatically detect key points in the image feature space. Then, drawing on existing MRI articulation videos together with articulation images in different styles, a generative model automatically produces end-to-end animations in those styles. For different combinations of consonants and vowels, the system generates animations of different syllables; according to articulation categories such as stressed and unstressed (light-tone) syllables, it adjusts the articulation duration to produce stressed and unstressed variants of the same syllable. Word animations are then assembled from syllables, and sentence animations from words. Experiments show that this method can simulate realistic articulatory animation from input phonetic symbols and is highly effective for English pronunciation correction.
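The composition step described above, in which syllable animations are concatenated into words and the duration of stressed or light-tone syllables is adjusted, can be sketched as follows. This is a minimal illustration under stated assumptions: the names (`scale_duration`, `compose_word`), the stretch/compress factors, and the frame representation are hypothetical, not the paper's actual implementation.

```python
# Hedged sketch of syllable-to-word animation composition with
# duration adjustment for stress, as described in the abstract.
# All names and factors here are illustrative assumptions.

def scale_duration(frames, factor):
    """Resample a frame sequence to roughly factor * original length
    using nearest-neighbor selection (a simple stand-in for real
    temporal interpolation of animation frames)."""
    n = max(1, round(len(frames) * factor))
    return [frames[min(len(frames) - 1, int(i * len(frames) / n))]
            for i in range(n)]

def compose_word(syllables, stress_factor=1.5, light_factor=0.7):
    """Concatenate per-syllable animations into one word animation,
    stretching stressed syllables and compressing light-tone ones.

    syllables: list of (frames, category) pairs, where category is
    "stress", "light", or "neutral".
    """
    word_frames = []
    for frames, category in syllables:
        if category == "stress":
            word_frames.extend(scale_duration(frames, stress_factor))
        elif category == "light":
            word_frames.extend(scale_duration(frames, light_factor))
        else:
            word_frames.extend(frames)
    return word_frames
```

In this sketch, sentence animation would follow the same pattern one level up: concatenating the outputs of `compose_word` per word, optionally with transition frames between words.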