Augmenting EMBR Virtual Human Animation System with MPEG-4 Controls for Producing ASL Facial Expressions

The Fifth International Workshop on Sign Language Translation and Avatar Technology (SLTAT), 2015

Abstract
Our laboratory is investigating technology for automating the synthesis of animations of American Sign Language (ASL) that are linguistically accurate and support comprehension of information content. A major goal of this research is to make it easier for companies or organizations to add ASL content to websites and media. Currently, website owners must generally use videos of humans if they wish to provide ASL content, but videos are expensive to update when information must be modified. Further, the message cannot be generated automatically based on a user query, which is needed for some applications. Having the ability to generate animations semi-automatically, from a script representation of sign-language sentence glosses, could increase information accessibility for many people who are deaf by making it more likely that sign language content would be provided online. Further, synthesis technology is an important final step in producing animations from the output of sign language machine translation systems, e.g., [1].

Synthesis software must make many choices when converting a plan for an ASL sentence into a final animation, including details of speed, timing, and transitional movements between signs. Specifically, in recent work, our laboratory has investigated the synthesis of syntactic ASL facial expressions, which co-occur with the signs performed on the hands. These types of facial expressions are used to convey whether a sentence is a question, is negated in meaning, has a topic phrase at the beginning, etc. In fact, linguists have described how a sequence of signs performed on the hands can have different meanings …
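To illustrate the kind of control signal involved, the following is a minimal sketch (not the paper's implementation) of how an MPEG-4-style facial animation parameter could be driven over time for a syntactic ASL facial expression, such as a brow raise co-occurring with the manual signs of a yes/no question. The keyframe values, timings, and function names here are illustrative assumptions, not taken from the EMBR system.

```python
# Hypothetical sketch: linearly interpolating one MPEG-4-style facial
# animation parameter (FAP) track over time. Keyframes and timings are
# invented for illustration, not drawn from the paper or from EMBR.

def interpolate_fap(keyframes, t):
    """Return the FAP value at time t from sorted (time, value) keyframes."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            # Linear interpolation between the two bracketing keyframes.
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Illustrative brow-raise track: onset, hold across the question phrase, offset.
brow_raise = [(0.0, 0.0), (0.2, 1.0), (1.0, 1.0), (1.2, 0.0)]

# Sample the track at 10 frames per second over 1.2 seconds.
frames = [round(interpolate_fap(brow_raise, t / 10), 2) for t in range(13)]
```

A real synthesis system would schedule such tracks so that their onsets and offsets align with the timing of the co-occurring manual signs, which is one of the choices the abstract describes.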