Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation

TACCESS (2011)

Cited by 8 | Viewed 6

Abstract
We are studying techniques for producing realistic and understandable animations of American Sign Language (ASL); such animations have accessibility benefits for signers with lower levels of written language literacy. This article describes and evaluates a novel method for modeling and synthesizing ASL animations based on samples of ASL signs collected from native signers. We apply this technique to ASL inflecting verbs, common signs in which the location and orientation of the hands are influenced by the arrangement of locations in 3D space that represent entities under discussion. We train mathematical models of hand movement on animation data of signs produced by a native signer. In evaluation studies with native ASL signers, the verb animations synthesized from our model had similar subjective-rating and comprehension-question scores to animations produced by a human animator; they also achieved higher scores than baseline animations. Further, we examine a split modeling technique for accommodating certain verb signs with complex movement patterns, and we conduct an analysis of how robust our modeling techniques are to reductions in the size of their training data. The modeling techniques in this article are applicable to other types of ASL signs and to other sign languages used internationally. Our models’ parameterization of sign animations can increase the repertoire of generation systems and can partially automate the work of humans using sign language scripting systems.
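The data-driven idea described above, learning how the 3D locations assigned to discussed entities shape the hands' location and orientation during an inflecting verb, can be illustrated with a minimal sketch. The snippet below is not the paper's model; it assumes a simple linear least-squares mapping from hypothetical subject/object referent coordinates to one keyframe's hand pose parameters, with synthetic data standing in for recordings of a native signer. All names (synthesize_keyframe, referent_locations, etc.) are invented for illustration.

```python
# Hedged sketch, not the authors' implementation: fit a linear model that maps
# referent locations in signing space to hand pose parameters for one keyframe.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training data": each sample pairs the 3D locations assigned to the
# subject and object referents (6 inputs) with hand pose parameters of one
# keyframe of the verb sign (3D hand position + 3 orientation angles = 6 outputs).
# Real data would come from a native signer's recorded performances.
n_samples = 40
referent_locations = rng.uniform(-1.0, 1.0, size=(n_samples, 6))
true_mapping = rng.normal(size=(7, 6))                              # unknown "ground truth"
inputs = np.hstack([referent_locations, np.ones((n_samples, 1))])   # append bias term
hand_pose = inputs @ true_mapping + 0.05 * rng.normal(size=(n_samples, 6))

# Fit the model: hand_pose ~ [referent_locations, 1] @ W  (ordinary least squares)
W, *_ = np.linalg.lstsq(inputs, hand_pose, rcond=None)

def synthesize_keyframe(subject_xyz, object_xyz):
    """Predict hand pose parameters for a new arrangement of referents."""
    x = np.hstack([subject_xyz, object_xyz, [1.0]])
    return x @ W

# Usage: synthesize a keyframe for an unseen spatial arrangement.
print(synthesize_keyframe([0.3, 0.1, -0.2], [-0.4, 0.2, 0.1]))
```

The paper's split modeling technique for verbs with complex movement patterns and its actual model family are not reproduced here; this sketch only shows how a parameterized mapping from spatial arrangement to hand pose could be trained and queried.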
Keywords
common sign, sign animation, synthesizing ASL, ASL sign, native signer, Spatially Inflected Verbs, American Sign Language Animation, Data-Driven Synthesis, native ASL signer, sign language, split modeling technique, modeling technique, certain verb sign