Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data.

SLPAT '12: Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies (2012)

Abstract
American Sign Language (ASL) synthesis software can improve the accessibility of information and services for deaf individuals with low English literacy. The synthesis component of current ASL animation generation and scripting systems has limited handling of the many ASL verb signs whose movement path is inflected to indicate 3D locations in the signing space associated with discourse referents. Using motion-capture data recorded from human signers, we model how the motion paths of verb signs vary based on the locations of their subject and object. This model yields a lexicon for ASL verb signs that is parameterized on the 3D locations of the verb's arguments; such a lexicon enables more realistic and understandable ASL animations. A new model presented in this paper, based on identifying the principal movement vector of the hands, shows improvement in modeling ASL verb signs, including when trained on movement data from a different human signer.
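The model described above identifies a principal movement vector of the hands from motion-capture data. As a rough illustrative sketch only (not the paper's implementation), the code below extracts a dominant movement direction from a 3D hand trajectory via a principal-component decomposition; the function name, array shapes, and use of NumPy are assumptions introduced here for illustration.

```python
import numpy as np

def principal_movement_vector(hand_positions):
    """Estimate the dominant movement direction of a hand trajectory.

    hand_positions: (T, 3) array of 3D hand positions over T motion-capture
    frames. Returns a unit 3-vector along the first principal component of
    the trajectory, oriented from the start toward the end of the movement.
    """
    centered = hand_positions - hand_positions.mean(axis=0)
    # The first right singular vector of the centered positions is the axis
    # along which the hand positions vary the most.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Orient the axis so it points from the start toward the end of the sign.
    if np.dot(hand_positions[-1] - hand_positions[0], direction) < 0:
        direction = -direction
    return direction / np.linalg.norm(direction)

# Example: a noisy, roughly straight-line movement toward a referent location.
t = np.linspace(0.0, 1.0, 50)[:, None]
trajectory = t * np.array([[0.3, 0.1, 0.5]]) + 0.01 * np.random.randn(50, 3)
print(principal_movement_vector(trajectory))
```

In this sketch, parameterizing such a direction on the 3D locations of the verb's subject and object would be the next step; the abstract describes the resulting lexicon but not the specific fitting procedure.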
Keywords
ASL verb sign,current ASL animation generation,understandable ASL animation,verb sign,model yield,movement data,movement path,new model,principal movement vector,different human signer,American Sign Language,inflecting verb,motion-capture data,vector-based model