Crowdsourced Facial Expression Mapping Using a 3D Avatar.

CHI Extended Abstracts (2016)

Cited 23 | Viewed 10

Abstract
Facial expression mapping is the process of attributing signal values to a particular set of muscle activations in the face. This paper proposes the development of a broad lexicon of quantifiable, reproducible facial expressions with known signal values, using an expressive 3D model and crowdsourced labeling data. Traditionally, coding muscle movements in the face is a time-consuming manual process performed by specialists. Identifying the communicative content of an expression generally requires generating large sets of posed photographs, with identifying labels chosen from a circumscribed list. Consequently, the widely accepted collection of configurations with known meanings is limited to six basic expressions of emotion. Our approach defines mappings from parameterized facial expressions displayed by a 3D avatar to their semantic representations. By collecting large, free-response label sets from naïve raters and applying natural language processing techniques, we converge quickly and with low overhead on a semantic centroid: a single representative label.
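The abstract does not specify the NLP pipeline used to reduce free-response labels to a semantic centroid. A minimal sketch of one plausible reading, assuming each label has a word-embedding vector (the toy 3-D vectors and the `semantic_centroid_label` helper below are hypothetical, for illustration only): average the raters' label vectors and return the label closest, by cosine similarity, to that mean.

```python
import math

# Toy embeddings for illustration only (not real word vectors).
embeddings = {
    "happy":   [0.9, 0.1, 0.0],
    "joyful":  [0.8, 0.2, 0.1],
    "pleased": [0.7, 0.3, 0.0],
    "angry":   [0.0, 0.1, 0.9],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_centroid_label(labels):
    """Return the rater-supplied label nearest the mean label vector."""
    c = centroid([embeddings[l] for l in labels])
    return max(labels, key=lambda l: cosine(embeddings[l], c))

# Free responses from naive raters for one posed avatar expression:
responses = ["happy", "joyful", "pleased", "happy"]
print(semantic_centroid_label(responses))
```

With these toy vectors the centroid falls between the three positive labels, and the label most aligned with it is selected; an outlier response such as "angry" would pull the centroid only slightly and would itself score poorly.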