Vocalization for Emotional Communication in Crossmodal Affective Display

Pranavi Jalapati, Selwa Sweidan, Xin Zhu, Heather Culbertson

2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW (2023)

Abstract
This paper presents our design of a crossmodal vocalization-haptic system that allows users to communicate emotions to a partner. We explore affective context as a combination of user relationships (specifically the closeness between pairs of users) and user culture. We share the design and implementation of the crossmodal system, which takes up to ten seconds of vocal expression (including humming or singing) from one user and transposes it into haptic signals displayed on twelve vibration actuators worn on the forearm of the second user. Our method of transposing musical vocal inputs captures the key signal features of rhythm, amplitude, time, and frequency. We present the results from a human subject study (N=20) involving 10 pairs of users with varying levels of closeness (siblings, friends, and strangers) to understand how our system supports affective communication. Our results show that low-level and rhythm audio parameters most strongly influence affective responses in our users. Additionally, the low-level vocal features are influenced by user demographics and the closeness between the pairs of users. These results suggest that user closeness shapes affective communication and provide insights into the music transposition methods best suited for it.
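The abstract does not publish the transposition algorithm itself. Below is a minimal sketch of one plausible vocal-to-haptic pipeline under assumed parameters: it extracts frame-wise amplitude (RMS) and pitch (YIN) from a short vocal clip and maps them onto a 12-actuator array. The pitch range, band edges, frame sizes, and the send_to_actuators driver are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' implementation): map frame-wise
# amplitude and pitch of a vocal clip onto 12 vibration actuators.
import numpy as np
import librosa

N_ACTUATORS = 12              # vibration motors worn on the forearm
FRAME = 2048                  # analysis window (samples)
HOP = 512                     # hop size -> ~23 ms haptic update at 22.05 kHz
F_MIN, F_MAX = 80.0, 1000.0   # assumed vocal pitch range (Hz)

def vocal_to_haptic(path):
    """Return an (n_frames, N_ACTUATORS) array of vibration intensities in [0, 1]."""
    # Load at most ten seconds of mono audio, matching the system's input limit.
    y, sr = librosa.load(path, sr=22050, mono=True, duration=10.0)

    # Amplitude envelope: frame-wise RMS, normalized to [0, 1].
    rms = librosa.feature.rms(y=y, frame_length=FRAME, hop_length=HOP)[0]
    intensity = rms / (rms.max() + 1e-9)

    # Fundamental frequency per frame via the YIN pitch estimator.
    f0 = librosa.yin(y, fmin=F_MIN, fmax=F_MAX, sr=sr,
                     frame_length=FRAME, hop_length=HOP)

    # Map pitch to an actuator index on a log-frequency scale, so equal
    # musical intervals move the vibration equal distances along the arm.
    edges = np.geomspace(F_MIN, F_MAX, N_ACTUATORS + 1)
    idx = np.clip(np.searchsorted(edges, f0) - 1, 0, N_ACTUATORS - 1)

    # One intensity value per actuator per frame; rhythm emerges from the
    # frame-to-frame changes in the amplitude envelope.
    n = min(len(intensity), len(idx))
    frames = np.zeros((n, N_ACTUATORS))
    frames[np.arange(n), idx[:n]] = intensity[:n]
    return frames

# Usage (hypothetical output device):
# for frame in vocal_to_haptic("hum.wav"):
#     send_to_actuators(frame)  # placeholder for the actual haptic driver
```

One intensity per frame per actuator keeps the haptic update rate tied to the audio hop size; a real system would also need to resample this stream to the actuators' drive rate.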
Keywords
musical haptics, crossmodal, affective communication, user personalization