MODELING AND CONTINUOUS SONIFICATION OF AFFORDANCES FOR GESTURE-BASED INTERFACES

International Conference on Auditory Display

Abstract
Sonification can play a significant role in facilitating continuous, gesture-based input in closed-loop human-computer interaction, where it offers the potential to improve the user experience, making systems easier to use by rendering their inferences more transparent. The interactive system described here provides a number of gestural affordances that may not be apparent to the user through a visual display or other cues, and provides novel means of navigating them with sound or vibrotactile feedback. The approach combines machine learning techniques for understanding a user's gestures with a method for the auditory display of salient features of the underlying inference process in real time. It uses a particle filter to track multiple hypotheses about a user's input as it unfolds, together with Dynamic Movement Primitives, introduced in work by Schaal et al. (1)(2), which model a user's gesture as evidence of a nonlinear dynamical system that gave rise to it. The sonification is based on a presentation of features derived from estimates of the time-varying probability that the user's gesture conforms to state trajectories through the ensemble of dynamical systems. We propose mapping constraints for the sonification of time-dependent sampled probability densities. The system is initially being assessed with trial tasks such as figure reproduction using a multi-degree-of-freedom wireless pointing input device, and a handwriting interface.
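As a rough illustration of the inference loop the abstract describes (not the authors' implementation), the sketch below pairs a one-dimensional Dynamic Movement Primitive, in the canonical/transformation-system form of Schaal et al. (1)(2), with a bootstrap particle filter that tracks hypotheses about which primitive generated the observed gesture. Every constant and name here (ALPHA_Z, W, GOALS, the noise scales, and so on) is an illustrative assumption; a real system would learn the forcing weights from demonstrations.

```python
import numpy as np

# --- One-dimensional DMP (hypothetical constants; weights would be learned) ---
ALPHA_Z, BETA_Z, ALPHA_X, TAU = 25.0, 25.0 / 4.0, 3.0, 1.0
N_BASIS, N_PRIM, N_PART, Y0 = 10, 3, 200, 0.0
CENTERS = np.exp(-ALPHA_X * np.linspace(0.0, 1.0, N_BASIS))  # basis centers in phase
WIDTHS = N_BASIS ** 1.5 / CENTERS / ALPHA_X                  # common width heuristic

rng = np.random.default_rng(0)
W = rng.normal(0.0, 50.0, (N_PRIM, N_BASIS))   # stand-in learned forcing weights
GOALS = np.array([1.0, -0.5, 2.0])             # stand-in goal positions

def forcing(x, w, g, y0):
    """Weighted-basis forcing term that shapes the trajectory toward goal g."""
    psi = np.exp(-WIDTHS * (x - CENTERS) ** 2)
    return (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)

def dmp_step(state, w, g, y0, dt):
    """One Euler step of the canonical system (phase x) and transformation
    system (position y, velocity v)."""
    x, y, v = state
    f = forcing(x, w, g, y0)
    dx = -ALPHA_X * x / TAU
    dy = v / TAU
    dv = (ALPHA_Z * (BETA_Z * (g - y) - v) + f) / TAU
    return np.array([x + dx * dt, y + dy * dt, v + dv * dt])

# --- Bootstrap particle filter over DMP hypotheses ---
# Each particle carries (primitive index, canonical phase, position, velocity).
def pf_step(particles, log_w, obs_y, dt=0.01, obs_sigma=0.05, proc_sigma=0.02):
    """Propagate each hypothesis one step, reweight by the newest gesture
    sample, resample, and return the per-primitive posterior mass."""
    for i in range(len(particles)):
        k = int(particles[i, 0])
        particles[i, 1:] = dmp_step(particles[i, 1:], W[k], GOALS[k], Y0, dt)
        particles[i, 2] += rng.normal(0.0, proc_sigma)   # process noise on position
    log_w = log_w - 0.5 * ((obs_y - particles[:, 2]) / obs_sigma) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)  # multinomial resampling
    post = np.bincount(particles[idx, 0].astype(int),
                       minlength=N_PRIM) / len(particles)
    return particles[idx], np.zeros(len(particles)), post

# Toy demo: feed a noisy trajectory generated by primitive 0 and watch the
# posterior concentrate on it.
state = np.array([1.0, Y0, 0.0])
particles = np.zeros((N_PART, 4))
particles[:, 0] = rng.integers(0, N_PRIM, N_PART)  # hypothesis: primitive index
particles[:, 1] = 1.0                              # canonical phase starts at 1
log_w = np.zeros(N_PART)
for _ in range(100):
    state = dmp_step(state, W[0], GOALS[0], Y0, 0.01)
    obs = state[1] + rng.normal(0.0, 0.05)
    particles, log_w, post = pf_step(particles, log_w, obs)
print("posterior over primitives:", post)
```

The per-primitive posterior returned at each step is a time-dependent sampled probability density of the kind the abstract proposes to sonify; its features (entropy, the leading hypothesis's mass, rate of change) are plausible inputs to an audio or vibrotactile mapping.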