Towards A Synchronised Grammars Framework For Adaptive Musical Human-Robot Collaboration

2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015

Cited by 26 | Views 16
Abstract
We present an adaptive musical collaboration framework for interaction between a human and a robot. The aim of our work is to develop a system that receives feedback from the user in real time and learns the user's music progression style over time. To tackle this problem, we represent a song as a hierarchically structured sequence of music primitives. By exploiting the sequential constraints of these primitives, inferred from the structural information combined with user feedback, we show that a robot can play music in accordance with the user's anticipated actions. We use Stochastic Context-Free Grammars augmented with knowledge of the learnt user preferences. We provide synthetic experiments as well as a pilot study with a Baxter robot and a tangible music table. The synthetic results demonstrate the synchronisation and adaptivity features of our framework, and the pilot study suggests these are applicable to creating an effective musical collaboration experience.
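The hierarchical song representation described above can be sketched with a toy Stochastic Context-Free Grammar. The grammar below is purely illustrative (the non-terminals, primitives, and probabilities are assumptions, not taken from the paper): each non-terminal maps to weighted productions, and sampling top-down yields a flat sequence of playable music primitives.

```python
import random

# Hypothetical toy SCFG over music primitives (names and weights are
# illustrative assumptions): non-terminal -> list of (production, probability).
GRAMMAR = {
    "Song":   [(["Intro", "Body", "Outro"], 1.0)],
    "Intro":  [(["riff"], 0.7), (["chord"], 0.3)],
    "Body":   [(["Phrase", "Body"], 0.6), (["Phrase"], 0.4)],
    "Outro":  [(["chord"], 1.0)],
    "Phrase": [(["riff", "beat"], 0.5), (["chord", "beat"], 0.5)],
}

def sample(symbol, grammar=GRAMMAR):
    """Expand a symbol top-down, choosing each production by its probability."""
    if symbol not in grammar:  # terminal: a playable music primitive
        return [symbol]
    rules = [r for r, _ in grammar[symbol]]
    weights = [w for _, w in grammar[symbol]]
    chosen = random.choices(rules, weights=weights)[0]
    sequence = []
    for s in chosen:
        sequence.extend(sample(s, grammar))
    return sequence

print(sample("Song"))  # a flat sequence of primitives, e.g. riff/chord/beat
```

In the paper's framework the production probabilities would additionally be reweighted from real-time user feedback, biasing sampling towards the user's learnt progression style; here the weights are fixed for simplicity.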
Keywords
adaptive musical collaboration framework,human-robot interaction,music progression style,hierarchically structured sequence,music primitives,sequential constraints,structural information,user feedback,user anticipated actions,stochastic context-free grammars,user preferences,Baxter robot,tangible music table,synchronisation,adaptivity features,musical collaboration experience