Attention-based Multimodal Feature Fusion for Dance Motion Generation

Multimodal Interfaces and Machine Learning for Multimodal Interaction (2021)

Abstract
Recent advances in deep learning have enabled the extraction of high-level skeletal features from raw images and video sequences, opening new possibilities for a variety of artificial intelligence tasks, including the automatic synthesis of human motion sequences. In this paper, we present a system that combines 2D skeletal data and musical information to generate skeletal dancing sequences. The architecture is implemented solely with convolutional operations and trained with a teacher-forcing supervised learning approach, while novel motion sequences are synthesized autoregressively. Additionally, we employ an attention mechanism to fuse the latent representations of past music and motion information in order to condition the generation process. To assess the system's performance, we generated 900 sequences and evaluated their perceived realism, motion diversity, and multimodality using various diversity metrics.
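
To illustrate the kind of fusion the abstract describes, the following is a minimal sketch (not the authors' released code) of attention-based fusion of past music and motion latents. It assumes hypothetical feature dimensions and a single-head scaled dot-product cross-attention in which motion latents act as queries and music latents provide keys and values; the paper's exact layer configuration is not specified here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Fuse motion and music latent sequences via scaled dot-product attention.

    Motion latents act as queries; music latents provide keys and values,
    so each motion frame attends to the relevant musical context.
    Dimensions below are illustrative assumptions, not the paper's values.
    """

    def __init__(self, motion_dim=64, music_dim=32, fused_dim=64):
        super().__init__()
        self.q = nn.Linear(motion_dim, fused_dim)  # queries from motion latents
        self.k = nn.Linear(music_dim, fused_dim)   # keys from music latents
        self.v = nn.Linear(music_dim, fused_dim)   # values from music latents

    def forward(self, motion_latent, music_latent):
        # motion_latent: (batch, T_motion, motion_dim)
        # music_latent:  (batch, T_music, music_dim)
        q = self.q(motion_latent)
        k = self.k(music_latent)
        v = self.v(music_latent)
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        weights = F.softmax(scores, dim=-1)  # attention over music frames
        context = weights @ v                # music context per motion frame
        # Concatenate motion latents with attended music context to
        # condition the autoregressive generator on both modalities.
        return torch.cat([motion_latent, context], dim=-1)

if __name__ == "__main__":
    fuse = AttentionFusion()
    motion = torch.randn(2, 30, 64)    # 30 past motion frames
    music = torch.randn(2, 120, 32)    # 120 past music frames
    fused = fuse(motion, music)
    print(fused.shape)                 # torch.Size([2, 30, 128])

The fused tensor would then feed the convolutional decoder that predicts the next skeletal pose at each autoregressive step.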