An adaptive reinforcement learning-based multimodal data fusion framework for human-robot confrontation gaming.

Neural Networks: The Official Journal of the International Neural Network Society (2023)

Abstract
Game playing between humans and robots has become a widespread human-robot confrontation (HRC) application. Although many approaches have been proposed to enhance tracking accuracy by combining different sources of information, two problems remain open: the degree of intelligence of the robot and the anti-interference ability of the motion-capture system. In this paper, we present an adaptive reinforcement learning (RL) based multimodal data fusion (AdaRL-MDF) framework that teaches a robot hand to play the Rock-Paper-Scissors (RPS) game with humans. It comprises an adaptive learning mechanism that updates the ensemble classifier, an RL model that provides the robot with decision-making intelligence, and a multimodal data fusion structure that offers resistance to interference. Experiments verify each of these functions of the AdaRL-MDF model. Comparisons of accuracy and computation time show the high performance of the ensemble model, which combines a k-nearest neighbor (k-NN) classifier with a deep convolutional neural network (DCNN). In addition, the depth-vision-based k-NN classifier attains 100% identification accuracy, so its predicted gestures can be treated as ground truth. A demonstration illustrates the practical feasibility of the HRC application. The theory underlying this model offers a basis for developing HRC intelligence.
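To make the RL component of such a framework concrete, the following is a minimal, hypothetical sketch (not the paper's AdaRL-MDF implementation) of a tabular epsilon-greedy learner for RPS. The state is the human's previous gesture, the action is the robot's next gesture, and a one-step reward update pushes the robot toward counter-moves; the gesture names, `train` function, and hyperparameters are all illustrative assumptions.

```python
import random

# Illustrative sketch: a contextual-bandit-style RL opponent for
# Rock-Paper-Scissors. State = human's previous gesture; the robot
# learns which gesture to play next via an epsilon-greedy policy.
GESTURES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # move -> its counter

def reward(robot, human):
    """+1 if the robot wins the round, -1 if it loses, 0 for a draw."""
    if robot == BEATS[human]:
        return 1
    if human == BEATS[robot]:
        return -1
    return 0

def train(human_policy, episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Learn Q[state][action] against a given human policy."""
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in GESTURES} for s in GESTURES}
    prev = rng.choice(GESTURES)  # human's previous gesture (the state)
    for _ in range(episodes):
        if rng.random() < epsilon:           # explore
            action = rng.choice(GESTURES)
        else:                                # exploit current estimate
            action = max(Q[prev], key=Q[prev].get)
        human = human_policy(prev, rng)      # human's actual next gesture
        # One-step reward update (no bootstrapping needed for this game)
        Q[prev][action] += alpha * (reward(action, human) - Q[prev][action])
        prev = human
    return Q
```

For example, against a human who always cycles rock → paper → scissors, the learned greedy policy in each state converges to the counter of the human's next move.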
Keywords
Reinforcement learning, Multimodal data fusion, Human-robot confrontation, Adaptive learning, Multiple sensors fusion, Hand gesture recognition