User Modelling Using Multimodal Information for Personalised Dressing Assistance.

IEEE Access (2020)

Abstract
Assistive robots in home environments are steadily increasing in popularity. Due to significant variability in human behaviour, physical characteristics, and individual preferences, personalising assistance is a challenging problem. In this paper, we focus on an assistive dressing task that involves physical contact with a human's upper body, where the goal is to improve the comfort level of the individual. Two aspects are considered significant in improving a user's comfort level: maintaining more natural postures and exerting less effort. However, a dressing path that fulfils both criteria may not be found in a single attempt. We therefore propose a user modelling method that combines vision and force data, enabling the robot to search for an optimised dressing path for each user and to improve as the human-robot interaction progresses. We compare the proposed method against two single-modality state-of-the-art user modelling methods designed for personalised assistive dressing in user studies (31 subjects). Experimental results show that the proposed method provides personalised assistance that results in more natural postures and less effort for human users.
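The iterative search described above can be illustrated with a minimal sketch. All names, cost weights, and the simulated trial function below are hypothetical placeholders, not the paper's actual method: posture naturalness (from vision) and exerted effort (from force sensing) are combined into a single comfort cost, and a simple random-perturbation search refines a parameterised dressing path over repeated trials.

```python
import random

def comfort_cost(posture_dev, force_effort, w_posture=0.5, w_force=0.5):
    # Hypothetical fused objective: lower is more comfortable.
    # Weighted sum of posture deviation (vision) and effort (force).
    return w_posture * posture_dev + w_force * force_effort

def evaluate_path(path):
    # Stand-in for one dressing trial: returns simulated
    # (posture deviation, effort) for a parameterised path.
    posture_dev = sum((p - 0.3) ** 2 for p in path) / len(path)
    effort = sum(abs(p) for p in path) / len(path)
    return posture_dev, effort

def optimise_path(n_iters=50, dim=3, seed=0):
    # Refine the path across simulated interactions: propose a small
    # perturbation, keep it only if the fused comfort cost improves.
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best_cost = comfort_cost(*evaluate_path(best))
    for _ in range(n_iters):
        candidate = [p + rng.gauss(0.0, 0.2) for p in best]
        cost = comfort_cost(*evaluate_path(candidate))
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

In the paper, the per-trial evaluation would come from real sensor data rather than a closed-form stand-in, and the search would be driven by the learned user model rather than random perturbation; the sketch only shows how the two modalities can share one objective that improves across interactions.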
Keywords
Robots, Adaptation models, Force, Data models, Task analysis, Human-robot interaction, Hidden Markov models, Multimodal user modelling, Assistive dressing, Vision and force fusion