Robot-centric Activity Recognition from First-Person RGB-D Videos

WACV (2015)

Abstract
We present a framework and algorithm to analyze first-person RGB-D videos captured by a robot while it physically interacts with humans. Specifically, we explore the reactions and interactions of persons facing a mobile robot from a robot-centric view. This new perspective offers social awareness to robots, enabling interesting applications. To the best of our knowledge, there is no public 3D dataset for this problem. Therefore, we record two multi-modal first-person RGB-D datasets that reflect the setting we are analyzing, using a humanoid and a non-humanoid robot, each equipped with a Kinect. Notably, the videos contain a high percentage of ego-motion due to the robot's self-exploration as well as its reactions to the persons' interactions. We show that separating the descriptors extracted from ego-motion and independent-motion areas, and using them both, allows us to achieve superior recognition results. Experiments show that our algorithm recognizes the activities effectively and outperforms other state-of-the-art methods on related tasks.
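The core idea in the abstract — separating descriptors computed over ego-motion regions from those over independent-motion regions, then using both — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes a dense optical-flow field and a precomputed ego-motion flow prediction (the camera-motion estimator itself is not shown), splits pixels by flow residual, and builds a normalized orientation histogram per region.

```python
import numpy as np

def motion_descriptors(flow, ego_flow, thresh=1.0, bins=8):
    """Split a dense flow field into ego-motion and independent-motion
    regions, build an orientation histogram for each, and concatenate.

    flow, ego_flow: (H, W, 2) arrays. ego_flow is the flow predicted by
    the estimated camera motion alone (hypothetical input, not estimated
    here). Returns a vector of length 2 * bins.
    """
    # Pixels whose flow is poorly explained by camera motion are treated
    # as independently moving (e.g., the interacting person).
    residual = np.linalg.norm(flow - ego_flow, axis=2)
    indep_mask = residual > thresh
    ego_mask = ~indep_mask

    def orient_hist(mask):
        fx, fy = flow[..., 0][mask], flow[..., 1][mask]
        if fx.size == 0:
            return np.zeros(bins)
        angles = np.arctan2(fy, fx)  # flow direction in [-pi, pi]
        hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
        return hist / max(hist.sum(), 1)  # L1-normalize

    # Keep both descriptors, as the abstract reports using them jointly.
    return np.concatenate([orient_hist(ego_mask), orient_hist(indep_mask)])
```

In a full system, each half of this vector would feed a per-frame or per-clip representation before classification; the paper's own descriptors and classifier may differ.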
Keywords
multi-modal first-person RGB-D datasets, non-humanoid robot, robot self-exploration, public 3D dataset, robot-centric activity recognition, first-person RGB-D videos, Kinect, mobile robots, humanoid robots, robot-centric view, ego-motion, mobile robot, social awareness, skeleton, histograms, robots, vectors