Utilization of Image/Force/Tactile Sensor Data for Object-Shape-Oriented Manipulation: Wiping Objects With Turning Back Motions and Occlusion

IEEE Robotics and Automation Letters (2022)

Abstract
There has been an increasing demand for housework robots that can handle various objects. However, object-shape-oriented tasks are difficult to achieve with conventional approaches because they require dealing with multiple surfaces, invisible areas, and occlusion; moreover, robots must perceive shapes and adjust their movements even when the shapes cannot be seen directly. Humans usually tackle such problems by integrating several kinds of sensory information. Inspired by this human perception mechanism, in this study we considered the effective utilization of image/force/tactile data in constructing a multimodal deep neural network (DNN) model for object-shape perception and motion generation. As an example, we constructed a robot that wipes around the outside of objects imitating light shades. The wiping motions include moments when the robot's hands must move away from the surface, as well as the turning directions required to wipe the next surface, even though some parts of the surfaces, such as the backside or parts occluded by the robot's arm, may not be directly visible. If the DNN model uses continuous visual information, it is adversely affected by occluded images. Hence, the best-performing DNN model is one that uses an image from the initial time step to approximately perceive the shape and size, and then generates motions by integrating this perception with tactile and force sensing. We conclude that an effective approach to object-shape-oriented manipulation is to initially utilize an image to outline the target shape and, thereafter, to use force and tactile information to understand concrete features while performing the task.
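The fusion strategy described in the abstract can be illustrated with a minimal NumPy sketch: the image is encoded only once, at the initial time step, and motion is then rolled out from force/tactile input alone, so later occluded frames never enter the model. All dimensions, weight matrices, and the simple RNN structure below are assumptions for illustration; the paper's actual architecture is a trained multimodal DNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: image feature, force+tactile input, hidden state, motor command
IMG_DIM, FT_DIM, HID_DIM, OUT_DIM = 16, 6, 32, 7

# Randomly initialized weights stand in for trained parameters
W_img = rng.normal(0, 0.1, (IMG_DIM, HID_DIM))
W_in = rng.normal(0, 0.1, (FT_DIM, HID_DIM))
W_h = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
W_out = rng.normal(0, 0.1, (HID_DIM, OUT_DIM))

def generate_motion(initial_image_feat, force_tactile_seq):
    """Encode the image once at t=0, then roll an RNN over force/tactile input only."""
    # Shape/size context comes solely from the first frame, avoiding occluded images
    h = np.tanh(initial_image_feat @ W_img)
    commands = []
    for ft in force_tactile_seq:
        h = np.tanh(ft @ W_in + h @ W_h)  # integrate force/tactile step by step
        commands.append(h @ W_out)
    return np.array(commands)

cmds = generate_motion(rng.normal(size=IMG_DIM), rng.normal(size=(50, FT_DIM)))
print(cmds.shape)  # one motor command per time step: (50, 7)
```

The key design choice mirrored here is that visual input conditions only the initial hidden state; subsequent steps rely on contact sensing, which remains informative even when the target surface is behind the object or hidden by the robot's own arm.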
Keywords
Deep learning in grasping and manipulation, perception for grasping and manipulation, recognition, sensorimotor learning