Resource-Aware Object Classification and Segmentation for Semi-Autonomous Grasping with Prosthetic Hands

2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), 2019

Abstract
Myoelectric control of prosthetic hands relies on electromyographic (EMG) signals, typically captured by two surface electrodes attached to the body in various setups. Controlling the hand this way requires long user training and depends heavily on the robustness of the EMG signals. In this paper, we present a visual perception system that extracts scene information for semi-autonomous hand control, minimizing the required command complexity and enabling more intuitive and effortless control. We present methods, optimized for minimal resource demand, that derive scene information from visual data captured by a camera inside the hand. In particular, we show object classification and semantic segmentation of image data realized by convolutional neural networks (CNNs). We present a system architecture that takes user feedback into account and thereby improves results. In addition, we present an evolutionary algorithm that optimizes CNN architectures with respect to both accuracy and hardware resource demand. Our evaluation shows classification accuracy of 96.5% and segmentation accuracy of up to 89.5% on an in-hand Arm Cortex-M7 microcontroller running at only 400 MHz.
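The evolutionary architecture search described above can be sketched in simplified form. The snippet below is a hypothetical illustration, not the paper's actual algorithm: candidate CNNs are represented only by their per-layer channel widths, the accuracy term is a stand-in proxy (real search would train and evaluate each candidate), and the parameter budget, mutation step, and population sizes are assumed values chosen to mimic the accuracy-vs-resource trade-off on a small microcontroller.

```python
import random

def param_count(widths, in_ch=3, k=3):
    """Parameters of a plain stack of k x k conv layers with the given
    output channel widths (weights + biases), a rough memory-cost model."""
    total, prev = 0, in_ch
    for w in widths:
        total += prev * w * k * k + w
        prev = w
    return total

def fitness(widths, budget=50_000):
    """Toy fitness: a stand-in accuracy proxy (wider layers score higher)
    minus a penalty for exceeding an assumed parameter budget."""
    acc_proxy = sum(widths) / 100.0  # placeholder for a trained model's accuracy
    over = max(0, param_count(widths) - budget)
    return acc_proxy - 1e-4 * over

def mutate(widths, rng):
    """Randomly widen or narrow one layer (minimum 4 channels)."""
    child = list(widths)
    i = rng.randrange(len(child))
    child[i] = max(4, child[i] + rng.choice([-4, 4]))
    return child

def evolve(pop_size=8, layers=3, gens=20, seed=0):
    """Simple (mu + lambda)-style loop: keep the better half, refill by mutation."""
    rng = random.Random(seed)
    pop = [[rng.choice([8, 16, 32]) for _ in range(layers)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
```

In a real deployment the fitness would combine measured validation accuracy with the candidate's RAM/flash footprint on the target microcontroller; the penalty-weighted scalarization shown here is just one simple way to fold both objectives into a single score.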
Keywords
resource-aware object classification, semiautonomous grasping, prosthetic hands, myoelectric control, electromyographic signals, surface electrodes, human body, EMG signals, visual perception system, semiautonomous hand control, visual data, semantic segmentation, image data, system architecture, hardware resources, in-hand Arm Cortex-M7 microcontroller, frequency 400.0 MHz