Efficient event-based robotic grasping perception using hyperdimensional computing

Internet of Things (2024)

Abstract
Grasping is fundamental in various robotic applications, particularly within industrial contexts. Accurate inference of object properties is a crucial step toward improving grasping quality. Dynamic and Active Vision Sensors (DAVIS), increasingly used for robotic grasping, offer superior energy efficiency, lower latency, and higher temporal resolution than traditional cameras. However, the data they generate can be complex and noisy, requiring substantial preprocessing. In response to these challenges, we introduce GraspHD, an end-to-end algorithm that leverages brain-inspired hyperdimensional computing (HDC) to learn the size and hardness of objects and estimate the grasping force. This approach circumvents the need for resource-intensive preprocessing steps, capitalizing on the simplicity and inherent parallelism of HDC operations. Our comprehensive analysis shows that GraspHD surpasses state-of-the-art approaches in overall classification accuracy. We have also implemented GraspHD on an FPGA to evaluate system efficiency. The results demonstrate that GraspHD runs 10x faster and is 26x more energy-efficient than existing learning algorithms while maintaining robust performance in noisy environments. These findings underscore the significant potential of GraspHD as a more efficient and effective solution for real-time robotic grasping applications.
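
The abstract does not give implementation details of GraspHD itself. As a rough illustration of the encode / bundle / compare operations that hyperdimensional classifiers of this kind rely on, the sketch below shows a generic record-based HDC classifier in Python with NumPy. The dimensionality D, the item and level memories, the feature count, and the toy data are all illustrative assumptions; this is not GraspHD's actual encoding of DAVIS event streams or its size/hardness labels.

# Minimal sketch of a generic hyperdimensional (HD) classifier, assuming
# dense bipolar hypervectors and record-based encoding. Illustrative only;
# it is NOT the GraspHD pipeline described in the paper.
import numpy as np

D = 10_000                      # hypervector dimensionality (typical HDC choice)
NUM_FEATURES = 64               # assumed feature count (placeholder for event features)
NUM_LEVELS = 16                 # quantization levels for feature values
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

# Item memory: one random hypervector per feature index.
item_memory = np.stack([random_hv() for _ in range(NUM_FEATURES)])
# Level memory: one random hypervector per quantized feature value.
level_memory = np.stack([random_hv() for _ in range(NUM_LEVELS)])

def encode(sample):
    """Bind each feature's ID vector with its level vector, then bundle (sum)."""
    levels = np.clip((sample * NUM_LEVELS).astype(int), 0, NUM_LEVELS - 1)
    bound = item_memory * level_memory[levels]      # elementwise binding
    return np.sign(bound.sum(axis=0))               # bundling + binarization

def train(samples, labels, num_classes):
    """Class prototype = bundled encodings of that class's training samples."""
    prototypes = np.zeros((num_classes, D))
    for x, y in zip(samples, labels):
        prototypes[y] += encode(x)
    return np.sign(prototypes)

def classify(sample, prototypes):
    """Predict the class whose prototype is most similar (dot product)."""
    query = encode(sample)
    return int(np.argmax(prototypes @ query))

# Toy usage with random data (placeholder for real event-camera features).
X = rng.random((100, NUM_FEATURES))
y = rng.integers(0, 3, size=100)
protos = train(X, y, num_classes=3)
print(classify(X[0], protos))

Training and inference here reduce to elementwise multiplications, additions, and dot products, which is the property the abstract points to when citing the simplicity and parallelism of HDC operations on hardware such as FPGAs.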
Keywords
Artificial intelligence, Robotics, Hyperdimensional computing, Dynamic vision sensor, Object grasping, Neuromorphic vision