Force-EvT: A Closer Look at Robotic Gripper Force Measurement with Event-based Vision Transformer
arXiv (2024)
Abstract
Robotic grippers are receiving increasing attention in various industries as
essential components of robots for interacting and manipulating objects. While
significant progress has been made in the past, conventional rigid grippers
still have limitations in handling irregular objects and can damage fragile
objects. Soft grippers, by contrast, offer deformability to adapt to a
variety of object shapes and to maximize object protection. At the same time,
dynamic vision sensors (e.g., event-based cameras) capture
small changes in brightness and stream them asynchronously as events, unlike
RGB cameras, which perform poorly in low-light and fast-moving
environments. In this paper, a dynamic-vision-based algorithm is proposed to
measure the force applied to the gripper. In particular, we first set up a
DVXplorer Lite series event camera to capture twenty-five sets of event data.
Second, motivated by the impressive performance of the Vision Transformer (ViT)
algorithm in dense image prediction tasks, we propose a new approach that
demonstrates the potential for real-time force estimation and meets the
requirements of real-world scenarios. We extensively evaluate the proposed
algorithm on a wide range of scenarios and settings, and show that it
consistently outperforms recent approaches.
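The abstract does not specify how the asynchronous event stream is converted into an input a Vision Transformer can consume. A common preprocessing step for event-based pipelines is to accumulate events into a fixed-size frame (e.g., per-pixel, per-polarity counts); the sketch below illustrates that idea only, with synthetic events, and is an assumption rather than the paper's actual representation (the function name `events_to_frame` and the 2-channel layout are illustrative choices):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate asynchronous events into a 2-channel count frame.

    events: array of shape (N, 4) with columns (timestamp, x, y, polarity),
    polarity in {-1, +1}. Returns an array of shape (2, H, W): channel 0
    counts positive-polarity events, channel 1 negative-polarity events.
    This is a generic illustration, not the representation used in the paper.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for _t, x, y, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, int(y), int(x)] += 1.0
    return frame

# Synthetic example: three events on a 4x4 sensor.
events = np.array([
    [0.001, 1, 2, +1],   # positive event at pixel (x=1, y=2)
    [0.002, 1, 2, +1],   # second positive event at the same pixel
    [0.003, 3, 0, -1],   # negative event at pixel (x=3, y=0)
])
frame = events_to_frame(events, height=4, width=4)
# frame[0, 2, 1] == 2.0 and frame[1, 0, 3] == 1.0
```

A frame built this way can be resized or patchified like an ordinary image before being passed to a ViT-style regressor.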