Multi-Modal Autonomous Ultrasound Scanning for Efficient Human-Machine Fusion Interaction

IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (2024)

Abstract
Robotic autonomous ultrasound imaging is a challenging task, as robots require strong analytical capabilities to make sound decisions amid complex spatial relationships. In this paper, we integrate visual and tactile information into the ultrasound robotic system, drawing inspiration from how human doctors conduct ultrasound scans, and explore the impact of different information modalities on our task. The proposed multimodal deep reinforcement learning (DRL) framework integrates real-time visual feedback with tactile perception and directly outputs 6D pose decisions to control the ultrasound probe, thereby achieving fully autonomous ultrasound imaging of soft, movable, and unmarked targets. We demonstrate the feasibility of our method on a simulation platform and propose an effective model transfer learning method. We then further evaluate the approach in a real-world environment. The results indicate that our approach effectively enhances the performance of autonomous ultrasound scanning, and that manual adjustments further optimize the outcomes.
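The abstract describes a policy that fuses visual and tactile observations and emits a 6D pose command for the probe. The paper's actual network is not given here; the following is only a minimal illustrative sketch of that fusion-then-policy structure, where all names, feature dimensions, and the random linear "policy" weights are assumptions standing in for a trained DRL model:

```python
import numpy as np

def fuse_and_act(visual_feat, tactile_feat, rng):
    """Concatenate per-modality feature vectors and map them to a 6D pose
    action (x, y, z translation plus roll, pitch, yaw). The random linear
    map is a placeholder for a trained DRL policy network."""
    fused = np.concatenate([visual_feat, tactile_feat])
    W = rng.standard_normal((6, fused.size)) * 0.01
    return np.tanh(W @ fused)  # tanh bounds each pose component to [-1, 1]

rng = np.random.default_rng(0)
visual = rng.standard_normal(128)   # e.g. an image embedding (assumed size)
tactile = rng.standard_normal(16)   # e.g. recent force/torque readings (assumed size)
action = fuse_and_act(visual, tactile, rng)
print(action.shape)  # (6,)
```

In this sketch the two modalities are fused by simple concatenation; the paper's framework may use a learned fusion, but the interface — observations in, a bounded 6D pose decision out — matches the description above.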
Keywords
Ultrasonic imaging, Robots, Robot sensing systems, Task analysis, Probes, Navigation, Medical diagnostic imaging, Autonomous ultrasound scanning, deep reinforcement learning, multimodal