Robotic Test Tube Rearrangement Using Combined Reinforcement Learning and Motion Planning
CoRR (2024)
Abstract
A combined task-level reinforcement learning and motion planning framework is
proposed in this paper to address a multi-class in-rack test tube rearrangement
problem. At the task level, the framework uses reinforcement learning to infer
a sequence of swap actions while ignoring robotic motion details. At the motion
level, the framework accepts the swapping action sequences inferred by
task-level agents and plans the detailed robotic pick-and-place motion. The
task and motion-level planning form a closed loop with the help of a condition
set maintained for each rack slot, which allows the framework to perform
replanning and effectively find solutions in the presence of low-level
failures. Particularly for reinforcement learning, the framework leverages a
distributed deep Q-learning structure with the Dueling Double Deep Q Network
(D3QN) to acquire near-optimal policies and uses an A*-based
post-processing technique to amplify the collected training data. The D3QN and
distributed learning help increase training efficiency. The post-processing
helps complete unfinished action sequences and remove redundancy, thus making
the training data more effective. We carry out both simulations and real-world
studies to understand the performance of the proposed framework. The results
verify the performance of the RL and post-processing and show that the
closed-loop combination improves robustness. The framework is also designed to
incorporate various forms of sensory feedback, and the real-world studies
demonstrate such incorporation.