On the Guaranteed Almost Equivalence Between Imitation Learning From Observation and Demonstration
IEEE Transactions on Neural Networks and Learning Systems (2023)
Abstract
Imitation learning from observation (LfO) is preferable to imitation learning from demonstration (LfD) because it does not require expert actions when reconstructing the expert policy from expert data. However, previous studies suggest that LfO performs substantially worse than LfD, which makes it challenging to employ LfO in practice. By contrast, this article proves that LfO is almost equivalent to LfD in deterministic robot environments, and more generally even in robot environments with bounded randomness. In the deterministic robot environment, we show from a control-theoretic perspective that the inverse dynamics disagreement between LfO and LfD approaches zero, meaning that LfO is almost equivalent to LfD. To relax the deterministic constraint and better match practical environments, we then consider robot environments with bounded randomness and prove that the optimization targets of LfD and LfO remain almost the same in this more general setting. Extensive experiments on multiple robot tasks demonstrate that LfO empirically achieves performance comparable to LfD. In fact, most common real-world robot systems are robot environments with bounded randomness (i.e., the setting considered in this article). Hence, our findings greatly extend the potential of LfO and suggest that it can be safely applied in practice without sacrificing performance relative to LfD.
Keywords
Robots, Task analysis, Trajectory, Cloning, Mathematical model, Heuristic algorithms, Control theory, Generative adversarial imitation learning (GAIL), Imitation learning (IL), Learning from demonstration (LfD), Learning from observation (LfO), Reinforcement learning (RL)