Safe Multiagent Learning With Soft Constrained Policy Optimization in Real Robot Control

IEEE Transactions on Industrial Informatics (2024)

Abstract
Due to a lack of safety considerations, a wide range of multiagent reinforcement learning (MARL) applications remain impractical in real-world environments, so ensuring MARL safety is both essential and urgent. However, only a few studies consider the safe MARL problem, and real-world applications of safe MARL algorithms remain underexplored. To fill this gap, we provide a framework with soft constrained policy optimization, in which we develop practical algorithms that address the problem in a cooperative game setting. First, we introduce the problem formulation of safe MARL. Second, we analyze safe policy optimization based on soft constrained optimization and propose a safe learning framework that can be plugged into MARL algorithms without manually fine-tuning safety bounds. Third, we investigate sim-to-real issues and conduct both simulation and real-world experiments to evaluate the effectiveness of our algorithms. Finally, comprehensive experimental results indicate that our method achieves a favorable balance between reward and safety performance and outperforms several strong baselines.
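Soft constrained policy optimization of the kind the abstract describes is commonly realized as a Lagrangian relaxation: the expected safety cost is folded into the reward objective via a learnable multiplier, so no safety bound needs to be hand-tuned per task. The snippet below is a minimal illustrative sketch of that general idea, not the paper's actual algorithm; the function names, the cost limit `d`, and the toy cost sequence are all assumptions for illustration.

```python
# Illustrative sketch of a soft-constrained objective via Lagrangian
# relaxation (a common approach in safe RL; NOT the paper's exact method).
# The policy maximizes reward while a non-negative multiplier `lam`
# softly penalizes expected safety cost above a limit `d`.

def lagrangian_objective(expected_reward, expected_cost, lam, d):
    """Soft-constrained objective: reward minus weighted constraint violation."""
    return expected_reward - lam * (expected_cost - d)

def update_multiplier(lam, expected_cost, d, lr=0.1):
    """Dual ascent on the multiplier: it grows while the cost limit is
    exceeded and decays (never below zero) once the policy is safe."""
    return max(0.0, lam + lr * (expected_cost - d))

# Toy run: hypothetical per-iteration expected costs start above the
# limit, so the multiplier rises, which in a real policy-gradient loop
# would push the policy update toward safer behavior.
lam, d = 0.0, 1.0
for c in [1.8, 1.5, 1.2, 0.9, 0.8]:
    lam = update_multiplier(lam, c, d)
```

In practice the two players alternate: the policy ascends the Lagrangian objective while the multiplier performs dual ascent on the constraint violation, and the multiplier effectively replaces a manually chosen penalty weight.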
Keywords
Multiagent systems,real-robot control,safe reinforcement learning