Hierarchical multi-agent reinforcement learning for multi-aircraft close-range air combat

IET CONTROL THEORY AND APPLICATIONS

Abstract
Close-range autonomous air combat has gained significant attention from researchers working on applications of artificial intelligence (AI). Most previous studies on autonomous air combat focused on one-on-one scenarios; however, modern air combat is mostly conducted in formations. To address this gap, a novel hierarchical maneuvering control architecture is introduced for the multi-aircraft close-range air combat scenario, capable of handling formations of variable size. Three air-combat sub-tasks are designed, and a recurrent soft actor-critic (RSAC) algorithm combined with competitive self-play (SP) is used to learn the corresponding sub-strategies. A novel hierarchical multi-agent reinforcement learning (HMARL) algorithm is then proposed to obtain the high-level strategy for target and sub-strategy selection. The training performance of both the sub-strategy and high-level-strategy learning algorithms is evaluated in different air combat scenarios. Analysis of the obtained strategies shows that the formations exhibit effective cooperative behavior in both symmetric and asymmetric scenarios. Finally, ideas for the engineering implementation of the maneuvering control architecture are given. The study provides a solution for future multi-aircraft autonomous air combat.
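To illustrate the hierarchical decision flow the abstract describes, the following is a minimal sketch, assuming a high-level policy that assigns each aircraft a target and one of three sub-tasks, with a recurrent low-level sub-strategy emitting the maneuver command. All names, the three sub-task labels, and the toy recurrence are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(0)

# Three air-combat sub-tasks; the labels are assumptions for illustration.
SUB_TASKS = ["attack", "defend", "support"]


def high_level_policy(agent_id, observation, enemy_ids):
    """Stand-in for the learned HMARL high-level strategy: pick a
    target and a sub-task for one friendly aircraft."""
    target = random.choice(enemy_ids)
    sub_task = random.choice(SUB_TASKS)
    return target, sub_task


class RecurrentSubStrategy:
    """Stand-in for a recurrent SAC (RSAC) sub-strategy: keeps a hidden
    state across decision steps (toy scalar recurrence here)."""

    def __init__(self):
        self.hidden = 0.0

    def act(self, observation, target):
        self.hidden = 0.9 * self.hidden + 0.1 * observation
        return {"target": target, "maneuver": self.hidden}


def step_formation(friendly_obs, enemy_ids, sub_strategies):
    """One decision step for a variable-size formation: each aircraft is
    routed through the high-level policy, then its selected sub-strategy."""
    commands = {}
    for agent_id, obs in friendly_obs.items():
        target, task = high_level_policy(agent_id, obs, enemy_ids)
        commands[agent_id] = sub_strategies[task].act(obs, target)
    return commands


strategies = {task: RecurrentSubStrategy() for task in SUB_TASKS}
cmds = step_formation({"f1": 1.0, "f2": -0.5}, ["e1", "e2"], strategies)
```

Because the high-level policy is re-queried every step for every aircraft, the same loop works unchanged when aircraft join or drop out of the formation, which is the property the variable-size claim relies on.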
Keywords
autonomous air combat,artificial intelligence,competitive self-play,maneuver decision-making,multi-agent reinforcement learning