Proximal policy optimization guidance algorithm for intercepting near-space maneuvering targets

Aerospace Science and Technology (2023)

Abstract
This paper studies a novel guidance framework, based on deep reinforcement learning (DRL), for a vehicle intercepting a high-speed maneuvering target, accounting for energy consumption, autopilot lag dynamics, and input saturation; the framework effectively copes with the high flight-path-angle-error flight phase and various uncertainties. The framework establishes an end-to-end mapping between the guidance command and the observation states, which consist of the line-of-sight (LOS) angle, the relative distance, and their rates as measured by the seeker. The observability of the LOS angle and relative distance is also incorporated into the reward function. In addition, the relative engagement kinematics of the interceptor-target pair are established and, combined with the PPO guidance algorithm, jointly formulated as a Markov decision process (MDP). Notably, the guidance framework is optimized with an improved proximal policy optimization (PPO) algorithm and demonstrated in a simulated near-space terminal phase. Specifically, the PPO guidance algorithm comprises a policy (actor) neural network and a critic neural network, both standard fully connected networks. Observation states and rewards are collected and reused by introducing experience replay. Furthermore, an exponentially decaying learning rate, mini-batch stochastic gradient ascent (SGA), zero-score standardization, and the Adam optimizer are employed to train the reinforcement learning algorithm more efficiently. The proposed framework also exhibits excellent generalization, handling both fixed and stochastic engagement scenarios, which means the interceptor can cope with unlearned practical combat scenarios. Its robustness is indicated and validated with Monte Carlo simulations under various uncertainties. Moreover, the DRL guidance framework satisfies onboard application requirements.
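The abstract names several standard PPO training ingredients: the clipped surrogate objective maximized by mini-batch SGA, an exponentially decaying learning rate, and zero-score standardization of collected samples. The sketch below illustrates these three pieces in NumPy. It is a minimal illustration of the generic techniques, not the paper's implementation; all function names, the clip parameter `eps=0.2`, and the decay schedule are assumptions.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Negated PPO clipped surrogate objective (a loss to minimize).

    ratio     : pi_new(a|s) / pi_old(a|s) for each sampled transition
    advantage : estimated advantage for each transition
    eps       : clip range (0.2 is a common default, assumed here)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO takes the element-wise minimum, then averages over the mini-batch
    return -np.mean(np.minimum(unclipped, clipped))

def exp_decay_lr(lr0, decay_rate, step, decay_steps):
    """Exponentially decaying learning rate schedule (assumed form)."""
    return lr0 * decay_rate ** (step / decay_steps)

def z_score(x, eps=1e-8):
    """Zero-score (z-score) standardization, e.g. of batch advantages."""
    return (x - x.mean()) / (x.std() + eps)

# Example: a tiny mini-batch of ratios and advantages
ratios = np.array([1.0, 2.0, 0.5])
advs = z_score(np.array([1.0, 3.0, -1.0]))
loss = ppo_clip_loss(ratios, advs)
lr = exp_decay_lr(1e-3, 0.9, step=100, decay_steps=100)
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes reusing stored transitions (experience replay, as mentioned in the abstract) safe for a handful of epochs per batch.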
Keywords
Deep reinforcement learning (DRL), Proximal policy optimization (PPO), Markov decision process (MDP), Near-space interception, Terminal guidance, Maneuvering targets