Learning-Based UAV Path Planning for Data Collection With Integrated Collision Avoidance

IEEE Internet of Things Journal (2022)

Abstract
Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks, and determining collision-free trajectories in multi-UAV noncooperative scenarios while collecting data from distributed Internet of Things (IoT) nodes is a challenging task. In this article, we consider a path-planning optimization problem to maximize the data collected from multiple IoT nodes under realistic constraints. The considered multi-UAV noncooperative scenarios involve a random number of other UAVs in addition to the typical UAV, and the UAVs do not communicate or share information with each other. We translate the problem into a Markov decision process (MDP) with parameterized states, permissible actions, and detailed reward functions. A dueling double deep $Q$-network (D3QN) is proposed to learn the decision-making policy for the typical UAV, without any prior knowledge of the environment (e.g., the channel propagation model and the locations of obstacles) or of the other UAVs (e.g., their missions, movements, and policies). The proposed algorithm can adapt to various missions in various scenarios, e.g., different numbers and positions of IoT nodes, different amounts of data to be collected, and different numbers and positions of other UAVs. Numerical results demonstrate that real-time navigation can be performed efficiently with a high success rate, a high data collection rate, and a low collision rate.
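To make the D3QN component concrete, the sketch below illustrates in PyTorch the two ingredients the abstract names: a dueling architecture that splits the network into a state-value stream and an advantage stream before combining them into Q-values, and a double-DQN target that selects the next action with the online network but evaluates it with the target network. This is a minimal illustrative sketch only; the state vector (e.g., UAV position, remaining data per IoT node, sensed neighbor positions), action set, layer sizes, and all identifiers are assumptions and not the paper's actual formulation.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)          # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)                          # shape (batch, 1)
        a = self.advantage(h)                      # shape (batch, n_actions)
        # Standard dueling combination with the mean-advantage baseline.
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN bootstrap target: argmax with the online net,
    evaluation with the target net."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

In a full training loop, this target would be regressed against the online network's Q-value for the taken action (e.g., with a Huber loss over minibatches drawn from a replay buffer), and the target network would be periodically synchronized with the online network.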
Key words
Collision avoidance, data collection, deep reinforcement learning (RL), multi-unmanned aerial vehicle (UAV) scenarios, path planning