Adversarial Learning Improves Vision-Based Perception from Drones with Imbalanced Datasets

Journal of Aerospace Information Systems (2023)

Abstract
This work proposes a vision-based perception algorithm that combines image-processing-based detection and tracking of aerial objects with convolutional neural networks (CNNs) for classification of general aviation aircraft, multirotor small uncrewed aerial systems (SUAS), fixed-wing SUAS, and birds, enabling improved onboard avoidance algorithm decision making. Furthermore, we integrate adversarial learning during the training of the CNNs and evaluate performance with class-balanced and class-imbalanced datasets, because this maximizes the utility of resource-expensive flight experiments for collecting aviation datasets. We compare our proposed CNN with adversarial learning (CNN+ADVL) against a state-of-the-art CNN as well as a you-only-look-once (YOLO v4) model retrained on the same data (YOLO v4 aircraft). The CNN+ADVL trained on the imbalanced dataset achieves the highest 10-fold cross-validation classification accuracy of 76.2% for aircraft and birds across all ranges, while achieving 87.0% aircraft classification accuracy at the proposed self-assurance separation distances derived from Federal Aviation Administration (FAA) guidelines. In comparison, the CNNs achieve 74.4% 10-fold cross-validation classification accuracy for aircraft and birds, and 83.4% accuracy for aircraft at the same proposed self-assurance separation distances. Furthermore, we demonstrate that the integration of adversarial learning improves classification performance for the perception of aerial objects when training on a class-imbalanced dataset.
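The abstract does not specify the exact adversarial-learning formulation used during CNN training. The following is a minimal sketch, assuming a common FGSM-style adversarial-training loop with optional class weights for the imbalanced-dataset case; the network, epsilon, and loss weighting are illustrative placeholders, not the paper's method.

```python
# Hedged sketch: FGSM-style adversarial training for a 4-class aerial-object
# classifier. The architecture, epsilon, and loss mix are assumptions made
# for illustration; the paper's actual CNN+ADVL details are not given here.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # general aviation aircraft, multirotor SUAS, fixed-wing SUAS, bird


class SmallCNN(nn.Module):
    """Placeholder classifier standing in for the paper's CNN backbone."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def fgsm_examples(model, images, labels, epsilon=0.02):
    """Generate adversarial images with one signed-gradient (FGSM) step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


def train_epoch(model, loader, optimizer, class_weights=None, epsilon=0.02):
    """One epoch mixing clean and adversarial batches.

    class_weights can re-weight the loss when the dataset is class imbalanced.
    """
    model.train()
    for images, labels in loader:
        adv_images = fgsm_examples(model, images, labels, epsilon)
        optimizer.zero_grad()  # clear grads accumulated while crafting adversarial images
        loss_clean = F.cross_entropy(model(images), labels, weight=class_weights)
        loss_adv = F.cross_entropy(model(adv_images), labels, weight=class_weights)
        loss = 0.5 * (loss_clean + loss_adv)
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    # Tiny synthetic batch to show the training step runs end to end.
    model = SmallCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    train_epoch(model, [(images, labels)], optimizer)
```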