Comparing the Accuracy of sUAS Navigation, Image Co-Registration and CNN-Based Damage Detection between Traditional and Repeat Station Imaging

Andrew C. Loerch, Douglas A. Stow, Lloyd L. Coulter, Atsushi Nara, James Frew

Geosciences (2022)

Abstract
The application of ultra-high spatial resolution imagery from small unpiloted aerial systems (sUAS) can provide valuable information about the status of built infrastructure following natural disasters. This study employs three methods for improving the value of sUAS imagery: (1) repeating the positioning of image stations over time using a bi-temporal imaging approach called repeat station imaging (RSI) (compared here against traditional (non-RSI) imaging), (2) co-registration of bi-temporal image pairs, and (3) damage detection using Mask R-CNN, a convolutional neural network (CNN) algorithm applied to co-registered image pairs. Infrastructure features included roads, buildings, and bridges, with simulated cracks representing damage. The accuracies of platform navigation and camera station positioning, image co-registration, and resultant Mask R-CNN damage detection were assessed for image pairs derived with RSI and non-RSI acquisition. In all cases, the RSI approach yielded the highest accuracies, with repeated sUAS navigation accuracy within 0.16 m mean absolute error (MAE) horizontally and vertically, image co-registration accuracy of 2.2 pixels MAE, and damage detection accuracy of 83.7% mean intersection over union.
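The abstract reports accuracy as mean absolute error (MAE) for navigation and co-registration, and mean intersection over union (IoU) for Mask R-CNN damage detection. As a rough, illustrative sketch of how such metrics are commonly computed (this is not the authors' code; the function names, toy values, and binary-mask representation are assumptions), a minimal NumPy example:

import numpy as np

def mean_absolute_error(measured, reference):
    # MAE between repeated and reference values, e.g. camera station
    # positions in metres or co-registration offsets in pixels.
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(measured - reference)))

def mean_iou(predicted_masks, reference_masks):
    # Mean intersection over union for paired binary damage masks,
    # e.g. CNN crack predictions versus annotated cracks.
    ious = []
    for pred, truth in zip(predicted_masks, reference_masks):
        pred, truth = pred.astype(bool), truth.astype(bool)
        union = np.logical_or(pred, truth).sum()
        if union == 0:
            continue  # skip pairs where neither mask contains damage
        ious.append(np.logical_and(pred, truth).sum() / union)
    return float(np.mean(ious)) if ious else 0.0

# Hypothetical usage with toy values (illustrative only, not study data):
print(mean_absolute_error([0.12, -0.09, 0.15], [0.0, 0.0, 0.0]))  # positioning MAE (m)
pred = [np.random.rand(64, 64) > 0.5]
ref = [np.random.rand(64, 64) > 0.5]
print(mean_iou(pred, ref))  # mean IoU over the mask pairs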
Keywords
post-hazard, damage detection, machine learning, UAS