Crowdsourcing Experiment and Fully Convolutional Neural Networks for Coastal Remote Sensing of Seagrass and Macroalgae

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2023)

Abstract
Recently, convolutional neural networks and fully convolutional neural networks (FCNs) have been successfully used for monitoring coastal marine ecosystems, in particular vegetation. However, even with recent advances in computational modeling and data acquisition, deep learning models require substantial amounts of good-quality reference data to effectively learn internal representations of input imagery. The classical approach for coastal mapping requires experts to transcribe in situ records and delineate polygons from high-resolution imagery so that FCNs can learn from them. However, labeling by a single individual limits the volume of training data, whereas crowdsourcing labels can increase that volume but may compromise label quality and consistency. In this article, we assess the reliability of crowdsourced labels on a complex multiclass problem domain covering estuarine vegetation and unvegetated sediment. An interobserver variability experiment was conducted to assess the statistical differences in crowdsourced annotations of plant species and sediment. Participants were grouped by discipline and level of expertise, and the statistical differences were evaluated using Cochran's Q-test together with each group's annotation accuracy to identify observation biases. Given the crowdsourced labels, FCNs were trained with majority-vote annotations from each group to check whether observation biases propagated into FCN performance. Two scenarios were examined: first, FCNs trained with transcribed in situ labels were compared directly with FCNs trained on crowdsourced labels from each group; then, transcribed in situ labels were supplemented with crowdsourced labels to investigate the feasibility of training FCNs with crowdsourced labels in coastal mapping applications. We show that annotations sourced from discipline experts (ecologists and geomorphologists) familiar with the study site were more accurate than those from experts with no prior knowledge of the site and from nonexperts, and our results confirm that biases in participant annotation propagated into FCN performance. Furthermore, FCNs trained on the combined dataset of in situ and crowdsourced labels outperformed FCNs trained on the same imagery with in situ labels alone.
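The abstract cites Cochran's Q-test for comparing annotator groups. A minimal sketch of how such a test can be computed on binary agreement data follows, in Python with numpy/scipy; the scoring scheme (1 = annotation matches the reference label) and the toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    """Cochran's Q-test for k related binary samples.

    x: (n_items, k_annotators) array of 0/1 values, where 1 means the
    annotator's label matched the reference class for that item
    (an assumed scoring scheme, not the paper's exact setup).
    Under H0 (no difference between annotators), Q ~ chi^2(k - 1).
    """
    x = np.asarray(x)
    n, k = x.shape
    col = x.sum(axis=0)   # per-annotator totals
    row = x.sum(axis=1)   # per-item totals
    N = x.sum()           # grand total
    Q = (k - 1) * (k * (col ** 2).sum() - N ** 2) / (k * N - (row ** 2).sum())
    return Q, chi2.sf(Q, df=k - 1)

# Toy example: 6 image patches scored for 3 annotator groups (1 = correct).
scores = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
])
Q, p = cochrans_q(scores)
print(f"Q = {Q:.3f}, p = {p:.3f}")
```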
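Majority-vote aggregation of the crowdsourced annotations can be sketched as a per-pixel vote over stacked label maps. The class IDs, array shapes, and tie-breaking rule below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def majority_vote(label_stack):
    """Per-pixel majority vote over annotators.

    label_stack: (n_annotators, H, W) integer class maps.
    Returns an (H, W) map holding the most frequent class per pixel
    (ties resolved in favour of the lowest class ID, since argmax
    returns the first maximum).
    """
    n_classes = label_stack.max() + 1
    H, W = label_stack.shape[1:]
    # Count votes per class, then take the winning class per pixel.
    votes = np.zeros((n_classes, H, W), dtype=np.int32)
    for c in range(n_classes):
        votes[c] = (label_stack == c).sum(axis=0)
    return votes.argmax(axis=0)

# Toy example: 5 annotators labeling a 2x2 patch with 3 assumed classes
# (e.g., 0 = sediment, 1 = seagrass, 2 = macroalgae).
rng = np.random.default_rng(0)
stack = rng.integers(0, 3, size=(5, 2, 2))
print(majority_vote(stack))
```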
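The second scenario, supplementing in situ labels with crowdsourced ones, amounts to training on the concatenation of the two label sets. A minimal PyTorch sketch is given below; the band count, class count, toy network, and random tensors are placeholders, not the authors' FCN or data.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Hypothetical stand-ins for the two label sources (4-band imagery,
# 5 classes); shapes, band count, and class count are illustrative.
insitu = TensorDataset(torch.randn(32, 4, 64, 64),
                       torch.randint(0, 5, (32, 64, 64)))
crowd = TensorDataset(torch.randn(96, 4, 64, 64),
                      torch.randint(0, 5, (96, 64, 64)))

# Supplement the in situ set with crowdsourced majority-vote labels.
loader = DataLoader(ConcatDataset([insitu, crowd]), batch_size=8, shuffle=True)

# Toy fully convolutional head, not the architecture used in the paper.
fcn = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 5, 1))
opt = torch.optim.Adam(fcn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One epoch over the combined dataset.
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(fcn(images), labels)
    loss.backward()
    opt.step()
```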
Keywords
Convolutional neural network (CNN), crowdsourcing, deep learning (DL), multispectral, remote sensing