Application of Artificial Intelligence and Deep Learning for Choroid Segmentation in Myopia
Translational Vision Science & Technology (2022), SCI Zone 3
Taichung Veterans General Hospital
Abstract
Purpose: To investigate the correlation between choroidal thickness and myopia progression using a deep learning method.

Methods: Two data sets, data set A and data set B, comprising 123 optical coherence tomography (OCT) volumes, were collected to establish the model and verify its clinical utility. The proposed mask region-based convolutional neural network (Mask R-CNN) model, trained with pretrained weights from the Common Objects in Context (COCO) database and with manually labeled OCT images from data set A, was used to automatically segment the choroid. To verify its clinical utility, the Mask R-CNN model was tested on data set B, and the choroidal thickness estimated by the model was used to explore its relationship with myopia.

Results: Compared with manual segmentation in data set B, the errors of the automatic choroidal inner and outer boundary segmentation were 6.72 ± 2.12 µm and 13.75 ± 7.57 µm, respectively. The mean Dice coefficient between the regions segmented by the automatic and manual methods was 93.87% ± 2.89%. The mean difference in choroidal thickness over the Early Treatment Diabetic Retinopathy Study (ETDRS) zone between the two methods was 10.52 µm. Additionally, the choroidal thickness estimated by the proposed model was thinner in highly myopic eyes, and axial length was the most significant predictor.

Conclusions: The Mask R-CNN model shows excellent performance in choroidal segmentation and quantification. In addition, the choroid in high myopia is significantly thinner than in non-high myopia.

Translational Relevance: This work lays the foundation for Mask R-CNN models that could aid in the evaluation of more intricate changes occurring in chorioretinal diseases.
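The pipeline described in the abstract can be illustrated with a minimal sketch (not the authors' code): a torchvision Mask R-CNN initialized with COCO-pretrained weights is adapted to a single "choroid" class, and per-column choroidal thickness is derived from the predicted mask. The class count, the axial pixel spacing, and the helper names (build_choroid_model, choroidal_thickness_um, dice) are illustrative assumptions; fine-tuning on the labeled OCT B-scans of data set A would follow the standard torchvision detection training loop and is omitted here.

```python
# Minimal sketch, assuming a torchvision-based Mask R-CNN with one foreground
# class (choroid). Not the authors' implementation.
import numpy as np
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + choroid (assumed labeling scheme)


def build_choroid_model(num_classes: int = NUM_CLASSES):
    # Start from a COCO-pretrained Mask R-CNN, as the abstract describes.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head so it predicts the choroid class.
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    # Replace the mask head accordingly.
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model


def choroidal_thickness_um(mask: np.ndarray, axial_um_per_px: float) -> np.ndarray:
    # Per A-scan (image column) thickness: number of choroid pixels in the
    # column times the axial pixel spacing, which depends on the OCT device.
    return mask.astype(bool).sum(axis=0) * axial_um_per_px


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    # Dice coefficient between automatic and manual choroid masks.
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())


# Example inference on one B-scan (bscan_rgb: 3xHxW float array, values in [0, 1];
# the axial spacing of 3.9 µm/pixel is a hypothetical device-specific value):
# model = build_choroid_model()
# model.eval()
# with torch.no_grad():
#     pred = model([torch.as_tensor(bscan_rgb, dtype=torch.float32)])[0]
# choroid_mask = (pred["masks"][0, 0] > 0.5).numpy()
# thickness_profile = choroidal_thickness_um(choroid_mask, axial_um_per_px=3.9)
```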
Keywords
artificial intelligence, deep learning, choroidal thickness, myopia, mask R-CNN