AnatomyNet: Deep 3D Squeeze-and-excitation U-Nets for fast and fully automated whole-volume anatomical segmentation.

bioRxiv (2018)

Abstract
Purpose: Radiation therapy (RT) is a common treatment for head and neck (HaN) cancer, in which therapists must manually delineate the boundaries of the organs at risk (OARs). Radiation therapy planning is time-consuming because each computed tomography (CT) volume typically consists of hundreds to thousands of slices that must be individually inspected. Automated head and neck anatomical segmentation offers a way to speed up radiation therapy planning and improve its reproducibility. Previous work on anatomical segmentation is primarily based on atlas registration, which can take hours per patient and requires sophisticated atlas creation. In this work, we propose AnatomyNet, an end-to-end, atlas-free three-dimensional squeeze-and-excitation U-Net (3D SE U-Net), for fast and fully automated whole-volume HaN anatomical segmentation.
Methods: Fully automated HaN OAR segmentation poses two main challenges: 1) segmenting small anatomies (e.g., the optic chiasm and optic nerves) that occupy only a few slices, and 2) training a model on inconsistent annotations in which ground truth for some anatomical structures is missing because of differing RT plans. To address these challenges, AnatomyNet uses a single down-sampling layer, trading off GPU memory against feature-representation capacity, and 3D SE residual blocks for effective feature learning. Moreover, we design a hybrid loss function combining the Dice loss and the focal loss: the Dice loss is a class-level distribution loss that depends less on the number of voxels in an anatomy, while the focal loss is designed to handle highly unbalanced segmentation. For missing annotations, we propose a masked loss and a weighted loss for accurate and balanced weight updates when training AnatomyNet.
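The hybrid loss and the masked loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the trade-off weight `lam`, the focusing parameter `gamma`, and the function names are illustrative assumptions, and a real training pipeline would compute these per class on network logits in a deep learning framework.

```python
import numpy as np

def dice_loss(p, g, eps=1e-6):
    # Soft Dice loss for one anatomy: 1 - 2|P∩G| / (|P| + |G|).
    # Class-level, so it depends little on how many voxels the anatomy has.
    inter = np.sum(p * g)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(g) + eps)

def focal_loss(p, g, gamma=2.0, eps=1e-6):
    # Binary focal loss: cross-entropy with easy voxels down-weighted
    # by (1 - p_t)^gamma, which helps with extreme class imbalance.
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(g > 0.5, p, 1.0 - p)  # probability of the true label
    return np.mean(-((1.0 - pt) ** gamma) * np.log(pt))

def hybrid_loss(p, g, lam=1.0):
    # Hybrid loss: Dice (class-level) plus focal (voxel-level); lam is an
    # assumed trade-off weight between the two terms.
    return dice_loss(p, g) + lam * focal_loss(p, g)

def masked_hybrid_loss(preds, gts, present, lam=1.0):
    # Masked loss for missing annotations: only anatomies that actually
    # have ground truth in this scan contribute to the loss (and hence
    # to the gradient); unannotated structures are skipped.
    losses = [hybrid_loss(p, g, lam)
              for p, g, m in zip(preds, gts, present) if m]
    return float(np.mean(losses)) if losses else 0.0
```

With `present` flags per anatomy, a scan whose submandibular glands were never delineated simply contributes no loss for those two classes, so the shared weights are not penalized for predictions that cannot be checked.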
Results: We collected 261 HaN CT images to train AnatomyNet, and used the MICCAI Head and Neck Auto Segmentation Challenge 2015 as the benchmark dataset to evaluate its performance. The objective is to segment nine anatomies: brain stem, chiasm, mandible, optic nerve left, optic nerve right, parotid gland left, parotid gland right, submandibular gland left, and submandibular gland right. Compared with the previous state-of-the-art method for each anatomy from the MICCAI 2015 competition, AnatomyNet increases the Dice similarity coefficient (DSC) by 3.3% on average. AnatomyNet takes only 0.12 seconds on average to segment a whole-volume HaN CT image with average dimensions of 178 x 302 x 225. All data and code will be available at https://github.com/wentaozhu/AnatomyNet-for-anatomical-segmentation.git.
Conclusion: We propose an end-to-end, fast, and fully automated deep convolutional network, AnatomyNet, for accurate whole-volume HaN anatomical segmentation. AnatomyNet outperforms previous state-of-the-art methods on the benchmark dataset, and extensive experiments demonstrate the effectiveness and good generalization ability of its components.
Keywords
Fast and fully automated anatomical segmentation, 3D squeeze-and-excitation U-Net (3D SE U-Net), radiation therapy, head and neck organ segmentation