Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images.

MEDICAL PHYSICS (2020)

Cited 45 | Views 41
Abstract
Purpose: Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). An automatic 3D TRUS prostate segmentation method could provide physicians with a quick and accurate segmentation, supporting an efficient workflow with improved patient throughput and faster access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, to create a generalizable algorithm for needle-based prostate cancer procedures.

Methods: Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropout and the use of transpose convolutions instead of standard upsampling followed by convolution, to reduce overfitting and improve performance, respectively. Manual contours provided the annotations for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since prediction on 2D slices has the potential to lose spatial and structural information, comparisons between our reconstructed 3D segmentations and optimized 3D networks, including 3D V-Net, Dense V-Net, and high-resolution 3D-Net, were performed following an investigation of different loss functions. An extended selection of absolute and signed error metrics was computed, including pixel-map comparisons [Dice similarity coefficient (DSC), recall, and precision], volume percent difference (VPD), mean surface distance (MSD), and Hausdorff distance (HD), to assess 3D segmentation accuracy.

Results: Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. When compared to the best-performing optimized 3D network (a 3D V-Net with a Dice plus cross-entropy loss function), our proposed method showed a significant improvement across nearly all metrics. A computation time of less than 0.7 s per prostate was observed, which is sufficiently short for intraoperative implementation.

Conclusions: Our proposed algorithm provided fast and accurate 3D segmentation across variable 3D TRUS prostate images, enabling a generalizable intraoperative solution for needle-based prostate cancer procedures. This method has the potential to decrease procedure times, supporting the increasing interest in needle-based 3D TRUS approaches.
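The abstract's key architectural change, replacing upsample-then-convolve steps in the U-Net expansion path with transpose convolutions and adding 50% dropout, can be illustrated with a short PyTorch sketch. This is a reconstruction based only on the abstract; the channel widths, number of convolutions per block, and the exact placement of the dropout layer are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class UpBlock(nn.Module):
    """One expansion (decoder) step of a U-Net-style network.

    Uses a learned transpose convolution for upsampling (instead of
    nearest/bilinear upsampling followed by a convolution) and applies
    50% dropout, as described in the abstract. Layout details are assumed.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Transpose convolution doubles the spatial resolution with learned weights.
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.drop = nn.Dropout2d(p=0.5)  # 50% dropout to reduce overfitting (placement assumed)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                   # upsample decoder features
        x = torch.cat([skip, x], dim=1)  # concatenate matching encoder (skip) features
        x = self.drop(x)
        return self.conv(x)
```

With the conventional U-Net channel scheme (in_ch = 2 × out_ch), the concatenated tensor again has in_ch channels, so the block can be chained through the full expansion path.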
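Likewise, the Dice plus cross-entropy loss used by the best-performing 3D V-Net baseline, and the DSC reported in the results, follow standard definitions. Below is a minimal sketch assuming a single-channel (binary foreground/background) segmentation output; it is not code from the paper.

```python
import torch
import torch.nn.functional as F


def dice_plus_ce_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft-Dice loss plus binary cross-entropy for a single-channel logit map.

    `target` is a float tensor of 0s and 1s with the same shape as `logits`.
    """
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    soft_dice = (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return (1.0 - soft_dice) + ce


def dice_similarity_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """DSC between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.bool()
    true = true_mask.bool()
    intersection = (pred & true).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + true.sum().item() + eps)
```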
Keywords
3D ultrasound prostate segmentation, biopsy, brachytherapy, deep learning, prostate cancer