An improved multi-modal joint segmentation and registration model based on Bhattacharyya distance measure

ALEXANDRIA ENGINEERING JOURNAL (2022)

Abstract
Joint segmentation and registration of multi-modality images is a crucial step in image preprocessing. The sensitivity of joint segmentation and registration models to noise is a significant challenge. During the registration of multi-modal images, the similarity measure serves as the standard by which alignment quality is assessed. Accordingly, an improved joint model for registering and segmenting multi-modality images is proposed, utilising the Bhattacharyya distance measure to achieve better noise robustness than the existing model based on the mutual information metric. The proposed model is applied to various noisy medical and synthetic images of multiple modalities. The dataset images used in this study were obtained from the well-known, freely available BRATS 2015 and CHAOS datasets, on which the proposed model produces satisfactory results compared to the existing model. Experimental results show that the proposed model outperforms the existing model in terms of the Bhattacharyya distance measure on noisy images. Statistical analysis and comparison are performed through the relative reduction of the new distance measure, the Dice similarity coefficient, the Jaccard similarity coefficient and the Hausdorff distance. (C) 2022 THE AUTHORS. Published by Elsevier BV on behalf of Faculty of Engineering, Alexandria University.
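To illustrate the similarity measure the abstract refers to, below is a minimal histogram-based sketch of the Bhattacharyya coefficient BC = Σ√(p_i q_i) and the associated distance D_B = −ln(BC) between two intensity distributions. This is a generic NumPy illustration under the assumption of grayscale images rescaled to a common range; the function name and the histogram-based formulation are illustrative and not the authors' exact variational model.

```python
import numpy as np

def bhattacharyya_distance(img_a, img_b, bins=64):
    """Bhattacharyya (BC) distance between the intensity distributions of two images.

    Generic illustration: each image is rescaled to [0, 1], its normalized
    histogram is taken as a discrete density, and D_B = -ln( sum_i sqrt(p_i * q_i) ).
    """
    def norm_hist(img):
        img = img.astype(float)
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # rescale to [0, 1]
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        return h / h.sum()                                         # discrete density

    p, q = norm_hist(img_a), norm_hist(img_b)
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))     # distance is 0 when the distributions coincide
```

A higher Bhattacharyya coefficient (equivalently, a lower distance) indicates closer intensity distributions, which is the quantity a BC-based registration step seeks to optimise.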
Keywords
Image registration, Image segmentation, Linear curvature (LC), Bhattacharyya (BC) distance, Mutual information (MI), Dice similarity coefficient (DSC), Hausdorff distance (HD), Jaccard similarity coefficient (JSC)
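For reference, the evaluation metrics abbreviated above (DSC, JSC, HD) can be sketched as follows. This is a generic NumPy illustration assuming binary segmentation masks and a brute-force Hausdorff computation over foreground coordinates; the helper names are illustrative and not the exact evaluation pipeline used in the paper.

```python
import numpy as np

def dice_coefficient(seg, ref):
    # Dice similarity coefficient (DSC): 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def jaccard_coefficient(seg, ref):
    # Jaccard similarity coefficient (JSC): |A ∩ B| / |A ∪ B|.
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return inter / union

def hausdorff_distance(seg, ref):
    # Hausdorff distance (HD): largest nearest-neighbour distance between the
    # two foreground point sets (brute force, suitable for small masks only).
    a = np.argwhere(seg).astype(float)
    b = np.argwhere(ref).astype(float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```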