Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN.

Physics in Medicine and Biology (2024)

Abstract
Objective: This study developed an unsupervised motion artifact reduction method for MRI images of patients with brain tumors. The proposed design uses multi-parametric, multicenter contrast-enhanced T1-weighted (ceT1W) and T2-FLAIR MRI images.

Approach: The proposed framework included two generators, two discriminators, and two feature extractor networks. Three-fold cross-validation was used to train the model and fine-tune its hyperparameters on 230 brain MRI images with tumors; the model was then tested on in-vivo datasets from 148 patients. An ablation study was performed to evaluate the model's components. Our model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported: normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and consistency of tumor regions, image contrast, and sharpness were rated by three evaluators on Likert scales and compared using ANOVA and Tukey's HSD tests.

Main results: On average, our method outperformed the comparative models in removing heavy motion artifacts, achieving the lowest NMSE (18.34±5.07%) and MS-GMSD (0.07±0.03) at the heavy motion artifact level. Additionally, our method produced motion-free images with the highest SSIM (0.93±0.04), PSNR (30.63±4.96 dB), and VIF (0.45±0.05) values, along with comparable MS-SSIM (0.96±0.31). Similarly, our method outperformed the comparative models in removing in-vivo motion artifacts across distortion levels, except for MS-SSIM and VIF, which were comparable to CycleGAN. Moreover, our method performed consistently across artifact levels. For heavy motion artifacts, Likert scores were 2.82±0.52 for our method, 1.88±0.71 for CycleGAN, and 1.02±0.14 for Pix2pix (p-values < 0.0001), with our method scoring highest. Similar trends were found for the other motion artifact levels.

Significance: Our proposed unsupervised method was shown to reduce motion artifacts in ceT1W brain images within a multi-parametric framework.
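The abstract reports several standard full-reference image-quality metrics. Below is a minimal sketch of how three of them (NMSE, PSNR, and SSIM) could be computed between a motion-corrected image and a motion-free reference, assuming numpy and scikit-image; the function names, normalization, and data-range choices are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of three reported full-reference metrics (NMSE, PSNR, SSIM).
# Assumes numpy and scikit-image; normalization choices are illustrative, not the paper's.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def nmse_percent(reference: np.ndarray, corrected: np.ndarray) -> float:
    """Normalized mean squared error, expressed as a percentage of reference energy."""
    return 100.0 * np.sum((reference - corrected) ** 2) / np.sum(reference ** 2)


def evaluate_pair(reference: np.ndarray, corrected: np.ndarray) -> dict:
    """Compute NMSE (%), PSNR (dB), and SSIM for one image pair."""
    data_range = float(reference.max() - reference.min())
    return {
        "NMSE_%": nmse_percent(reference, corrected),
        "PSNR_dB": peak_signal_noise_ratio(reference, corrected, data_range=data_range),
        "SSIM": structural_similarity(reference, corrected, data_range=data_range),
    }


if __name__ == "__main__":
    # Toy example: random float images standing in for a ceT1W slice and its corrected version.
    rng = np.random.default_rng(0)
    ref = rng.random((256, 256)).astype(np.float32)
    corr = ref + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)
    print(evaluate_pair(ref, corr))
```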