DiFiR-CT: Distance field representation to resolve motion artifacts in computed tomography.

Medical Physics (2023)

Abstract
BACKGROUND: Motion during data acquisition leads to artifacts in computed tomography (CT) reconstructions. In applications such as cardiac imaging, motion is not only unavoidable, but evaluating the motion of the object is itself of clinical interest. Motion artifacts have typically been reduced by building systems with faster gantry rotation or by algorithms that measure and/or estimate the displacement. However, these approaches have had limited success, owing both to physical constraints and to the challenge of estimating non-rigid, temporally varying, and patient-specific motion fields.

PURPOSE: To develop a novel reconstruction method that generates time-resolved, artifact-free images without estimation or explicit modeling of the motion.

METHODS: We describe an analysis-by-synthesis approach that progressively regresses a solution consistent with the acquired sinogram. Our method focuses on the movement of object boundaries. Not only are these boundaries the source of image artifacts, they can also be used to represent both the object and its motion over time without an explicit motion model. We represent the object boundaries with a signed distance function (SDF), which can be efficiently modeled using neural networks. As a result, optimization can be performed under spatial and temporal smoothness constraints without explicit motion estimation.

RESULTS: We illustrate the utility of DiFiR-CT in three imaging scenarios of increasing motion complexity: translation of a small circle, a heart-like change in an ellipse's diameter, and a complex topological deformation. Compared to filtered backprojection, DiFiR-CT provides high-quality reconstructions for all three motions without hyperparameter tuning or changes to the architecture. We also evaluate DiFiR-CT's robustness to noise in the acquired sinogram and find its reconstructions to be accurate across a wide range of noise levels. Lastly, we demonstrate how the approach can be applied to multi-intensity scenes and illustrate the importance of a realistic initial segmentation for initialization. Code and supplemental movies are available at https://kunalmgupta.github.io/projects/DiFiR-CT.html.

CONCLUSIONS: Projection data can be used to accurately estimate a temporally evolving scene, without explicit motion estimation, using a neural implicit representation and an analysis-by-synthesis approach.
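To make the analysis-by-synthesis idea in the METHODS section concrete, the following is a minimal sketch, assuming a PyTorch-style MLP that maps a space-time coordinate (x, y, t) to a signed distance, a soft conversion of the SDF to attenuation, and a parallel-beam forward projection compared against the acquired sinogram. All names (SDFNet, project_view, sinogram_loss) and parameter choices are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """MLP mapping a space-time coordinate (x, y, t) to a signed distance."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyt):
        return self.net(xyt)

def sdf_to_attenuation(sdf, mu=1.0, beta=50.0):
    # Smoothly map the signed distance to attenuation: inside the boundary
    # (sdf < 0) the value approaches mu, outside it approaches 0.
    return mu * torch.sigmoid(-beta * sdf)

def project_view(model, angle, t, n_det=64, n_samples=64):
    """Synthesize one parallel-beam projection at gantry angle `angle`, time `t`."""
    ca, sa = torch.cos(torch.as_tensor(angle)), torch.sin(torch.as_tensor(angle))
    s = torch.linspace(-1, 1, n_det)       # detector coordinates
    r = torch.linspace(-1, 1, n_samples)   # sample positions along each ray
    S, R = torch.meshgrid(s, r, indexing="ij")
    # Rotate (detector, ray) coordinates into the image frame.
    x = S * ca - R * sa
    y = S * sa + R * ca
    xyt = torch.stack([x, y, torch.full_like(x, float(t))], dim=-1).reshape(-1, 3)
    atten = sdf_to_attenuation(model(xyt)).reshape(n_det, n_samples)
    # Riemann-sum approximation of the line integral for each detector bin.
    return atten.sum(dim=1) * (2.0 / n_samples)

def sinogram_loss(model, angles, times, measured):
    """Data-consistency term: synthesized versus acquired projections.

    Each projection is rendered at the time it was acquired, which is how the
    representation captures motion without an explicit motion model.
    """
    loss = 0.0
    for k, (a, t) in enumerate(zip(angles, times)):
        loss = loss + torch.mean((project_view(model, a, t) - measured[k]) ** 2)
    return loss / len(angles)
```

In the full method this data-consistency term would be combined with spatial and temporal smoothness regularizers on the SDF and minimized by gradient descent over the network weights; the details of those regularizers and the optimizer are described in the paper, not here.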