Domain adaptation strategies for 3D reconstruction of the lumbar spine using real fluoroscopy data
CoRR (2024)
Abstract
This study tackles key obstacles in adopting surgical navigation in
orthopedic surgeries, including time, cost, radiation, and workflow integration
challenges. Recently, our work X23D showed an approach for generating 3D
anatomical models of the spine from only a few intraoperative fluoroscopic
images. This negates the need for conventional registration-based surgical
navigation by creating a direct intraoperative 3D reconstruction of the
anatomy. Despite these strides, the practical application of X23D has been
limited by a domain gap between synthetic training data and real intraoperative
images.
In response, we devised a novel data collection protocol for a paired dataset
consisting of synthetic and real fluoroscopic images from the same
perspectives. Utilizing this dataset, we refined our deep learning model via
transfer learning, effectively bridging the domain gap between synthetic and
real X-ray data. A novel style transfer mechanism also allows us to convert
real X-rays to mirror the synthetic domain, enabling our in-silico-trained X23D
model to achieve high accuracy in real-world settings.
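The abstract does not describe how the style transfer mechanism works internally. As a minimal, hypothetical illustration of the underlying idea, classical histogram matching maps a real X-ray's intensity distribution onto that of a synthetic image; the learned mechanism in the paper is presumably far more sophisticated, and all arrays and names below are placeholders:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their distribution matches reference.

    A classical stand-in for learned style transfer: it pulls a "real"
    image toward the intensity statistics of the "synthetic" domain.
    """
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Cumulative intensity distributions of both images, normalized to [0, 1].
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source intensity, pick the reference intensity whose
    # cumulative probability is closest.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)

rng = np.random.default_rng(0)
real = rng.normal(0.3, 0.10, (64, 64))       # darker "real" fluoroscopy stand-in
synthetic = rng.normal(0.6, 0.05, (64, 64))  # brighter "synthetic" stand-in

styled = match_histogram(real, synthetic)
```

After matching, `styled` has the synthetic image's intensity statistics while preserving the spatial structure of the real input, which is the property a domain-bridging style transfer needs before feeding real X-rays to an in-silico-trained model.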
Our results demonstrated that the refined model can rapidly generate accurate
3D reconstructions of the entire lumbar spine from as few as three
intraoperative fluoroscopic shots. It achieved 84% of the
accuracy of our previous synthetic data-based research. Additionally, with a
computational time of only 81.1 ms, our approach provides real-time
capabilities essential for surgery integration.
Through examining ideal imaging setups and view angle dependencies, we have
further confirmed our system's practicality and dependability in clinical
settings. Our research marks a significant step forward in intraoperative 3D
reconstruction, offering enhancements to surgical planning, navigation, and
robotics.