Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, please cite the source.

MInDI-3D: Iterative Deep Learning in 3D for Sparse-view Cone Beam Computed Tomography

Created by
  • Haebom

Authors

Daniel Barco (Centre for Artificial Intelligence), Marc Stadelmann (Centre for Artificial Intelligence), Martin Oswald (Centre for Artificial Intelligence), Ivo Herzig (Institute of Applied Mathematics and Physics), Lukas Lichtensteiger (Institute of Applied Mathematics and Physics), Pascal Paysan (Varian Medical Systems Imaging Lab, Baden, Switzerland), Igor Peterlik (Varian Medical Systems Imaging Lab, Baden, Switzerland), Michal Walczak (Varian Medical Systems Imaging Lab, Baden, Switzerland), Bjoern Menze (Biomedical Image Analysis and Machine Learning, University of Zurich, Zurich, Switzerland), Frank-Peter Schilling (Centre for Artificial Intelligence)

MInDI-3D: CBCT Image Artifact Removal Using a 3D Conditional Diffusion Model

Abstract: MInDI-3D is the first 3D conditional diffusion-based model for real-world sparse-view cone-beam computed tomography (CBCT) artifact removal, aiming to reduce radiation exposure in medical imaging. We extend the 2D "InDI" concept to a 3D volumetric approach, implementing an iterative denoising process that directly enhances CBCT volumes conditioned on sparse-view inputs. We train MInDI-3D robustly on a large virtual CBCT dataset (16,182 scans) generated from chest CT volumes in the public CT-RATE dataset, and perform a comprehensive evaluation covering quantitative metrics, scalability analysis, generalization tests, and a clinical assessment by 11 clinicians. Using only 50 projections, corresponding to an 8-fold reduction in imaging radiation exposure, MInDI-3D achieves a PSNR gain of 12.96 dB over uncorrected scans on the CT-RATE virtual CBCT test set and 6.10 dB on an independent real-world test set. Performance improves with more training data, demonstrating scalability. MInDI-3D matches the performance of a 3D U-Net on real-world scans of 16 cancer patients in both distortion and task-based metrics, and it generalizes well to new CBCT scanner geometries. Clinicians rated the model sufficient for patient positioning across all anatomical regions and judged that it preserves lung tumor boundaries well.
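
For readers curious about the mechanics, the sketch below illustrates the deterministic InDI-style sampling loop that MInDI-3D extends from 2D images to 3D CBCT volumes: starting from the sparse-view reconstruction at t = 1, the volume is repeatedly blended with the network's clean-volume prediction until t = 0. This is a minimal illustration assuming a trained restoration network F(x_t, t); the `model` stand-in, step count, and tensor shapes are hypothetical and not the authors' implementation.

```python
import torch

@torch.no_grad()
def indi_sample(model, y, num_steps=10):
    """Deterministic InDI-style iterative refinement of a degraded 3D volume.

    Starts at x_1 = y (the sparse-view CBCT volume) and steps toward t = 0 with
        x_{t - d} = (d / t) * F(x_t, t) + (1 - d / t) * x_t,
    where F is a network trained to predict the clean volume from (x_t, t).

    model: callable F(x_t, t) -> clean-volume estimate (hypothetical stand-in).
    y:     degraded input volume, shape (B, 1, D, H, W).
    """
    x = y.clone()
    ts = torch.linspace(1.0, 0.0, num_steps + 1)  # t goes 1 -> 0
    for t, t_next in zip(ts[:-1], ts[1:]):
        delta = t - t_next
        pred_clean = model(x, t.expand(x.shape[0]))       # F(x_t, t)
        x = (delta / t) * pred_clean + (1.0 - delta / t) * x
    return x

if __name__ == "__main__":
    # Identity placeholder standing in for the trained 3D network (shape check only).
    dummy_model = lambda x, t: x
    volume = torch.randn(1, 1, 32, 64, 64)  # toy sparse-view CBCT volume
    restored = indi_sample(dummy_model, volume, num_steps=8)
    print(restored.shape)  # torch.Size([1, 1, 32, 64, 64])
```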

Takeaways, Limitations

Takeaways:
The first 3D conditional diffusion-based model for sparse-view CBCT artifact removal.
It reduces imaging radiation exposure by up to 8-fold.
It matches the performance of a 3D U-Net on real patient scans.
Generalizable to new CBCT scanner geometries.
Demonstrated clinical utility (patient positioning and tumor margin preservation).
Limitations:
No limitations are explicitly stated in the paper. (However, limitations may still emerge in further research or in real clinical settings.)