Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Towards Cardiac MRI Foundation Models: Comprehensive Visual-Tabular Representations for Whole-Heart Assessment and Beyond

Created by
  • Haebom

Author

Yundi Zhang, Paul Hager, Che Liu, Suprosanna Shit, Chen Chen, Daniel Rueckert, Jiazhen Pan

Outline

This paper presents ViTa, a model that integrates cardiac magnetic resonance (CMR) imaging with patient-level health factors to enable a comprehensive understanding of cardiac health and personalized interpretation of disease risk. Leveraging data from 42,000 UK Biobank participants, the authors combine 3D+T cine stacks in short- and long-axis views with detailed tabular patient-level factors. This multimodal paradigm supports multiple downstream tasks, including prediction of cardiac phenotypes and physiological features, segmentation, and classification of cardiac and metabolic diseases, within a single unified framework. By learning a shared latent representation that links rich image features with patient context, ViTa aims to provide a patient-specific understanding of cardiac health that goes beyond existing task-specific models.
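The core idea of a shared latent representation linking image features and tabular patient context can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: all names, dimensions, and the linear "encoders" are assumptions standing in for the real networks.

```python
# Hypothetical sketch of shared-latent multimodal alignment (ViTa-style).
# Dimensions and linear encoders are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

D_IMG, D_TAB, D_LATENT = 128, 32, 64  # assumed feature sizes

# Random linear projections standing in for the image and tabular encoders.
W_img = rng.normal(size=(D_IMG, D_LATENT)) / np.sqrt(D_IMG)
W_tab = rng.normal(size=(D_TAB, D_LATENT)) / np.sqrt(D_TAB)

def encode(x, W):
    """Project features into the shared latent space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# A batch of paired image features (e.g. from cine stacks) and tabular factors.
img_feats = rng.normal(size=(8, D_IMG))
tab_feats = rng.normal(size=(8, D_TAB))

z_img = encode(img_feats, W_img)
z_tab = encode(tab_feats, W_tab)

# Cross-modal similarity matrix; contrastive-style training would push the
# diagonal (matched image/tabular pairs) toward 1 and off-diagonals down.
sim = z_img @ z_tab.T
print(sim.shape)  # (8, 8)
```

Once both modalities live in one latent space, task-specific heads (phenotype regression, segmentation, disease classification) can all read from the same representation, which is what allows a single framework to serve multiple subtasks.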

Takeaways, Limitations

Takeaways:
It integrates CMR imaging with various patient-level factors to provide a comprehensive understanding of heart health.
A single framework can perform various clinical tasks, including cardiac phenotype prediction, segmentation, and disease classification.
Patient-specific understanding of heart health can improve clinical utility and scalability.
It enables a more comprehensive and accurate assessment of heart health than existing task-specific models.
Limitations:
Since it was trained on UK Biobank data, its generalization performance to other datasets requires further validation.
The complexity of the model may limit its interpretability.
Training and running the model can require significant computing resources.