Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.

ProtoMedX: Towards Explainable Multi-Modal Prototype Learning for Bone Health Classification

Created by
  • Haebom

Author

Alvaro Lopez Pellicer, Andre Mariucci, Plamen Angelov, Marwan Bukhari, Jemma G. Kerns

Outline

This paper applies artificial intelligence (AI) to bone health assessment, proposing ProtoMedX for the early diagnosis of osteopenia and osteoporosis. ProtoMedX is a multimodal, prototype-based model that combines lumbar-spine DEXA scans with patient records, and its prototype learning design makes the model's decision-making process explainable, which is particularly relevant for clinical use under the EU AI Act. Evaluated on data from 4,160 NHS patients, ProtoMedX outperformed existing methods, reaching 87.58% accuracy with visual data alone and 89.8% with the multimodal approach.
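The abstract does not detail the architecture, but a prototype-based multimodal classifier of this kind is typically built from an image encoder, a tabular encoder, and a set of learnable class prototypes whose distances to the fused embedding drive both the prediction and the explanation. The sketch below is a minimal, assumed illustration of that pattern, not the authors' exact ProtoMedX design; the class name, encoder sizes, and prototype counts are hypothetical.

```python
# Minimal sketch of a multimodal prototype classifier (assumed design,
# not the published ProtoMedX architecture).
import torch
import torch.nn as nn

class MultimodalPrototypeNet(nn.Module):
    def __init__(self, tab_dim=16, embed_dim=128, num_classes=3, prototypes_per_class=5):
        super().__init__()
        # Image branch: small CNN over a single-channel DEXA scan (assumed input format).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Tabular branch: MLP over clinical-record features.
        self.tabular_encoder = nn.Sequential(
            nn.Linear(tab_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        # Learnable prototypes in the fused embedding space, grouped by class.
        self.num_classes = num_classes
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * prototypes_per_class, embed_dim * 2))
        self.register_buffer(
            "prototype_class",
            torch.arange(num_classes).repeat_interleave(prototypes_per_class))

    def forward(self, image, tabular):
        # Fuse the two modalities by concatenating their embeddings.
        z = torch.cat([self.image_encoder(image), self.tabular_encoder(tabular)], dim=1)
        # Squared Euclidean distance from each sample to every prototype.
        dists = torch.cdist(z, self.prototypes) ** 2      # (batch, n_prototypes)
        # Class score = similarity to the nearest prototype of that class.
        sims = -dists
        logits = torch.stack(
            [sims[:, self.prototype_class == c].max(dim=1).values
             for c in range(self.num_classes)], dim=1)
        return logits, dists  # distances support case-based explanations

# Example: 3 classes (e.g., normal / osteopenia / osteoporosis), batch of 2.
model = MultimodalPrototypeNet()
logits, dists = model(torch.randn(2, 1, 224, 224), torch.randn(2, 16))
print(logits.shape, dists.shape)  # torch.Size([2, 3]) torch.Size([2, 15])
```

In a setup like this, case-based explanations come from reporting which learned prototypes a test patient is closest to, which matches the explainability goal described above.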

Takeaways, Limitations

Takeaways:
  • Increasing the reliability of AI in healthcare by developing explainable AI models.
  • Improving accuracy through a multimodal approach combining DEXA scans and patient records.
  • Validating model performance using real NHS patient data.
  • Highlighting the importance of explainable AI in regulatory environments such as the EU AI Act.
Limitations:
  • The abstract does not explicitly discuss limitations.
  • Further research is needed to determine the generalizability of the model.
  • The model's performance should be evaluated across diverse patient populations.
  • More detail is needed on how the explainability is implemented and how it would be applied clinically.