Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please credit the source when sharing.

XSRD-Net: EXplainable Stroke Relapse Detection

Created by
  • Haebom

Author

Christian Gapp, Elias Tappeiner, Martin Welk, Karl Fritscher, Stephanie Mangesius, Constantin Eisenschink, Philipp Deisl, Michael Knoflach, Astrid E. Grams, Elke R. Gizewski, Rainer Schubert

Outline

This paper presents a deep learning-based predictive model that leverages 3D intracranial CTA image data together with information on cardiac disease, age, and sex to detect stroke recurrence early and support appropriate treatment planning. Using stroke patient data from 2010 to 2024, the authors trained unimodal and multimodal deep neural networks for binary recurrence classification (Task 1) and recurrence-free survival (RFS) prediction (Task 2). In Task 1, the model achieved a high area under the curve (AUC) of 0.84 using tabular data alone. In the main task, Task 2 (regression), the multimodal XSRD-net achieved a c-index of 0.68 and an AUC of 0.71, with the image and tabular modalities contributing in a 0.68:0.32 ratio. Interpretability analysis suggested that the association between cardiac disease (tabular data) and the carotid arteries (image data) is an important factor in recurrence detection and RFS prediction. The authors plan to examine this relationship further through additional data collection and model retraining.
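As a rough illustration of how such a multimodal design can be wired up, the sketch below combines a 3D CNN branch for the CTA volume with an MLP branch for the tabular features (cardiac disease, age, sex) and fuses them into a single risk score. All layer sizes, the fusion scheme, and the class/module names are illustrative assumptions, not the published XSRD-net architecture.

```python
# Minimal sketch of an image + tabular fusion network (hypothetical
# configuration, not the authors' XSRD-net).
import torch
import torch.nn as nn

class MultimodalStrokeNet(nn.Module):
    def __init__(self, num_tabular_features=3):
        super().__init__()
        # Image branch: small 3D CNN over intracranial CTA volumes
        self.image_branch = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, 32), nn.ReLU(),
        )
        # Tabular branch: MLP over cardiac disease, age, sex
        self.tabular_branch = nn.Sequential(
            nn.Linear(num_tabular_features, 16), nn.ReLU(),
            nn.Linear(16, 32), nn.ReLU(),
        )
        # Fusion head: concatenated features -> single risk score,
        # usable for binary recurrence classification or RFS regression
        self.head = nn.Linear(32 + 32, 1)

    def forward(self, volume, tabular):
        img_feat = self.image_branch(volume)      # (B, 32)
        tab_feat = self.tabular_branch(tabular)   # (B, 32)
        fused = torch.cat([img_feat, tab_feat], dim=1)
        return self.head(fused)                   # (B, 1) risk score

# Example usage with dummy inputs
model = MultimodalStrokeNet()
volume = torch.randn(2, 1, 32, 64, 64)   # batch of 3D CTA crops
tabular = torch.randn(2, 3)              # cardiac disease, age, sex
risk = model(volume, tabular)
print(risk.shape)  # torch.Size([2, 1])
```

Late fusion of separately encoded modalities, as sketched here, also makes it straightforward to attribute predictions to the image versus tabular branch, which is the kind of modality-level weighting (0.68:0.32) reported in the summary above.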

Takeaways, Limitations

Takeaways:
  • Demonstrates the effectiveness of a multimodal deep learning model for predicting stroke recurrence risk.
  • Suggests that heart disease and carotid artery conditions are closely related to stroke recurrence.
  • Lays the foundation for early diagnosis and appropriate treatment planning.
Limitations:
  • The performance of the multimodal model (c-index 0.68, AUC 0.71) leaves substantial room for improvement.
  • Further data collection and model retraining are needed to improve performance.
  • The interpretability analysis is still preliminary and requires further research.