Abstract
Differentiating multiple sclerosis (MS) from ischemic stroke lesions on MRI remains a clinical challenge, as both lesion types present as similar-appearing white matter hyperintensities. We propose a radiomics-based machine learning framework that integrates multi-level explainable AI (XAI) techniques to support transparent and clinically meaningful lesion classification. Radiomic features are extracted from standardized MRI scans and used to train multiple classifiers, with Random Forest achieving the best performance (accuracy: 91.24%, F1 score: 86.54%). The framework incorporates four complementary explanation layers: global insights via SHAP, local interpretability via LIME, counterfactual reasoning with DiCE, and clinical narrative generation using GPT-based language models. This layered approach provides interpretability at both the dataset and lesion levels, enabling clinicians to understand, trust, and act upon model outputs. A radiologist who reviewed the results found the explanations helpful and confirmed that the overall analysis was clinically meaningful. Our results demonstrate the value of combining radiomics with advanced XAI techniques for the differential diagnosis of brain lesions.
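The layered pipeline the abstract describes maps naturally onto standard open-source tooling. Below is a minimal sketch, not the authors' implementation: the feature names, class labels, and synthetic data are illustrative assumptions, and the GPT narrative layer is only stubbed as a comment because it depends on an external API.

```python
# A minimal sketch of the layered pipeline described above, assuming radiomic
# features have already been extracted into tabular form (e.g., with
# PyRadiomics). Feature names, class labels, and the synthetic data are
# illustrative assumptions, not the authors' actual features or dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

import shap                                          # layer 1: global explanations
from lime.lime_tabular import LimeTabularExplainer   # layer 2: local explanations
import dice_ml                                       # layer 3: counterfactuals

rng = np.random.default_rng(0)
feature_names = [f"radiomic_{i}" for i in range(20)]        # hypothetical names
X = pd.DataFrame(rng.normal(size=(300, 20)), columns=feature_names)
y = rng.integers(0, 2, size=300)  # 0 = ischemic stroke, 1 = MS (illustrative)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classifier layer: Random Forest, the best-performing model in the paper.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Layer 1 -- global insight with SHAP (TreeExplainer suits tree ensembles).
shap_values = shap.TreeExplainer(clf).shap_values(X_test)

# Layer 2 -- local interpretability with LIME for a single lesion.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=feature_names,
    class_names=["stroke", "MS"], mode="classification",
)
local = lime_explainer.explain_instance(
    X_test.values[0], clf.predict_proba, num_features=5
)
print(local.as_list())  # top features driving this one prediction

# Layer 3 -- counterfactual reasoning with DiCE: which minimal feature
# changes would flip the predicted lesion class?
data = dice_ml.Data(
    dataframe=X_train.assign(label=y_train),
    continuous_features=feature_names, outcome_name="label",
)
model = dice_ml.Model(model=clf, backend="sklearn")
cfs = dice_ml.Dice(data, model, method="random").generate_counterfactuals(
    X_test.iloc[[0]], total_CFs=3, desired_class="opposite"
)
cfs.visualize_as_dataframe(show_only_changes=True)

# Layer 4 -- narrative generation would prompt a GPT-style model with the
# SHAP/LIME/DiCE outputs; omitted here because it requires an external API.
```

Note that all four layers operate on the same trained model and feature table, which is what lets the framework present consistent global, local, and counterfactual views of a single prediction.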
| Original language | English |
|---|---|
| Pages (from-to) | 233-240 |
| Number of pages | 8 |
| Journal | CEUR Workshop Proceedings |
| Volume | 4017 |
| Publication status | Published - 2025 |
| Event | Joint xAI 2025 Late-Breaking Work, Demos and Doctoral Consortium (LB/D/DC@xAI 2025), Istanbul, Turkey |
| Duration | 9 Jul 2025 → 11 Jul 2025 |
Keywords
- Brain Lesions
- Clinical Decision Support
- Counterfactual Explanations
- Explainable AI
- GPT Narratives
- Ischemic Stroke
- LIME
- MRI
- Multiple Sclerosis
- Radiomics
- SHAP