Show simple item record

dc.contributor.author: Guillén, Pablo
dc.contributor.author: Frías-Martínez, Enrique
dc.date: 2026
dc.date.accessioned: 2026-02-23T09:48:00Z
dc.date.available: 2026-02-23T09:48:00Z
dc.identifier.citation: Guillén, P. and Frías-Martínez, E. (2026). Enhancing SHAP Explainability for Diagnostic and Prognostic ML Models in Alzheimer’s Disease. Computers, Materials & Continua. https://doi.org/10.32604/cmc.2026.076400
dc.identifier.issn: 1546-2218
dc.identifier.issn: 1546-2226
dc.identifier.uri: https://reunir.unir.net/handle/123456789/19041
dc.description.abstract: Alzheimer’s disease (AD) diagnosis and prognosis increasingly rely on machine learning (ML) models. Although these models provide good results, clinical adoption is limited by the need for technical expertise and the lack of trustworthy and consistent model explanations. SHAP (SHapley Additive exPlanations) is commonly used to interpret AD models, but existing studies tend to focus on explanations for isolated tasks, providing little evidence about their robustness across disease stages, model architectures, or prediction objectives. This paper proposes a multi-level explainability framework that measures the coherence, stability and consistency of explanations by integrating: (1) within-model coherence metrics between feature importance and SHAP, (2) SHAP stability across AD boundaries, and (3) SHAP cross-task consistency between diagnosis and prognosis. Using AutoML to optimize classifiers on the NACC dataset, we trained four diagnostic and four prognostic models covering the standard AD progression stages: normal control (NC), mild cognitive impairment (MCI) and AD. For each model, we generated SHAP and feature importance (FI) plots. Stability was then evaluated using correlation metrics (Spearman, Kendall), top-k feature overlap (Jaccard@10/20), SHAP sign consistency, and domain-level contribution ratios. Results show that cognitive and functional markers (e.g., MEMORY, JUDGMENT, ORIENT, PAYATTN) dominate SHAP explanations in both diagnosis and prognosis. SHAP-SHAP consistency between diagnostic and prognostic models was high across all classifiers (ρ = 0.61–0.94), with 100% sign stability and minimal shifts in explanatory magnitude (mean Δ|SHAP| < 0.03). Domain-level contributions also remained stable, with only minimal increases in genetic features for prognosis. These results demonstrate that SHAP explanations can be quantitatively validated for robustness and transferability, providing clinicians with more reliable interpretations of ML predictions. The proposed framework provides a reproducible methodology for evaluating explainability stability and coherence, supporting the deployment of trustworthy ML systems in AD clinical settings.
dc.language.iso: en_US
dc.publisher: Computers, Materials & Continua
dc.relation.uri: https://www.techscience.com/cmc/online/detail/25938
dc.rights: openAccess
dc.subject: Alzheimer’s disease (AD)
dc.subject: automated machine learning (AutoML)
dc.subject: PyCaret
dc.subject: SHAP
dc.title: Enhancing SHAP Explainability for Diagnostic and Prognostic ML Models in Alzheimer’s Disease
dc.type: article
reunir.tag: ~OPU
dc.identifier.doi: https://doi.org/10.32604/cmc.2026.076400
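
The abstract defines the framework's cross-task consistency level through concrete metrics: Spearman and Kendall rank correlation, top-k Jaccard overlap, SHAP sign consistency, and mean Δ|SHAP|. The sketch below shows one way such metrics could be computed from per-feature SHAP summaries of a diagnostic and a prognostic model. It is a minimal illustration, not the paper's actual code: the function name, the input layout (one signed mean SHAP value per shared feature), and the synthetic example data are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def cross_task_consistency(shap_diag, shap_prog, feature_names, k=10):
    """Compare per-feature SHAP attributions from a diagnostic and a
    prognostic model. Inputs are 1-D arrays of signed mean SHAP values,
    one entry per shared feature (hypothetical layout)."""
    imp_d, imp_p = np.abs(shap_diag), np.abs(shap_prog)

    # Rank correlation between the two global importance orderings
    rho, _ = spearmanr(imp_d, imp_p)
    tau, _ = kendalltau(imp_d, imp_p)

    # Jaccard@k: overlap between each model's k most important features
    top_d = set(np.argsort(imp_d)[-k:])
    top_p = set(np.argsort(imp_p)[-k:])
    jaccard = len(top_d & top_p) / len(top_d | top_p)

    # Sign consistency: fraction of features whose attribution keeps its
    # direction (risk-increasing vs. risk-decreasing) across the two tasks
    sign_stability = float(np.mean(np.sign(shap_diag) == np.sign(shap_prog)))

    # Mean shift in explanatory magnitude between tasks
    mean_delta = float(np.mean(np.abs(imp_d - imp_p)))

    return {
        "spearman_rho": rho,
        "kendall_tau": tau,
        f"jaccard@{k}": jaccard,
        "sign_stability": sign_stability,
        "mean_delta_abs_shap": mean_delta,
        "shared_top_features": sorted(feature_names[i] for i in top_d & top_p),
    }

# Hypothetical usage with synthetic data standing in for real SHAP summaries
rng = np.random.default_rng(0)
names = [f"feat_{i}" for i in range(40)]
diag = rng.normal(size=40)
prog = diag + rng.normal(scale=0.1, size=40)  # nearly consistent second task
print(cross_task_consistency(diag, prog, names, k=10))
```

On explanations as consistent as those the abstract reports (ρ = 0.61–0.94, 100% sign stability, mean Δ|SHAP| < 0.03), a function of this shape would return values near the top of each metric's range.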


