TY - GEN
T1 - CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation
AU - Hryniewska-Guzik, Weronika
AU - Longo, Luca
AU - Biecek, Przemysław
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
N2 - Explainable Artificial Intelligence has gained significant attention due to the widespread use of complex deep learning models in high-stakes domains such as medicine, finance, and autonomous cars. However, different explanations often present different aspects of the model’s behavior. In this research manuscript, we explore the potential of ensembling explanations generated by deep classification models using a convolutional model. Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model’s behavior, leading to the possibility of evaluating the representation learned by the model. With our method, we can uncover problems of under-representation of images in a certain class. Moreover, we discuss other side benefits, such as feature reduction: replacing the original image with its explanations removes some sensitive information. Through the use of carefully selected evaluation metrics from the Quantus library, we demonstrate the method’s superior performance in terms of Localisation and Faithfulness compared to individual explanations.
AB - Explainable Artificial Intelligence has gained significant attention due to the widespread use of complex deep learning models in high-stakes domains such as medicine, finance, and autonomous cars. However, different explanations often present different aspects of the model’s behavior. In this research manuscript, we explore the potential of ensembling explanations generated by deep classification models using a convolutional model. Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model’s behavior, leading to the possibility of evaluating the representation learned by the model. With our method, we can uncover problems of under-representation of images in a certain class. Moreover, we discuss other side benefits, such as feature reduction: replacing the original image with its explanations removes some sensitive information. Through the use of carefully selected evaluation metrics from the Quantus library, we demonstrate the method’s superior performance in terms of Localisation and Faithfulness compared to individual explanations.
KW - Convolutional Neural Network
KW - data evaluation
KW - ensemble
KW - Explainable Artificial Intelligence
KW - model evaluation
KW - representation learning
UR - http://www.scopus.com/inward/record.url?scp=85200664317&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-63797-1_18
DO - 10.1007/978-3-031-63797-1_18
M3 - Conference contribution
AN - SCOPUS:85200664317
SN - 9783031637964
T3 - Communications in Computer and Information Science
SP - 346
EP - 368
BT - Explainable Artificial Intelligence - Second World Conference, xAI 2024, Proceedings
A2 - Longo, Luca
A2 - Lapuschkin, Sebastian
A2 - Seifert, Christin
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd World Conference on Explainable Artificial Intelligence, xAI 2024
Y2 - 17 July 2024 through 19 July 2024
ER -