TY - GEN
T1 - Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification
AU - Tapia, Carlos Gómez
AU - Bozic, Bojan
AU - Longo, Luca
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Electroencephalography (EEG) data has emerged as a promising modality for biometric applications, offering unique and secure personal identification and authentication methods. This research comprehensively compared EEG data pre-processing techniques, focusing on biometric applications. In tandem with this, the study illuminates the pivotal role of Explainable Artificial Intelligence (XAI) in enhancing the transparency and interpretability of machine learning models. Notably, integrating XAI methodologies contributes significantly to the evolution of more precise, reliable, and ethically sound machine learning systems. An outstanding test accuracy exceeding 99% was observed within the biometric system, corroborating the Graph Neural Network (GNN) model’s ability to distinguish between individuals. However, high accuracy does not unequivocally signify that models have extracted meaningful features from the EEG data. Despite impressive test accuracy, a fundamental need remains for an in-depth comprehension of the models. Attributions offer initial insights into the decision-making process. Still, they did not allow us to determine why specific channels are more contributory than others, nor whether the models have discerned genuine discrepancies in cognitive processing. Nevertheless, deploying explainability techniques has amplified system-wide interpretability and revealed that models learned to identify noise patterns to distinguish between individuals. Applying XAI techniques and fostering interdisciplinary partnerships that blend domain expertise from neuroscience and machine learning is necessary to interpret attributions further and illuminate the models’ decision-making processes.
AB - Electroencephalography (EEG) data has emerged as a promising modality for biometric applications, offering unique and secure personal identification and authentication methods. This research comprehensively compared EEG data pre-processing techniques, focusing on biometric applications. In tandem with this, the study illuminates the pivotal role of Explainable Artificial Intelligence (XAI) in enhancing the transparency and interpretability of machine learning models. Notably, integrating XAI methodologies contributes significantly to the evolution of more precise, reliable, and ethically sound machine learning systems. An outstanding test accuracy exceeding 99% was observed within the biometric system, corroborating the Graph Neural Network (GNN) model’s ability to distinguish between individuals. However, high accuracy does not unequivocally signify that models have extracted meaningful features from the EEG data. Despite impressive test accuracy, a fundamental need remains for an in-depth comprehension of the models. Attributions offer initial insights into the decision-making process. Still, they did not allow us to determine why specific channels are more contributory than others, nor whether the models have discerned genuine discrepancies in cognitive processing. Nevertheless, deploying explainability techniques has amplified system-wide interpretability and revealed that models learned to identify noise patterns to distinguish between individuals. Applying XAI techniques and fostering interdisciplinary partnerships that blend domain expertise from neuroscience and machine learning is necessary to interpret attributions further and illuminate the models’ decision-making processes.
KW - Biometrics
KW - Deep Learning
KW - Electroencephalography
KW - Graph-Neural Network
KW - Signal processing
KW - Attribution XAI methods
KW - eXplainable Artificial Intelligence
KW - Signal-to-noise ratio
UR - http://www.scopus.com/inward/record.url?scp=85175971911&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-44070-0_7
DO - 10.1007/978-3-031-44070-0_7
M3 - Conference contribution
AN - SCOPUS:85175971911
SN - 9783031440694
T3 - Communications in Computer and Information Science
SP - 131
EP - 152
BT - Explainable Artificial Intelligence - 1st World Conference, xAI 2023, Proceedings
A2 - Longo, Luca
PB - Springer Science and Business Media Deutschland GmbH
T2 - 1st World Conference on eXplainable Artificial Intelligence, xAI 2023
Y2 - 26 July 2023 through 28 July 2023
ER -