Abstract
Learning essential features and forming simple representations of electroencephalography (EEG) signals are difficult problems. Variational autoencoders (VAEs) can learn the salient features of EEG data, but explainability requires disclosing how the model reaches its decisions. The key contribution of this research is a pipeline that combines known components to produce meaningful visualisations, helping us understand which component of the latent space is responsible for capturing which region of brain activation in EEG topographic maps. The results reveal that each component of the latent space contributes to capturing at least two generating factors in the topographic maps. The pipeline can be used to produce EEG topographic maps at any scale and, furthermore, helps us understand which component of the latent space is responsible for activating each portion of the brain.
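The record itself contains no code; as a rough illustration only, the sketch below shows a convolutional VAE together with a latent-traversal routine of the kind the abstract describes, where one latent component is swept while the others are held fixed to visualise which part of the reconstructed topographic map it controls. All specifics here (PyTorch, single-channel 32×32 input maps, an 8-dimensional latent space, the layer sizes, and the names `ConvVAE` and `traverse_latent`) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: convolutional VAE + latent traversal for
# EEG spectral topographic maps. Input size, latent dimension and
# architecture are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),                               # -> 64*8*8 = 4096
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterisation trick
        return self.decoder(self.fc_dec(z)), mu, logvar

def traverse_latent(model, x, dim, values):
    """Decode maps obtained by sweeping one latent component,
    holding the remaining components at their encoded means."""
    with torch.no_grad():
        mu = model.fc_mu(model.encoder(x))
        maps = []
        for v in values:
            z = mu.clone()
            z[:, dim] = v
            maps.append(model.decoder(model.fc_dec(z)))
    return maps

# Usage: sweep latent component 0 across three values for one map.
model = ConvVAE()
x = torch.rand(1, 1, 32, 32)  # placeholder topographic map
maps = traverse_latent(model, x, dim=0, values=[-3.0, 0.0, 3.0])
```

Plotting the decoded `maps` side by side is one way to inspect which region of brain activation a given latent component captures, in the spirit of the visualisations the abstract refers to.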
Original language | English |
---|---|
Pages (from-to) | 65-70 |
Number of pages | 6 |
Journal | CEUR Workshop Proceedings |
Volume | 3554 |
Publication status | Published - 2023 |
Event | Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium, xAI-2023: LB-D-DC - Lisbon, Portugal, 26 Jul 2023 → 28 Jul 2023 |
Keywords
- Convolutional variational autoencoder
- Electroencephalography
- Deep learning
- Latent space interpretation
- Spectral topographic maps