Latent space interpretation and visualisation for understanding the decisions of convolutional variational autoencoders trained with EEG topographic maps

Taufique Ahmed, Luca Longo

Research output: Contribution to journal · Conference article · Peer-reviewed

Abstract

Learning essential features of electroencephalography (EEG) signals and forming simple representations of them are difficult problems. Variational autoencoders (VAEs) can be used to learn the salient features of EEG data, but explainability requires disclosing how the model makes its decisions. The key contribution of this research is a pipeline that combines known components to produce meaningful visualisations, helping us understand which component of the latent space is responsible for capturing which region of brain activation in EEG topographic maps. The results reveal that each component in the latent space contributes to capturing at least two generating factors in the topographic maps. The pipeline can be used to produce EEG topographic maps at any scale. Furthermore, it assists us in understanding which component of the latent space is responsible for activating each portion of the brain.
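To make the latent-traversal idea concrete, below is a minimal sketch in PyTorch of a convolutional VAE together with a routine that decodes a latent code while sweeping one component over a range. The architecture, latent dimensionality, map size (32×32 single-channel), and function names are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of latent traversal for a convolutional VAE.
    # All sizes and names here are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class ConvVAE(nn.Module):
        def __init__(self, latent_dim: int = 8):
            super().__init__()
            self.latent_dim = latent_dim
            # Encoder: 1x32x32 topographic map -> flattened features
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16x16
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x8x8
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(32 * 8 * 8, latent_dim)
            self.fc_logvar = nn.Linear(32 * 8 * 8, latent_dim)
            # Decoder: latent vector -> reconstructed topographic map
            self.fc_dec = nn.Linear(latent_dim, 32 * 8 * 8)
            self.decoder = nn.Sequential(
                nn.Unflatten(1, (32, 8, 8)),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def encode(self, x):
            h = self.encoder(x)
            return self.fc_mu(h), self.fc_logvar(h)

        def decode(self, z):
            return self.decoder(self.fc_dec(z))

    def traverse_latent(model: ConvVAE, x: torch.Tensor, dim: int,
                        values=(-3.0, -1.5, 0.0, 1.5, 3.0)):
        """Decode copies of x's latent code with one component swept over a
        range, to visualise which scalp regions that component controls."""
        model.eval()
        with torch.no_grad():
            mu, _ = model.encode(x)           # posterior mean as the base code
            maps = []
            for v in values:
                z = mu.clone()
                z[:, dim] = v                 # perturb only the chosen component
                maps.append(model.decode(z))  # reconstructed topographic map
        return maps

Comparing the maps returned for each swept value of a single latent component shows which scalp regions that component modulates, which is the kind of visualisation the pipeline described above is built around.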

Original language: English
Pages (from-to): 65-70
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 3554
Publication status: Published - 2023
Event: Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium (xAI-2023: LB-D-DC) - Lisbon, Portugal
Duration: 26 Jul 2023 - 28 Jul 2023

Keywords

  • Convolutional variational autoencoder
  • Electroencephalography
  • Deep learning
  • Latent space interpretation
  • Spectral topographic maps
