
Visual Explanations of Deep Convolutional Neural Network for eye blinks detection in EEG-based BCI applications

Mammone N.; Ieracitano C.; Campolo M.; Morabito F. C.
2022-01-01

Abstract

In this study, a Deep Learning (DL)-based Brain-Computer Interface (BCI) system that automatically detects and decodes voluntary eye blinks from electroencephalographic (EEG) signals is proposed, with the goal of controlling, in principle, an external device by means of ocular movements. To this end, a Convolutional Neural Network (CNN) is developed to classify EEG recordings into three categories: natural (or involuntary) blinks, forced (or voluntary) blinks, and baseline (no blinks). The proposed system achieved a high average classification performance, with an accuracy rate of up to 99.4% +/- 1.3%. However, the core of the present study was to investigate the explainability and interpretability of the proposed CNN, with the ultimate aim of exploring which segments of the EEG signal are the most relevant in the voluntary/involuntary blink discrimination process. To this end, explainable Artificial Intelligence (xAI) techniques were applied; specifically, the Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME) algorithms were used. xAI allowed us to visually identify the most relevant EEG regions, especially for voluntary and involuntary blink detection. Indeed, limited to the analyzed dataset, for natural blinks the discriminating region was the interval ranging from the instant the eye closed to the instants following its reopening (and vice versa for voluntary blinks). The baseline (no blink), on the other hand, was characterized by low activation throughout the EEG segment.
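The record does not include code, but the Grad-CAM step described above can be sketched briefly. The snippet below is a minimal NumPy illustration of the Grad-CAM computation for a 1D (time-series) CNN, not the authors' implementation: it assumes hypothetical feature maps from the last convolutional layer and the gradients of the target class score with respect to them, both of shape (channels, time).

```python
import numpy as np

def grad_cam_1d(feature_maps, gradients):
    """Grad-CAM relevance map for a 1D (time-series) CNN.

    feature_maps: (K, T) activations of the last conv layer.
    gradients:    (K, T) gradients of the target class score
                  w.r.t. those activations.
    Returns a length-T relevance map, normalised to [0, 1].
    """
    # Channel importance weights: global-average-pool the gradients over time.
    alphas = gradients.mean(axis=1)                               # shape (K,)
    # Weighted sum of the feature maps, then ReLU to keep only
    # features with a positive influence on the target class.
    cam = np.maximum((alphas[:, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()  # rescale to [0, 1] for visualisation
    return cam

# Toy example with synthetic data: 4 feature channels, 16 time steps.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 16))     # hypothetical conv activations
dYdA = rng.standard_normal((4, 16))  # hypothetical class-score gradients
cam = grad_cam_1d(A, dYdA)           # per-time-step relevance in [0, 1]
```

In a real EEG pipeline the activations and gradients would come from the trained CNN (e.g. via automatic differentiation), and the resulting map would be overlaid on the EEG segment to highlight the intervals around eye closure and reopening, as reported in the abstract.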
2022
978-1-7281-8671-9
Brain Computer Interface
Convolutional Neural Network
Electroencephalography
explainable Artificial Intelligence
Eye blink
Grad-CAM
LIME
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12318/137388
Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science (ISI): 1