In this study, a Deep Learning (DL) based Brain-Computer Interface (BCI) system that automatically detects and decodes voluntary eye blinks from electroencephalographic (EEG) signals is proposed for controlling, in principle, an external device by means of ocular movements. To this end, a Convolutional Neural Network (CNN) is developed to classify EEG recordings into three categories: natural (involuntary) blinks, forced (voluntary) blinks, and baseline (no blinks). The proposed system achieved a high average classification accuracy of 99.4% +/- 1.3%. The core of the present study, however, was to investigate the explainability and interpretability of the proposed CNN, with the ultimate aim of exploring which segments of the EEG signal are the most relevant in discriminating voluntary from involuntary blinks. To this end, explainable Artificial Intelligence (xAI) techniques were applied; specifically, the Gradient-weighted Class Activation Mapping (Grad-CAM) and the Local Interpretable Model-Agnostic Explanations (LIME) algorithms were used. xAI allowed us to visually identify the EEG regions most relevant to voluntary and involuntary blink detection. Indeed, limited to the analyzed dataset, for natural blinks the discriminating region was the interval ranging from the instant the eye closed to the instants following its reopening (and vice versa for voluntary blinks). The baseline (no blink) class, on the other hand, was characterized by a low activation level throughout the EEG segment.
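The Grad-CAM relevance maps mentioned above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it only shows the core Grad-CAM computation adapted to 1D (time-series) feature maps, assuming the activations of the last convolutional layer and the gradients of the target class score with respect to them have already been obtained from the CNN (the function name `grad_cam_1d` and the toy shapes are illustrative).

```python
import numpy as np

def grad_cam_1d(feature_maps, gradients):
    """Grad-CAM relevance for 1D (time-series) feature maps.

    feature_maps: (K, T) activations A^k of the last conv layer
    gradients:    (K, T) gradients dy_c / dA^k for the target class c
    returns:      (T,) relevance map over the time axis, scaled to [0, 1]
    """
    # alpha_k: global-average-pool the gradients over time (Grad-CAM weights)
    alphas = gradients.mean(axis=1)               # shape (K,)
    # weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum(alphas @ feature_maps, 0.0)  # shape (T,)
    # normalise for visualisation (skip if the map is identically zero)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# toy example: 4 feature maps over 8 time steps (random stand-ins for CNN outputs)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
dY = rng.standard_normal((4, 8))
cam = grad_cam_1d(A, dY)
print(cam.shape)  # (8,)
```

Overlaying such a map on the EEG segment is what makes it possible to point at the eye-closing or eye-reopening interval as the discriminating region, as reported in the study.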