
Multimodal Feedback in Assisting a Wearable Brain-Computer Interface Based on Motor Imagery / Arpaia, Pasquale; Coyle, Damien; Donnarumma, Francesco; Esposito, Antonio; Natalizio, Angela; Parvis, Marco; Pesola, Marisa; Vallefuoco, Ersilia. - ELECTRONIC. - (2022). (Paper presented at the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), held in Rome, Italy, 26-28 October 2022) [10.1109/MetroXRAINE54828.2022.9967501].

Multimodal Feedback in Assisting a Wearable Brain-Computer Interface Based on Motor Imagery

Natalizio, Angela; Parvis, Marco
2022

Abstract

Multimodal sensory feedback was exploited in the present study to improve the detection of neurological phenomena associated with motor imagery. To this aim, visual and haptic feedback were delivered simultaneously to the user of a brain-computer interface. The motor imagery-based brain-computer interface was built using a wearable, portable electroencephalograph with only eight dry electrodes, a haptic suit, and a purposely implemented virtual reality application. Preliminary experiments were carried out with six subjects participating in five sessions on different days. The subjects were randomly divided into a “control group” and a “neurofeedback group”. The former performed pure motor imagery without receiving any feedback, while the latter received multimodal feedback in response to their imaginative act. Cross-validation results showed that at most 61% classification accuracy was achieved in pure motor imagery. By contrast, subjects in the “neurofeedback group” achieved up to 82% mean accuracy, with a peak of 91% in one session. However, no improvement in pure motor imagery was observed across sessions, whether subjects practiced pure motor imagery or trained with feedback.
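The abstract reports classification accuracies estimated by cross-validation. As a rough illustration only (the paper's actual processing pipeline is not described on this page), the following Python sketch shows how such an estimate is commonly obtained for a two-class motor-imagery problem: synthetic epochs stand in for eight-channel EEG, log-variance band-power features are extracted per channel, and a linear discriminant classifier is scored with stratified k-fold cross-validation. All data, labels, and parameters below are hypothetical.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for band-pass-filtered EEG epochs:
# 80 trials x 8 dry electrodes x 512 samples (~2 s at 256 Hz).
n_trials, n_channels, n_samples = 80, 8, 512
X_epochs = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)  # hypothetical labels: 0 = rest, 1 = motor imagery

# Inject a weak class-dependent power difference on two "sensorimotor" channels
# so the classifier has something to find.
X_epochs[y == 1, 2:4, :] *= 1.3

# Log-variance per channel: a simple proxy for mu/beta band power.
X = np.log(X_epochs.var(axis=2))

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")

Log-variance features are only a stand-in for the band-power modulations that motor imagery produces; a real pipeline would band-pass filter the EEG and typically apply spatial filtering before classification.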
Files in this item:

2022 MetroXRAINE - Multimodal_Feedback_in_Assisting_a_Wearable_Brain-Computer_Interface_Based_on_Motor_Imagery.pdf
Availability: not available
Description: editorial version
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 823.71 kB
Format: Adobe PDF

MetroXRAINE__full_paper.pdf
Availability: open access
Description: accepted manuscript
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 463.98 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2973871