De Pace, Francesco; Manuri, Federico; Bosco, Matteo; Sanna, Andrea; Kaufmann, Hannes. "Supporting Human–Robot Interaction by Projected Augmented Reality and a Brain Interface." In: IEEE Transactions on Human-Machine Systems (2024), pp. 1-10. ISSN 2168-2291. DOI: 10.1109/thms.2024.3414208.

Supporting Human–Robot Interaction by Projected Augmented Reality and a Brain Interface

De Pace, Francesco; Manuri, Federico; Bosco, Matteo; Sanna, Andrea; Kaufmann, Hannes
2024

Abstract

This article presents a brain-computer interface (BCI) coupled with an augmented reality (AR) system to support human-robot interaction when controlling a robotic arm for pick-and-place tasks. BCIs can process steady-state visual evoked potentials (SSVEPs), brain signals elicited by flickering visual stimuli. Such stimuli can be conveyed to the user through AR systems, expanding the range of possible applications. The proposed approach leverages the NextMind BCI to let users select objects within the reach of the robotic arm: a visual anchor associated with each object in the scene is displayed via projected AR, and the NextMind device detects when the user focuses their gaze on one of the anchors, triggering the robotic arm's pick-up action. The system has been designed around the needs and limitations of mobility-impaired people, to support them in controlling a robotic arm for pick-and-place tasks. Two different approaches for positioning the visual anchors are proposed and analyzed. User tests show that both approaches are highly appreciated, and the system's performance is robust, allowing users to select objects easily, quickly, and reliably.
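To make the interaction flow described above concrete, the following is a minimal Python sketch of the selection-to-action loop: each graspable object is paired with a projected visual anchor, the BCI reports which anchor the user is focusing on, and that detection triggers a pick command. All names here (Anchor, FocusDetector, RobotArm, selection_loop) are hypothetical stand-ins for illustration only; the actual NextMind SDK targets the Unity/C# environment, and the SSVEP decoding is simulated rather than reproduced from the paper.

# Illustrative sketch only. FocusDetector simulates the BCI's SSVEP-based
# focus detection; a real system would decode EEG responses to the
# flickering projected anchors instead of picking one at random.

from dataclasses import dataclass
import random
import time


@dataclass
class Anchor:
    """A projected visual anchor associated with one graspable object."""
    object_id: str
    x: float  # projected position on the work surface (meters, hypothetical frame)
    y: float


class FocusDetector:
    """Stand-in for the BCI device: reports the anchor the user focuses on."""

    def poll(self, anchors):
        time.sleep(0.5)               # pretend to accumulate EEG evidence
        return random.choice(anchors)  # simulated confident detection


class RobotArm:
    """Stand-in for the robotic-arm driver."""

    def pick(self, anchor: Anchor):
        print(f"Picking '{anchor.object_id}' at ({anchor.x:.2f}, {anchor.y:.2f})")


def selection_loop(anchors, detector, arm, rounds=3):
    """Core interaction loop: a detected focus on an anchor triggers a pick."""
    for _ in range(rounds):
        target = detector.poll(anchors)
        arm.pick(target)


if __name__ == "__main__":
    scene = [Anchor("cup", 0.30, 0.10), Anchor("pen", 0.45, -0.05)]
    selection_loop(scene, FocusDetector(), RobotArm())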
Files in this record:

File: 10581874.pdf
Access: Open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Creative Commons
Size: 2 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2990384