Comparing technologies for conveying emotions through realistic avatars in virtual reality-based metaverse experiences / Visconti, Alessandro; Calandra, Davide; Lamberti, Fabrizio. - In: COMPUTER ANIMATION AND VIRTUAL WORLDS. - ISSN 1546-4261. - Print. - 34:3-4 (2023). [DOI: 10.1002/cav.2188]

Comparing technologies for conveying emotions through realistic avatars in virtual reality-based metaverse experiences

Visconti, Alessandro; Calandra, Davide; Lamberti, Fabrizio
2023

Abstract

With the development of metaverse(s), industry and academia are searching for the best ways to represent users' avatars in shared Virtual Environments (VEs), where real-time communication between users is required. The expressiveness of avatars is crucial for transmitting emotions, which are key to social presence and user experience and are conveyed via verbal and non-verbal facial and body signals. In this paper, two real-time modalities for conveying expressions in Virtual Reality (VR) via realistic, full-body avatars are compared by means of a user study. The first modality uses dedicated hardware (i.e., eye and facial trackers) to map the user's facial expressions and eye movements onto the avatar model. The second modality relies on an algorithm that, starting from an audio clip, approximates the facial motion by generating plausible lip and eye movements. For both modalities, participants were asked to observe the avatar of an actor performing six scenes, each involving one of the six basic emotions. The evaluation focused mainly on social presence and emotion conveyance. Results showed a clear superiority of facial tracking over lip sync in conveying sadness and disgust; the advantage was less evident for happiness and fear, and no differences were observed for anger and surprise.
Files in this record:

manuscript.pdf
Open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Creative Commons
Size: 5.27 MB
Format: Adobe PDF

Computer Animation Virtual - 2023 - Visconti - Comparing technologies for conveying emotions through realistic avatars in.pdf
Open access
Type: 2a. Post-print editorial version / Version of Record
License: Creative Commons
Size: 1.55 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2977754