The quest for believability: exploring FACS adaptations for emotion facial expressions in virtual humans / Calzolari, Stefano; Strada, Francesco; Bottino, Andrea. - ELECTRONIC. - (In press). (Paper presented at the IEEE CTSoc Gaming, Entertainment and Media conference, held in Turin, 5-7 June 2024).

The quest for believability: exploring FACS adaptations for emotion facial expressions in virtual humans

Stefano Calzolari; Francesco Strada; Andrea Bottino
In press

Abstract

In interactive computer graphics, the Facial Action Coding System (FACS) has been adapted to enhance the emotional expressiveness of Virtual Humans (VHs) by associating Action Units (AUs) with corresponding facial blendshapes. In this way, animators can (theoretically) recreate any human emotion on a VH's face with precision and flexibility. However, conveying realistic and believable emotional expressions with this approach presents several challenges. In particular, given a set of AUs representing a particular emotion, it is not straightforward to define the blendshape weights that render the same realistic and believable emotion on all VHs, as even small differences in weight values can drastically change the perceived emotion. This complexity raises several critical questions: is there, for each emotion, a universal set of blendshape weights that effectively conveys that emotion across all VHs? How can this set be found? If such a universal set proves elusive, can optimal combinations be identified for subgroups sharing facial characteristics, such as men and women? Answering these questions is critical to understanding the general applicability of FACS-based facial emotion coding, which would allow designers and animators to easily develop VHs that interact with users in a way that is both emotionally rich and authentic. This paper explores these issues through a preliminary investigation aimed at defining realistic representations of happiness.
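To make the AU-to-blendshape idea concrete, the minimal sketch below shows the standard linear blendshape model, in which an expression is obtained by adding weighted per-vertex offsets to a neutral face mesh. The AU labels for happiness (AU6 "cheek raiser" and AU12 "lip corner puller") follow FACS, but the blendshape names, weight values, and function are purely illustrative assumptions and are not the mapping investigated in the paper.

```python
import numpy as np

def apply_blendshapes(neutral_vertices, blendshape_deltas, weights):
    """Linear blendshape model: neutral mesh plus weighted per-vertex offsets.

    neutral_vertices: (V, 3) array with the neutral face mesh
    blendshape_deltas: dict mapping blendshape name -> (V, 3) offset array
    weights: dict mapping blendshape name -> weight in [0, 1]
    """
    expression = neutral_vertices.copy()
    for name, w in weights.items():
        expression += w * blendshape_deltas[name]
    return expression

# Hypothetical AU-to-blendshape weights for happiness (AU6 + AU12);
# finding weight values that read as believable on every VH is exactly
# the open question the paper studies.
happiness_weights = {
    "cheekSquint_L": 0.6,  # AU6: cheek raiser (left)
    "cheekSquint_R": 0.6,  # AU6: cheek raiser (right)
    "mouthSmile_L": 0.8,   # AU12: lip corner puller (left)
    "mouthSmile_R": 0.8,   # AU12: lip corner puller (right)
}

# Tiny synthetic example: 4 vertices with random offsets as stand-ins
# for real mesh data, just to show how the pieces fit together.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(4, 3))
deltas = {name: rng.normal(scale=0.1, size=(4, 3)) for name in happiness_weights}
happy_face = apply_blendshapes(neutral, deltas, happiness_weights)
```

The point of the sketch is that the perceived emotion depends entirely on the numeric weights: the same AU set (AU6 + AU12) driven with slightly different values can shift a warm smile toward a smirk or a grimace on a different face model.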
Files in this item:
IEEE_GEM___Facial_expressions__Stefano_.pdf
Type: 2. Post-print / Author's Accepted Manuscript
License: Non-public - Private/restricted access
Size: 6.72 MB
Format: Adobe PDF
Availability: not available (a copy can be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2987585