
Transformer-based Non-Verbal Emotion Recognition: Exploring Model Portability across Speakers’ Genders / Vaiani, Lorenzo; Koudounas, Alkis; La Quatra, Moreno; Cagliero, Luca; Garza, Paolo; Baralis, Elena Maria. - ELECTRONIC. - (2022), pp. 89-94. (Paper presented at the Multimodal Sentiment Analysis Challenge (MuSe 2022), held in Lisbon (PT), October 10, 2022) [10.1145/3551876.3554801].

Transformer-based Non-Verbal Emotion Recognition: Exploring Model Portability across Speakers’ Genders

Lorenzo Vaiani; Alkis Koudounas; Moreno La Quatra; Luca Cagliero; Paolo Garza; Elena Baralis
2022

Abstract

Recognizing emotions in non-verbal audio tracks requires a deep understanding of their underlying features. Traditional classifiers relying on excitation, prosodic, and vocal tract features are not always capable of effectively generalizing across speakers' genders. In the ComParE 2022 vocalisation sub-challenge, we explore the use of a Transformer architecture trained on contrastive audio examples. We leverage augmented data to learn robust non-verbal emotion classifiers. We also investigate the impact of different audio transformations, including neural voice conversion, on the classifier's capability to generalize across speakers' genders. The empirical findings indicate that neural voice conversion is beneficial in the pretraining phase, yielding improved model generality, whereas it is harmful at the fine-tuning stage, as it hinders model specialization for the task of non-verbal emotion recognition.
Files in this record:

ComParE2022_Vocalisation_Challenge (2).pdf
Open access
Description: post-print
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 581.98 kB
Format: Adobe PDF

3551876.3554801.pdf
Open access
Type: 2a. Post-print, editorial version / Version of Record
License: Public - All rights reserved
Size: 1.02 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2971156