
Transformer-based Non-Verbal Emotion Recognition: Exploring Model Portability across Speakers' Genders / Vaiani, Lorenzo; Koudounas, Alkis; La Quatra, Moreno; Cagliero, Luca; Garza, Paolo; Baralis, Elena Maria. - ELECTRONIC. - (In press). (Paper presented at the Multimodal Sentiment Analysis Challenge (MuSe 2022), held in Lisbon (PT), October 10-15, 2022) [10.1145/3551876.3554801].

Transformer-based Non-Verbal Emotion Recognition: Exploring Model Portability across Speakers’ Genders

Lorenzo Vaiani; Alkis Koudounas; Moreno La Quatra; Luca Cagliero; Paolo Garza; Elena Baralis
In press

Abstract

Recognizing emotions in non-verbal audio tracks requires a deep understanding of their underlying features. Traditional classifiers relying on excitation, prosodic, and vocal tract features are not always capable of generalizing effectively across speakers' genders. In the ComParE 2022 vocalisation sub-challenge we explore the use of a Transformer architecture trained on contrastive audio examples. We leverage augmented data to learn robust non-verbal emotion classifiers. We also investigate the impact of different audio transformations, including neural voice conversion, on the classifier's capability to generalize across speakers' genders. The empirical findings indicate that neural voice conversion is beneficial in the pretraining phase, yielding improved model generality, whereas it is harmful at the fine-tuning stage, as it hinders model specialization for the task of non-verbal emotion recognition.
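The contrastive pretraining on augmented audio examples mentioned above can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy implementation of an NT-Xent-style contrastive loss between embeddings of original clips and their augmented views; the paper's actual architecture, augmentations, and loss formulation are not detailed in this record, so all names and parameters here (`nt_xent_loss`, `temperature`) are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent-style) loss for paired embeddings.

    z1, z2: (N, D) arrays; row i of z2 is the embedding of the
    augmented view of the clip embedded in row i of z1.
    """
    # Stack both views and L2-normalize so dot products are cosines.
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                        # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs

    n = z1.shape[0]
    # Positive for row i is its augmented counterpart in the other half.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy over each row: pull positives together,
    # push all other (negative) pairs apart.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(-(sim[np.arange(2 * n), targets] - logsumexp).mean())
```

Under this formulation, identical view pairs yield a lower loss than unrelated pairs, which is the signal a contrastive pretraining stage exploits to learn augmentation-robust representations.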
Files in this record:
ComParE2022_Vocalisation_Challenge (2).pdf
Description: post-print
Type: 2. Post-print / Author's Accepted Manuscript
License: Non-public - Private/restricted access
Size: 581.98 kB
Format: Adobe PDF
Availability: not available (a copy may be requested)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2971156