Transfer Learning of Large Speech Models for Italian Speech Emotion Recognition / D’Asaro, Federico; Marquez Villacis, Juan Jose; Rizzo, Giuseppe; Bottino, Andrea. - (2024). (Paper presented at the 18th IEEE International Conference on Application of Information and Communication Technologies 2024, held in Turin, Italy, 25-27 September 2024) [10.1109/AICT61888.2024.10740425].

Transfer Learning of Large Speech Models for Italian Speech Emotion Recognition

D’Asaro, Federico; Marquez Villacis, Juan Jose; Rizzo, Giuseppe; Bottino, Andrea
2024

Abstract

Recent research in Automated Speech Recognition (ASR) has shifted towards large pre-trained speech models trained on extensive corpora with a Self-Supervised Learning (SSL) approach. These models can transfer general-purpose knowledge to downstream tasks such as Speech Emotion Recognition (SER). Because these models are highly parameterized, fine-tuning all of their weights is computationally inefficient. Consequently, new Parameter-Efficient Fine-Tuning (PEFT) strategies have been explored for the SER task in English. Given the lack of SSL speech models trained on Italian, current models are either English-only or multilingual, and little effort has been made to adapt them to SER in Italian. In this work, we investigate transfer learning performance on Italian SER using PEFT strategies, marking the first exploration in this direction. We apply PEFT techniques, such as Low-Rank Adaptation (LoRA) and Adapter, to the Italian SER datasets Emozionalmente, DEMoS, and EMOVO. Results show that LoRA is the most effective PEFT technique for Italian SER. Speech models pre-trained on large-scale English corpora perform comparably to, or better than, multilingual ones, even when specialized in Italian before the SER task, suggesting some shared paralinguistic features between the languages.
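The experimental details are reported in the paper itself; purely as illustration, the following is a minimal sketch of how LoRA-style PEFT can be applied to a pre-trained SSL speech model for emotion classification, using the Hugging Face transformers and peft libraries. The backbone checkpoint, label count, and LoRA hyperparameters below are assumptions for the example, not the authors' configuration.

```python
# Illustrative sketch (not the authors' exact setup): LoRA fine-tuning
# of an SSL speech backbone for SER with Hugging Face transformers + peft.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification
from peft import LoraConfig, get_peft_model

model_name = "facebook/wav2vec2-base"  # assumed English SSL backbone
num_emotions = 7                       # assumed label count (e.g. big six + neutral)

extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    model_name, num_labels=num_emotions
)

# Freeze the backbone and inject low-rank updates into the attention
# projections; only the LoRA matrices and the classification head train.
lora_cfg = LoraConfig(
    r=8,                       # rank of the low-rank update (assumption)
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["projector", "classifier"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically around 1% of all weights

# One illustrative forward/backward step on one second of dummy 16 kHz audio.
wave = np.random.randn(16000).astype(np.float32)
inputs = extractor(wave, sampling_rate=16000, return_tensors="pt")
labels = torch.tensor([3])  # dummy emotion label index
loss = model(**inputs, labels=labels).loss
loss.backward()
```

Restricting the trainable parameters to low-rank adapter matrices and the task head is what makes this family of methods attractive for large speech backbones: the memory and compute cost of fine-tuning drops to a small fraction of a full-weight update.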
ISBN: 979-8-3503-8753-7
Files in this record:

manuscript-r2.pdf
Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 973.5 kB
Format: Adobe PDF

Transfer_Learning_of_Large_Speech_Models_for_Italian_Speech_Emotion_Recognition.pdf
Access: not available (private/restricted; copy on request)
Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 1 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2992502