
Benchmarking Representations for Speech, Music, and Acoustic Events / La Quatra, Moreno; Koudounas, Alkis; Vaiani, Lorenzo; Baralis, Elena; Cagliero, Luca; Garza, Paolo; Siniscalchi, Sabato Marco. - (2024), pp. 505-509. (Paper presented at the 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), held in Seoul (KOR), 14-19 April 2024) [10.1109/ICASSPW62465.2024.10625960].

Benchmarking Representations for Speech, Music, and Acoustic Events

Moreno La Quatra; Alkis Koudounas; Lorenzo Vaiani; Elena Baralis; Luca Cagliero; Paolo Garza; Sabato Marco Siniscalchi
2024

Abstract

Limited diversity in standardized benchmarks for evaluating audio representation learning (ARL) methods may hinder systematic comparison of current methods’ capabilities. We present ARCH, a comprehensive benchmark for evaluating ARL methods on diverse audio classification domains, covering acoustic events, music, and speech. ARCH comprises 12 datasets that allow us to thoroughly assess pre-trained SSL models of different sizes. ARCH streamlines benchmarking of ARL techniques through its unified access to a wide range of domains and its ability to readily incorporate new datasets and models. To address the current lack of open-source, pre-trained models for non-speech audio, we also release new pre-trained models that demonstrate strong performance on non-speech datasets. We argue that the presented wide-ranging evaluation provides valuable insights into state-of-the-art ARL methods and is useful for pinpointing promising research directions.
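To make the evaluation protocol concrete, the following is a minimal sketch of the common linear-probe recipe that benchmarks of this kind rely on: embed each clip with a frozen pre-trained SSL encoder, mean-pool the frame-level features over time, and fit a linear classifier on top. It does not use the ARCH codebase itself; the model name ("facebook/wav2vec2-base") and the toy random-noise dataset are placeholder assumptions.

# A minimal sketch, not the official ARCH API: probe a frozen SSL encoder with a
# linear classifier. The model name and the toy random-noise data are assumptions.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "facebook/wav2vec2-base"  # any HF-hosted self-supervised audio encoder

feature_extractor = AutoFeatureExtractor.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def embed(waveforms, sr=16000):
    # Mean-pool the frozen encoder's last hidden states over time: (batch, dim).
    inputs = feature_extractor(waveforms, sampling_rate=sr,
                               return_tensors="pt", padding=True)
    hidden = encoder(**inputs).last_hidden_state  # (batch, frames, dim)
    return hidden.mean(dim=1).cpu().numpy()

# Toy stand-in for one labelled dataset: 1-second clips at 16 kHz.
rng = np.random.default_rng(0)
train_wavs = [rng.standard_normal(16000).astype(np.float32) for _ in range(8)]
train_labels = [0, 1] * 4
test_wavs = [rng.standard_normal(16000).astype(np.float32) for _ in range(4)]
test_labels = [0, 1, 0, 1]

# Linear probe on top of the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(embed(train_wavs), train_labels)
print("accuracy:", accuracy_score(test_labels, probe.predict(embed(test_wavs))))

In an actual run, the toy clips would be replaced by the audio and labels of each of the 12 ARCH datasets, with the same frozen encoder reused across all of them.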
ISBN: 979-8-3503-7451-3
Files in this record:
2405.00934v1.pdf
Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 106.56 kB
Format: Adobe PDF
Benchmarking_Representations_for_Speech_Music_and_Acoustic_Events.pdf
Access: not available
Type: 2a. Editorial post-print / Version of Record
License: Non-public - Private/restricted access
Size: 844.19 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2990377