Late Fusion-based Distributed Multimodal Learning / Giobergia, Flavio; Baralis, Elena. - (2023), pp. 2152-2156. (Paper presented at the 2023 IEEE International Conference on Big Data (BigData), held in Sorrento, Italy, 15-18 December 2023) [10.1109/BigData59044.2023.10386612].

Late Fusion-based Distributed Multimodal Learning

Giobergia, Flavio; Baralis, Elena
2023

Abstract

Multimodal artificial intelligence promises deeper insights by analyzing data from diverse sources such as text, images, audio, and more. However, efficiently processing and fusing large multimodal datasets remains an open challenge. This paper presents a Spark-based approach to parallelize multimodal encoding tasks. A key aspect is the use of late fusion with frozen backbone encoders, which allows each modality to be encoded independently across cluster nodes. The encoded vectors can then be used for a variety of supervised and unsupervised tasks, regardless of whether or not they are gradient-based. Experimental results on image, text, and audio datasets show that Spark clusters can offer competitive performance compared with GPUs, especially for I/O-intensive modalities. While GPUs outperform the Spark cluster when sufficient CPU cores are available, Spark makes it possible to use already available commodity hardware. The presented architecture demonstrates how distributed computing platforms like Spark can be effectively "repurposed" for multimodal AI, enhancing scalability and making such systems more accessible.
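The key property the abstract relies on is that frozen backbone encoders make per-modality encoding embarrassingly parallel, with late fusion reducing to concatenating the resulting vectors per sample. The following is a minimal sketch of that idea in plain Python, with stub encoders and a thread pool standing in for pretrained models and Spark executors; all function names and dimensions here are illustrative assumptions, not the paper's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def frozen_encoder(modality):
    # Stub for a frozen pretrained backbone: a real system would load a
    # pretrained image/text/audio model and run inference here.
    def encode(sample):
        h = hash((modality, sample)) % 1000
        return [h / 1000.0, len(sample) / 100.0]  # toy 2-dim embedding
    return encode

encoders = {
    "image": frozen_encoder("image"),
    "text": frozen_encoder("text"),
    "audio": frozen_encoder("audio"),
}

def encode_modality(modality, samples):
    # Because the backbones are frozen (no gradients, no shared state),
    # each modality and each partition of samples can be encoded
    # independently -- the property that lets the work be distributed
    # across cluster nodes in the paper's setting.
    enc = encoders[modality]
    return [enc(s) for s in samples]

def late_fuse(per_modality_vectors):
    # Late fusion: concatenate the per-modality embeddings of each sample
    # into a single vector usable by downstream supervised or
    # unsupervised methods.
    return [sum(vecs, []) for vecs in zip(*per_modality_vectors)]

# Toy aligned dataset: sample i has one raw input per modality.
dataset = {
    "image": ["img_0", "img_1"],
    "text": ["a cat", "a dog"],
    "audio": ["wav_0", "wav_1"],
}

# Encode modalities in parallel; threads stand in for cluster executors.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(encode_modality, m, s) for m, s in dataset.items()]
    encoded = [f.result() for f in futures]

fused = late_fuse(encoded)
print(len(fused), len(fused[0]))  # 2 samples, 3 modalities x 2 dims = 6
```

In a Spark deployment, `encode_modality` would naturally map onto a per-partition transformation (e.g. applying the encoder over partitions of an RDD or DataFrame), since frozen encoders require no coordination between partitions.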
ISBN: 979-8-3503-2445-7
Files for this product:
Late_Fusion-based_Distributed_Multimodal_Learning.pdf (not available)
Type: 2a Post-print, editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 319.81 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2988250