Beyond Accuracy Optimization: Computer Vision Losses for Large Language Model Fine-Tuning / Rege Cambrin, Daniele; Gallipoli, Giuseppe; Benedetto, Irene; Cagliero, Luca; Garza, Paolo. - (2024), pp. 12060-12079. (Paper presented at The 2024 Conference on Empirical Methods in Natural Language Processing, held in Miami, Florida, November 12-16, 2024) [10.18653/v1/2024.findings-emnlp.704].

Beyond Accuracy Optimization: Computer Vision Losses for Large Language Model Fine-Tuning

Rege Cambrin, Daniele; Gallipoli, Giuseppe; Benedetto, Irene; Cagliero, Luca; Garza, Paolo
2024

Abstract

Large Language Models (LLMs) have demonstrated impressive performance across various tasks. However, current training approaches combine standard cross-entropy loss with extensive data, human feedback, or ad hoc methods to enhance performance. These solutions are often not scalable or feasible due to their associated costs, complexity, or resource requirements. This study investigates the use of established semantic segmentation loss functions in natural language generation to create a versatile, practical, and scalable solution for fine-tuning different architectures. We evaluate their effectiveness in solving Math Word Problems and question answering across different models of varying sizes. For the analyzed tasks, we found that the traditional Cross-Entropy loss represents a sub-optimal choice, while models trained to minimize alternative (task-dependent) losses, such as Focal or Lovász, achieve a mean improvement of +36% on exact match without requiring additional data or human feedback. These findings suggest a promising pathway for more efficient and accessible training processes.
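For context, the Focal loss mentioned in the abstract down-weights tokens the model already predicts confidently, so the gradient signal concentrates on hard tokens during fine-tuning. The PyTorch snippet below is a minimal, hypothetical sketch of a token-level focal loss for causal language model fine-tuning; it is not the authors' released implementation, and the function name and default arguments are illustrative.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, ignore_index=-100):
    # logits:  (batch, seq_len, vocab_size) model outputs
    # targets: (batch, seq_len) next-token labels, ignore_index marks padding
    # gamma:   focusing parameter; gamma = 0 recovers standard cross-entropy
    vocab_size = logits.size(-1)
    logits = logits.view(-1, vocab_size)
    targets = targets.view(-1)

    # Per-token cross-entropy (no reduction) and probability of the true token
    ce = F.cross_entropy(logits, targets, ignore_index=ignore_index, reduction="none")
    pt = torch.exp(-ce)

    # Down-weight well-classified tokens so training focuses on hard ones
    loss = ((1.0 - pt) ** gamma) * ce

    # Average over non-padding tokens only
    mask = targets != ignore_index
    return loss[mask].mean()

Because gamma = 0 reduces this formulation to standard cross-entropy, such a loss can be swapped into an existing fine-tuning loop with minimal changes, which is consistent with the paper's framing of a practical, scalable alternative to the default objective.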
Files in this record:
2024.findings-emnlp.704.pdf (open access)
Description: Publisher's post-print version
Type: 2a Post-print publisher's version / Version of Record
License: Creative Commons
Size: 365.19 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2995833