
Pre-Trained LLM Embeddings of Product Reviews for Recommendation / Pisani, A.; Cecere, N.; Dacrema, M. F.; Cremonesi, P. - 3802:(2024), pp. 91-94. (Paper presented at the 14th Italian Information Retrieval Workshop, IIR 2024, held in Udine, Italy, September 5-6, 2024).

Pre-Trained LLM Embeddings of Product Reviews for Recommendation

Pisani A.;
2024

Abstract

A significant body of prior literature has shown that it is difficult to leverage plain-text reviews to improve recommendation effectiveness. Since then, Large Language Models (LLMs) have shown an unprecedented ability to capture natural language semantics, and have been applied to multiple domains with good results. However, repurposing them for recommendation is not straightforward, due to their high computational cost and the risk of hallucinations. For these reasons, rather than using LLMs to directly generate recommendations, we investigate whether LLM embeddings of plain-text reviews can be a useful input for improving the quality of traditional review-based recommendation algorithms, by adapting their architectures to process these embeddings rather than word-level ones. We structure an empirical analysis using two Amazon Review datasets and three LLMs to produce the embeddings: OpenAI, Wang's Mistral, and VoyageAI. The results show that LLM embeddings can be used effectively in review-based models originally developed for word-level embeddings, yet one baseline model still achieves greater accuracy.
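The adaptation described above — feeding pre-computed review embeddings into a review-based recommender in place of word-level ones — could be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the model, dimensions, and function names are assumptions. Each review text is represented by a single fixed-size LLM embedding (the kind an embedding API would return), and a small learned projection maps it into the latent space of a matrix-factorization-style rating predictor, replacing the word-level encoder (e.g. a CNN over word embeddings).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a (small, for the sketch) LLM embedding dimension,
# projected down to the latent dimension used by the recommender.
EMB_DIM, LATENT_DIM = 64, 8

# Stand-ins for pre-computed LLM embeddings of each user's and each item's
# concatenated review texts (random here; an API would supply these).
user_review_emb = rng.normal(size=(5, EMB_DIM))   # 5 users
item_review_emb = rng.normal(size=(7, EMB_DIM))   # 7 items

# Learned projections that replace the word-level review encoder.
W_user = rng.normal(size=(EMB_DIM, LATENT_DIM)) * 0.1
W_item = rng.normal(size=(EMB_DIM, LATENT_DIM)) * 0.1

def predict_rating(u, i, global_bias=3.5):
    """Score a (user, item) pair: dot product of the projected review
    embeddings plus a global rating bias."""
    p_u = user_review_emb[u] @ W_user   # user latent factor, shape (LATENT_DIM,)
    q_i = item_review_emb[i] @ W_item   # item latent factor, shape (LATENT_DIM,)
    return global_bias + float(p_u @ q_i)

print(predict_rating(0, 0))
```

In a trained system, `W_user` and `W_item` would be fit by minimizing a rating-prediction loss; the point of the sketch is only that the LLM embedding enters the architecture where a word-level encoder's output used to.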
Files in this record:
File: 2024_IIR_Pisani_et_al_Pre_Trained_LLM_Embeddings_of_Product_Reviews_for_Recommendation.pdf
Access: open access
Type: 2a Post-print editorial version / Version of Record
License: Creative Commons
Size: 209.77 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3004191