Enhancing Cross-Lingual Word Embeddings: Aligned Subword Vectors for Out-of-Vocabulary Terms in fastText / Savelli, Claudio; Giobergia, Flavio. - (2024). (Paper presented at the 2024 IEEE 18th International Conference on Application of Information and Communication Technologies (AICT), held in Turin, Italy, 25-27 September 2024) [10.1109/AICT61888.2024.10740438].

Enhancing Cross-Lingual Word Embeddings: Aligned Subword Vectors for Out-of-Vocabulary Terms in fastText

Savelli, Claudio; Giobergia, Flavio
2024

Abstract

Word embedding models have significantly impacted Natural Language Processing, particularly in multilingual settings. The well-known fastText approach proposed a straightforward and effective way to generate embeddings for out-of-vocabulary (OOV) words by using subword information, allowing embeddings to be produced for words not present in the initial training set. In addition, other methods align word embeddings across different languages, creating a shared multilingual space that benefits cross-lingual natural language processing tasks. However, existing fastText-aligned embeddings do not align the subword information. This work addresses this challenge by extending the fastText model with aligned subword vectors, thereby improving fastText's capacity to handle OOV terms in a cross-lingual context. We propose a method to reconstruct the alignment matrix from known word embeddings and apply this matrix to the subword vectors, enabling the generation of cross-lingually aligned embeddings for OOV terms. We demonstrate the accuracy of the reconstructed alignment matrix and the effectiveness of our approach in producing meaningful embeddings for previously unseen words across multiple languages, and we also discuss the limitations of the aligned subword embeddings.
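The abstract does not spell out how the alignment matrix is reconstructed, so the sketch below only illustrates the general idea under one common assumption: that the matrix can be recovered as an orthogonal Procrustes fit between a model's original word vectors and their already-aligned counterparts, and then reused on subword-composed vectors of OOV words. The model and file names (cc.it.300.bin, wiki.it.align.vec), the 50,000-pair dictionary size, and the OOV word are placeholders, not details from the paper.

```python
# Minimal sketch, assuming a Procrustes-style reconstruction of the alignment
# matrix; this is NOT necessarily the exact procedure used in the paper.
import numpy as np
import fasttext  # official `fasttext` Python package


def procrustes_alignment(X, Y):
    """Solve min_W ||X W - Y||_F over orthogonal W (Schoenemann's solution)."""
    # X: (n, d) original-space vectors, Y: (n, d) aligned-space vectors
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt


# --- illustrative usage; paths and words are placeholders ---
model = fasttext.load_model("cc.it.300.bin")            # monolingual fastText model
aligned = {}                                             # word -> aligned vector
with open("wiki.it.align.vec", encoding="utf-8") as f:   # MUSE-style aligned .vec file
    next(f)                                              # skip the "n d" header line
    for line in f:
        parts = line.rstrip().split(" ")
        aligned[parts[0]] = np.array(parts[1:], dtype=np.float32)

# Pair up words known to both the monolingual and the aligned space
common = [w for w in aligned if w in model.words][:50000]
X = np.stack([model.get_word_vector(w) for w in common])
Y = np.stack([aligned[w] for w in common])
W = procrustes_alignment(X, Y)                           # reconstructed alignment matrix

# Apply the recovered matrix to a subword-composed vector of an OOV word
oov_vec = model.get_word_vector("parolainventata")       # built from subword n-grams
oov_aligned = oov_vec @ W                                # mapped into the aligned space
```

Because W is applied after the subword n-gram vectors are summed, mapping the composed OOV vector is equivalent to mapping each subword vector individually, which is what makes an "aligned subword" view possible in the first place.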
2024
ISBN: 979-8-3503-8753-7
Files in this record:
File: Enhancing_Cross-Lingual_Word_Embeddings_Aligned_Subword_Vectors_for_Out-of-Vocabulary_Terms_in_fastText.pdf
Access: restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 346.09 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2995237