Sparavigna, Amelia Carolina (2025). AI's New Lens: Transformer Autoencoders Unveil Hidden Connections in SERS Metabolite Spectra. Electronic resource. DOI: 10.5281/zenodo.17021372
AI's New Lens: Transformer Autoencoders Unveil Hidden Connections in SERS Metabolite Spectra
Amelia Carolina Sparavigna
2025
Abstract
The analysis of Surface-Enhanced Raman Spectroscopy (SERS) data is a complex challenge, often limited by spectral noise and the inherent variability of samples. To address this, in a previous work we introduced a novel approach based on a specific autoencoder architecture, the Convolutional 1D Autoencoder (Conv-1D AE). Here we propose a distinct autoencoder architecture: the Transformer Autoencoder (Transformer AE). While the Conv-1D AE successfully performs chemically aware clustering based on local spectral patterns, our work highlights a new perspective offered by the Transformer AE. This model, with its attention mechanism, demonstrates an advanced capability to move beyond simple feature extraction. The Transformer AE learns to identify subtle and non-obvious correlations between a wide range of metabolites, often grouping molecules that lack a single, shared functional group. Its reconstructed pseudospectra reveal an intriguing logic, accurately capturing essential peak information while filtering out irrelevant noise. This study provides a comprehensive comparative analysis of both models' clustering outputs and their respective pseudospectra, demonstrating that the Transformer AE serves as a powerful new lens for deciphering SERS data. The findings validate its effectiveness as a complementary tool for chemical analysis, capable of revealing hidden connections and providing deeper insights into complex biological systems.
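As a reading aid, the sketch below outlines how a Transformer autoencoder of the kind described above can be applied to 1D spectra: the spectrum is split into patches that become tokens, self-attention relates distant spectral regions, a compact latent vector is extracted for clustering, and the decoder emits a reconstructed pseudospectrum. All layer sizes, the patch length, and the latent dimension are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a Transformer autoencoder for 1D SERS spectra (PyTorch).
# Patch length, model width, and latent size are illustrative assumptions.
import torch
import torch.nn as nn

class TransformerAE(nn.Module):
    def __init__(self, spectrum_len=1024, patch_len=16, d_model=64,
                 n_heads=4, n_layers=2, latent_dim=16):
        super().__init__()
        assert spectrum_len % patch_len == 0
        self.n_patches = spectrum_len // patch_len
        # Split the spectrum into patches and embed each patch as a token.
        self.embed = nn.Linear(patch_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, self.n_patches, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # Bottleneck: pool tokens into a compact latent vector used for clustering.
        self.to_latent = nn.Linear(d_model, latent_dim)
        self.from_latent = nn.Linear(latent_dim, self.n_patches * d_model)
        dec_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128,
                                               batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, n_layers)
        self.to_patch = nn.Linear(d_model, patch_len)

    def forward(self, x):                        # x: (batch, spectrum_len)
        b = x.size(0)
        tokens = x.view(b, self.n_patches, -1)   # (batch, n_patches, patch_len)
        h = self.encoder(self.embed(tokens) + self.pos)
        z = self.to_latent(h.mean(dim=1))        # latent embedding for clustering
        h = self.from_latent(z).view(b, self.n_patches, -1)
        recon = self.to_patch(self.decoder(h))   # reconstructed pseudospectrum
        return recon.view(b, -1), z

# Usage: train with a reconstruction loss, then cluster the latent vectors z.
model = TransformerAE()
spectra = torch.rand(8, 1024)                    # 8 synthetic spectra
recon, z = model(spectra)
loss = nn.functional.mse_loss(recon, spectra)
```

In such a setup, the model would typically be trained on the mean-squared reconstruction error, after which the latent vectors z would be passed to a standard clustering algorithm to obtain groupings of metabolites, and the decoder outputs would serve as the reconstructed pseudospectra discussed in the abstract.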
| File | Type | License | Size | Format |
|---|---|---|---|---|
| transf-AE.pdf (open access) | Preprint / submitted version [pre-review] | Creative Commons | 882.7 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3002702