
Hybrid Recurrent-Attentive Neural Network for Onboard Predictive Hyperspectral Image Compression / Valsesia, Diego; Bianchi, Tiziano; Magli, Enrico. - ELECTRONIC. - (2024), pp. 7898-7902. (Paper presented at IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, held in Athens (Greece), 07-12 July 2024) [10.1109/igarss53475.2024.10641584].

Hybrid Recurrent-Attentive Neural Network for Onboard Predictive Hyperspectral Image Compression

Valsesia, Diego; Bianchi, Tiziano; Magli, Enrico
2024

Abstract

AI-based compression is gaining popularity for traditional photos and videos. However, such techniques typically do not scale well to hyperspectral images, and their memory and floating-point requirements may be prohibitive for use onboard satellites. In this paper, we explore the design of a predictive compression method based on a novel neural network design, called LineRWKV. Our neural network predictor works in a line-by-line fashion, limiting memory and computational requirements thanks to a recurrent inference mechanism. However, in contrast to classic recurrent networks, it relies on an attention operation that can be parallelized for training, akin to Transformers, unlocking efficient training on large datasets, which is critical for learning complex predictors. In our preliminary results, we show that LineRWKV significantly outperforms the state-of-the-art CCSDS-123 standard and has competitive throughput.
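The dual-mode behavior the abstract describes is the defining property of RWKV-style token mixing: the same operation can be evaluated as a recurrence with constant-size state (for low-memory, line-by-line inference) or as a masked, attention-like matrix product over the whole sequence (for parallel training). The following is a minimal NumPy sketch of that equivalence, not the authors' code: the decay rate w, bonus u, keys k, and values v stand in for per-channel learned quantities in the real model, and the numerically stable log-space formulation used in practice is omitted.

import numpy as np

def wkv_recurrent(k, v, w, u):
    # Recurrent (inference) form: constant-size state (a, b) per step.
    T = len(k)
    a, b = 0.0, 0.0          # running weighted sums of values and weights
    out = np.empty(T)
    for t in range(T):
        # the current token gets an extra "bonus" weight exp(u + k[t])
        e = np.exp(u + k[t])
        out[t] = (a + e * v[t]) / (b + e)
        # decay the state and absorb the current token for future steps
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out

def wkv_parallel(k, v, w, u):
    # Parallel (training) form: the same outputs for all positions at once,
    # via an explicit (T, T) decay matrix, like a masked attention map.
    T = len(k)
    t_idx = np.arange(T)[:, None]
    i_idx = np.arange(T)[None, :]
    decay = -(t_idx - 1 - i_idx) * w                  # age-based decay for i < t
    decay = np.where(i_idx < t_idx, decay, -np.inf)   # mask future positions
    decay = np.where(i_idx == t_idx, u, decay)        # bonus on the diagonal
    weights = np.exp(decay + k[None, :])
    return (weights @ v) / weights.sum(axis=1)

# The two forms produce identical outputs (up to floating-point error):
rng = np.random.default_rng(0)
k, v = rng.standard_normal(8), rng.standard_normal(8)
assert np.allclose(wkv_recurrent(k, v, 0.5, 0.3), wkv_parallel(k, v, 0.5, 0.3))

In LineRWKV this equivalence is what permits training in parallel over full sequences, Transformer-style, while onboard prediction proceeds one image line at a time with bounded memory.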
ISBN: 979-8-3503-6032-5
Files in this record:

RKWV_predictive_compression-7.pdf (open access)
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 218.51 kB, Adobe PDF

Hybrid_Recurrent-Attentive_Neural_Network_for_Onboard_Predictive_Hyperspectral_Image_Compression.pdf (not available)
Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 925.27 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2992848