
Signal Compression via Neural Implicit Representations / Pistilli, F.; Valsesia, D.; Fracastoro, G.; Magli, E. - ELECTRONIC. - May:(2022), pp. 3733-3737. (Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), held in Singapore, Singapore, 23-27 May 2022) [10.1109/ICASSP43922.2022.9747208].

Signal Compression via Neural Implicit Representations

Pistilli F.; Valsesia D.; Fracastoro G.; Magli E.
2022

Abstract

Existing end-to-end signal compression schemes using neural networks are largely based on an autoencoder-like structure, where a universal encoding function creates a compact latent space and the signal representation in this space is quantized and stored. Recently, advances from the field of 3D graphics have shown the possibility of building implicit representation networks, i.e., neural networks returning the value of a signal at a given query coordinate. In this paper, we propose using neural implicit representations as a novel paradigm for signal compression with neural networks, where the compact representation of the signal is defined by the very weights of the network. We discuss how this compression framework works, how to include priors in the design, and highlight interesting connections with transform coding. While the framework is general and not yet mature, we already show very competitive performance on the task of compressing point cloud attributes, which is notoriously challenging due to the irregularity of the domain but becomes trivial in the proposed framework.
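The core idea of the abstract — storing a signal as the weights of a small network that maps coordinates to values — can be illustrated with a minimal sketch. The model below (random sinusoidal features with a learned linear readout, fit by least squares) is a hypothetical simplification for illustration, not the architecture or training procedure used in the paper: the feature matrix is frozen and reproducible from a shared seed, so the "compressed file" reduces to the readout weights alone.

```python
import numpy as np

# Hypothetical minimal implicit representation: the signal f(x) is
# approximated by phi(x) @ w, where phi(x) = sin(x * B + c) is a bank of
# frozen random sinusoidal features. Only w is learned, so the stored
# "code" for the signal is just w (plus the RNG seed shared between
# encoder and decoder to regenerate B and c).
rng = np.random.default_rng(0)

# Signal to compress: 256 samples of a smooth 1-D function on [0, 1].
coords = np.linspace(0.0, 1.0, 256)
signal = np.sin(2 * np.pi * coords) + 0.5 * np.cos(6 * np.pi * coords)

# Frozen random feature bank (sinusoidal, loosely SIREN-flavoured).
n_feat = 32
B = rng.normal(scale=10.0, size=(1, n_feat))
c = rng.uniform(0.0, 2 * np.pi, size=n_feat)

def features(x):
    """Map query coordinates x (shape [n]) to features (shape [n, n_feat])."""
    return np.sin(x[:, None] * B + c)

# "Encoding" = overfitting the readout weights to this one signal.
w, *_ = np.linalg.lstsq(features(coords), signal, rcond=None)

# "Decoding" = querying the network at any coordinate, on or off the grid.
recon = features(coords) @ w
mse = float(np.mean((recon - signal) ** 2))
print(f"stored weights: {w.size} (vs. {signal.size} samples), MSE: {mse:.2e}")
```

Note how this side-steps the irregular-domain problem mentioned for point cloud attributes: the network accepts arbitrary query coordinates, so no regular grid is ever required. In the paper's actual framework the representation is a trained neural network whose quantized weights are stored, rather than a least-squares readout.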
Files in this item:

pistilli_icassp22.pdf
  Access: open access
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 379.24 kB
  Format: Adobe PDF

Valsesia-Signal.pdf
  Access: not available
  Type: 2a. Post-print, publisher's version / Version of Record
  License: Non-public - Private/restricted access
  Size: 1 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2970817