RNN BCJR: a fully trainable version of the additive BCJR algorithm / Montorsi, Guido; Ripani, Barbara. - ELECTRONIC. - (2023). (Paper presented at the International Conference on Communications (ICC), held in Rome, Italy, 28 May 2023 - 01 June 2023) [10.1109/ICC45041.2023.10279315].

RNN BCJR: a fully trainable version of the additive BCJR algorithm

Montorsi, Guido; Ripani, Barbara
2023

Abstract

We present a new version of the additive BCJR algorithm based on a recurrent neural network whose structure reflects an underlying trellis diagram. Starting from a matrix version of the equations of the additive BCJR algorithm, we derive the equivalent trainable recurrent neural network model, named Recurrent Neural Network (RNN) BCJR. The RNN BCJR consists of a linear layer that forms the edge metrics from the state and input metrics, followed by a SOFTMAX/max* layer that marginalizes the edge metrics back to the state and output spaces. We derive the recursions for delta propagation to train the mixing matrices of the two layers from the output cost function. Unlike previous approaches, the proposed RNN BCJR can completely replace the BCJR and is trainable from the cost functions of the outputs. The trained RNN BCJR achieves the same optimal performance as the BCJR when the model is known but, at the same time, can adapt to model mismatch, thus outperforming the BCJR.
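The abstract describes the RNN BCJR cell as a linear layer that combines state and input metrics into edge metrics through mixing matrices, followed by a max*/SOFTMAX layer that marginalizes the edge metrics back to the state and output spaces. Since the paper itself is not openly available here, the following is only a minimal numpy sketch of such a trellis cell under assumed shapes and names; the matrices W_state, W_in, M_state, M_out and the max_star helper are illustrative placeholders, not notation taken from the paper.

```python
import numpy as np

def max_star(x, axis=-1):
    """Jacobian logarithm: max*(x) = log(sum(exp(x))), a smooth max used to
    marginalize log-domain (additive) metrics."""
    m = np.max(x, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(x - m), axis=axis))

def rnn_bcjr_cell(alpha, lam, W_state, W_in, M_state, M_out):
    """One forward trellis step of an RNN-BCJR-style cell (illustrative shapes).

    alpha   : (S,)   log-domain state metrics at time k
    lam     : (I,)   log-domain input/channel metrics at time k
    W_state : (E, S) mixing matrix mapping state metrics to edge metrics
    W_in    : (E, I) mixing matrix mapping input metrics to edge metrics
    M_state : (S', E) 0/1 selection of the edges ending in each next state
    M_out   : (O, E)  0/1 selection of the edges labelled with each output symbol
    Each row of M_state and M_out is assumed to select at least one edge.
    """
    # Linear layer: edge metrics as a linear combination of state and input metrics.
    gamma = W_state @ alpha + W_in @ lam                      # (E,)

    # max* layer: marginalize the edge metrics back to the state and output spaces.
    # Edges not selected by the 0/1 matrix are masked to -inf before the reduction.
    def marginalize(M, e):
        masked = np.where(M > 0, e[None, :], -np.inf)
        return max_star(masked, axis=-1)

    alpha_next = marginalize(M_state, gamma)                  # (S',) next state metrics
    out_metric = marginalize(M_out, gamma)                    # (O,)  output metrics
    return alpha_next, out_metric
```

In the trainable version the abstract refers to, the mixing matrices would be free parameters updated by propagating the gradient of the output cost function ("delta propagation") back through the max* reductions, whose derivative with respect to the edge metrics is a softmax over the masked entries.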
ISBN: 978-1-5386-7462-8
File in this record:
Ripani-RNN BCJR.pdf (not available)
Type: 2a Post-print editorial version / Version of Record
License: Not public - Private/restricted access
Size: 619.52 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2981777