
Dynamic Beam Width Tuning for Energy-Efficient Recurrent Neural Networks / Jahier Pagliari, Daniele; Panini, Francesco; Macii, Enrico; Poncino, Massimo. - ELECTRONIC. - (2019), pp. 69-74. (Paper presented at the Great Lakes Symposium on VLSI, held in Tysons Corner (USA) in May 2019) [10.1145/3299874.3317974].

Dynamic Beam Width Tuning for Energy-Efficient Recurrent Neural Networks

Jahier Pagliari, Daniele; Macii, Enrico; Poncino, Massimo
2019

Abstract

Recurrent Neural Networks (RNNs) are state-of-the-art models for many machine learning tasks, such as language modeling and machine translation. Executing the inference phase of an RNN directly in edge nodes, rather than in the cloud, would provide benefits in terms of energy consumption, latency, and network bandwidth, provided that models can be made efficient enough to run on energy-constrained embedded devices. To this end, we propose an algorithmic optimization for improving the energy efficiency of encoder-decoder RNNs. Our method operates on the Beam Width (BW), i.e., one of the parameters that most influences inference complexity, modulating it depending on the currently processed input based on a metric of the network's "confidence". Results on two different machine translation models show that our method is able to reduce the average BW by up to 33%, thus significantly reducing the inference execution time and energy consumption, while maintaining the same translation performance.
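As a purely illustrative sketch of the idea described in the abstract: beam width can be chosen per input from a confidence signal, using a narrow beam when the decoder is confident and a wider one when it is not. The specific metric (top softmax probability), threshold, and linear scaling policy below are assumptions for illustration, not the paper's actual algorithm.

```python
import math

def choose_beam_width(token_probs, bw_max=8, bw_min=1, threshold=0.8):
    """Pick a beam width for the current input from a simple confidence
    metric: the highest softmax probability produced by the decoder.
    High confidence -> narrow beam (cheap decoding), low confidence ->
    wide beam (more thorough search). Hypothetical policy for
    illustration only."""
    confidence = max(token_probs)
    if confidence >= threshold:
        # Confident prediction: greedy-like decoding suffices.
        return bw_min
    # Widen the beam linearly as confidence drops below the threshold.
    scale = (threshold - confidence) / threshold
    return min(bw_max, bw_min + math.ceil(scale * (bw_max - bw_min)))
```

For example, a sharply peaked distribution such as `[0.9, 0.05, 0.05]` would be decoded with the minimum beam, while a flat one such as `[0.4, 0.3, 0.3]` would trigger a wider beam; averaged over many inputs, this is the mechanism by which the mean BW (and hence execution time and energy) drops.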
ISBN: 9781450362528
Files in this record:

File: main.pdf (restricted access)
Description: Main article
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 837.66 kB
Format: Adobe PDF

File: postprint.pdf (open access)
Description: Post-print
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 830.13 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2785759