Recurrent Neural Networks (RNNs) are state-of-the-art models for many machine learning tasks, such as language modeling and machine translation. Executing the inference phase of an RNN directly in edge nodes, rather than in the cloud, would provide benefits in terms of energy consumption, latency and network bandwidth, provided that models can be made efficient enough to run on energy-constrained embedded devices. To this end, we propose an algorithmic optimization for improving the energy efficiency of encoder-decoder RNNs. Our method operates on the Beam Width (BW), i.e., one of the parameters that most influence inference complexity, modulating it for each processed input based on a metric of the network's "confidence". Results on two different machine translation models show that our method reduces the average BW by up to 33%, thus significantly reducing inference execution time and energy consumption, while maintaining the same translation performance.
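The confidence-driven BW modulation described in the abstract could be sketched roughly as follows. This is an illustrative assumption, not the paper's actual algorithm: the confidence metric here (normalized probability of the top candidate token at a decoding step), the threshold, and the beam-width bounds are all placeholders chosen for the example.

```python
import math

def choose_beam_width(log_probs, bw_max=8, bw_min=1, threshold=0.8):
    """Pick a beam width for the current decoding step from a simple
    'confidence' heuristic: the normalized probability mass of the
    top candidate token. When the decoder is confident, a narrow beam
    suffices; when it is uncertain, keep the beam wide.
    (Hypothetical metric and thresholds, for illustration only.)"""
    probs = [math.exp(lp) for lp in log_probs]
    top = max(probs) / sum(probs)
    return bw_min if top >= threshold else bw_max

# Confident step: one token dominates -> shrink the beam.
bw_easy = choose_beam_width([math.log(0.9), math.log(0.05), math.log(0.05)])
# Uncertain step: probability mass is spread out -> keep the beam wide.
bw_hard = choose_beam_width([math.log(0.4), math.log(0.35), math.log(0.25)])
```

Averaged over a whole translation, such per-step (or per-sentence) modulation is what lets the mean BW drop below the static setting without changing the output quality on "easy" inputs.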
Dynamic Beam Width Tuning for Energy-Efficient Recurrent Neural Networks / Jahier Pagliari, Daniele; Panini, Francesco; Macii, Enrico; Poncino, Massimo. - ELECTRONIC. - (2019), pp. 69-74. (Paper presented at the Great Lakes Symposium on VLSI, held in Tysons Corner (USA), May 2019) [10.1145/3299874.3317974].
Title: | Dynamic Beam Width Tuning for Energy-Efficient Recurrent Neural Networks |
Authors: | |
Publication date: | 2019 |
ISBN: | 9781450362528 |
Item type: | 4.1 Contribution in conference proceedings |
Files in this item:
File | Description | Type | License
---|---|---|---
main.pdf | Main article | 2a. Post-print, publisher version / Version of Record | Not public - private/restricted access (request a copy)
postprint.pdf | Post-print | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved (visible to all)
http://hdl.handle.net/11583/2785759