Adaptive Random Forests for Energy-Efficient Inference on Microcontrollers / Daghero, Francesco; Burrello, Alessio; Xie, Chen; Benini, Luca; Calimera, Andrea; Macii, Enrico; Poncino, Massimo; Jahier Pagliari, Daniele. - ELECTRONIC. - (2021), pp. 1-6. (Paper presented at the 29th IFIP/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2021, held as a Virtual Conference in 2021) [10.1109/VLSI-SoC53125.2021.9606986].
Adaptive Random Forests for Energy-Efficient Inference on Microcontrollers
Daghero, Francesco; Burrello, Alessio; Xie, Chen; Benini, Luca; Calimera, Andrea; Macii, Enrico; Poncino, Massimo; Jahier Pagliari, Daniele
2021
Abstract
Random Forests (RFs) are widely used Machine Learning models in low-power embedded devices, due to their hardware-friendly operation and high accuracy on practically relevant tasks. The accuracy of an RF often increases with the number of internal weak learners (decision trees), but at the cost of a proportional increase in inference latency and energy consumption. Such costs can be mitigated by considering that, in most applications, not all inputs are equally difficult to classify. Therefore, a large RF is often necessary only for a few hard inputs, and wasteful for easier ones. In this work, we propose an early-stopping mechanism for RFs, which terminates the inference as soon as a high-enough classification confidence is reached, reducing the number of weak learners executed for easy inputs. The early-stopping confidence threshold can be controlled at runtime, in order to favor either energy savings or accuracy. We apply our method to three different embedded classification tasks, on a single-core RISC-V microcontroller, achieving an energy reduction from 38% to more than 90% with an accuracy drop of less than 0.5%. We also show that our approach outperforms previous adaptive ML methods for RFs.
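To make the mechanism described in the abstract concrete, the C sketch below shows one way such an early-stopping inference loop could look. It is only an illustration under assumptions: the forest size `N_TREES`, class count `N_CLASSES`, the `tree_predict` stub, and the normalized score-margin confidence metric are hypothetical choices, not taken from the paper, and the authors' actual aggregation and confidence test may differ.

```c
#include <stdint.h>
#include <stdio.h>

#define N_TREES   64   /* total number of weak learners (hypothetical value) */
#define N_CLASSES 4    /* number of output classes (hypothetical value)      */

/* Placeholder for the real per-tree inference: in a deployed model this would
 * traverse the t-th decision tree and accumulate its leaf scores (votes or
 * probabilities) into `scores`. Here it only casts a dummy vote.             */
static void tree_predict(int t, const int16_t *features, float scores[N_CLASSES])
{
    (void)features;
    scores[t % N_CLASSES] += 1.0f;
}

/* Early-stopping RF inference sketch: weak learners are executed one at a
 * time and the loop terminates as soon as the accumulated prediction is
 * confident enough. Confidence is measured here as the margin between the
 * two highest accumulated scores, normalized by the number of trees run so
 * far; the runtime-tunable `threshold` trades energy (fewer trees) for
 * accuracy.                                                                 */
static int rf_predict_early_stop(const int16_t *features, float threshold,
                                 int *trees_used)
{
    float scores[N_CLASSES] = {0.0f};
    int top = 0;
    int t;

    for (t = 0; t < N_TREES; t++) {
        tree_predict(t, features, scores);

        /* Find the indices of the two largest accumulated scores
         * (assumes N_CLASSES >= 2). */
        top = 0;
        int second = 1;
        if (scores[second] > scores[top]) { top = 1; second = 0; }
        for (int c = 2; c < N_CLASSES; c++) {
            if (scores[c] > scores[top])         { second = top; top = c; }
            else if (scores[c] > scores[second]) { second = c; }
        }

        /* Early exit once the normalized score margin clears the threshold. */
        float margin = (scores[top] - scores[second]) / (float)(t + 1);
        if (margin >= threshold)
            break;
    }

    if (trees_used)
        *trees_used = (t < N_TREES) ? (t + 1) : N_TREES;
    return top;   /* predicted class index */
}

int main(void)
{
    int16_t features[8] = {0};   /* dummy feature vector */
    int used = 0;
    int cls = rf_predict_early_stop(features, 0.5f, &used);
    printf("class=%d trees_used=%d/%d\n", cls, used, N_TREES);
    return 0;
}
```

Raising `threshold` makes the loop execute more weak learners before stopping (favoring accuracy), while lowering it stops earlier (favoring energy), which mirrors the runtime trade-off described in the abstract.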
| File | Access | Description | Type | License | Size | Format |
|---|---|---|---|---|---|---|
| Adaptive_Random_Forests_for_Energy-Efficient_Inference_on_Microcontrollers.pdf | Restricted access | Main article (publisher version) | 2a Post-print publisher version / Version of Record | Non-public - private/restricted access | 660.98 kB | Adobe PDF |
| Adaptive_Random_Forests_for_Energy-Efficient_Inference_on_Microcontrollers-POSTPRINT.pdf.pdf | Open access | Main article (post-print) | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 664.67 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2971285