The rapid growth of online Cloud-based services has put pressure on Cloud service providers to satisfy the corresponding huge demand for computing power. While this challenge can easily be tackled by adding more resources, achieving high power efficiency (i.e., the amount of data processed per watt) is far more complex. In recent years, to cope with the power-efficiency challenge, different hardware systems (e.g., GPUs, FPGAs, DSPs), each exposing different capabilities and programming models, have become part of the standard data center hardware ecosystem. Besides their growing adoption in the core of the Cloud, edge systems have also started to embrace heterogeneity, to enable data processing closer to the data sources. To this end, lightweight versions of the hardware accelerators available in the data centers, along with a number of dedicated ASICs, have appeared on the market. In this context, performance is not the only metric used to compare different platforms: energy consumption is also relevant. This chapter discusses in depth the case study of a machine learning application implemented on a low-power, low-cost platform (i.e., the Parallella platform). Computer vision is a widely explored application domain, with large room for optimized implementations on edge-oriented devices, where the limited hardware resources become the main design constraint. First, machine learning approaches are presented, with a focus on Convolutional Neural Networks. Then, different design solutions are analysed, highlighting their impact on performance and energy consumption.
Machine Learning on Low-Power Low-Cost Platforms: an Application Case Study / Scionti, A.; Terzo, O.; D'Amico, C.; Montrucchio, B.; Ferrero, R.. - STAMPA. - (2019), pp. 191-220.
Title: | Machine Learning on Low-Power Low-Cost Platforms: an Application Case Study |
Authors: | Scionti, A.; Terzo, O.; D'Amico, C.; Montrucchio, B.; Ferrero, R. |
Publication date: | 2019 |
Book title: | Heterogeneous Computing Architectures: Challenges and Vision |
ISBN: | 9780367023447 |
Appears in collections: | 2.1 Book contribution (Chapter or Essay) |
Files in this product:
http://hdl.handle.net/11583/2777643