EchoBay: Design and Optimization of Echo State Networks under Memory and Time Constraints

Cerina, L.; Santambrogio, M. D.; Franco, G.; Gallicchio, C.; Micheli, A.
2020-01-01

Abstract

The increase in computational power of embedded devices and the latency demands of novel applications have brought a paradigm shift in how and where computation is performed. Although AI inference is slowly moving from the cloud to end-devices with limited resources, time-centric recurrent networks such as Long Short-Term Memory remain too complex to be transferred to embedded devices without extreme simplifications that limit the performance of many notable applications. To solve this issue, the Reservoir Computing paradigm proposes sparse, untrained non-linear networks, called Reservoirs, that can embed temporal relations without some of the hindrances of Recurrent Neural Network training and with a lower memory occupation. Echo State Networks (ESNs) and Liquid State Machines are the most notable examples. In this scenario, we propose EchoBay, a comprehensive C++ library for ESN design and training. EchoBay is architecture-agnostic to guarantee maximum performance on different devices (whether embedded or not), and it offers the possibility to optimize and tailor an ESN to a particular case study, minimizing the effort required on the user side. This is made possible by the Bayesian Optimization (BO) process, which efficiently and automatically searches for hyper-parameters that maximize a fitness function. Additionally, we designed different optimization techniques that take into consideration the resource constraints of the device to minimize memory footprint and inference time. Our results in different scenarios show an average speed-up in training time of 119x compared to Grid and Random search of hyper-parameters, a 94% decrease in trained model size, and a 95% decrease in inference time, while maintaining comparable performance for the given task. The EchoBay library is Open Source and publicly available at https://github.com/necst/Echobay.
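
The abstract describes the two building blocks that the library optimizes: a fixed, untrained reservoir whose hyper-parameters (size, spectral radius, input scaling) are searched by Bayesian Optimization, and a linear readout trained by ridge regression. As a purely illustrative aid, the following is a minimal C++ sketch of those building blocks; it assumes the Eigen linear-algebra library, and the names (ESN, train_readout) are hypothetical, so it does not reflect the actual EchoBay API.

// Minimal ESN sketch (illustrative only, not the EchoBay API).
// Assumes the Eigen linear-algebra library; all names are hypothetical.
#include <Eigen/Dense>

using Eigen::MatrixXd;
using Eigen::VectorXd;

// Echo State Network: a fixed random reservoir plus a trained linear readout.
struct ESN {
    MatrixXd Win;  // input-to-reservoir weights (untrained)
    MatrixXd W;    // reservoir-to-reservoir weights (untrained; sparse in practice)
    VectorXd x;    // reservoir state

    ESN(int n_in, int n_res, double rho, double input_scale)
        : Win(input_scale * MatrixXd::Random(n_res, n_in)),
          W(MatrixXd::Random(n_res, n_res)),
          x(VectorXd::Zero(n_res)) {
        // Rescale W so that its spectral radius equals rho,
        // a common heuristic for the Echo State Property.
        double sr = W.eigenvalues().cwiseAbs().maxCoeff();
        W *= rho / sr;
    }

    // One reservoir update: x(t+1) = tanh(Win * u(t) + W * x(t)).
    const VectorXd& update(const VectorXd& u) {
        VectorXd pre = Win * u + W * x;   // pre-activation from input and previous state
        x = pre.array().tanh().matrix();  // element-wise non-linearity
        return x;
    }
};

// Ridge-regression readout: solve (S^T S + lambda I) Wout^T = S^T Y,
// where S stacks the collected reservoir states row-wise and Y the targets.
MatrixXd train_readout(const MatrixXd& S, const MatrixXd& Y, double lambda) {
    MatrixXd A = S.transpose() * S
               + lambda * MatrixXd::Identity(S.cols(), S.cols());
    MatrixXd WoutT = A.ldlt().solve(S.transpose() * Y);  // (n_res x n_out)
    return WoutT.transpose();                            // Wout: (n_out x n_res)
}

In a Bayesian Optimization loop, n_res, rho, input_scale, and lambda would be the searched hyper-parameters, and a device-aware fitness function can additionally penalize large reservoirs so that the selected model respects the memory and inference-time budgets mentioned above.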

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1065888

Citations
  • Scopus: 8
  • Web of Science (ISI): 7