Towards the Extension of FPG-AI Toolflow to RNN Deployment on FPGAs for On-board Satellite Applications

Pacini T.; Rapuano E.; Tuttobene L.; Nannipieri P.; Fanucci L.
2023-01-01

Abstract

In recent years, research in the space community has shown a growing interest in deploying Deep Neural Networks (DNNs) on board satellites, mostly driven by system miniaturization and commercial competition. Field Programmable Gate Arrays (FPGAs) have proven to be competitive accelerators for these algorithms, and works proposing methods for automating the design process on these devices have gained relevance. Their common purpose is to enable a wide range of users without specific hardware skills to accelerate DNN models on FPGAs with reduced development times. This paper describes the process of improving FPG-AI, a novel technology-independent toolflow that efficiently automates DNN deployment on FPGAs from different vendors and with disparate resource budgets. The proposed techniques are a first step towards extending FPG-AI to Recurrent Neural Networks (RNNs), a subset of DNNs designed for processing sequences of data. We first discuss model compression strategies for enabling energy-efficient hardware acceleration of Gated Recurrent Units (GRUs), one of the most popular RNN layers. The article then presents a GRU-based hardware accelerator designed to be compliant with the identified quantization algorithm. The architecture features an optimized dataflow that enables hardware reuse over time and maximizes resource efficiency without affecting performance. Preliminary results are reported for a state-of-the-art RNN-based model for Fault Detection, Isolation and Recovery (FDIR) on board satellites to evaluate the proposed methods. A comparative analysis with the NVIDIA Jetson Nano embedded GPU is also performed to characterize our accelerator against a state-of-the-art platform for RNN inference.
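The abstract does not detail the quantization scheme adopted for the GRU layers. As a purely illustrative sketch of what quantized GRU inference involves, the snippet below runs one GRU time step (standard gate equations, biases omitted) with uniform symmetric post-training quantization of the weight matrices, a common FPGA-friendly choice; all names, sizes, and bit widths are hypothetical and not taken from the paper.

```python
import numpy as np

def quantize(w, n_bits=8):
    """Uniform symmetric quantization of a weight tensor, returned de-quantized.

    Hypothetical scheme, not the paper's: the scale is derived from the
    maximum absolute value and the values are rounded to n_bits integers.
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    if scale == 0.0:
        return w
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh, n_bits=8):
    """One GRU time step with quantized weights (standard gate equations, no biases)."""
    Wz, Uz, Wr, Ur, Wh, Uh = (quantize(m, n_bits) for m in (Wz, Uz, Wr, Ur, Wh, Uh))
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate hidden state
    return z * h + (1.0 - z) * h_cand         # interpolated new hidden state

# Hypothetical dimensions: 16 input features, 32 hidden units.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
h = np.zeros(32)
Wz, Wr, Wh = (0.1 * rng.standard_normal((16, 32)) for _ in range(3))
Uz, Ur, Uh = (0.1 * rng.standard_normal((32, 32)) for _ in range(3))
h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```

In an FPGA accelerator the de-quantization step would not exist: the gates would be evaluated directly in fixed-point arithmetic, with the sigmoid and tanh non-linearities typically implemented as look-up tables or piecewise-linear approximations. The sketch above only illustrates the numerical effect of quantizing the weights on a software reference of the GRU cell.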

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1223372

Citations
  • Scopus: 2