
FPG-AI: A Technology-Independent Framework for the Automation of CNN Deployment on FPGAs

Pacini, T.; Rapuano, E.; Fanucci, L.
2023-01-01

Abstract

In recent years, Convolutional Neural Networks (CNNs) have demonstrated outstanding results in several emerging classification tasks. High-quality predictions are often achieved with computationally intensive workloads that hinder the hardware acceleration of these models at the edge. Field Programmable Gate Arrays (FPGAs) have proven to be energy-efficient platforms for the execution of these algorithms, and works proposing methods for automating the design flow on these devices have gained relevance. The common purpose is to enable a wide range of users without specific skills to accelerate CNNs on FPGAs with reduced development times. In this paper, we present FPG-AI, a technology-independent toolflow for automating the deployment of CNNs on FPGAs. The framework combines model compression strategies with a fully handcrafted, Hardware Description Language (HDL)-based accelerator that poses no limit on device portability. On top of that, an automation process merges the two design spaces to define an end-to-end, ready-to-use tool. Experimental results are reported for reference models drawn from the literature (LeNet, NiN, VGG16, MobileNet-V1) on multiple classification datasets (MNIST, CIFAR10, ImageNet). To prove the technology independence of FPG-AI, we characterize the toolflow on devices with heterogeneous resource budgets from different vendors (Xilinx, Intel, and Microsemi). Comparison with state-of-the-art works confirms the unmatched device portability of FPG-AI and shows performance metrics in line with the literature.
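As an illustration of the model-compression side of such a toolflow, the sketch below shows symmetric fixed-point quantization of a convolutional weight tensor in Python/NumPy. This is a generic, assumed example and not the FPG-AI implementation described in the paper; the function name quantize_symmetric and the per-tensor scaling scheme are hypothetical choices for illustration only.

```python
# Illustrative sketch (not the paper's method): symmetric fixed-point
# quantization of a CNN weight tensor, the kind of compression step a
# CNN-to-FPGA toolflow may apply before mapping weights to on-chip memory.
import numpy as np

def quantize_symmetric(weights: np.ndarray, n_bits: int = 8):
    """Quantize weights to signed fixed-point with n_bits, returning
    the integer codes and the scale needed to reconstruct real values."""
    q_max = 2 ** (n_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / q_max    # one scale per tensor (assumed)
    q = np.clip(np.round(weights / scale), -q_max, q_max).astype(np.int8)
    return q, scale

if __name__ == "__main__":
    w = np.random.randn(64, 3, 3, 3).astype(np.float32)   # dummy conv kernel
    q, s = quantize_symmetric(w, n_bits=8)
    err = np.abs(w - q.astype(np.float32) * s).mean()
    print(f"mean absolute quantization error: {err:.6f}")
```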


Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1176810

Citations
  • Scopus: 5
  • Web of Science: 2