Novel Arithmetics to Accelerate Machine Learning Classifiers in Autonomous Driving Applications
Marco Cococcioni (co-first author); Federico Rossi (co-first author); Sergio Saponara (co-first author)
2019-01-01
Abstract
Autonomous driving techniques frequently need the clustering and classification of data coming from several input sensors, such as cameras, radars and lidars. These sub-tasks must run in real time on embedded on-board computing units. In the signal processing community, the trend for data classification and clustering is moving towards machine learning (ML) algorithms; one that plays a central role is the k-nearest neighbors (k-NN) algorithm. To meet stringent requirements in terms of real-time computing capability and circuit/memory complexity, ML accelerators are needed. Innovation is required in the computing arithmetic, since classic integer formats lead to classification accuracy that is too low for safety-critical applications such as autonomous driving, while floating-point numbers require too much circuit area and memory. To overcome these issues, the paper shows that a new format, called Posit, implemented in the new cppPosit software library, leads to a k-NN implementation with the same accuracy as floats but with halved bit-size. This means that a Posit Processing Unit (PPU) reduces the data transfer and storage complexity of ML accelerators by a factor greater than 2. We also prove that a complete LUT-based tabulated implementation of an 8-bit PPU requires just 64 kB of storage, compliant with memory-constrained devices.
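The 64 kB figure follows directly from the operand width: with 8-bit posits there are 2^8 × 2^8 = 65,536 possible operand pairs, so a complete table for one binary operation holds 65,536 one-byte results, i.e. 64 KiB. The following minimal C++ sketch illustrates how such a table could be sized and filled offline; the routine `posit8_add_sw` is a hypothetical bit-exact software reference used here only for illustration, not part of the actual cppPosit API.

```cpp
// Sketch only: sizing and offline generation of a complete lookup table
// for one binary operation on 8-bit posits (assumption: one byte per result).
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical bit-exact software reference for 8-bit posit addition;
// any correct software routine can be used to fill the table offline.
std::uint8_t posit8_add_sw(std::uint8_t a, std::uint8_t b);

// Complete table: result = table[a][b], one byte per entry.
using Posit8OpTable = std::array<std::array<std::uint8_t, 256>, 256>;

// 256 * 256 entries * 1 byte each = 65,536 bytes = 64 KiB per operation table.
constexpr std::size_t kTableBytes = 256u * 256u * sizeof(std::uint8_t);

Posit8OpTable build_add_table() {
    Posit8OpTable t{};
    for (std::size_t a = 0; a < 256; ++a)
        for (std::size_t b = 0; b < 256; ++b)
            t[a][b] = posit8_add_sw(static_cast<std::uint8_t>(a),
                                    static_cast<std::uint8_t>(b));
    return t;
}
```

At run time a tabulated PPU replaces the arithmetic circuit with a single table read per operation; the sketch only shows where the 64 KiB order of magnitude for an 8-bit operand pair comes from, under the per-operation sizing assumption stated above.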
| File | Size | Format |
|---|---|---|
| Paper_ICECS_Cococcioni_Rossi_Ruffaldi_Saponara_revised.pdf (open access; type: pre-print; license: all rights reserved) | 620.37 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.