
DL-PR: Generalized automatic modulation classification method based on deep learning with priori regularization

Zheng, Q.; Tian, X.; Yu, Z.; Wang, H.; Elhanashi, A.; Saponara, S.
2023-01-01

Abstract

Automatic modulation classification (AMC) is an essential topic in the development of cognitive radios: it underpins the adaptive modulation and demodulation capabilities that allow a radio to perceive and understand its surrounding environment and make corresponding decisions. In this paper, we propose a priori regularization method for deep learning (DL-PR) that guides loss optimization during model training. The regularization factor, built from an inter-class confrontation factor combined with global and dimensional divergence, increases the inter-class distance and reduces the intra-class distance between samples. While preserving as much of the original information in the received signals as possible, it makes full use of prior knowledge about the signal transmission process and ultimately helps deep learning models generalize well across signals with various signal-to-noise ratios (SNRs). To the best of our knowledge, this is the first attempt to regularize deep learning models based on the SNR distribution of the samples in order to improve AMC accuracy. Moreover, it can be shown that priori regularization can be interpreted as implicit data augmentation and implicit model ensembling. Comparisons with a series of state-of-the-art AMC methods and alternative regularization techniques on the public RadioML 2016.10a dataset show the superiority of DL-PR across multiple deep learning models: a CNN reaches 62.6% accuracy with an inference time of 0.82 ms per signal, an LSTM reaches 61.8% at 0.87 ms, and a hybrid CNN–LSTM reaches 64.2% at 0.94 ms. In practical applications, DL-PR can also be applied easily in complex environments thanks to its robustness to hyper-parameter choices and to SNR estimation errors.
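The record gives only a high-level description of the regularizer, and the exact DL-PR formula is not reproduced here. As an illustrative sketch only, a regularization term that rewards intra-class compactness and inter-class separation, with per-sample weights derived from SNR, might look like the following (the function name, the `exp(SNR/10)` weighting, and the centroid-based distances are assumptions for illustration, not the paper's formulation):

```python
import math

def snr_weighted_regularizer(features, labels, snr_db, lam=0.1):
    """Illustrative regularization term: small when classes are compact
    and well separated. Higher-SNR samples contribute more weight.
    This is a sketch of the general idea only, not the DL-PR formula."""
    # Map SNR (dB) to positive weights, normalized to mean 1.
    w = [math.exp(s / 10.0) for s in snr_db]
    mean_w = sum(w) / len(w)
    w = [x / mean_w for x in w]

    # Per-class centroids of the feature vectors.
    classes = sorted(set(labels))
    centroids = {}
    for c in classes:
        pts = [f for f, l in zip(features, labels) if l == c]
        centroids[c] = [sum(col) / len(pts) for col in zip(*pts)]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # SNR-weighted intra-class distance: pull samples toward their centroid.
    intra = sum(wi * dist(f, centroids[l])
                for f, l, wi in zip(features, labels, w)) / len(features)

    # Inter-class distance: mean pairwise centroid separation.
    pairs = [(a, b) for i, a in enumerate(classes) for b in classes[i + 1:]]
    inter = sum(dist(centroids[a], centroids[b]) for a, b in pairs) / len(pairs)

    # Added to the task loss; minimizing it shrinks intra-class spread
    # and grows inter-class separation.
    return intra - lam * inter
```

In a training loop, such a term would be added to the classification loss, so that gradient descent simultaneously fits the labels and shapes the feature space as described in the abstract.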
Zheng, Q.; Tian, X.; Yu, Z.; Wang, H.; Elhanashi, A.; Saponara, S.
Files attached to this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1248608
Warning: the data displayed here have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 68
  • Web of Science: 51