An Untrained Neural Model for Fast and Accurate Graph Classification

Gallicchio C.;Sperduti A.
2023-01-01

Abstract

Recent works have demonstrated the feasibility of fast and accurate time series classification based on randomized convolutional kernels [5, 32]. For graph-structured data, most randomized graph neural networks follow the Echo State Network paradigm, in which individual layers or the whole network exhibit some form of recurrence [7, 8]. This paper explores a simple form of randomized graph neural network inspired by the success of randomized convolutions in the 1-dimensional domain. Our idea is straightforward: implement a no-frills convolutional graph neural network and leave its weights untrained. We then aggregate the node representations with global pooling operators, obtaining an untrained graph-level representation. Since no training is involved, computing this representation is extremely fast. Finally, we apply a fast linear classifier to the obtained representations; we opt for LS-SVM since it is among the fastest classifiers available. We show that this simple approach achieves competitive predictive performance while being extremely efficient at both training and inference time.
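
The pipeline described in the abstract (a stack of untrained graph convolutions, global pooling, and a linear LS-SVM readout) can be illustrated in a few lines of NumPy. The sketch below is not the authors' implementation: the symmetric adjacency normalization, the tanh nonlinearity, the per-layer mean/max pooling, and the primal, ridge-style LS-SVM solve for binary +/-1 labels are all assumptions made for clarity, and every hyperparameter shown is illustrative.

    import numpy as np

    def normalized_adjacency(A):
        # Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}.
        A_hat = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    def untrained_graph_embedding(A, X, weights):
        # A: (n, n) adjacency, X: (n, f) node features,
        # weights: list of random weight matrices, drawn once and never trained.
        # Each layer is one untrained graph convolution; mean and max pooling of
        # every layer's node states are concatenated into the graph-level vector.
        A_norm = normalized_adjacency(A)
        H, pooled = X, []
        for W in weights:
            H = np.tanh(A_norm @ H @ W)
            pooled.append(H.mean(axis=0))
            pooled.append(H.max(axis=0))
        return np.concatenate(pooled)

    def fit_ls_svm(Z, y, gamma=1.0):
        # Linear LS-SVM in the primal for labels y in {-1, +1}: a ridge-style
        # closed-form solve, with the bias handled by an appended constant feature.
        Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])
        return np.linalg.solve(Zb.T @ Zb + np.eye(Zb.shape[1]) / gamma, Zb.T @ y)

    def predict_ls_svm(w, Z):
        Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])
        return np.sign(Zb @ w)

    # Hypothetical usage on graphs given as (A_i, X_i) pairs with labels in {-1, +1}:
    # rng = np.random.default_rng(0)
    # weights = [rng.standard_normal((num_features, 64)) * 0.1,
    #            rng.standard_normal((64, 64)) * 0.1]
    # Z_train = np.stack([untrained_graph_embedding(A, X, weights) for A, X in train_graphs])
    # w = fit_ls_svm(Z_train, y_train)
    # Z_test = np.stack([untrained_graph_embedding(A, X, weights) for A, X in test_graphs])
    # y_pred = predict_ls_svm(w, Z_test)

Because the convolution weights are drawn once and never updated, building the graph-level representations amounts to a handful of matrix products per graph, and the only fitting step is the closed-form linear solve, which is consistent with the efficiency claim in the abstract.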
Year: 2023
ISBN: 978-3-031-44215-5; 978-3-031-44216-2

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1222211