
The Infinite Contextual Graph Markov Model

Castellana, D.; Errica, F.; Bacciu, D.; Micheli, A.
2022-01-01

Abstract

The Contextual Graph Markov Model (CGMM) is a deep, unsupervised, and probabilistic model for graphs that is trained incrementally on a layer-by-layer basis. As with most Deep Graph Networks, an inherent limitation is the need to perform an extensive model selection to choose the proper size of each layer's latent representation. In this paper, we address this problem by introducing the Infinite Contextual Graph Markov Model (iCGMM), the first deep Bayesian nonparametric model for graph learning. During training, iCGMM can adapt the complexity of each layer to better fit the underlying data distribution. On 8 graph classification tasks, we show that iCGMM: i) successfully recovers or improves CGMM's performance while reducing the hyperparameters' search space; ii) performs comparably to most end-to-end supervised methods. The results include studies on the importance of depth, hyperparameters, and compression of the graph embeddings. We also introduce a novel approximated inference procedure that better deals with larger graph topologies.
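The adaptive per-layer complexity described above is the hallmark of Bayesian nonparametric mixtures, which are commonly built on a stick-breaking (Dirichlet-process) prior: the number of active mixture components is not fixed in advance but emerges from the data and the concentration parameter. The snippet below is an illustrative sketch of this general mechanism only, not the paper's actual inference procedure; the function name and the truncation threshold are our own choices.

```python
import random

def stick_breaking_weights(alpha, threshold=1e-3, rng=None, max_states=1000):
    """Draw mixture weights from a stick-breaking (GEM) prior.

    Repeatedly breaks off a Beta(1, alpha)-distributed fraction of the
    remaining stick; stops once the leftover mass drops below `threshold`.
    The number of weights returned is thus not fixed a priori -- the same
    idea lets a nonparametric layer grow its number of latent states
    instead of fixing it through model selection.
    """
    rng = rng or random.Random()
    weights = []
    remaining = 1.0
    while remaining > threshold and len(weights) < max_states:
        b = rng.betavariate(1.0, alpha)   # proportion of the stick to break off
        weights.append(remaining * b)
        remaining *= (1.0 - b)
    weights.append(remaining)             # lump the leftover mass into a final weight
    return weights

# Larger alpha spreads mass over more components, i.e. more latent states.
w = stick_breaking_weights(alpha=3.0, rng=random.Random(0))
```

The returned weights always sum to one, and their count varies with `alpha`: a small concentration yields a few dominant states, a large one yields many small ones, which is the knob a nonparametric layer tunes instead of a hand-picked layer size.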
Files in this product:

castellana22a.pdf (open access)

Type: final published version
License: All rights reserved
Size: 588.44 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1187287
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science: 0