
Non-dissipative Propagation by Randomized Anti-symmetric Deep Graph Networks

Gravina A.; Gallicchio C.; Bacciu D.
2025-01-01

Abstract

Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to the efficiency of their adaptive message-passing scheme between nodes. However, DGNs are typically afflicted by a distortion of the information flowing from distant nodes (i.e., over-squashing) that limits their ability to learn long-range dependencies. This reduces their effectiveness, since predictive problems may require capturing interactions at different, and possibly large, radii in order to be solved effectively. We focus on Anti-symmetric Deep Graph Networks (A-DGNs), a recently proposed neural architecture for learning from graphs. A-DGNs are designed from stable and non-dissipative ordinary differential equations, with the anti-symmetric structure of the internal weights as the key architectural ingredient. In this paper, we investigate the merits of the resulting architectural bias by incorporating randomized internal connections in node embedding computations and by restricting training to operate exclusively at the output layer. To empirically validate our approach, we conduct experiments on various graph benchmarks, demonstrating the effectiveness of the proposed approach in learning from graph data.
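Although the record provides no code, the recipe described in the abstract (anti-symmetric recurrent weights yielding stable, non-dissipative dynamics, randomized and frozen internal connections, training confined to the output layer) can be illustrated with a minimal sketch. The sketch below assumes a forward-Euler discretization of the underlying graph ODE, as in the original A-DGN formulation; all names and hyperparameters (adgn_embeddings, hidden, layers, eps, gamma) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def antisymmetric(W):
    """Return W - W^T. An anti-symmetric matrix has purely imaginary
    eigenvalues, which underpins the stable, non-dissipative dynamics."""
    return W - W.T

def adgn_embeddings(X, A, hidden=64, layers=20, eps=0.1, gamma=0.1, seed=0):
    """Node embeddings from frozen (randomized) A-DGN-style dynamics.

    X : (n, d) node feature matrix; A : (n, n) adjacency matrix.
    All internal weights are drawn once at random and never trained;
    each layer is one explicit forward-Euler step of the graph ODE.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = antisymmetric(rng.uniform(-1.0, 1.0, (hidden, hidden)))
    V = rng.uniform(-1.0, 1.0, (hidden, hidden))   # neighbor-aggregation weights
    U = rng.uniform(-1.0, 1.0, (d, hidden))        # input projection
    b = rng.uniform(-1.0, 1.0, hidden)

    H = np.tanh(X @ U)                             # initial node states
    for _ in range(layers):
        # message passing over the graph, then a stabilized Euler step:
        # h <- h + eps * tanh((W - W^T - gamma*I) h + msg + b)
        msg = A @ H @ V
        H = H + eps * np.tanh(H @ (W - gamma * np.eye(hidden)).T + msg + b)
    return H

# Only the readout is trained, e.g. a linear model on the frozen embeddings:
#   from sklearn.linear_model import RidgeClassifier
#   clf = RidgeClassifier().fit(adgn_embeddings(X, A), y)
```

The small diffusion term gamma * I is the standard stabilizer added to the anti-symmetric matrix in the A-DGN formulation; keeping it (and a small step size eps) is what lets many layers be stacked without the long-range signal vanishing or exploding.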
Year: 2025
ISBN: 9783031746420; 9783031746437
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1313109
Note: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science (ISI): 0