
RAN-GNNs: Breaking the Capacity Limits of Graph Neural Networks / Valsesia, D.; Fracastoro, G.; Magli, E. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS - ISSN 2162-237X - Print - (2023). DOI: 10.1109/TNNLS.2021.3118450

RAN-GNNs: Breaking the Capacity Limits of Graph Neural Networks

Valsesia, D.; Fracastoro, G.; Magli, E.
2023

Abstract

Graph neural networks (GNNs) have become a staple in learning and analysis problems involving data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this article, we investigate the recently proposed randomly wired architectures in the context of GNNs. Instead of building deeper networks by stacking many layers, we prove that employing a randomly wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths. We also provide extensive experimental evidence of the superior performance of randomly wired architectures over multiple tasks and five graph convolution definitions, using recent benchmarking frameworks that address the reliability of previous testing methodologies.
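To make the path-ensemble idea concrete, below is a minimal PyTorch sketch (not the authors' code) of a randomly wired GNN block: graph-convolution units are wired into a random DAG, and each unit aggregates its predecessors through trainable sigmoid-gated scalar weights, so different input-to-output paths, and hence different receptive-field sizes, are weighted adaptively. The class names, the Erdős–Rényi-style wiring, and the mean-aggregation convolution are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """Mean-aggregation graph convolution; an illustrative stand-in for
    any of the graph convolution definitions the paper experiments with."""
    def __init__(self, channels):
        super().__init__()
        self.lin = nn.Linear(2 * channels, channels)

    def forward(self, x, adj):
        # x: (N, C) node features, adj: (N, N) dense adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ x) / deg                      # mean over neighbors
        return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

class RandomlyWiredGNN(nn.Module):
    """Hypothetical sketch: a random DAG of graph-convolution units.
    Each unit aggregates its DAG predecessors with trainable scalar
    gates, so the block acts as an ensemble of paths whose contributions
    (and hence receptive-field sizes) are modulated during training."""
    def __init__(self, channels, num_units=8, p=0.4, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Upper-triangular random mask -> a random DAG over the units.
        upper = torch.triu(torch.ones(num_units, num_units, dtype=torch.bool), 1)
        wiring = (torch.rand(num_units, num_units, generator=g) < p) & upper
        for j in range(1, num_units):   # guarantee every unit has an input
            if not wiring[:j, j].any():
                wiring[j - 1, j] = True
        self.register_buffer("wiring", wiring)
        self.convs = nn.ModuleList(SimpleGraphConv(channels) for _ in range(num_units))
        self.gates = nn.Parameter(torch.zeros(num_units, num_units))

    def forward(self, x, adj):
        outs = [self.convs[0](x, adj)]               # unit 0 reads the input
        for j in range(1, len(self.convs)):
            preds = self.wiring[:j, j].nonzero().flatten().tolist()
            w = torch.sigmoid(self.gates[preds, j])  # trainable path weights
            agg = sum(wi * outs[i] for wi, i in zip(w, preds))
            outs.append(self.convs[j](agg, adj))
        return outs[-1]

# Toy usage: 50 graph nodes with 16 features and a random symmetric adjacency.
x = torch.randn(50, 16)
adj = (torch.rand(50, 50) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()
print(RandomlyWiredGNN(channels=16)(x, adj).shape)   # torch.Size([50, 16])
```

Each intermediate output outs[i] already carries a different effective neighborhood size, since unit i terminates paths of varying length through the DAG; gating the edges therefore widens or narrows the overall receptive field, matching the behavior described in the abstract.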
Files in this record:

File: RAN-GNNs_Breaking_the_Capacity_Limits_of_Graph_Neural_Networks.pdf
Access: Restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 2.15 MB
Format: Adobe PDF

File: RandomNN.pdf
Access: Open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/2957247