Document Type

Conference Proceeding

Publication Title

Proceedings of Machine Learning Research

Abstract

Graph neural networks (GNNs) aim to learn representations for graph-structured data and have shown impressive performance, particularly in node classification. Recently, many methods have studied the representations of GNNs from the perspectives of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, which makes it convenient to study the feature space explicitly via matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. Consequently, in existing models the feature space is limited either by the poor representation of shared weights or by the restricted dimensionality of node attributes, leading to suboptimal performance. Motivated by these findings, we propose 1) feature subspaces flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of our proposed more comprehensive feature space, with inference time comparable to the baselines, and demonstrate its efficient convergence capability.
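The abstract's central observation, that repeated aggregation makes the stacked feature subspaces [X, ÂX, Â²X, ...] nearly linearly dependent, can be checked numerically. Below is a minimal sketch assuming a symmetrically normalized adjacency Â = D^(-1/2)(A + I)D^(-1/2) on a random graph; the graph, feature dimensions, and rank threshold are illustrative choices, not taken from the paper's released code.

```python
import numpy as np

# Minimal sketch: stack the aggregated subspaces [X, AX, A^2 X, ...]
# and inspect their numerical rank. All parameters are illustrative.
rng = np.random.default_rng(0)

n, d, K = 100, 4, 20                       # nodes, attribute dim, aggregation steps
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                     # undirected adjacency
A = A + np.eye(n)                          # self-loops

deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))    # D^{-1/2} (A + I) D^{-1/2}

X = rng.standard_normal((n, d))            # random node attributes

blocks, H = [], X
for _ in range(K + 1):
    blocks.append(H)                       # subspace A_hat^k X
    H = A_hat @ H
F = np.hstack(blocks)                      # stacked feature space, d*(K+1) columns

# Singular values contributed by the trailing blocks decay geometrically
# with k, so the numerical rank saturates well below the column count.
s = np.linalg.svd(F, compute_uv=False)
rank = int((s > 1e-6 * s[0]).sum())
print(f"columns: {F.shape[1]}, numerical rank: {rank}")
```

Increasing K adds columns but few genuinely new directions, which mirrors the paper's motivation for flattening the feature subspaces and adding structural principal components.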

First Page

33156

Last Page

33176

Publication Date

7-2023

Keywords

Feature space, Graph neural networks, Graph-structured data, Matrix spaces, Optimization goals, Performance, Spatial modeling, Spectral graph theory, Spectral modeling

Comments

Preprint version from arXiv

Uploaded on June 12, 2024
