
Title The Logical Expressiveness of Graph Neural Networks
Authors Pablo Barceló, Mikael Monet, Egor Kostylev, Jorge Pérez, Juan Reutter, Juan-Pablo Silva
Publication date 2020
Abstract The ability of graph neural networks (GNNs) to distinguish nodes in graphs has recently been characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first-order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs to capture all FOC2 classifiers. We show that it suffices to add readout functions, which allow the features of a node to be updated not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings, showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
Pages 1-9
Conference name International Conference on Learning Representations
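
The distinction the abstract draws between AC-GNN and ACR-GNN layers can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation: it assumes a dense 0/1 adjacency matrix and sum aggregation, and the class names (ACLayer, ACRLayer) are hypothetical.

    # Minimal sketch (assumptions as noted above, not the paper's code):
    # an AC-GNN layer updates each node only from itself and its neighbors;
    # an ACR-GNN layer additionally mixes in a global readout vector.
    import torch
    import torch.nn as nn

    class ACLayer(nn.Module):
        """x_v' = relu(W_self x_v + W_nbr * (sum of neighbor features))."""
        def __init__(self, dim):
            super().__init__()
            self.self_lin = nn.Linear(dim, dim)
            self.nbr_lin = nn.Linear(dim, dim)

        def forward(self, x, adj):
            # x: (n, dim) node features; adj: (n, n) dense 0/1 adjacency.
            # adj @ x sums the feature vectors of each node's neighbors.
            return torch.relu(self.self_lin(x) + self.nbr_lin(adj @ x))

    class ACRLayer(nn.Module):
        """AC update plus a readout: every node also sees a graph-level sum."""
        def __init__(self, dim):
            super().__init__()
            self.self_lin = nn.Linear(dim, dim)
            self.nbr_lin = nn.Linear(dim, dim)
            self.read_lin = nn.Linear(dim, dim)  # transforms the global vector

        def forward(self, x, adj):
            # Readout aggregates all node features into one global vector,
            # which is then broadcast back to every node.
            readout = x.sum(dim=0, keepdim=True)          # (1, dim)
            return torch.relu(self.self_lin(x)
                              + self.nbr_lin(adj @ x)
                              + self.read_lin(readout).expand_as(x))

The readout term is the only difference between the two layers: it lets every node condition on a graph-level summary at each layer, which is the extra capability the abstract identifies as sufficient for capturing all FOC2 classifiers.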