Sign and Basis Invariant Networks for Spectral Graph Representation Learning

Derek Lim, Joshua David Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, Stefanie Jegelka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Many machine learning tasks involve processing eigenvectors derived from data. Especially valuable are Laplacian eigenvectors, which capture useful structural information about graphs and other geometric objects. However, ambiguities arise when computing eigenvectors: for each eigenvector v, the sign flipped −v is also an eigenvector. More generally, higher dimensional eigenspaces contain infinitely many choices of eigenvector bases. In this work we introduce SignNet and BasisNet — new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner. Our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the proper invariances. They are also theoretically strong for graph representation learning — they can provably approximate any spectral graph convolution, spectral invariants that go beyond message passing neural networks, and other graph positional encodings. Experiments show the strength of our networks for learning spectral graph filters and learning graph positional encodings.
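The core idea of sign invariance described in the abstract can be illustrated with a small sketch: apply a shared network φ to both v and −v and sum the results, so the output is unchanged under a sign flip of each eigenvector, then combine the per-eigenvector features with a second network ρ. The code below is a minimal, hypothetical illustration of this principle; the layer sizes and plain MLPs are illustrative choices and do not reproduce the architecture or implementation from the paper.

```python
import torch
import torch.nn as nn

class SignInvariantEncoder(nn.Module):
    """Minimal sketch of a sign-invariant eigenvector encoder (illustrative only)."""

    def __init__(self, num_eigenvectors: int, hidden_dim: int = 64, out_dim: int = 32):
        super().__init__()
        # phi maps each scalar eigenvector entry to a feature vector.
        self.phi = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # rho combines the sign-invariant features of all eigenvectors per node.
        self.rho = nn.Sequential(
            nn.Linear(num_eigenvectors * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, eigvecs: torch.Tensor) -> torch.Tensor:
        # eigvecs: (num_nodes, num_eigenvectors)
        n, k = eigvecs.shape
        v = eigvecs.unsqueeze(-1)                 # (n, k, 1)
        invariant = self.phi(v) + self.phi(-v)    # unchanged when v is replaced by -v
        return self.rho(invariant.reshape(n, -1)) # (n, out_dim) node-wise positional features
```

Because φ(v) + φ(−v) = φ(−v) + φ(v), flipping the sign of any eigenvector leaves the output unchanged, which is the invariance property the abstract refers to for SignNet; handling full basis ambiguities of higher dimensional eigenspaces (as in BasisNet) requires additional machinery not shown here.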
Original language: Undefined/Unknown
Title of host publication: ICLR 2022 Workshop on Geometrical and Topological Representation Learning
Number of pages: 29
State: Published - 2022
Externally published: Yes
Event: ICLR Workshop on Geometrical and Topological Representation Learning
Duration: 29 Apr 2022 – 29 Apr 2022
https://openreview.net/group?id=ICLR.cc/2022/Workshop/GTRL

Conference

Conference: ICLR Workshop on Geometrical and Topological Representation Learning
Abbreviated title: GTRL
Period: 29/04/22 – 29/04/22
Internet address: https://openreview.net/group?id=ICLR.cc/2022/Workshop/GTRL
