Abstract
Many machine learning tasks involve processing eigenvectors derived from data. Especially valuable are Laplacian eigenvectors, which capture useful structural information about graphs and other geometric objects. However, ambiguities arise when computing eigenvectors: for each eigenvector v, the sign-flipped −v is also an eigenvector. More generally, higher-dimensional eigenspaces contain infinitely many choices of eigenvector bases. In this work we introduce SignNet and BasisNet, new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner. Our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the proper invariances. They are also theoretically strong for graph representation learning: they can provably approximate any spectral graph convolution, spectral invariants that go beyond message passing neural networks, and other graph positional encodings. Experiments show the strength of our networks for learning spectral graph filters and learning graph positional encodings.
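As a concrete illustration of the sign ambiguity and of the sign-invariant construction the abstract describes, below is a minimal NumPy sketch. It is not the authors' reference implementation: the networks `phi` and `rho`, the graph size, and all layer shapes are placeholder assumptions chosen only to demonstrate the mechanism. The key idea is the SignNet-style symmetrization f(v) = ρ(φ(v) + φ(−v)), which by construction returns identical outputs for v and −v.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build the graph Laplacian L = D - A of a small random undirected graph.
A = rng.integers(0, 2, size=(6, 6))
A = np.triu(A, 1)
A = A + A.T                              # symmetric adjacency, zero diagonal
L = np.diag(A.sum(axis=1)) - A

# eigh returns eigenvectors only up to sign: if v is a unit eigenvector,
# then -v is an equally valid output of the eigensolver.
eigvals, eigvecs = np.linalg.eigh(L)
v = eigvecs[:, 1]
assert np.allclose(L @ v, eigvals[1] * v)
assert np.allclose(L @ (-v), eigvals[1] * (-v))   # -v is also an eigenvector

# Hypothetical stand-ins for the learned networks phi and rho; fixed random
# maps are enough to demonstrate the invariance mechanism.
W1 = rng.normal(size=(6, 16))
W2 = rng.normal(size=(16, 4))

def phi(x):
    return np.tanh(x @ W1)

def rho(h):
    return np.tanh(h @ W2)

def sign_invariant(v):
    # Summing phi(v) and phi(-v) cancels the sign ambiguity exactly:
    # the result is unchanged when v is replaced by -v.
    return rho(phi(v) + phi(-v))

assert np.allclose(sign_invariant(v), sign_invariant(-v))
print("same output for v and -v")
```

In practice φ and ρ are trained neural networks (applied per eigenvector and then across eigenvectors), but any choice of φ and ρ yields the sign invariance, as the final assertion checks.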
| Original language | Undefined/Unknown |
|---|---|
| Title of host publication | ICLR 2022 Workshop on Geometrical and Topological Representation Learning |
| Number of pages | 29 |
| State | Published - 2022 |
| Externally published | Yes |
| Event | ICLR Workshop on Geometrical and Topological Representation Learning, 29 Apr 2022 → 29 Apr 2022, https://openreview.net/group?id=ICLR.cc/2022/Workshop/GTRL |
Conference

| Conference | ICLR Workshop on Geometrical and Topological Representation Learning |
|---|---|
| Abbreviated title | GTRL |
| Period | 29/04/22 → 29/04/22 |
| Internet address | https://openreview.net/group?id=ICLR.cc/2022/Workshop/GTRL |