Abstract
Recent work has shown the utility of developing machine learning models that respect the symmetries of eigenvectors. These works promote sign invariance, since for any eigenvector v the negation −v is also an eigenvector. In this work, we demonstrate that sign equivariance is useful for applications such as building orthogonally equivariant models and link prediction. To obtain these benefits, we develop novel sign equivariant neural network architectures. These models are based on our analytic characterization of the sign equivariant polynomials and thus inherit provable expressiveness properties.
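The sign equivariance property described in the abstract, f(−v) = −f(v), can be illustrated with a minimal sketch. Note this is an assumed toy construction for illustration only, not the architecture proposed in the paper: any odd nonlinearity (such as tanh) applied to a linear map of v yields a sign equivariant function.

```python
# Illustrative sketch of sign equivariance (NOT the paper's architecture):
# f(v) = tanh(W v) satisfies f(-v) = -f(v) because tanh is an odd function.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # illustrative weight matrix

def f(v):
    # tanh is odd, so tanh(W v) is sign equivariant in v
    return np.tanh(W @ v)

v = rng.standard_normal(8)  # stand-in for an eigenvector
print(np.allclose(f(-v), -f(v)))  # sign equivariance holds
```

By contrast, a sign *invariant* model would instead satisfy f(−v) = f(v), e.g. by acting on elementwise absolute values; the abstract's point is that the equivariant property is what enables the downstream applications.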
| Original language | English |
|---|---|
| Title of host publication | ICLR 2023 Workshop on Physics for Machine Learning |
| State | Published - 2023 |
| Externally published | Yes |
| Event | ICLR 2023 Workshop on Physics for Machine Learning - Hybrid, Kigali, Rwanda. Duration: 4 May 2023 → 4 May 2023. https://physics4ml.github.io/ |
Conference
| Conference | ICLR 2023 Workshop on Physics for Machine Learning |
|---|---|
| Country/Territory | Rwanda |
| City | Kigali |
| Period | 4/05/23 → 4/05/23 |
| Internet address | https://physics4ml.github.io/ |