On the Universality of Rotation Equivariant Point Cloud Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Learning functions on point clouds has applications in many fields, including computer vision, computer graphics, physics, and chemistry. Recently, there has been growing interest in neural architectures that are invariant or equivariant to all three shape-preserving transformations of point clouds: translation, rotation, and permutation. In this paper, we present a first study of the approximation power of these architectures. We first derive two sufficient conditions for an equivariant architecture to have the universal approximation property, based on a novel characterization of the space of equivariant polynomials. We then use these conditions to show that two recently suggested models (Thomas et al., 2018; Fuchs et al., 2020) are universal, and to devise two novel universal architectures.
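For readers unfamiliar with the property studied here: a layer f acting on a point cloud X (an n × 3 array of coordinates) is rotation- and permutation-equivariant when f(P X Rᵀ) = P f(X) Rᵀ for every rotation matrix R and permutation matrix P. The following minimal NumPy sketch illustrates this numerically with a toy layer; it is not one of the architectures analyzed in the paper, and the function name and weights are hypothetical choices for illustration only.

```python
import numpy as np

def toy_equivariant_layer(X, w_local=0.7, w_global=0.3):
    """Hypothetical toy layer: a per-point scaling plus a global mean term.

    X: (n, 3) point cloud. Both terms commute with rotations applied on the
    right and permutations applied on the left, so the layer is rotation- and
    permutation-equivariant (it is not translation-invariant; that would
    require centering the cloud first).
    """
    return w_local * X + w_global * X.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# Random rotation via QR decomposition; flip the sign so det(R) = +1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))

# Random permutation matrix.
P = np.eye(5)[rng.permutation(5)]

# Equivariance check: f(P X R^T) == P f(X) R^T.
lhs = toy_equivariant_layer(P @ X @ R.T)
rhs = P @ toy_equivariant_layer(X) @ R.T
assert np.allclose(lhs, rhs)
print("rotation- and permutation-equivariance verified numerically")
```

The universality question the paper addresses is whether compositions of such equivariant layers can approximate every continuous equivariant function, which this toy layer alone clearly cannot.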

Original language: English
Title of host publication: International Conference on Learning Representations
Number of pages: 22
State: Published - 2021
Externally published: Yes
Event: 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online
Duration: 3 May 2021 - 7 May 2021

Conference

Conference: 9th International Conference on Learning Representations, ICLR 2021
City: Virtual, Online
Period: 3/05/21 - 7/05/21

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language
