Abstract
This paper studies the generalization error of invariant classifiers. In particular, we consider the common scenario where the classification task is invariant to certain transformations of the input and the classifier is constructed (or learned) to be invariant to these transformations. Our approach relies on factoring the input space into a product of a base space and a set of transformations. We show that, whereas the generalization error of a non-invariant classifier is proportional to the complexity of the input space, the generalization error of an invariant classifier is proportional to the complexity of the base space. We also derive a set of sufficient conditions on the geometry of the base space and the set of transformations under which the complexity of the base space is much smaller than the complexity of the input space. Our analysis applies to general classifiers such as convolutional neural networks. We demonstrate the implications of the developed theory for such classifiers with experiments on the MNIST and CIFAR-10 datasets.
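To make the factorization idea concrete, here is a minimal NumPy sketch of one standard way to obtain an invariant classifier: averaging a feature map over a finite set of transformations (cyclic shifts, in this toy example). The feature map `phi`, the projection `W`, and the choice of transformation set are hypothetical illustrations, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # fixed random projection (hypothetical)

def orbit(x):
    """Finite transformation set T: all cyclic shifts of a length-8 signal."""
    return [np.roll(x, k) for k in range(len(x))]

def phi(x):
    """A generic, non-invariant feature map."""
    return np.tanh(W @ x)

def invariant_phi(x):
    """Symmetrization: averaging phi over the orbit of x makes the output
    a function of the orbit alone, i.e. of a point in the base space
    X_0 = X / T rather than of the particular representative x."""
    return np.mean([phi(t) for t in orbit(x)], axis=0)

x = rng.standard_normal(8)
# Inputs related by a transformation yield identical invariant features.
assert np.allclose(invariant_phi(x), invariant_phi(np.roll(x, 3)))
```

Because the averaged features depend only on the orbit, a classifier built on them effectively operates on the base space; this is the intuition behind the smaller complexity term in the invariant case.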
| Original language | English |
| --- | --- |
| State | Published - 2017 |
| Event | 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017 - Fort Lauderdale, United States. Duration: 20 Apr 2017 → 22 Apr 2017 |
Conference
| Conference | 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017 |
| --- | --- |
| Country/Territory | United States |
| City | Fort Lauderdale |
| Period | 20/04/17 → 22/04/17 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Statistics and Probability