Abstract
We consider the ability of deep neural networks to represent data that lies near a low-dimensional manifold in a high-dimensional space. We show that deep networks can efficiently extract the intrinsic, low-dimensional coordinates of such data. Specifically, we show that the first two layers of a deep network can exactly embed points lying on a monotonic chain, a special type of piecewise-linear manifold, mapping them to a low-dimensional Euclidean space. Remarkably, the network can do this using an almost optimal number of parameters. We also show that this network projects nearby points onto the manifold and then embeds them with little error. Experiments demonstrate that training with stochastic gradient descent can indeed find efficient representations similar to the one presented in this paper.
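To make the claim concrete, here is a minimal toy sketch (not the paper's general construction): a hand-built two-layer network, one hidden ReLU layer plus a linear output, that exactly recovers the arc-length coordinate of points on a two-segment piecewise-linear chain in the plane. The specific chain, weights, and helper names below are illustrative assumptions chosen so the embedding works out in closed form.

```python
import numpy as np

# Toy "monotonic chain": a piecewise-linear curve in R^2 with two segments,
# monotone in its first coordinate x.
#   segment 1: (0,0) -> (1,1)   length sqrt(2)
#   segment 2: (1,1) -> (2,1)   length 1
# The arc-length coordinate s is piecewise linear in x:
#   s(x) = sqrt(2) * x          for 0 <= x <= 1
#   s(x) = sqrt(2) + (x - 1)    for 1 <= x <= 2

def relu(z):
    return np.maximum(z, 0.0)

# Hand-built two-layer network: hidden ReLU layer, then linear readout.
# (Illustrative weights, not the construction from the paper.)
W1 = np.array([[1.0, 0.0],    # hidden unit 1 reads x
               [1.0, 0.0]])   # hidden unit 2 also reads x
b1 = np.array([0.0, -1.0])    # unit 2 activates past the bend at x = 1
w2 = np.array([np.sqrt(2), 1.0 - np.sqrt(2)])  # slope, then slope change

def embed(p):
    """Map a point p = (x, y) on the chain to its arc-length coordinate."""
    return w2 @ relu(W1 @ p + b1)

def chain_point(s):
    """Point on the chain at arc length s (hypothetical sampling helper)."""
    if s <= np.sqrt(2):
        t = s / np.sqrt(2)
        return np.array([t, t])
    return np.array([1.0 + (s - np.sqrt(2)), 1.0])

# The network reproduces the arc length exactly along the whole chain.
for s in np.linspace(0.0, np.sqrt(2) + 1.0, 7):
    assert abs(embed(chain_point(s)) - s) < 1e-12
```

The design point this toy exhibits is the one the abstract names: each ReLU unit switches on at a bend of the chain, so a single hidden layer followed by a linear map suffices to flatten a piecewise-linear curve into its intrinsic coordinate, with roughly one unit per segment.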
| Original language | English |
|---|---|
| Number of pages | 13 |
| State | Published - 2017 |
| Event | 5th International Conference on Learning Representations, ICLR 2017 (Toulon, France, 24 Apr 2017 → 26 Apr 2017) |
Conference
| Conference | 5th International Conference on Learning Representations, ICLR 2017 |
|---|---|
| Country/Territory | France |
| City | Toulon |
| Period | 24/04/17 → 26/04/17 |
All Science Journal Classification (ASJC) codes
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics