Abstract
In this work we study the properties of deep neural networks (DNNs) with random weights. We formally prove that DNNs with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Based on this, we draw conclusions about the required size of the training data and the networks' structure. A longer version of this paper, with more results and details, can be found in (Giryes et al., 2015).
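The distance-preservation property claimed above can be illustrated numerically. The sketch below, a hypothetical example not taken from the paper, shows only the linear part of the argument: a single layer of random Gaussian weights (scaled by 1/√d_out) approximately preserves pairwise Euclidean distances, in the spirit of Johnson–Lindenstrauss embeddings; the paper's full result additionally analyzes the nonlinearity and the in-class/out-of-class behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
d_in, d_out = 64, 4096

# Random Gaussian weight matrix, scaled so the map is a near-isometry
# in expectation: E[||W v||^2] = ||v||^2.
W = rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(d_out, d_in))

# Two arbitrary input points.
x = rng.normal(size=d_in)
y = rng.normal(size=d_in)

orig_dist = np.linalg.norm(x - y)
emb_dist = np.linalg.norm(W @ x - W @ y)
ratio = emb_dist / orig_dist

# For large d_out the ratio concentrates around 1, i.e. the random
# linear layer approximately preserves the distance between x and y.
print(ratio)
```

With `d_out = 4096`, the printed ratio typically lies within a few percent of 1; shrinking `d_out` widens the fluctuation, matching the usual embedding-dimension trade-off.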
| Original language | English |
| --- | --- |
| State | Published - 2015 |
| Externally published | Yes |
| Event | 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States |
| Duration | 7 May 2015 → 9 May 2015 |
Conference
| Conference | 3rd International Conference on Learning Representations, ICLR 2015 |
| --- | --- |
| Country/Territory | United States |
| City | San Diego |
| Period | 7/05/15 → 9/05/15 |
All Science Journal Classification (ASJC) codes
- Education
- Linguistics and Language
- Language and Linguistics
- Computer Science Applications