Abstract
Dropout is a popular regularization technique in deep learning, yet the reason for its success is still not fully understood. This paper offers a new interpretation of dropout from a frame-theory perspective. By drawing a connection to recent developments in analog channel coding, we suggest that for a certain family of autoencoders with a linear encoder, optimizing the encoder under dropout regularization drives its filters toward an equiangular tight frame (ETF). Since this optimization is non-convex, we add a further regularizer that promotes such structures by minimizing the cross-correlation between filters in the network. We demonstrate its applicability to convolutional and fully connected layers in both feed-forward and recurrent networks. Together, these results indicate a genuine relationship between dropout and the ETF structure of the regularized linear operations.
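The cross-correlation regularizer mentioned in the abstract admits a short illustration. The sketch below is one plausible form under assumed details, not the paper's actual implementation: it penalizes the squared off-diagonal entries of the Gram matrix of unit-normalized filters, pushing all pairwise filter correlations toward the equal, minimal magnitude that characterizes an ETF. The names `cross_correlation_penalty` and `lam` are illustrative.

```python
import torch

def cross_correlation_penalty(W: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise cross-correlation between the rows of W
    (one filter per row), promoting an ETF-like filter structure.

    A minimal sketch under assumed details; the paper's exact
    regularizer may differ.
    """
    # Normalize each filter to unit norm so the Gram matrix holds
    # cosine similarities (cross-correlations between filters).
    W_normed = W / W.norm(dim=1, keepdim=True).clamp_min(1e-12)
    gram = W_normed @ W_normed.t()
    # Zero the diagonal: only cross terms between distinct filters count.
    off_diag = gram - torch.diag(torch.diag(gram))
    # Sum of squared off-diagonal entries: a smooth surrogate that
    # drives all pairwise correlations toward equal, small magnitude,
    # as in an equiangular tight frame.
    return (off_diag ** 2).sum()

# Hypothetical usage inside a training step:
# loss = reconstruction_loss + lam * cross_correlation_penalty(encoder.weight)
```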
| Original language | English |
| --- | --- |
| State | Published - 2020 |
| Event | 31st British Machine Vision Conference, BMVC 2020 - Virtual, Online. Duration: 7 Sep 2020 → 10 Sep 2020 |
Conference
| Conference | 31st British Machine Vision Conference, BMVC 2020 |
| --- | --- |
| City | Virtual, Online |
| Period | 7/09/20 → 10/09/20 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Vision and Pattern Recognition