Abstract
Regularization aims to improve prediction performance by trading an increase in training error for better agreement between training and prediction errors, an effect often captured through decreased degrees of freedom. In this paper we give examples showing that regularization can instead increase the degrees of freedom in common models, including the lasso and ridge regression. In such situations both training error and degrees of freedom increase, so the regularization is inherently without merit. Two important scenarios are described in which the expected reduction in degrees of freedom is guaranteed: all symmetric linear smoothers, and convex constrained linear regression models such as ridge regression and the lasso when compared to unconstrained linear regression.
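The guaranteed case for symmetric linear smoothers can be illustrated numerically: ridge regression is a symmetric linear smoother, and its degrees of freedom equal the trace of the hat matrix, which works out to a sum of shrinkage factors over the singular values of the design matrix. The sketch below (a minimal illustration, not from the paper; the random design and penalty grid are assumptions) checks that this quantity decreases monotonically in the penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))  # assumed n=50, p=5 Gaussian design

def ridge_df(X, lam):
    # Degrees of freedom of ridge regression:
    # df(lam) = trace( X (X'X + lam I)^{-1} X' )
    #         = sum_i d_i^2 / (d_i^2 + lam),
    # where d_i are the singular values of X.
    d = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(d**2 / (d**2 + lam)))

# df at lam=0 is rank(X) = 5; df shrinks as the penalty grows.
dfs = [ridge_df(X, lam) for lam in (0.0, 1.0, 10.0, 100.0)]
assert all(a >= b for a, b in zip(dfs, dfs[1:]))
```

The monotone decrease here reflects the paper's sufficient condition for symmetric linear smoothers; the counterexamples in the paper arise in settings outside this class, where the hat-matrix trace is not the relevant notion of degrees of freedom.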
| Original language | English |
|---|---|
| Pages (from-to) | 771-784 |
| Number of pages | 14 |
| Journal | Biometrika |
| Volume | 101 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1 Dec 2014 |
Keywords
- Degrees of freedom
- Model selection
- Optimism
- Regularization
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- General Mathematics
- Agricultural and Biological Sciences (miscellaneous)
- General Agricultural and Biological Sciences
- Statistics, Probability and Uncertainty
- Applied Mathematics