TY - JOUR
T1 - Double Double Descent
T2 - On Generalization Errors in Transfer Learning between Linear Regression Tasks
AU - Dar, Y.
AU - Baraniuk, R. G.
PY - 2022/12/31
Y1 - 2022/12/31
AB - We study the transfer learning process between two linear regression problems. An important and timely special case is when the regressors are overparameterized and perfectly interpolate their training data. We examine a parameter transfer mechanism whereby a subset of the parameters of the target task solution is constrained to the values learned for a related source task. We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to target task, and the relation between the two tasks. Our nonasymptotic analysis shows that the generalization error of the target task follows a two-dimensional double descent trend (with respect to the number of free parameters in each of the tasks) that is controlled by the transfer learning factors. Our analysis points to specific cases where the transfer of parameters is beneficial as a substitute for extra overparameterization (i.e., additional free parameters in the target task). Specifically, we show that the usefulness of a transfer learning setting is fragile and depends on a delicate interplay among the set of transferred parameters, the relation between the tasks, and the true solution. We also demonstrate that overparameterized transfer learning is not necessarily more beneficial when the source task is closer to or identical to the target task.
KW - Double descent
KW - Linear regression
KW - Overparameterized learning
KW - Transfer learning
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=bgu-pure&SrcAuth=WosAPI&KeyUT=WOS:000978251900010&DestLinkType=FullRecord&DestApp=WOS
DO - 10.1137/22M1469559
M3 - Article
SN - 2577-0187
SP - 1447
EP - 1472
JO - SIAM Journal on Mathematics of Data Science
JF - SIAM Journal on Mathematics of Data Science
ER -