TY - GEN
T1 - Training (overparametrized) neural networks in near-linear time
AU - van den Brand, Jan
AU - Peng, Binghui
AU - Song, Zhao
AU - Weinstein, Omri
N1 - Publisher Copyright: © Jan van den Brand, Binghui Peng, Zhao Song, and Omri Weinstein.
PY - 2021/2/1
Y1 - 2021/2/1
AB - The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster second-order optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate (independent of the training batch size n), second-order algorithms incur a daunting slowdown in the cost per iteration (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [79, 23], yielding an O(mn²)-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width m. We show how to speed up the algorithm of [23], achieving an Õ(mn)-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension (mn) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an ℓ2-regression problem, and then use a Fast-JL type dimension reduction to precondition the underlying Gram matrix in time independent of m, allowing us to find a sufficiently good approximate solution via first-order conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra – which led to recent breakthroughs in convex optimization (ERM, LPs, Regression) – can be carried over to the realm of deep learning as well.
KW - Deep learning theory
KW - Nonconvex optimization
UR - http://www.scopus.com/inward/record.url?scp=85108156230&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.ITCS.2021.63
DO - 10.4230/LIPIcs.ITCS.2021.63
M3 - Conference contribution
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 12th Innovations in Theoretical Computer Science Conference, ITCS 2021
A2 - Lee, James R.
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 12th Innovations in Theoretical Computer Science Conference, ITCS 2021
Y2 - 6 January 2021 through 8 January 2021
ER -
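
Note: the abstract's core idea is to recast each Gauss-Newton step as an ℓ2-regression problem and solve it with conjugate gradient after preconditioning via a randomized sketch. The following is a minimal illustrative sketch of that sketch-to-precondition + CG pattern, not the authors' algorithm or code: it uses a plain Gaussian sketch instead of a Fast-JL transform, and all names and parameter choices (sketch_precondition_cg, sketch_size, tol, max_iter) are hypothetical.

import numpy as np

def sketch_precondition_cg(A, b, sketch_size=None, tol=1e-8, max_iter=200):
    """Approximately solve min_x ||A x - b||_2 for a tall matrix A (N >> d).

    1. Sketch A with a random projection S (Gaussian here for clarity; the paper
       uses a Fast-JL-type transform, which is asymptotically faster).
    2. QR-factor S @ A; R then preconditions A so that A R^{-1} is well conditioned.
    3. Run conjugate gradient on the preconditioned normal equations.
    """
    N, d = A.shape
    if sketch_size is None:
        sketch_size = 4 * d  # illustrative choice of sketch dimension

    # Step 1: random sketch of the design matrix.
    S = np.random.randn(sketch_size, N) / np.sqrt(sketch_size)
    SA = S @ A

    # Step 2: QR of the sketched matrix yields the preconditioner R.
    _, R = np.linalg.qr(SA)

    # Step 3: CG on (A R^{-1})^T (A R^{-1}) y = (A R^{-1})^T b, then x = R^{-1} y.
    def apply_precond_normal(y):
        z = np.linalg.solve(R, y)       # R^{-1} y
        z = A.T @ (A @ z)               # A^T A R^{-1} y
        return np.linalg.solve(R.T, z)  # R^{-T} A^T A R^{-1} y

    rhs = np.linalg.solve(R.T, A.T @ b)
    y = np.zeros(d)
    r = rhs - apply_precond_normal(y)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_precond_normal(p)
        alpha = rs / (p @ Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return np.linalg.solve(R, y)

# Usage: compare against the exact least-squares solution on a random instance.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5000, 50))
    b = rng.standard_normal(5000)
    x = sketch_precondition_cg(A, b)
    x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.linalg.norm(x - x_exact))

Because A R^{-1} is close to orthonormal after the sketch-based preconditioning, CG converges in a small number of iterations; this is the general mechanism by which the paper avoids explicitly inverting the Gram matrix at each Gauss-Newton step.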