Abstract
In this paper, we propose the Bi-Sub-Gradient (Bi-SG) method, a generalization of the classical sub-gradient method to the setting of convex bi-level optimization problems. Bi-SG is a first-order method that is very easy to implement, in the sense that each iteration requires only the computation of the proximal mapping or a sub-gradient of the outer nonsmooth objective function, in addition to a proximal gradient step on the inner optimization problem. We show, under very mild assumptions, that Bi-SG achieves sublinear rates in terms of both the inner and outer objective functions. Moreover, if the outer objective function is additionally strongly convex (while possibly still nonsmooth), the outer rate can be improved to a linear rate. Finally, we prove that the distance of the generated sequence to the set of optimal solutions of the bi-level problem converges to zero.
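The abstract describes the iteration only in words. The sketch below shows one way such a scheme can look in Python: a proximal gradient step on the inner composite problem followed by a prox step on the outer objective with a diminishing step size. Everything here — the names `bi_sg`, `grad_f`, `prox_g`, `prox_w`, the step-size rule, and the toy least-squares/l1 instance — is an illustrative assumption, not the paper's exact algorithm or step-size analysis.

```python
import numpy as np

def bi_sg(grad_f, prox_g, prox_w, x0, lip, n_iters=2000, alpha=0.9):
    """Bi-SG-style iteration (illustrative sketch, not the paper's exact scheme).

    Inner problem:  min_x f(x) + g(x), with f smooth (lip-Lipschitz gradient)
                    and g proximable.
    Outer problem:  minimize w(x) over the set of inner-problem minimizers,
                    with w convex and proximable.
    """
    x = x0.copy()
    t = 1.0 / lip                      # inner proximal-gradient step size
    for k in range(1, n_iters + 1):
        # proximal gradient step on the inner composite objective
        y = prox_g(x - t * grad_f(x), t)
        # prox step on the outer objective with a diminishing step size
        eta = t / (k + 1) ** alpha     # illustrative step-size rule
        x = prox_w(y, eta)
    return x

# Hypothetical instance: among the least-squares solutions of an
# underdetermined system (inner problem, with g = 0), prefer small l1 norm
# (outer objective w = ||.||_1, whose prox is soft-thresholding).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = 1.0
b = A @ x_true

grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: v                                    # g = 0
soft = lambda v, e: np.sign(v) * np.maximum(np.abs(v) - e, 0.0)

x_hat = bi_sg(grad_f, prox_g, soft, np.zeros(50), lip=np.linalg.norm(A, 2) ** 2)
print("inner residual:", np.linalg.norm(A @ x_hat - b))
print("outer value  :", np.linalg.norm(x_hat, 1))
```

The diminishing outer step size is the key design choice: as iterations proceed, the scheme increasingly prioritizes inner-problem optimality while still making progress on the outer objective, mirroring the inner/outer rate trade-off the abstract alludes to.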
| Original language | English |
|---|---|
| Pages (from-to) | 3114-3142 |
| Number of pages | 29 |
| Journal | SIAM Journal on Optimization |
| Volume | 33 |
| Issue number | 4 |
| State | Published - 2023 |
Keywords
- bi-level optimization
- convex problems
- nonsmooth optimization
- proximal mapping
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Applied Mathematics