Abstract
Robust optimization (RO) is one of the key paradigms for solving optimization problems affected by uncertainty. The two principal approaches to RO, the robust counterpart method and the adversarial approach, potentially lead to excessively large optimization problems. For that reason, first-order approaches based on online convex optimization have been proposed as alternatives for large-scale problems. However, existing first-order methods are either stochastic in nature or involve a binary search for the optimal value. We show that this problem can also be solved with deterministic first-order algorithms based on a saddle-point Lagrangian reformulation that avoids both of these issues. Our approach recovers the other approaches' O(1/ϵ²) convergence rate in the general case and offers an improved O(1/ϵ) rate for problems with constraints that are affine both in the decision and in the uncertainty. Experiments involving robust quadratic optimization demonstrate the numerical benefits of our approach.
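For context, a common way to arrive at the kind of saddle-point form the abstract refers to is sketched below. This is a generic construction, not necessarily the paper's exact formulation: the semi-infinite robust constraint is collapsed into a worst-case constraint and then dualized with a multiplier λ ≥ 0.

```latex
% Robust problem: the constraint must hold for every realization of the uncertainty.
\min_{x \in X} f(x)
\quad \text{s.t.} \quad
g(x, u) \le 0 \;\; \forall u \in \mathcal{U}
\;\;\Longleftrightarrow\;\;
\min_{x \in X} f(x)
\quad \text{s.t.} \quad
\max_{u \in \mathcal{U}} g(x, u) \le 0.

% Penalizing the worst-case constraint with a multiplier \lambda \ge 0 yields a
% min-max (saddle-point) Lagrangian, to which deterministic primal-dual
% first-order methods can be applied:
\min_{x \in X} \; \max_{u \in \mathcal{U},\, \lambda \ge 0}
\; f(x) + \lambda \, g(x, u).
```

The following is a minimal numerical sketch of the kind of deterministic primal-dual iteration such a reformulation enables, on a toy instance whose constraint is affine in both the decision and the uncertainty (the regime for which the abstract claims the improved O(1/ϵ) rate). All problem data, the step sizes, and the multiplier bound `LAM_MAX` are illustrative assumptions, not the paper's algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3                      # decision / uncertainty dimensions (toy sizes)
c = rng.standard_normal(n)       # objective: minimize c @ x
a = rng.standard_normal(n)       # robust constraint: (a + P u) @ x <= b
P = 0.1 * rng.standard_normal((n, k))
b = 1.0

def proj_box(x, lo=-1.0, hi=1.0):       # projection onto the feasible box X
    return np.clip(x, lo, hi)

def proj_ball(u, r=1.0):                # projection onto the uncertainty ball U
    nrm = np.linalg.norm(u)
    return u if nrm <= r else u * (r / nrm)

x, u, lam = np.zeros(n), np.zeros(k), 0.0
LAM_MAX = 10.0                   # assumed a-priori bound on the multiplier
x_sum = np.zeros(n)

T = 50_000
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)       # diminishing steps; guarantees are typically
                                 # stated for the averaged iterate below
    g = a + P @ u                # gradient of the constraint w.r.t. x at current u
    # descent in x on L(x, lam, u) = c @ x + lam * ((a + P u) @ x - b)
    x = proj_box(x - eta * (c + lam * g))
    # ascent in the multiplier and in the uncertainty
    lam = min(max(lam + eta * ((a + P @ u) @ x - b), 0.0), LAM_MAX)
    u = proj_ball(u + eta * lam * (P.T @ x))
    x_sum += x

x_avg = x_sum / T
# worst case of (a + P u) @ x over ||u|| <= 1 equals a @ x + ||P.T @ x||
print("averaged x:", np.round(x_avg, 3))
print("worst-case constraint value:", a @ x_avg + np.linalg.norm(P.T @ x_avg) - b)
```

With diminishing O(1/√t) steps, a scheme of this kind corresponds to the general O(1/ϵ²) regime; the improved O(1/ϵ) rate mentioned in the abstract exploits the bilinear structure with a more refined primal-dual method, which this sketch does not implement.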
| Original language | English |
| --- | --- |
| Pages (from-to) | 557-581 |
| Number of pages | 25 |
| Journal | INFORMS Journal on Computing |
| Volume | 37 |
| Issue number | 3 |
| State | Published - May 2025 |
Keywords
- convergence analysis
- first-order methods
- robust optimization
- saddle point
All Science Journal Classification (ASJC) codes
- Software
- Information Systems
- Computer Science Applications
- Management Science and Operations Research