First-Order Algorithms for Robust Optimization Problems via Convex-Concave Saddle-Point Lagrangian Reformulation

Krzysztof Postek, Shimrit Shtern

Research output: Contribution to journal › Article › peer-review

Abstract

Robust optimization (RO) is one of the key paradigms for solving optimization problems affected by uncertainty. Two principal approaches for RO, the robust counterpart method and the adversarial approach, potentially lead to excessively large optimization problems. For that reason, first-order approaches, based on online convex optimization, have been proposed as alternatives for the case of large-scale problems. However, existing first-order methods are either stochastic in nature or involve a binary search for the optimal value. We show that this problem can also be solved with deterministic first-order algorithms based on a saddle-point Lagrangian reformulation that avoids both of these issues. Our approach recovers the other approaches' O(1/ε²) convergence rate in the general case and offers an improved O(1/ε) rate for problems with constraints that are affine both in the decision and in the uncertainty. An experiment involving robust quadratic optimization demonstrates the numerical benefits of our approach.
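To illustrate the kind of convex-concave saddle-point problem the abstract refers to, the sketch below is not the authors' algorithm but a minimal toy: a constraint that is affine (bilinear) in both the decision x and the uncertainty u yields a Lagrangian of the form x^T A u, and the classical extragradient (mirror-prox) method, one standard deterministic first-order scheme, attains the O(1/ε) rate on such problems. The instance, step size, and iteration budget are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy instance: a bilinear saddle-point problem
#   min_{x in simplex}  max_{u in [-1,1]^m}  x^T A u,
# a stand-in for the Lagrangian of a robust constraint that is affine
# in both the decision x and the uncertainty u (the O(1/eps) case).
rng = np.random.default_rng(0)
n, m = 5, 4
A = rng.standard_normal((n, m))


def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    s = np.sort(v)[::-1]
    css = np.cumsum(s)
    rho = np.nonzero(s * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)


def extragradient(A, T=2000):
    """Extragradient with averaged half-iterates: the classical
    deterministic O(1/T) scheme for bilinear saddle points."""
    n, m = A.shape
    eta = 0.5 / np.linalg.norm(A, 2)  # step size below 1/L, L = spectral norm
    x, u = np.full(n, 1.0 / n), np.zeros(m)
    x_avg, u_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Extrapolation step: gradients at the current point.
        x_half = project_simplex(x - eta * (A @ u))
        u_half = np.clip(u + eta * (A.T @ x), -1.0, 1.0)
        # Update step: gradients re-evaluated at the midpoint.
        x = project_simplex(x - eta * (A @ u_half))
        u = np.clip(u + eta * (A.T @ x_half), -1.0, 1.0)
        x_avg += x_half
        u_avg += u_half
    return x_avg / T, u_avg / T


x_bar, u_bar = extragradient(A)

# Duality gap of the averaged iterates: both inner problems are solvable
# in closed form (box maximum = L1 norm, simplex minimum = smallest entry).
gap = np.sum(np.abs(A.T @ x_bar)) - np.min(A @ u_bar)
print(f"duality gap after averaging: {gap:.4f}")
```

The duality gap is a natural stopping criterion here: it is nonnegative by construction and shrinks as O(1/T) for the averaged iterates, whereas plain gradient descent-ascent would only give the slower O(1/√T) rate that corresponds to the general O(1/ε²) case.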

Original language: English
Pages (from-to): 557-581
Number of pages: 25
Journal: INFORMS Journal on Computing
Volume: 37
Issue number: 3
DOIs
State: Published - May 2025

Keywords

  • convergence analysis
  • first order methods
  • robust optimization
  • saddle point

All Science Journal Classification (ASJC) codes

  • Software
  • Information Systems
  • Computer Science Applications
  • Management Science and Operations Research
