Projection-free Online Exp-concave Optimization

Dan Garber, Ben Kretzu

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider the setting of online convex optimization (OCO) with exp-concave losses. The best regret bound known for this setting is O(n log T), where n is the dimension and T is the number of prediction rounds (treating all other quantities as constants and assuming T is sufficiently large), and is attainable via the well-known Online Newton Step algorithm (ONS). However, ONS requires on each iteration the computation of a projection (according to some matrix-induced norm) onto the feasible convex set, which is often computationally prohibitive in high-dimensional settings and when the feasible set admits a non-trivial structure. In this work we consider projection-free online algorithms for exp-concave and smooth losses, where by projection-free we refer to algorithms that rely only on the availability of a linear optimization oracle (LOO) for the feasible set, which in many applications of interest admits much more efficient implementations than a projection oracle. We present an LOO-based ONS-style algorithm which, using overall O(T) calls to an LOO, guarantees worst-case regret bounded by Õ(n^(2/3) T^(2/3)) (ignoring all quantities except for n, T). However, our algorithm is most interesting in an important and plausible low-dimensional data scenario: if the gradients (approximately) span a subspace of dimension at most ρ, ρ ≪ n, the regret bound improves to Õ(ρ^(2/3) T^(2/3)), and by applying standard deterministic sketching techniques, both the space and average additional per-iteration runtime requirements are only O(ρn) (instead of O(n^2)). This improves upon recently proposed LOO-based algorithms for OCO which, while having the same state-of-the-art dependence on the horizon T, suffer from regret/oracle complexity that scales with √n or worse.
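For intuition only, the sketch below illustrates the two primitives the abstract contrasts: a linear optimization oracle, which for a set such as the ℓ1 ball costs O(n) per call, and the ONS direction -(1/γ)A⁻¹g with a rank-one (Sherman-Morrison) update of A⁻¹. The ℓ1-ball example, the function names, and the parameter γ are illustrative assumptions; this is not the paper's algorithm.

```python
import numpy as np

def l1_ball_loo(g, radius=1.0):
    """Linear optimization oracle for the l1 ball (illustrative example):
    returns argmin over {x : ||x||_1 <= radius} of <g, x>.
    The minimizer is a signed vertex, found in O(n) time -- far cheaper
    than a projection under a matrix-induced norm."""
    i = np.argmax(np.abs(g))            # coordinate with largest |g_i|
    x = np.zeros_like(g)
    x[i] = -radius * np.sign(g[i])      # move opposite the gradient sign
    return x

def ons_direction(A_inv, g, gamma=1.0):
    """One Online Newton Step direction -(1/gamma) * A^{-1} g, where
    A accumulates rank-one terms g g^T. A_inv is updated in place via
    the Sherman-Morrison formula, avoiding an O(n^3) inverse."""
    Ag = A_inv @ g
    A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)  # inverse of A + g g^T
    return -(A_inv @ g) / gamma, A_inv
```

In classical ONS the resulting point must then be projected back onto the feasible set under the norm induced by A, which is the expensive step; the paper's contribution is replacing that projection with O(T) total LOO calls of the cheap kind shown above.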

Original language: English
Pages (from-to): 1259-1284
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 195
State: Published - 2023
Event: 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India
Duration: 12 Jul 2023 – 15 Jul 2023

Keywords

  • exp-concave
  • linear optimization oracle
  • online convex optimization
  • online learning
  • projection-free

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
