Efficient online linear optimization with approximation algorithms

Research output: Contribution to journal › Conference article › peer-review

Abstract

We revisit the problem of online linear optimization in the case where the set of feasible actions is accessible only through an approximate linear optimization oracle with a multiplicative approximation guarantee of factor α. This setting is of particular interest since it captures natural online extensions of well-studied offline linear optimization problems that are NP-hard yet admit efficient approximation algorithms. The goal is to minimize the α-regret, the natural extension of the standard regret in online learning to this setting. We present new algorithms with significantly improved oracle complexity for both the full-information and bandit variants of the problem. Mainly, for both variants, we present α-regret bounds of O(T^{-1/3}), where T is the number of prediction rounds, using only O(log(T)) calls to the approximation oracle per iteration, on average. These are the first results to obtain both an average oracle complexity of O(log(T)) (or even poly-logarithmic in T) and an α-regret bound of O(T^{-c}) for a constant c > 0, for both variants.
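To make the α-regret notion from the abstract concrete, the following is a minimal sketch, not the paper's algorithm: a toy action set, a hypothetical oracle that returns an α-approximate minimizer of a linear objective, and the average α-regret of a simple follow-the-leader-style player. All names and the action set are illustrative assumptions.

```python
import random

# Toy feasible actions (e.g. vertices of a combinatorial polytope).
ACTIONS = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
ALPHA = 1.5  # multiplicative approximation factor of the oracle (assumed)


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def approx_oracle(cost):
    """Return some action whose cost is within a factor ALPHA of optimal.

    This stands in for an efficient approximation algorithm: it may return
    any action meeting the guarantee, not necessarily the true minimizer.
    """
    best = min(ACTIONS, key=lambda x: dot(cost, x))
    threshold = ALPHA * dot(cost, best)
    candidates = [x for x in ACTIONS if dot(cost, x) <= threshold]
    return random.choice(candidates)


random.seed(0)
T = 1000
total_cost = 0.0
cum_cost = [0.0, 0.0]  # cumulative cost vector seen so far

for t in range(T):
    cost = (random.random(), random.random())  # adversary picks a cost
    # Play against the oracle using past costs (illustrative strategy only;
    # the paper's algorithms are more involved and use far fewer oracle calls).
    action = approx_oracle(cum_cost) if t > 0 else ACTIONS[0]
    total_cost += dot(cost, action)
    cum_cost = [c + d for c, d in zip(cum_cost, cost)]

# Average alpha-regret: the player's average cost minus ALPHA times the
# average cost of the best fixed action in hindsight.
best_fixed = min(dot(cum_cost, x) for x in ACTIONS)
alpha_regret = total_cost / T - ALPHA * best_fixed / T
print(f"average alpha-regret: {alpha_regret:.4f}")
```

Because the benchmark is scaled down by α, the α-regret can be driven to (or below) zero even though competing with the exact offline optimum is computationally intractable in the problems the paper targets.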

Original language: English
Pages (from-to): 628-636
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2017-December
State: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: 4 Dec 2017 – 9 Dec 2017

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
