Bandits with Partially Observable Confounded Data

Guy Tennenholtz, Uri Shalit, Shie Mannor, Yonathan Efroni

Research output: Contribution to conference › Paper › peer-review

Abstract

We study linear contextual bandits with access to a large, confounded, offline dataset that was sampled from some fixed policy. We show that this problem is closely related to a variant of the bandit problem with side information. We construct a linear bandit algorithm that takes advantage of the projected information, and prove regret bounds. Our results demonstrate the ability to take advantage of confounded offline data. In particular, we prove regret bounds that improve on existing bounds by a factor related to the visible dimensionality of the contexts in the data. Our results indicate that confounded offline data can significantly improve online learning algorithms. Finally, we demonstrate various characteristics of our approach through synthetic simulations.
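To make the setting concrete, below is a minimal LinUCB-style simulation sketch. It is not the paper's algorithm: the way the offline data enters here (a warm-start prior on the first d_vis "visible" context coordinates, with a hand-picked confidence weight) is purely an illustrative assumption, as are all names and constants in the snippet.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_arms, T = 5, 4, 2000
theta_true = rng.normal(size=d)               # unknown reward parameter
arm_features = rng.normal(size=(n_arms, d))   # fixed arm contexts

# Standard LinUCB state: a single ridge-regression estimate shared across arms.
A = np.eye(d)          # regularized Gram matrix
b = np.zeros(d)        # response vector
alpha = 1.0            # exploration width

# Hypothetical offline-data hook (assumption, not the paper's method):
# warm-start the estimate on the d_vis observed coordinates of the context,
# as if a large offline dataset had pinned down that projection of theta.
d_vis = 3
theta_prior = theta_true[:d_vis] + 0.05 * rng.normal(size=d_vis)
A[:d_vis, :d_vis] += 50.0 * np.eye(d_vis)   # high confidence on visible part
b[:d_vis] += 50.0 * theta_prior

regret = 0.0
for t in range(T):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # UCB score per arm: estimated reward plus confidence width
    ucb = arm_features @ theta_hat + alpha * np.sqrt(
        np.einsum("ad,dk,ak->a", arm_features, A_inv, arm_features)
    )
    a = int(np.argmax(ucb))
    x = arm_features[a]
    reward = x @ theta_true + 0.1 * rng.normal()
    A += np.outer(x, x)
    b += reward * x
    regret += (arm_features @ theta_true).max() - x @ theta_true

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

Running the sketch with and without the warm-start block gives a rough feel for how prior information on the visible context dimensions can reduce cumulative regret, which is the flavor of improvement the abstract describes.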

Original language: English
Pages: 430-439
Number of pages: 10
State: Published - 2021
Externally published: Yes
Event: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021 - Virtual, Online
Duration: 27 Jul 2021 - 30 Jul 2021

Conference

Conference: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021
City: Virtual, Online
Period: 27/07/21 - 30/07/21

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Applied Mathematics
