Abstract
As self-interested individuals (“agents”) make decisions over time, they utilize information revealed by other agents in the past and produce information that may help agents in the future. This phenomenon is common in a wide range of scenarios in the Internet economy, as well as in medical decision-making. Each agent would like to exploit (select the best action given the current information) but would prefer the previous agents to explore (try out various alternatives to collect information). A social planner, by means of a carefully designed recommendation policy, can incentivize the agents to balance exploration and exploitation so as to maximize social welfare. We model the planner's recommendation policy as a multiarmed bandit algorithm under incentive-compatibility constraints induced by agents' Bayesian priors. We design a bandit algorithm that is incentive-compatible and has asymptotically optimal performance, as expressed by regret. Further, we provide a black-box reduction from an arbitrary multiarmed bandit algorithm to an incentive-compatible one, with only a constant multiplicative increase in regret. This reduction works for very general bandit settings that incorporate contexts and arbitrary partial feedback.
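The abstract states the black-box reduction only at the level of its guarantee. A minimal sketch of a phase-based wrapper in its spirit is given below, assuming a base bandit algorithm that exposes select() and update(arm, reward); the class name BICWrapper, the phase_len parameter, and the empirical-mean exploit rule are illustrative assumptions, not the paper's exact construction (which, among other things, starts with an initial sampling stage and chooses the phase length from the agents' Bayesian priors so that following the recommendation is incentive-compatible).

```python
import random


class BICWrapper:
    """Hypothetical sketch of a black-box reduction to incentive-compatible
    recommendations: most rounds in a phase recommend the "exploit" arm
    computed from data frozen at the phase boundary, while one uniformly
    random (and hidden) round per phase follows the base algorithm's choice.
    An agent cannot tell which kind of round it is in, which is the lever
    that makes following the recommendation incentive-compatible when the
    phase is long enough (that prior-dependent analysis is not shown here)."""

    def __init__(self, base_algo, n_arms, phase_len):
        self.base = base_algo          # any bandit algorithm with select()/update()
        self.n_arms = n_arms
        self.L = phase_len             # phase length; must be long enough for BIC
        self.totals = [0.0] * n_arms   # reward sums up to the last phase boundary
        self.counts = [0] * n_arms     # pull counts up to the last phase boundary
        self.buffer = []               # (arm, reward) pairs from the current phase
        self.t_in_phase = 0
        self.explore_round = random.randrange(phase_len)  # hidden explore slot
        self.pending_explore = None    # observation to forward to the base algorithm

    def recommend(self):
        """Arm to recommend to the current agent."""
        if self.t_in_phase == self.explore_round:
            return self.base.select()  # explore: defer to the wrapped algorithm
        # exploit: best empirical mean on data frozen at the phase start
        # (the paper's construction seeds these counts in an initial stage)
        return max(range(self.n_arms),
                   key=lambda a: self.totals[a] / self.counts[a]
                   if self.counts[a] else 0.0)

    def observe(self, arm, reward):
        """Record the agent's outcome; release buffered data at phase end."""
        self.buffer.append((arm, reward))
        if self.t_in_phase == self.explore_round:
            self.pending_explore = (arm, reward)
        self.t_in_phase += 1
        if self.t_in_phase == self.L:
            for a, r in self.buffer:
                self.totals[a] += r
                self.counts[a] += 1
            if self.pending_explore is not None:
                self.base.update(*self.pending_explore)
            self.buffer, self.pending_explore = [], None
            self.t_in_phase = 0
            self.explore_round = random.randrange(self.L)
```

Under this kind of interleaving, the wrapped algorithm is queried roughly once per phase, which is where a constant multiplicative overhead in regret (relative to running the base algorithm directly) would come from.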
Original language | English |
---|---|
Pages (from-to) | 1132-1161 |
Number of pages | 30 |
Journal | Operations Research |
Volume | 68 |
Issue number | 4 |
DOIs | |
State | Published - Jul 2020 |
Keywords
- Bayesian incentive-compatibility
- Mechanism design
- Multiarmed bandits
- Regret
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Management Science and Operations Research