Markov decision processes with burstiness constraints

Michal Golan, Nahum Shimkin

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a Markov Decision Process (MDP), over a finite or infinite horizon, augmented by so-called (σ,ρ)-burstiness constraints. Such constraints, which were introduced within the framework of network calculus, limit some additive quantity to a given rate over any time interval, plus an additional term that allows for occasional and limited bursts. We introduce this class of constraints for MDP models and formulate the corresponding constrained optimization problems. Due to the burstiness constraints, constrained optimal policies are generally history-dependent. We use a recursive form of the constraints to define an augmented-state model, for which sufficiency of Markov or stationary policies is recovered and the standard theory may be applied, albeit over a larger state space. The analysis is mainly devoted to a characterization of feasible policies, followed by application to the constrained MDP optimization problem. A simple queueing example serves to illustrate some of the concepts and calculations involved.
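
The (σ,ρ)-burstiness constraint and its recursive form mentioned in the abstract can be illustrated with a small sketch. The Python snippet below is not taken from the paper; the cost sequence, the function names, and the Lindley-type backlog variable z are illustrative assumptions, following the standard network-calculus characterization of (σ,ρ) constraints: a per-step cost sequence satisfies the interval form (total cost over any window is at most ρ times the window length plus σ) if and only if the recursively computed backlog z_t = max(z_{t-1} + c_t - ρ, 0) never exceeds σ. A scalar of this kind is the sort of auxiliary variable one can append to the MDP state to recover sufficiency of Markov policies.

    # Illustrative sketch only (not from the paper): checks whether a finite
    # sequence of per-step costs c_1..c_T satisfies a (sigma, rho)-burstiness
    # constraint, via both the interval definition and the recursive form.

    def satisfies_interval_form(costs, sigma, rho):
        """Direct definition: every interval's total cost is at most
        rho * (interval length) + sigma."""
        T = len(costs)
        for s in range(T):
            for t in range(s, T):
                if sum(costs[s:t + 1]) > rho * (t - s + 1) + sigma + 1e-12:
                    return False
        return True

    def satisfies_recursive_form(costs, sigma, rho):
        """Equivalent recursive check: track a Lindley-type backlog
        z_t = max(z_{t-1} + c_t - rho, 0); the constraint holds iff
        z_t <= sigma at every step."""
        z = 0.0
        for c in costs:
            z = max(z + c - rho, 0.0)
            if z > sigma + 1e-12:
                return False
        return True

    if __name__ == "__main__":
        costs = [0, 3, 0, 0, 2, 2, 0, 1]   # hypothetical per-step costs
        sigma, rho = 2.0, 1.0              # burst allowance and long-run rate
        assert satisfies_interval_form(costs, sigma, rho) == \
               satisfies_recursive_form(costs, sigma, rho)
        print(satisfies_recursive_form(costs, sigma, rho))

The recursive check is the relevant one for the augmented-state construction: unlike the interval form, it requires only the current backlog value rather than the full history.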

Original language: English
Pages (from-to): 877-889
Number of pages: 13
Journal: European Journal of Operational Research
Volume: 312
Issue number: 3
DOIs
State: Published - 1 Aug 2023

Keywords

  • Burstiness constraints
  • Constrained Markov decision processes
  • Dynamic programming

All Science Journal Classification (ASJC) codes

  • Information Systems and Management
  • General Computer Science
  • Modelling and Simulation
  • Management Science and Operations Research
