Policy gradients with variance related risk criteria

Aviv Tamar, Dotan Di Castro, Shie Mannor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Managing risk in dynamic decision problems is of cardinal importance in many fields such as finance and process control. The most common approach to defining risk is through various variance-related criteria, such as the Sharpe ratio or the standard-deviation-adjusted reward. It is known that optimizing many of the variance-related risk criteria is NP-hard. In this paper we devise a framework for local policy-gradient-style algorithms for reinforcement learning with variance-related criteria. Our starting point is a new formula for the variance of the cost-to-go in episodic tasks. Using this formula, we develop policy gradient algorithms for criteria that involve both the expected cost and the variance of the cost. We prove the convergence of these algorithms to local minima and demonstrate their applicability in a portfolio planning problem.
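The abstract names the objective but not the mechanics, so the following is a minimal, hedged sketch of a variance-penalized likelihood-ratio (REINFORCE-style) policy gradient in the spirit described. It rests on standard identities: with episode return R and score z = sum_t grad log pi(a_t | s_t), the variance decomposes as V = M - J^2 for J = E[R] and M = E[R^2], with gradient estimators grad J = E[R z], grad M = E[R^2 z], hence grad V = grad M - 2 J grad J. The one-step two-asset "portfolio" environment, its payoff parameters, the penalty weight lam, and the single-timescale moment averaging are all illustrative assumptions; the paper's actual algorithms use two-timescale updates and come with a convergence proof to local optima.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def run_episode(theta):
    """One-step toy 'portfolio' episode: pick the safe or the risky asset.

    Returns the reward R and the score z = grad_theta log pi(a; theta)
    for a softmax policy, where grad log pi(a) = e_a - pi.
    Payoff values here are hypothetical, chosen only for illustration.
    """
    p = softmax(theta)
    a = rng.choice(2, p=p)
    r = 1.0 if a == 0 else rng.normal(1.1, 2.0)  # asset 0 safe, asset 1 risky
    z = -p
    z[a] += 1.0
    return r, z

def train(lam=0.1, lr=0.05, iters=5000):
    """Stochastic gradient ascent on J - lam * V, where V = M - J^2.

    J and M (first and second moments of the return) are tracked with
    running averages; the paper instead uses two-timescale updates, so
    this single-timescale variant is a simplification.
    """
    theta = np.zeros(2)
    J, M = 0.0, 0.0
    for t in range(1, iters + 1):
        R, z = run_episode(theta)
        J += (R - J) / t            # running estimate of E[R]
        M += (R * R - M) / t        # running estimate of E[R^2]
        grad_J = R * z              # likelihood-ratio estimator of grad J
        grad_M = R * R * z          # likelihood-ratio estimator of grad M
        grad_V = grad_M - 2.0 * J * grad_J  # since V = M - J^2
        theta += lr * (grad_J - lam * grad_V)
    return theta, J, M - J * J

theta, J, V = train()
print("policy:", softmax(theta), "mean:", J, "variance:", V)
```

With these illustrative payoffs the risky asset has a higher mean (1.1 vs 1.0) but variance 4, so for lam = 0.1 the penalized objective favors the safe asset and the learned policy shifts toward it, which is the qualitative behavior a mean-variance criterion is meant to produce.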

Original language: English
Title of host publication: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Pages: 935-942
Number of pages: 8
State: Published - 2012
Event: 29th International Conference on Machine Learning, ICML 2012 - Edinburgh, United Kingdom
Duration: 26 Jun 2012 - 1 Jul 2012

Publication series

Name: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Volume: 1

Conference

Conference: 29th International Conference on Machine Learning, ICML 2012
Country/Territory: United Kingdom
City: Edinburgh
Period: 26/06/12 - 01/07/12

All Science Journal Classification (ASJC) codes

  • Human-Computer Interaction
  • Education
