Temporal difference methods for the variance of the reward to go

Aviv Tamar, Dotan Di Castro, Shie Mannor

Research output: Contribution to journal › Conference article › peer-review


In this paper we extend temporal difference policy evaluation algorithms to performance criteria that include the variance of the cumulative reward. Such criteria are useful for risk management, and are important in domains such as finance and process control. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in a 4-dimensional continuous state space problem.
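The abstract describes TD-style policy evaluation of both the expected reward to go J and its variance. A common way to set this up, which the sketch below assumes, is to run coupled TD(0) updates for the first moment J(s) = E[B(s)] and the second moment M(s) = E[B(s)²] of the reward to go B(s), using the recursions J(s) = E[r + J(s')] and M(s) = E[r² + 2rJ(s') + M(s')], and then recover the variance as V = M − J². The Markov chain, features, step size, and episode count here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch: joint TD(0)-style updates for the value J and second
# moment M of the reward to go, with linear function approximation
# (tabular features as the simplest special case).
rng = np.random.default_rng(0)

n_states = 5                  # small absorbing chain; state 4 is terminal
features = np.eye(n_states)   # illustrative feature vectors phi(s)

w_J = np.zeros(n_states)      # weights approximating J(s) = E[B(s)]
w_M = np.zeros(n_states)      # weights approximating M(s) = E[B(s)^2]
alpha = 0.05                  # illustrative constant step size

for episode in range(5000):
    s = 0
    while s != n_states - 1:
        # Advance with probability 0.7, else stay; noisy unit reward.
        s_next = s + 1 if rng.random() < 0.7 else s
        r = 1.0 + 0.1 * rng.standard_normal()
        phi, phi_next = features[s], features[s_next]
        terminal = s_next == n_states - 1
        J_next = 0.0 if terminal else w_J @ phi_next
        M_next = 0.0 if terminal else w_M @ phi_next
        # TD errors for the two moments of the reward to go:
        #   J(s) = E[r + J(s')],  M(s) = E[r^2 + 2 r J(s') + M(s')]
        delta_J = r + J_next - w_J @ phi
        delta_M = r**2 + 2 * r * J_next + M_next - w_M @ phi
        w_J += alpha * delta_J * phi
        w_M += alpha * delta_M * phi
        s = s_next

V = w_M - w_J**2  # per-state variance estimate of the reward to go
```

The second-moment recursion follows from squaring B(s) = r + B(s') and taking expectations; the cross term 2rB(s') has conditional expectation 2rJ(s') given the next state, which is what makes a TD-style bootstrap for M possible alongside the usual one for J.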

Original language: English
Pages (from-to): 1532-1540
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 28
Publication status: Published - 2013
Event: 30th International Conference on Machine Learning, ICML 2013 - Atlanta, GA, United States
Duration: 16 Jun 2013 → 21 Jun 2013

All Science Journal Classification (ASJC) codes

  • Human-Computer Interaction
  • Sociology and Political Science


Dive into the research topics of "Temporal difference methods for the variance of the reward to go". Together they form a unique fingerprint.
