TY - GEN
T1 - Consistent On-Line Off-Policy Evaluation
AU - Hallak, Assaf
AU - Mannor, Shie
N1 - Publisher Copyright: Copyright 2017 by the author(s).
PY - 2017
Y1 - 2017
N2 - The problem of on-line off-policy evaluation (OPE) has been actively studied in the last decade due to its importance both as a stand-alone problem and as a module in a policy improvement scheme. However, most Temporal Difference (TD) based solutions ignore the discrepancy between the stationary distributions of the behavior and target policies and its effect on the convergence limit when function approximation is applied. In this paper we propose the Consistent Off-Policy Temporal Difference (COP-TD(λ, β)) algorithm that addresses this issue and reduces this bias at some computational expense. We show that COP-TD(λ, β) can be designed to converge to the same value that would have been obtained by using on-policy TD(λ) with the target policy. Subsequently, the proposed scheme leads to a related and promising heuristic we call log-COP-TD(λ, β). Both algorithms show favorable empirical results compared to the current state-of-the-art on-line OPE algorithms. Finally, our formulation sheds some new light on the recently proposed Emphatic TD learning.
AB - The problem of on-line off-policy evaluation (OPE) has been actively studied in the last decade due to its importance both as a stand-alone problem and as a module in a policy improvement scheme. However, most Temporal Difference (TD) based solutions ignore the discrepancy between the stationary distributions of the behavior and target policies and its effect on the convergence limit when function approximation is applied. In this paper we propose the Consistent Off-Policy Temporal Difference (COP-TD(λ, β)) algorithm that addresses this issue and reduces this bias at some computational expense. We show that COP-TD(λ, β) can be designed to converge to the same value that would have been obtained by using on-policy TD(λ) with the target policy. Subsequently, the proposed scheme leads to a related and promising heuristic we call log-COP-TD(λ, β). Both algorithms show favorable empirical results compared to the current state-of-the-art on-line OPE algorithms. Finally, our formulation sheds some new light on the recently proposed Emphatic TD learning.
UR - http://www.scopus.com/inward/record.url?scp=85048419124&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 34th International Conference on Machine Learning, ICML 2017
SP - 2197
EP - 2214
BT - 34th International Conference on Machine Learning, ICML 2017
T2 - 34th International Conference on Machine Learning, ICML 2017
Y2 - 6 August 2017 through 11 August 2017
ER -