ABSTRACT

Exploiting Multi-Step Sample Trajectories for Approximate Value Iteration

From the abstract: "Approximate value iteration methods for reinforcement learning (RL) generalize experience from limited samples across large state-action spaces. The function approximators used in such methods typically introduce errors in value estimation which can harm the quality of the learned value functions. We present a new batch-mode, off-policy, approximate value iteration algorithm called Trajectory Fitted Q-Iteration (TFQI). This approach uses the sequential relationship between samples within a trajectory, a set of samples gathered sequentially from the problem domain, to lessen the adverse influence of approximation errors while deriving long-term value. We provide a detailed description of the TFQI approach and an empirical study that analyzes the impact of our method on two well-known RL benchmarks. Our experiments demonstrate that this approach has significant benefits, including better learned policy performance, improved convergence, and decreased sensitivity to the choice of function approximation."
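The abstract does not include pseudocode, so the following is only a minimal, illustrative sketch of a trajectory-aware Fitted Q-Iteration loop, not the authors' TFQI implementation. It assumes a discrete action set, an ExtraTreesRegressor as the function approximator, and that the trajectory-propagated value and the approximator's one-step bootstrap are combined with a max; the function names (trajectory_fqi, greedy_value), the discount factor, and the regressor settings are placeholders chosen for illustration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

GAMMA = 0.95  # discount factor (assumed value)

def greedy_value(model, state, actions):
    """Max over a discrete action set of the current Q-function estimate."""
    rows = np.array([np.append(state, a) for a in actions], dtype=float)
    return float(model.predict(rows).max())

def trajectory_fqi(trajectories, actions, n_iterations=20):
    """Batch, off-policy approximate value iteration over whole trajectories.

    trajectories: list of trajectories; each trajectory is a list of
    (state, action, reward, next_state, done) tuples in temporal order.
    """
    # Regression inputs (state-action pairs), flattened once.
    X = np.array([np.append(s, a)
                  for traj in trajectories
                  for (s, a, r, s2, d) in traj], dtype=float)

    model = None
    for _ in range(n_iterations):
        targets = []
        for traj in trajectories:
            traj_targets = []
            next_traj_value = 0.0  # value propagated backward along this trajectory
            for (s, a, r, s2, done) in reversed(traj):
                bootstrap = 0.0 if (model is None or done) else greedy_value(model, s2, actions)
                # Take the better of the approximator's one-step bootstrap and the
                # value already backed up along the trajectory, so a single poor
                # estimate at s2 cannot erase the observed multi-step return.
                target = r if done else r + GAMMA * max(bootstrap, next_traj_value)
                traj_targets.append(target)
                next_traj_value = target
            traj_targets.reverse()
            targets.extend(traj_targets)
        # Refit the approximator on the updated targets (standard FQI step).
        model = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, np.array(targets))
    return model
```

The backward pass over each trajectory is what distinguishes this sketch from plain Fitted Q-Iteration: each target can fall back on the return actually observed along the trajectory when the approximator's estimate at the next state is too low, which is one way to read the abstract's claim about lessening the influence of approximation errors.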

Author:
Publisher:
Date: 2013-12
Copyright: Public Domain
Retrieved From: Defense Technical Information Center (DTIC): http://www.dtic.mil/dtic/
Format: pdf
Media Type: application/pdf
Source: Proceedings of the European Conference on Machine Learning, Prague, Czech Republic, 24-28 September 2013.