Posted on 2002-01-01. Authored by Christopher G. Atkeson and Jun Morimoto.
A longstanding goal of reinforcement learning is to develop nonparametric
representations of policies and value functions that support
rapid learning without suffering from interference or the curse of dimensionality.
We have developed a trajectory-based approach, in which
policies and value functions are represented nonparametrically along trajectories.
These trajectories, policies, and value functions are updated as
the value function becomes more accurate or as a model of the task is updated.
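To make the idea concrete, below is a minimal sketch, not the authors' implementation, of a value function and policy stored nonparametrically at points along a trajectory: queries are answered by nearest-neighbor lookup over the stored points, and the stored values are refreshed by discounted Bellman backups whenever the model of the task changes. The names `TrajectoryValueFunction`, `dynamics`, and `cost` are illustrative assumptions, not names from the paper.

```python
import numpy as np

class TrajectoryValueFunction:
    """Value function and policy represented by samples along a trajectory."""
    def __init__(self, states, actions, gamma=0.95):
        self.states = np.asarray(states, dtype=float)    # points along the trajectory
        self.actions = np.asarray(actions, dtype=float)  # local policy at each point
        self.values = np.zeros(len(self.states))         # local value estimates
        self.gamma = gamma                               # discount factor

    def nearest(self, x):
        # Index of the stored trajectory point closest to the query state x.
        d = np.linalg.norm(self.states - np.asarray(x, dtype=float), axis=1)
        return int(np.argmin(d))

    def value(self, x):
        return self.values[self.nearest(x)]

    def action(self, x):
        return self.actions[self.nearest(x)]

    def backup(self, dynamics, cost, sweeps=50):
        # Sweep backwards along the trajectory, applying a one-step discounted
        # Bellman backup at each stored point using the current task model.
        for _ in range(sweeps):
            for i in reversed(range(len(self.states))):
                x, u = self.states[i], self.actions[i]
                x_next = dynamics(x, u)
                self.values[i] = cost(x, u) + self.gamma * self.value(x_next)
```

A toy usage on a one-dimensional system (again purely illustrative):

```python
states = [[0.0], [0.1], [0.2], [0.1]]
actions = [[1.0], [1.0], [-1.0], [-1.0]]
vf = TrajectoryValueFunction(states, actions, gamma=0.9)
vf.backup(dynamics=lambda x, u: x + 0.1 * u,
          cost=lambda x, u: float(x @ x + 0.01 * u @ u))
print(vf.value([0.05]), vf.action([0.05]))
```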
We have applied this approach to periodic tasks such as hopping
and walking, which required handling discount factors and discontinuities
in the task dynamics, and using function approximation to represent
value functions at discontinuities. We also describe extensions of the approach
to make the policies more robust to modeling error and sensor
noise.
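The use of function approximation to represent the value function at a discontinuity could look roughly like the sketch below, which assumes a single switching event (such as ground contact) and a plain linear least-squares fit as the approximator; `LinearValueAtEvent` and `backup_to_event` are hypothetical names for illustration, not the paper's method or API.

```python
import numpy as np

class LinearValueAtEvent:
    """Least-squares linear fit V(x) ~ w . [x, 1] from value samples at the event."""
    def __init__(self):
        self.w = None

    def fit(self, states, values):
        # Fit the value samples collected on the switching surface.
        X = np.hstack([np.asarray(states, dtype=float),
                       np.ones((len(states), 1))])        # bias column
        self.w, *_ = np.linalg.lstsq(X, np.asarray(values, dtype=float), rcond=None)

    def value(self, x):
        return float(np.append(np.asarray(x, dtype=float), 1.0) @ self.w)

def backup_to_event(states, actions, cost, gamma, event_vf):
    # Backward pass over a trajectory segment that ends at the discontinuity:
    # seed the backup with the fitted value at the event, then apply one-step
    # discounted backups toward the start of the segment.
    V = event_vf.value(states[-1])
    for x, u in zip(reversed(states[:-1]), reversed(actions)):
        V = cost(x, u) + gamma * V
    return V
```

Fitting a separate approximator on the switching surface avoids interpolating the value function across the discontinuity, where it need not be smooth.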