Carnegie Mellon University

Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach

Journal contribution, posted on 2002-01-01, authored by Christopher G. Atkeson and Jun Morimoto
A longstanding goal of reinforcement learning is to develop nonparametric representations of policies and value functions that support rapid learning without suffering from interference or the curse of dimensionality. We have developed a trajectory-based approach, in which policies and value functions are represented nonparametrically along trajectories. These trajectories, policies, and value functions are updated as the value function becomes more accurate or as a model of the task is updated. We have applied this approach to periodic tasks such as hopping and walking, which required handling discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. We also describe extensions of the approach to make the policies more robust to modeling error and sensor noise.
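The abstract's core idea, representing a value function and policy nonparametrically by storing samples along trajectories and answering queries from the nearest stored state, can be illustrated with a minimal sketch. All names here (`TrajectoryValueFunction`, the nearest-neighbor lookup, the toy 1-D task) are our own illustrative assumptions, not the paper's actual method or code:

```python
# Hypothetical minimal sketch of a trajectory-based nonparametric
# representation: (state, value, action) samples stored along
# trajectories, queried by nearest stored state.

class TrajectoryValueFunction:
    def __init__(self):
        self.samples = []  # list of (state, value, action) tuples

    def add_trajectory(self, states, values, actions):
        # Store one trajectory's states with their estimated
        # values and the actions taken there.
        self.samples.extend(zip(states, values, actions))

    def nearest(self, state):
        # Nearest-neighbor lookup by squared Euclidean distance.
        return min(self.samples,
                   key=lambda s: sum((a - b) ** 2
                                     for a, b in zip(s[0], state)))

    def value(self, state):
        return self.nearest(state)[1]

    def policy(self, state):
        return self.nearest(state)[2]


# Toy 1-D usage: a trajectory descending toward a goal at x = 0,
# with discounted cost-to-go values (discount gamma = 0.9,
# unit cost per step).
gamma = 0.9
states = [(float(x),) for x in range(5, 0, -1)]  # x = 5, 4, 3, 2, 1
values = [sum(gamma ** k for k in range(int(x))) for (x,) in states]
actions = [-1.0] * len(states)  # always step toward the goal

vf = TrajectoryValueFunction()
vf.add_trajectory(states, values, actions)
v = vf.value((3.2,))   # value of the nearest stored state, x = 3
u = vf.policy((3.2,))  # action stored at that state
```

As new trajectories are added (or old ones re-optimized under an updated model), the stored samples are simply replaced or extended, which is what makes the representation incremental and resistant to interference between regions of state space.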
