Carnegie Mellon University

Variable Resolution Discretization for High-Accuracy Solutions of Optimal Control Problems

Journal contribution by Remi Munos and Andrew W. Moore, posted 1999-01-01.
State abstraction is of central importance in reinforcement learning and Markov Decision Processes. This paper studies the case of variable resolution state abstraction for continuous-state, deterministic dynamic control problems in which near-optimal policies are required. We describe variable resolution policy and value function representations based on Kuhn triangulations embedded in a kd-tree. We then consider top-down approaches to choosing which cells to split in order to generate improved policies. We begin with local approaches based on value function properties and policy properties that use only features of individual cells in making splitting choices. Later, by introducing two new non-local measures, influence and variance, we derive a splitting criterion that allows one cell to efficiently take into account its impact on other cells when deciding whether to split. We evaluate the performance of a variety of splitting criteria on many benchmark problems (published on the web), paying careful attention to their number-of-cells versus closeness-to-optimality tradeoff curves.
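The abstract describes the approach only at a high level. As a rough, hypothetical illustration of what top-down variable-resolution refinement over a kd-tree can look like, the sketch below splits any leaf cell whose local value-function variation exceeds a threshold. The names (Cell, local_variation, refine), the corner-based score, and the axis-cycling rule are all assumptions made for illustration; this is not the authors' code, and the score is a simple local criterion rather than the paper's influence and variance measures.

```python
# Hypothetical sketch of variable-resolution discretization with a
# kd-tree; illustrative only, not the paper's implementation.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Cell:
    lo: List[float]   # lower corner of the cell's hyper-rectangle
    hi: List[float]   # upper corner
    children: Optional[Tuple["Cell", "Cell"]] = None

    def split(self, axis: int) -> Tuple["Cell", "Cell"]:
        """Halve this cell along `axis`, kd-tree style."""
        mid = 0.5 * (self.lo[axis] + self.hi[axis])
        left_hi, right_lo = self.hi.copy(), self.lo.copy()
        left_hi[axis], right_lo[axis] = mid, mid
        self.children = (Cell(self.lo, left_hi), Cell(right_lo, self.hi))
        return self.children


def local_variation(cell: Cell, value_fn: Callable[[List[float]], float]) -> float:
    """A purely local score: value-function spread across two opposite
    corners of the cell (a stand-in for the paper's local criteria)."""
    return abs(value_fn(cell.hi) - value_fn(cell.lo))


def refine(root: Cell,
           value_fn: Callable[[List[float]], float],
           threshold: float,
           max_depth: int = 8) -> None:
    """Top-down refinement: split any leaf whose score exceeds the
    threshold, cycling the split axis with depth as in a kd-tree."""
    stack = [(root, 0)]
    while stack:
        cell, depth = stack.pop()
        if cell.children is not None:
            stack.extend((c, depth) for c in cell.children)
        elif depth < max_depth and local_variation(cell, value_fn) > threshold:
            stack.extend((c, depth + 1)
                         for c in cell.split(depth % len(cell.lo)))


if __name__ == "__main__":
    # Refine a 2-D state space against a toy value function; resolution
    # concentrates where the value changes fastest.
    root = Cell(lo=[0.0, 0.0], hi=[1.0, 1.0])
    refine(root, value_fn=lambda s: s[0] ** 2 + s[1], threshold=0.2)
```

In the paper's non-local variants, a score like `local_variation` would be replaced by a criterion that also accounts for a cell's influence on the values of other cells, per the abstract.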

Date: 1999-01-01
