Posted on 2001-01-01, 00:00. Authored by J. Andrew Bagnell, Jeff G. Schneider.
Many control problems in the robotics field
can be cast as Partially Observed Markovian Decision
Problems (POMDPs), an optimal control formalism.
Finding optimal solutions to such problems in general,
however, is known to be intractable. It has often been
observed that in practice, simple structured controllers
suffice for good sub-optimal control, and recent research
in the artificial intelligence community has focused on policy search methods as techniques for finding sub-optimal
controllers when such structured controllers do exist. Traditional model-based reinforcement learning algorithms
make a certainty equivalence assumption on their learned
models and calculate optimal policies for a maximum-likelihood Markovian model. In this work, we consider
algorithms that evaluate and synthesize controllers under distributions of Markovian models. Previous work has
demonstrated that algorithms that maximize mean reward
with respect to model uncertainty lead to safer and more
robust controllers. We briefly consider other performance
criteria that emphasize robustness and exploration in the
search for controllers, and note the relation with experiment design and active learning. To validate the power
of the approach on a robotic application, we demonstrate
the presented learning control algorithm by
flying an autonomous helicopter. We show that the controller learned
is robust and delivers good performance in this real-world
domain.
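
As a minimal illustration of the general idea described above (not the authors' algorithm or code), the sketch below evaluates a fixed-structure controller against a set of dynamics models sampled from the uncertainty over a learned model, then runs a simple policy search on the mean (or worst-case) reward. The model interface, the linear controller, and the random-search loop are all hypothetical placeholders chosen for brevity.

```python
import numpy as np

def rollout_reward(model, policy_params, horizon=100):
    """Simulate one trajectory under a sampled dynamics model and return the
    accumulated reward. `model` is a hypothetical object exposing reset() and
    step(state, action) -> (next_state, reward)."""
    state = model.reset()
    total = 0.0
    for _ in range(horizon):
        action = policy_params @ state            # simple structured (linear) controller
        state, reward = model.step(state, action)
        total += reward
    return total

def score_policy(policy_params, sampled_models, criterion="mean"):
    """Score a policy against models drawn from the distribution over learned
    Markovian models: 'mean' averages reward over model uncertainty, while
    'worst' emphasizes robustness to the least favorable sampled model."""
    rewards = [rollout_reward(m, policy_params) for m in sampled_models]
    return float(np.mean(rewards)) if criterion == "mean" else float(np.min(rewards))

def policy_search(sampled_models, n_params, iters=200, step=0.05, seed=0):
    """Naive random search over controller parameters; any policy search
    procedure (gradient-based, simplex, etc.) could replace this loop."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=n_params)
    best_score = score_policy(best, sampled_models)
    for _ in range(iters):
        candidate = best + step * rng.normal(size=n_params)
        s = score_policy(candidate, sampled_models)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

In this framing, `sampled_models` might be obtained, for example, by fitting dynamics models to resampled versions of the training data, so the returned controller is one that performs well across the plausible models rather than only for a single maximum-likelihood model.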