Posted on 2004-01-01, authored by J. Andrew Bagnell
Decision making under uncertainty is a central problem in robotics and machine learning.
This thesis explores three fundamental and intertwined aspects of the problem of learning
to make decisions.
The first is the problem of uncertainty. Classical optimal control techniques typically
rely on perfect state information, a condition real-world problems never enjoy. Perhaps
more critically, classical optimal control algorithms fail to degrade gracefully as this
assumption is violated. Closely tied to the problem of uncertainty is that of approximation.
In large-scale problems, learning to make decisions inevitably requires approximation, and
the difficulties of approximation inside the framework of optimal control are well
known [Gordon, 1995].
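To make the approximation difficulty concrete, the sketch below interleaves an exact Bellman backup with a least-squares projection onto a linear feature space, the setting in which [Gordon, 1995] shows such projected iterations need not be contractions and can diverge for general approximators. The MDP, features, and dimensions are illustrative assumptions, not a model from this thesis.

```python
import numpy as np

# A tiny illustrative MDP: transition tensor P[a, s, s'] and reward
# r[s, a] drawn at random (assumed placeholders, not thesis data).
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
r = rng.standard_normal((n_states, n_actions))

# Linear value-function approximation: V(s) ~ phi(s) @ w, with fewer
# features than states, so each backup must be projected.
phi = rng.standard_normal((n_states, 2))

w = np.zeros(2)
for _ in range(100):
    V = phi @ w
    # Exact Bellman backup at every state...
    backup = np.max(r + gamma * np.einsum("ast,t->sa", P, V), axis=1)
    # ...followed by a least-squares projection onto the feature span.
    # This projected iteration is not guaranteed to be a contraction.
    w, *_ = np.linalg.lstsq(phi, backup, rcond=None)
```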
Often, especially in robotics applications, we wish to operate learned controllers in
domains where failure has relatively serious consequences. It is important to ensure that
the decision policies we generate are robust both to uncertainty in our models of systems
and to our inability to capture the true system dynamics accurately.
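One way to make this notion of robustness concrete is a worst-case Bellman backup over a small set of candidate transition models, choosing the best action against the most pessimistic model. The sketch below is a minimal illustration under assumed dimensions and a randomly generated model set; it does not reproduce any specific algorithm from this thesis.

```python
import numpy as np

# Uncertainty set of candidate transition models models[k, a, s, s']
# and reward r[s, a]; both are assumed placeholders for illustration.
rng = np.random.default_rng(1)
n_states, n_actions, n_models, gamma = 4, 2, 3, 0.95
models = rng.dirichlet(np.ones(n_states),
                       size=(n_models, n_actions, n_states))
r = rng.standard_normal((n_states, n_actions))

V = np.zeros(n_states)
for _ in range(200):
    # Q[k, s, a]: action values under candidate model k.
    Q = r + gamma * np.einsum("kast,t->ksa", models, V)
    # Robust backup: best action against the worst model in the set.
    V = Q.min(axis=0).max(axis=1)

# Greedy policy with respect to the worst-case action values.
policy = (r + gamma * np.einsum("kast,t->ksa", models, V)) \
    .min(axis=0).argmax(axis=1)
```

Because the worst-case backup is still a contraction in the value function, the iteration converges; the resulting policy hedges against every model in the set rather than trusting a single nominal one.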
We present new classes of algorithms that gracefully handle uncertainty, approximation,
and robustness, paying careful attention to the computational aspects of both the problems
and the algorithms developed. Finally, we provide case studies that serve both as motivation
for the techniques and as illustrations of their applicability.