Posted on 2001-01-01. Authored by J. Andrew Bagnell, Andrew Y. Ng, and Jeff G. Schneider.
The authors consider the fundamental problem of finding good policies in uncertain models. It is demonstrated that although the general problem of finding the best policy with respect to the worst model is NP-hard, in the special case of a convex uncertainty set the problem is tractable.
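To make the tractability claim concrete, the sketch below implements one robust Bellman backup under interval bounds on each transition probability, one simple instance of a convex uncertainty set (the paper's sets are more general). The inner minimization over the worst admissible model reduces to a sort, so each backup runs in polynomial time. The function names and the interval parameterization are illustrative assumptions, not the authors' implementation.

import numpy as np

def worst_case_distribution(v, p_lo, p_hi):
    # Minimize p @ v over the convex set {p : p_lo <= p <= p_hi, sum(p) = 1}.
    # Greedy: start every successor state at its lower bound, then pour the
    # remaining probability mass into states in increasing order of value v.
    p = p_lo.copy()
    slack = 1.0 - p.sum()              # mass still to be placed
    for i in np.argsort(v):            # lowest-value states first
        add = min(p_hi[i] - p_lo[i], slack)
        p[i] += add
        slack -= add
        if slack <= 0.0:
            break
    return p

def robust_bellman_backup(V, R, P_lo, P_hi, gamma=0.95):
    # One max-min backup: the controller maximizes over actions while an
    # adversary picks the worst transition model inside the interval set.
    #   R[s, a]            -- immediate reward
    #   P_lo/P_hi[s, a, :] -- elementwise bounds on the transition distribution
    n_states, n_actions = R.shape
    V_new = np.empty(n_states)
    for s in range(n_states):
        q = [R[s, a] + gamma * worst_case_distribution(V, P_lo[s, a], P_hi[s, a]) @ V
             for a in range(n_actions)]
        V_new[s] = max(q)
    return V_new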
A stochastic dynamic game is proposed, and the security (max-min) equilibrium solution of the game is shown to correspond to the value function under the worst model and the optimal controller.
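As a usage sketch under the same assumed interval parameterization: iterating the backup is a contraction, so it converges to the security value of the game, at which point the maximizing controller and the minimizing model choice are in equilibrium.

# Hypothetical two-state, two-action problem with +/-0.1 interval uncertainty.
rng = np.random.default_rng(0)
R = rng.uniform(0.0, 1.0, size=(2, 2))
P_lo = np.clip(np.full((2, 2, 2), 0.5) - 0.1, 0.0, 1.0)
P_hi = np.clip(np.full((2, 2, 2), 0.5) + 0.1, 0.0, 1.0)

V = np.zeros(2)
for _ in range(500):          # gamma-contraction: V approaches the fixed point
    V = robust_bellman_backup(V, R, P_lo, P_hi)
print(V)                      # value guaranteed against the worst admissible model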
The authors demonstrate that the uncertain-model approach can be used to solve a class of nearly Markovian decision problems, providing lower bounds on performance in stochastic models with higher-order interactions.
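Informally, the lower bound follows because if the true transition kernel, however history-dependent, lies in the uncertainty set at every step, then the realized dynamics are among the models the adversary may play; for any policy, the security value of the game therefore cannot exceed the value actually attained.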
The framework considered establishes connections among, and generalizes, the paradigms of stochastic optimal, minimax, and H∞/robust control. Applications are considered, including robustness in reinforcement learning, planning in nearly Markovian decision processes, and bounding the error due to sensor discretization in noisy, continuous state spaces.