Posted on 2010-12-01, authored by Brian D. Ziebart
Predicting human behavior from a small number of training examples is a challenging machine learning problem. In this thesis, we introduce the principle of maximum causal entropy, a general technique for applying information theory to decision-theoretic, game-theoretic, and control settings where relevant information is sequentially revealed over time. This approach guarantees decision-theoretic performance by matching purposeful measures of behavior (Abbeel & Ng, 2004), and/or enforces game-theoretic rationality constraints (Aumann, 1974), while otherwise being as uncertain as possible, which minimizes worst-case predictive log-loss (Grünwald & Dawid, 2003).
We derive probabilistic models for decision, control, and multi-player game settings using this approach. We then develop corresponding algorithms for efficient inference that include relaxations of the Bellman equation (Bellman, 1957) and simple learning algorithms based on convex optimization. We apply the models and algorithms to a number of behavior prediction tasks. Specifically, we present empirical evaluations of the approach in the domains of vehicle route preference modeling using over 100,000 miles of collected taxi driving data, pedestrian motion modeling from weeks of indoor movement data, and robust prediction of game play in stochastic multi-player games.
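The Bellman relaxation referred to in the abstract replaces the hard maximum over actions with a log-sum-exp ("softmax"), so that the resulting policy is a stochastic distribution over actions rather than a deterministic choice. The following is a minimal sketch of that softened backup for a small finite MDP with known dynamics and a state-only reward; the reward values, transition matrix, and horizon are hypothetical placeholders for illustration, not quantities taken from the thesis.

```python
import numpy as np

def soft_value_iteration(P, r, horizon):
    """Softened Bellman backups of the kind used for maximum-causal-entropy inference.

    P       -- transition probabilities, shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    r       -- state rewards, shape (S,) (a featurized reward theta @ f(s) would slot in here)
    horizon -- number of backup steps
    Returns a list of per-step stochastic policies, earliest time step first.
    """
    A, S, _ = P.shape
    V = np.zeros(S)                      # soft state values V(s)
    policies = []
    for _ in range(horizon):
        # Soft action values: Q(s, a) = r(s) + E_{s' ~ P(.|s,a)}[V(s')]
        Q = (r[None, :] + P @ V).T       # shape (S, A)
        # Softened maximum: V(s) = log sum_a exp Q(s, a)
        V = np.logaddexp.reduce(Q, axis=1)
        # Stochastic policy: pi(a | s) = exp(Q(s, a) - V(s))
        policies.append(np.exp(Q - V[:, None]))
    return policies[::-1]

# Tiny hypothetical example: 3 states, 2 actions, random dynamics.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(2, 3))   # each P[a, s, :] sums to 1
r = np.array([0.0, 0.5, 1.0])
pi = soft_value_iteration(P, r, horizon=5)
print(pi[0])                                  # action distribution at the first time step
```

Replacing the max with log-sum-exp is what makes the implied log-likelihood concave in the reward parameters, which is why the learning step can be posed as convex optimization.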
History
Date: 2010-12-01
Degree Type: Dissertation
Department: Machine Learning
Degree Name: Doctor of Philosophy (PhD)
Advisor(s): J. Andrew Bagnell, Anind K. Dey, Martial Hebert, Dieter Fox