Avrim Blum and Yishay Mansour
Many situations involve repeatedly making decisions in an uncertain
environment: for instance, deciding what route to drive to work each day,
or repeated play of a game against an opponent with an unknown strategy.
In this chapter we describe learning algorithms with strong guarantees for
settings of this type, along with connections to game-theoretic equilibria
when all players in a system are simultaneously adapting in such a manner.
We begin by presenting algorithms for repeated play of a matrix game with
the guarantee that against any opponent, they will perform nearly as well
as the best fixed action in hindsight (also called the problem of combining
expert advice or minimizing external regret). In a zero-sum game, such
algorithms are guaranteed to approach or exceed the minimax value of the
game, and even provide a simple proof of the minimax theorem.
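
To make the external-regret guarantee concrete, here is a minimal sketch of
a multiplicative-weights learner (in the spirit of randomized weighted
majority) in Python; the function name, the learning rate eta, and the loss
interface are illustrative choices rather than the chapter's notation.

    import math

    def multiplicative_weights(loss_rows, eta=0.1):
        """Full-information play. loss_rows holds one loss vector in [0,1]^n
        per round; returns (algorithm's expected loss, best fixed action's loss)."""
        n = len(loss_rows[0])
        weights = [1.0] * n
        alg_loss = 0.0
        cumulative = [0.0] * n                    # cumulative loss of each fixed action
        for losses in loss_rows:
            total = sum(weights)
            dist = [w / total for w in weights]   # current mixed strategy
            alg_loss += sum(p * l for p, l in zip(dist, losses))
            for i, l in enumerate(losses):
                cumulative[i] += l
                weights[i] *= math.exp(-eta * l)  # multiplicatively penalize lossy actions
        return alg_loss, min(cumulative)

With eta tuned on the order of sqrt(ln n / T) for a horizon of T rounds, the
gap between the two returned quantities (the external regret) grows only as
O(sqrt(T ln n)), i.e., it vanishes per round.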
We then turn to algorithms that minimize an even stronger form of regret,
known as internal or swap regret. We present a general reduction showing
how to convert any algorithm for minimizing external regret into one that
minimizes this stronger form of regret as well. Internal regret is important
because when all players in a game minimize it, the empirical distribution
of play is known to converge to the set of correlated equilibria.
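
One way such a reduction can be organized (a sketch under my own naming;
MWLearner, stationary, and low_swap_regret_play are not the chapter's
notation): run one copy of an external-regret learner for each of the n
actions, play the stationary distribution of the row-stochastic matrix of
their recommendations, and charge each copy the loss vector scaled by the
probability of its own action.

    import numpy as np

    class MWLearner:
        """External-regret subroutine: multiplicative weights over n actions."""
        def __init__(self, n, eta=0.1):
            self.w = np.ones(n)
            self.eta = eta
        def distribution(self):
            return self.w / self.w.sum()
        def update(self, losses):
            self.w = self.w * np.exp(-self.eta * losses)

    def stationary(Q):
        """A distribution p with p = p @ Q, for a row-stochastic matrix Q."""
        n = Q.shape[0]
        A = np.vstack([Q.T - np.eye(n), np.ones(n)])  # p(Q - I) = 0 and sum(p) = 1
        b = np.zeros(n + 1); b[-1] = 1.0
        p = np.linalg.lstsq(A, b, rcond=None)[0]
        p = np.clip(p, 0.0, None)
        return p / p.sum()

    def low_swap_regret_play(loss_rows):
        """One external-regret learner per action; play the fixed point of their advice."""
        n = len(loss_rows[0])
        learners = [MWLearner(n) for _ in range(n)]
        plays = []
        for losses in map(np.asarray, loss_rows):
            Q = np.array([lr.distribution() for lr in learners])  # row i: learner i's advice
            p = stationary(Q)
            plays.append(p)
            for i, lr in enumerate(learners):
                lr.update(p[i] * losses)          # learner i is charged p_i times the loss
        return plays

Playing the fixed point p = pQ makes the algorithm's expected loss decompose
across the n subroutines, so the swap regret is bounded by the sum of their
external regrets.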
The third part of this chapter explains a different reduction: how to
convert from the full-information setting, in which the action chosen by
the opponent is revealed after each time step, to the partial-information
(bandit) setting, where at each time step only the payoff of the selected
action is observed (such as in routing), and still maintain a small
external regret.
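
An Exp3-style sketch of this conversion (the parameter names and the
uniform exploration term are illustrative choices): since only the chosen
action's loss is observed, feed the underlying full-information update an
unbiased importance-weighted estimate, namely the observed loss divided by
the probability of the action that was played.

    import math
    import random

    def exp3(num_actions, num_rounds, pull, eta=0.05, gamma=0.05):
        """Bandit play. pull(t, action) returns the loss in [0,1] of the
        chosen action only; nothing else about the round is revealed."""
        weights = [1.0] * num_actions
        for t in range(num_rounds):
            total = sum(weights)
            # Mix in uniform exploration so every probability is bounded below.
            probs = [(1 - gamma) * w / total + gamma / num_actions for w in weights]
            action = random.choices(range(num_actions), weights=probs)[0]
            loss = pull(t, action)                  # the only feedback this round
            estimate = loss / probs[action]         # unbiased estimate of that entry
            weights[action] *= math.exp(-eta * estimate)
        return weights

Dividing by the sampling probability keeps the loss estimates unbiased, and
the gamma exploration keeps their variance bounded, which is what lets the
full-information guarantee carry over at the cost of a somewhat larger
regret bound.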
We end by discussing routing games in the Wardrop model, where one can show
that if all participants minimize their own external regret, then overall
traffic is guaranteed to converge to an approximate Nash equilibrium. This
further motivates price-of-anarchy results.
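
As a toy illustration (my own construction, not an example from the
chapter), one can simulate a population of commuters splitting unit flow
over two parallel links with linear latencies, updating each day by
multiplicative weights against the previous day's latencies; the flow
drifts to the split at which the two latencies equalize, which is the
Wardrop equilibrium condition.

    import math

    def wardrop_two_links(days=200, eta=0.5):
        """Unit flow split across two links; latency functions are illustrative."""
        latency = [lambda x: x, lambda x: 0.5 + 0.5 * x]
        flow = [0.5, 0.5]                          # initial traffic split
        for _ in range(days):
            lat = [latency[i](flow[i]) for i in range(2)]
            # Population-level multiplicative-weights update on the split.
            w = [flow[i] * math.exp(-eta * lat[i]) for i in range(2)]
            s = w[0] + w[1]
            flow = [w[i] / s for i in range(2)]
        return flow, [latency[i](flow[i]) for i in range(2)]

    flows, lats = wardrop_two_links()
    print(flows, lats)   # both latencies approach 2/3: an approximate equilibrium flow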