
Computing Equilibria of Dynamic Games

Download (496.66 kB)
journal contribution
posted on 2002-12-01, 00:00 authored by Kenneth L Judd, Sevin Yeltekin Sleet

In two influential papers, Abreu, Pearce and Stacchetti (APS) (1986, 1990) developed set-valued dynamic programming techniques for solving certain classes of repeated games. They showed that, for these games, the set of sequential equilibrium payoffs can be computed as a fixed point of an operator analogous to the Bellman operator in dynamic programming. These methods can be extended to cover a large class of dynamic games that arise naturally in industrial organization, macroeconomics, and public finance. In the dynamic case, the object of interest is a correspondence that maps a physical state variable to sets of equilibrium payoffs. APS-based methods imply that the equilibrium payoff correspondence is a fixed point of a monotone operator and can be obtained via iteration of this operator. In principle, this iterative procedure can be implemented numerically to solve for the equilibrium payoff correspondence. However, its practical numerical implementation requires an approximation scheme for correspondences that is efficient and consistent with the underlying structure of the monotone operator. In the first part of this paper, we provide such a scheme.
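To illustrate the kind of set-valued iteration the abstract describes, the following is a minimal sketch of an outer (supporting-hyperplane) approximation of the APS operator for a simple repeated prisoner's dilemma with public randomization, rather than a dynamic game with a physical state. It is not the authors' implementation: the stage payoffs, discount factor, number of search directions, and all function names are illustrative assumptions, and only scipy.optimize.linprog is used for the linear programs.

# Sketch: outer hyperplane approximation of the APS payoff-set operator
# for a repeated prisoner's dilemma. All numbers below are illustrative.
import numpy as np
from scipy.optimize import linprog

# Stage-game payoffs U[a1, a2] = (payoff to player 1, payoff to player 2);
# actions: 0 = cooperate, 1 = defect.
U = np.array([[[4.0, 4.0], [0.0, 6.0]],
              [[6.0, 0.0], [2.0, 2.0]]])
DELTA = 0.8                    # common discount factor (assumed)
L = 16                         # number of supporting hyperplane directions
angles = 2 * np.pi * np.arange(L) / L
H = np.column_stack([np.cos(angles), np.sin(angles)])   # L x 2 directions

def best_deviation(a1, a2):
    # Stage payoff each player can get from a one-shot best deviation.
    d1 = max(U[b, a2, 0] for b in range(2))
    d2 = max(U[a1, b, 1] for b in range(2))
    return np.array([d1, d2])

def worst_payoff(c, i):
    # Punishment value: minimize coordinate i over the set {w : H w <= c}.
    obj = np.zeros(2)
    obj[i] = 1.0
    res = linprog(obj, A_ub=H, b_ub=c, bounds=[(None, None)] * 2)
    return res.fun

def aps_step(c):
    # One application of the outer-approximated APS operator B:
    # update the supporting level c_l in every direction H[l].
    w_min = np.array([worst_payoff(c, i) for i in range(2)])
    c_new = np.full(L, -np.inf)
    for a1 in range(2):
        for a2 in range(2):
            u = U[a1, a2]
            dev = best_deviation(a1, a2)
            # Incentive compatibility: (1-d)u_i + d w_i >= (1-d)dev_i + d w_min_i,
            # written as -w_i <= ic_rhs_i for the LP below.
            ic_rhs = -((1 - DELTA) * (dev - u) / DELTA + w_min)
            A = np.vstack([H, -np.eye(2)])
            b = np.concatenate([c, ic_rhs])
            for l in range(L):
                # Maximize H[l] . ((1-d)u + d w) over enforceable continuations w.
                res = linprog(-DELTA * H[l], A_ub=A, b_ub=b,
                              bounds=[(None, None)] * 2)
                if res.status == 0:
                    val = (1 - DELTA) * H[l] @ u - res.fun
                    c_new[l] = max(c_new[l], val)
    return c_new

# Iterate the monotone operator from a large initial set until the
# supporting levels stop moving (approximate fixed point).
c = H @ np.array([3.0, 3.0]) + 10.0      # a ball containing all feasible payoffs
for _ in range(200):
    c_next = aps_step(c)
    if np.max(np.abs(c_next - c)) < 1e-8:
        break
    c = c_next
print("Supporting levels of the approximated equilibrium payoff set:", np.round(c, 3))

Because the operator is monotone and the initial set contains its own image, the supporting levels decrease toward a fixed point; in the dynamic games treated in the paper, the same idea is applied state by state to approximate the equilibrium payoff correspondence.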

