Statistical Game Theory
Game theory and statistics are two foundational scientific disciplines that have played a significant role in the development of a wide variety of fields, including computer science, the natural sciences, and the social sciences. Traditionally, game theory has been used for decision making in strategic environments where multiple agents interact with one another. Statistics, on the other hand, is traditionally used for reasoning in non-adversarial settings, where the samples are assumed to be generated by some stationary, non-reactive source. Because game theory and statistics are often studied in such contrasting settings, the two disciplines have traditionally been regarded as disparate research areas. There is, however, a great degree of commonality between the two fields: a surprisingly wide range of problems in classical and modern statistics have a game-theoretic component. Classically, the mathematical philosophy of statistics, particularly frequentist statistics, posits that the source of samples is potentially adversarial; this view gave rise to the rich theory of minimax statistical games and minimax estimation. Boosting algorithms, often regarded as among the best off-the-shelf classifiers, can be viewed as playing a zero-sum game against a weak learner. To allow for various departures of the “test environment” from the “training environment”, the emerging field of robust machine learning permits adversarial manipulation of the training or test environment. Finally, a recent class of density estimators in modern machine learning uses an adversarial “critic” of the density estimator to improve the final density estimate. The common theme among these classical and modern developments is an interplay between statistical estimation and multiplayer games. Statistical game theory is a unified analytical and algorithmic framework underlying all of these developments. This thesis aims to lay
the foundations of statistical game theory to address the above-mentioned (and many more) statistical problems. While our primary focus in this thesis is on minimax statistical estimation and boosting, the tools and techniques developed here are broadly applicable and are useful for studying other problems such as robust learning and adversarial density estimation. Our work on minimax statistical estimation aims to provide efficient techniques
for algorithmically building minimax-optimal estimators. These techniques automate the process of designing minimax estimators and can aid statisticians in constructing them. For various fundamental problems such as mean estimation and entropy estimation, our algorithmic minimax estimators match, if not beat, the performance of existing minimax estimators designed by statisticians. Our work on boosting aims to improve its performance and bring
it closer to the performance of neural networks. To this end, we develop a generalized boosting framework that combines weak classifiers using more complex forms of aggregation than the additive combinations considered in traditional boosting. Our generalized boosting algorithms outperform traditional boosting and achieve performance close to that of neural networks.
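As a brief illustration (the notation here is generic, not taken from the thesis), the minimax statistical games mentioned above can be written as a two-player zero-sum game between the statistician, who picks an estimator, and nature, who picks the data-generating distribution:

```latex
% Minimax risk as a zero-sum game: the statistician chooses an
% estimator \hat{\theta}, nature chooses a parameter \theta \in \Theta,
% and the payoff is the expected loss (risk) of the estimator.
R^{*} \;=\; \inf_{\hat{\theta}} \; \sup_{\theta \in \Theta} \;
  \mathbb{E}_{X \sim P_{\theta}}\!\left[ L\bigl(\theta, \hat{\theta}(X)\bigr) \right]
```

A minimax estimator is any $\hat{\theta}$ attaining (or approximately attaining) the outer infimum; an algorithmic approach to minimax estimation can be understood as searching for an approximate equilibrium of this game.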
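The boosting-as-a-game view can be made concrete with a small sketch. The following is a minimal AdaBoost-style implementation (a standard textbook variant, not the thesis's generalized framework): the booster maintains a distribution over training examples, a weak learner (here, decision stumps on a single feature) best-responds to that distribution, and the booster's multiplicative-weights update is a zero-sum game dynamic.

```python
import numpy as np

def stump_predict(x, thresh, sign):
    """Decision stump: sign * (+1 if x > thresh else -1)."""
    return sign * np.where(x > thresh, 1, -1)

def adaboost(x, y, rounds):
    """Booster (distribution player) vs. weak learner (hypothesis player).

    x must be sorted 1-D features; y contains +/-1 labels."""
    n = len(x)
    w = np.full(n, 1.0 / n)              # booster's distribution over examples
    thresholds = (x[:-1] + x[1:]) / 2    # candidate split points
    ensemble = []
    for _ in range(rounds):
        # Weak learner best-responds: pick the stump with least weighted error.
        best = None
        for t in thresholds:
            for s in (+1, -1):
                err = np.sum(w * (stump_predict(x, t, s) != y))
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = max(err, 1e-10)            # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(x, t, s)
        # Multiplicative-weights update: upweight examples the stump got wrong.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, t, s))
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the stumps (the additive combination)."""
    agg = sum(a * stump_predict(x, t, s) for a, t, s in ensemble)
    return np.sign(agg)

# Toy data that no single stump can fit, but a weighted vote of stumps can.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1, -1, 1, -1])
ensemble = adaboost(x, y, rounds=5)
```

The "generalized boosting" discussed in the abstract replaces the additive vote in `predict` with richer forms of aggregation; this sketch shows only the classical additive case.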
History
Date
- 2021-08-31
Degree Type
- Dissertation
Department
- Machine Learning
Degree Name
- Doctor of Philosophy (PhD)