Machine Learning and Multiagent Preferences
Thesis posted on 14.04.2021, 17:49 by Ritesh Noothigattu
One of the best-known settings dealing with multiagent preferences is voting and social choice. In classical social choice, each of the n agents presents a ranking over the m candidates, and the goal is to find a winning candidate (or a consensus ranking) that is the most "fair" outcome. In this thesis, we consider several variants of this standard setting. For instance, the domain may have an uncountably infinite number of alternatives, and we need to learn each voter's preferences from a few pairwise comparisons among them. Or, we may have a Markov decision process in which each voter's preferences are represented by its reward function; can we find a consensus policy that everyone would be happy with? Another example is a conference peer-review system, where the agents are the reviewers and their preferences are given by the defining characteristics they use to accept a paper; our goal is then to use these preferences to make consensus decisions for the entire conference. We also consider the setting where agents have utility functions over a given set of outcomes, and our goal is to learn a classifier that is fair with respect to these preferences. Broadly speaking, this thesis tackles problems in three areas: (i) fairness in machine learning, (ii) voting and social choice, and (iii) reinforcement learning, each handling multiagent preferences with machine learning.
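The classical social choice setting mentioned above can be made concrete with a short sketch. Borda count is one standard voting rule, used here purely for illustration (the thesis studies variants of this setting, not this specific rule): each voter's ranking awards m-1 points to their top candidate, down to 0 for their last, and the candidate with the highest total wins.

```python
def borda_winner(rankings):
    """Borda count, one classical voting rule (illustration only).

    rankings: list of n lists, each a permutation of the m candidates,
    best first. Returns the candidate with the highest Borda score.
    """
    m = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, candidate in enumerate(ranking):
            # a candidate ranked at position p earns m - 1 - p points
            scores[candidate] = scores.get(candidate, 0) + (m - 1 - position)
    return max(scores, key=scores.get)

# Three voters ranking candidates a, b, c:
votes = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(borda_winner(votes))  # a
```

Here candidate a collects 5 points, b collects 3, and c collects 1, so a is the consensus winner under this rule.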
- Doctor of Philosophy (PhD)