Fairness Methods in Optimization and Artificial Intelligence
With the increasing deployment of optimization and Artificial Intelligence (AI) to assist high-stakes real-life decisions, fairness has become an essential consideration for both the designers and the users of these tools. This dissertation studies new approaches for formulating, attaining, and eliciting fairness. Chapter one begins with a brief introduction to the background on fairness and a selection of common fairness measures.
Chapter two studies balancing fairness and efficiency through optimization models. We propose new social welfare functions (SWFs) as combined measures of two well-known criteria, Rawlsian leximax fairness and utilitarianism. We then design a procedure that sequentially maximizes these SWFs with mixed-integer/linear programming models to find socially optimal solutions. This approach has practical potential for a wide range of resource allocation applications and is demonstrated on realistic-size instances in healthcare provision and shelter assignment for disaster preparation.
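To give a flavor of sequential social welfare maximization, the sketch below runs a simplified two-stage version of such a procedure on a hypothetical divisible-resource instance: a first linear program maximizes the minimum utility (the leading leximax concern), and a second maximizes total utility subject to preserving that minimum. The instance data, caps, and the two-stage truncation are illustrative assumptions, not the dissertation's full leximax-utilitarian SWF procedure.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative (hypothetical) resource-allocation instance:
# agent i receives x_i units and derives utility a_i * x_i,
# subject to a shared budget and a per-agent cap.
a = np.array([1.0, 2.0, 4.0])    # utility rates
budget = 6.0
cap = 2.0                        # per-agent allocation cap
n = len(a)

# Stage 1 (fairness): maximize the minimum utility t.
# Variables: [x_1, ..., x_n, t]; linprog minimizes, so negate t.
c1 = np.r_[np.zeros(n), -1.0]
A1 = np.vstack([
    np.c_[-np.diag(a), np.ones((n, 1))],   # t - a_i x_i <= 0
    np.r_[np.ones(n), 0.0],                # sum_i x_i <= budget
])
b1 = np.r_[np.zeros(n), budget]
bounds1 = [(0, cap)] * n + [(None, None)]
res1 = linprog(c1, A_ub=A1, b_ub=b1, bounds=bounds1, method="highs")
t_star = -res1.fun

# Stage 2 (efficiency): maximize total utility subject to every
# agent keeping at least the Stage-1 minimum utility t_star.
c2 = -a
A2 = np.vstack([-np.diag(a), np.ones(n)])  # -a_i x_i <= -t_star
b2 = np.r_[-t_star * np.ones(n), budget]
res2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=[(0, cap)] * n, method="highs")
total = -res2.fun
print(f"maximin utility: {t_star:.2f}, total utility: {total:.2f}")
```

In this instance the per-agent cap makes the maximin level 2.0, after which the second stage routes the slack budget to the most productive agents, raising total utility to 14.0 without dropping anyone below the fairness floor.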
Chapter three considers an optimization task motivated by fair machine learning (ML). When developing fair ML algorithms, it is useful to understand the computational cost of fairness relative to the standard non-fair setting. For fair ML methods that rely on optimization models for training, specialized optimization algorithms have the potential to offer better computational performance than generic solvers. In this chapter, I explore this question for support vector machines (SVMs) and design block coordinate descent algorithms to train SVMs with linear fairness constraints. Numerical experiments highlight that the new specialized algorithms are more efficient than an off-the-shelf solver for training fair SVMs.
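As a minimal illustration of what a linear fairness constraint on a linear classifier looks like, the sketch below trains a linear SVM-style model by projected subgradient descent on the regularized hinge loss, projecting each iterate onto a covariance-style constraint |d_vec @ w| <= c between the sensitive attribute and the decision score. The synthetic data, the constraint instantiation, and the projected-subgradient solver are all illustrative assumptions; they stand in for, and are not, the block coordinate descent algorithms developed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: features X, labels y in {-1, +1},
# binary sensitive attribute z correlated with the first feature.
n, d = 200, 2
X = rng.normal(size=(n, d))
z = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)
y = np.sign(X[:, 1] + 0.3 * X[:, 0] + 0.1 * rng.normal(size=n))

# Linear fairness constraint |d_vec @ w| <= c: a covariance-style
# restriction between the sensitive attribute and the decision score.
d_vec = ((z - z.mean())[:, None] * X).mean(axis=0)
c = 0.01

def project(w):
    """Project w onto the slab {w : |d_vec @ w| <= c} (closed form)."""
    s = d_vec @ w
    if abs(s) > c:
        w = w - ((s - np.sign(s) * c) / (d_vec @ d_vec)) * d_vec
    return w

# Projected subgradient descent on the L2-regularized hinge loss
# (1/n) sum_i max(0, 1 - y_i w.x_i) + (lam/2) ||w||^2.
lam, w = 0.01, np.zeros(d)
for t in range(1, 501):
    margins = y * (X @ w)
    g = -(y[margins < 1, None] * X[margins < 1]).sum(axis=0) / n + lam * w
    w = project(w - (1.0 / t) * g)

acc = np.mean(np.sign(X @ w) == y)
print(f"train accuracy: {acc:.2f}, |fairness constraint|: {abs(d_vec @ w):.4f}")
```

Because the projection has a closed form, the fairness constraint holds exactly at every iterate; the computational question the chapter studies is how much faster structure-aware methods of this kind can be than handing the constrained training problem to a generic solver.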
Chapter four examines social welfare optimization as a general paradigm for formalizing welfare-based fairness in AI systems. In contrast to the statistical bias metrics commonly used in fair AI, optimizing a social welfare objective supports a broader perspective on fairness motivated by distributive justice considerations. We propose in-processing and post-processing schemes for integrating social welfare optimization with AI, in particular ML and rule-based AI. We implement and evaluate the integration schemes on a simulated loan processing instance. The empirical results demonstrate the advantages of the proposed integration strategies. We conclude this chapter by highlighting research directions to pursue toward a holistic view of welfare-based fairness.

The next two chapters explore a human-centric perspective: eliciting people's moral values through preference learning. Chapter five studies a general preference learning framework based on online learning (OL) from revealed preferences: a learner learns an agent's private utility function through interactions in a changing environment. By designing a new convex loss function, we obtain a flexible OL framework that enables a unified treatment of common loss functions from the literature and supports a variety of online convex optimization algorithms. This framework has advantages in regret performance and solution time over other OL algorithms from the literature.
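The basic shape of online learning from revealed preferences can be sketched as follows: each round the learner sees a pair of items, observes which one the agent prefers under its private linear utility, and takes an online gradient step on a convex surrogate loss, projecting back onto a bounded set. The logistic surrogate, step sizes, and item distribution here are generic illustrative choices, not the new convex loss or the specific OL framework developed in chapter five.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the agent's private utility is u(x) = w_true @ x.
# Each round the environment offers two items; the agent reveals which
# it prefers, and the learner runs online gradient descent on a convex
# logistic loss over the revealed comparison.
d = 5
w_true = rng.normal(size=d)
w_hat = np.zeros(d)
R = 10.0  # radius of the feasible ball for the estimate

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for t in range(1, 1001):
    x1, x2 = rng.normal(size=d), rng.normal(size=d)
    # Revealed preference: +1 if the agent prefers x1, else -1.
    pref = 1.0 if w_true @ x1 >= w_true @ x2 else -1.0
    diff = x1 - x2
    # Gradient of the loss log(1 + exp(-pref * w_hat @ diff)).
    g = -pref * diff * sigmoid(-pref * (w_hat @ diff))
    w_hat -= (1.0 / np.sqrt(t)) * g
    # Project back onto the ball of radius R (keeps iterates bounded).
    nrm = np.linalg.norm(w_hat)
    if nrm > R:
        w_hat *= R / nrm

cos = w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true))
print(f"cosine similarity with true utility direction: {cos:.2f}")
```

Any online convex optimization algorithm can be substituted for the gradient step above; the framework's unified treatment comes from the choice of loss, and regret bounds then follow from standard online convex optimization analysis.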
Lastly, chapter six explores a task inspired by moral decision-making. This chapter considers the modelling and elicitation of people's dynamic ethical judgments in the sequential allocation of resources. We use a Markov Decision Process (MDP) model to represent a sequential allocation task, in which the state rewards capture people's moral preferences, so that their ethical judgments are reflected in policy rewards. We design a preference inference model that relies on active preference-based reward learning to infer the unknown reward function. The learning framework is applied in human-subject experiments on Amazon Mechanical Turk to understand people's moral reasoning in a hypothetical scenario of allocating scarce medical resources.
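The active reward-learning loop can be sketched in miniature: trajectories of the allocation MDP are summarized by feature counts, reward is linear in an unknown weight vector, the learner keeps a posterior over candidate weights, queries the trajectory pair whose answer is currently most uncertain, and updates the posterior with a Bradley-Terry response model. The feature vectors, candidate set, sharpness parameter, and noiseless simulated respondent below are all illustrative assumptions, not the chapter's inference model or experimental design.

```python
import numpy as np

# Hypothetical allocation trajectories summarized by feature counts phi
# (e.g., resources delivered per group); reward = theta @ phi.
phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
theta_true = np.array([1.0, 0.2])           # unknown to the learner
candidates = np.array([[1.0, 0.2], [0.0, 1.0], [1.0, 1.0],
                       [-1.0, 0.0], [0.5, 0.5]])
posterior = np.ones(len(candidates)) / len(candidates)
beta = 5.0                                  # Bradley-Terry sharpness

def bt(gap):
    """Bradley-Terry probability of preferring the first trajectory."""
    return 1.0 / (1.0 + np.exp(-beta * gap))

pairs = [(i, j) for i in range(len(phi)) for j in range(i + 1, len(phi))]
asked = set()
for _ in range(len(pairs)):
    # Active query selection: ask the pair whose predicted answer is
    # most uncertain under the current posterior (closest to 0.5).
    def uncertainty(p):
        gaps = candidates @ (phi[p[0]] - phi[p[1]])
        return abs(posterior @ bt(gaps) - 0.5)
    i, j = min((p for p in pairs if p not in asked), key=uncertainty)
    asked.add((i, j))
    # Simulated (noiseless) human answer from the true reward.
    prefers_i = theta_true @ phi[i] >= theta_true @ phi[j]
    # Bayesian update of the posterior over candidate reward weights.
    gaps = candidates @ (phi[i] - phi[j])
    like = bt(gaps) if prefers_i else bt(-gaps)
    posterior = posterior * like
    posterior /= posterior.sum()

print("inferred reward weights:", candidates[np.argmax(posterior)])
```

With consistent answers the posterior concentrates on the candidate whose induced trajectory ranking matches the respondent's; in the human-subject setting the same machinery instead has to cope with noisy and possibly dynamic judgments, which is the modelling challenge the chapter addresses.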
- Tepper School of Business
- Doctor of Philosophy (PhD)