Carnegie Mellon University

Auditing and Achieving Counterfactual Fairness

thesis
posted on 2021-11-19, 21:55, authored by Alan Mishler
Machine learning is increasingly involved in high-stakes decisions in domains such as healthcare, criminal justice, and consumer finance. In these settings, ML models often take the form of Risk Assessment Instruments (RAIs): given covariates such as demographic information and an individual's medical/criminal/financial history, the model predicts the likelihood of an adverse outcome, such as a dangerous medical event, recidivism, or default on a loan. Rather than rendering an automatic decision, the model produces a "risk score," which a decision maker may take into account when deciding whether to prescribe a medical treatment, release a defendant on bail, or issue a personal loan.

The proliferation of machine learning has raised concerns that learned models may be discriminatory with respect to sensitive features like race, sex, age, and socioeconomic status. These concerns have led to an explosion of methods in recent years for developing fair models and auditing the fairness of existing models. The most widely discussed fairness criteria impose constraints on the joint distribution of a sensitive feature, a predictor, and an outcome. These "observational" fairness criteria are inappropriate for RAIs, however. RAIs are not concerned with the observable outcomes in the training data ("Did patients of this type historically experience serious complications?"), which are themselves a product of historical treatment decisions. Rather, they are concerned with the potential outcomes associated with available treatment decisions ("Would patients of this type experience complications if not treated?"). Because treatments are not assigned at random (doctors naturally treat the patients they think are at high risk), these are distinct questions.

In this thesis, I consider counterfactual versions of common algorithmic fairness criteria, which are defined with respect to potential rather than observable outcomes. I develop methods to audit the fairness of existing predictors and to build predictors that satisfy these fairness criteria. In Chapter 1, I show how the use of observable rather than potential outcomes in algorithmic RAIs can lead to worse outcomes than before the RAI was trained. In Chapter 2, I develop a postprocessing procedure that can render an existing binary predictor fair with respect to counterfactual equalized odds while maximizing its counterfactual accuracy.
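For concreteness, counterfactual equalized odds can be written as follows (the notation here is illustrative rather than the thesis's own): let Ŷ be a binary predictor, A a sensitive feature, and Y⁰ the potential outcome under a baseline decision such as no treatment. The criterion requires

    P(Ŷ = 1 | Y⁰ = y, A = a) = P(Ŷ = 1 | Y⁰ = y, A = a')   for all y ∈ {0, 1} and all groups a, a'

that is, equal counterfactual false positive rates (y = 0) and true positive rates (y = 1) across groups.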
This procedure yields predictors whose excess risk and excess unfairness decay at √n rates when nuisance parameters are estimated sufficiently fast. I also provide estimators of the counterfactual risk and error rates of a large class of (possibly randomized) binary classifiers. These estimators are √n-consistent and asymptotically normal under similar assumptions. I show that the postprocessing procedure improves fairness on both simulated and real data, and that this does not necessarily incur a substantial decrease in accuracy.

In Chapter 3, I develop a flexible framework for building predictors that are fair and accurate with respect to either observable or counterfactual outcomes. Within this framework, I propose three methods: the first minimizes risk subject to fairness constraints, the second minimizes unfairness subject to risk constraints, and the third incorporates a set of fairness penalty parameters that allow users to efficiently build large sets of predictors that trace out different paths in fairness-accuracy space (sketched below). These methods accommodate users who wish to improve the fairness of an existing model without sacrificing accuracy, or vice versa. They also allow users to explore the tradeoffs between fairness and accuracy, and between different fairness criteria, in their problem, and they provide flexibility in choosing a predictor with an appealing combination of risk and fairness properties.
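Schematically, with R(f) a (possibly counterfactual) risk functional, U_k(f) fairness functionals, and ε_k, δ, λ_k user-chosen parameters (notation illustrative, not the thesis's exact formulation), the three methods solve

    (1)  minimize R(f)                     subject to  |U_k(f)| ≤ ε_k for each criterion k
    (2)  minimize |U_k(f)|                 subject to  R(f) ≤ δ
    (3)  minimize R(f) + Σ_k λ_k |U_k(f)|, varying the λ_k to trace out fairness-accuracy paths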
These predictors converge to oracle predictors at fast (up to √n) rates. This approach substantially improves both the fairness and accuracy of an existing commercial recidivism predictor, and it yields many predictors that perform comparably to or better than other fairness methods on an income prediction task, while allowing users much more flexibility in the final model form. Chapter 4 briefly considers the (un)fairness of randomized vs. deterministic classifiers.
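To illustrate how a counterfactual error rate can be estimated from observational data, here is a minimal inverse-propensity-weighting sketch in Python. It shows the general idea only: the function name and inputs are hypothetical, the propensity scores are assumed given, and the thesis develops doubly robust estimators with stronger guarantees than this plug-in version.

    import numpy as np

    def counterfactual_fpr_ipw(y_hat, y, t, a, pi, group):
        # Estimate P(Yhat = 1 | Y^0 = 0, A = group), a counterfactual false
        # positive rate, where Y^0 is the potential outcome under no treatment.
        # y_hat: binary predictions; y: observed outcomes; t: treatment
        # indicators; a: group labels; pi: estimated propensity scores
        # P(T = 1 | X). Relies on consistency (Y = Y^0 when T = 0) and
        # no unmeasured confounding.
        mask = (a == group) & (t == 0)        # untreated members of the group
        w = 1.0 / (1.0 - pi[mask])            # inverse-propensity weights
        neg = (y[mask] == 0).astype(float)    # counterfactual negatives (Y^0 = 0)
        pos_pred = (y_hat[mask] == 1).astype(float)
        return np.sum(w * neg * pos_pred) / np.sum(w * neg)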

History

Date

2021-07-29

Degree Type

  • Dissertation

Department

  • Statistics and Data Science

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Edward Kennedy
