Carnegie Mellon University

Statistical Learning Under Adversarial Distribution Shift

thesis
posted on 2022-10-24, 21:37, authored by Chen Dan

One of the most fundamental assumptions in statistical machine learning is that training and testing data are sampled independently from the same distribution. However, modern real-world applications require learning algorithms to perform robustly even when this assumption no longer holds. Specifically, the training and testing distributions may shift slightly (yet adversarially) within a small neighborhood of each other. This formulation encompasses many new challenges in machine learning, including adversarial examples, outlier-contaminated data, group fairness, and label imbalance.

In this thesis, we seek to understand the statistical optimality of, and provide better algorithms for, learning under the aforementioned adversarial distribution shift. Our contributions include: (1) the first near-optimal minimax lower bound on the sample complexity of adversarially robust classification in a Gaussian setting; (2) the framework of distributional and outlier robust optimization, which allowed us to apply distributionally robust optimization to large-scale experiments with deep neural networks and to outperform existing methods on sub-population shift tasks; (3) margin-sensitive group risk, a principled way of improving distributionally robust generalization via group-asymmetric margin maximization.
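The distributionally robust optimization idea behind contribution (2) can be illustrated with a minimal sketch (this is an assumption-laden toy, not the thesis's actual framework or algorithm): instead of minimizing the average training loss as in ERM, a DRO-style objective minimizes the mean of the worst alpha-fraction of per-sample losses (a CVaR objective), which upweights the hardest examples or sub-populations.

```python
import numpy as np

def cvar_loss(losses, alpha=0.5):
    """Conditional value-at-risk of per-sample losses: the mean of the
    worst alpha-fraction. A simple instance of a distributionally robust
    objective; the alpha parameter and this exact form are illustrative
    choices, not the thesis's formulation."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * len(losses))))            # worst k samples
    return losses[:k].mean()

# ERM averages all losses; the DRO-style objective focuses on the tail.
losses = [0.1, 0.2, 0.3, 4.0]
erm = np.mean(losses)           # average over all samples
dro = cvar_loss(losses, 0.25)   # mean of the worst 25% of samples
```

In a sub-population shift setting, the hard tail often corresponds to an under-represented group, so minimizing such a tail-focused objective targets worst-group rather than average performance.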

History

Date

2022-08-25

Degree Type

  • Dissertation

Department

  • Computer Science

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Pradeep Ravikumar
