Defensible Explanations for Algorithmic Decisions about Writing in Education
This dissertation is a call for collaboration at the interdisciplinary intersection of natural language processing, explainable machine learning, philosophy of science, and education technology. If we want algorithmic decision-making to be explainable, those decisions must be defensible by practitioners in a social context, rather than merely transparent about their technical and mathematical details. Moreover, I argue that a narrow view of explanation, specifically one focused on causal reasoning about deep neural networks, is unsuccessful even on its own terms. To that end, the rest of the thesis aims to build alternative, non-causal tools for explaining the behavior of classification models.
My technical contributions study human judgments in two distinct domains. First, I study group decision-making, releasing a large-scale corpus of structured data from Wikipedia’s deletion debates. I show how decisions can be predicted and debate outcomes explained on the basis of social and discursive norms. Next, in automated essay scoring, I study a dataset of student writing collected through an ongoing, cross-institutional tool used for academic advising and as a diagnostic of college readiness. Here, I explore the characteristics of essays that receive disparate scores, focusing on topics including genre norms, fairness audits across race and gender, and investigative topic modeling. In both cases, I show how to evaluate and choose the most straightforward tools that make effective predictions, advocating for classical approaches over deep neural methods when appropriate.
In my conclusion, I advance a new framework for building defensible explanations of trained models. Recognizing that explanations are constructed within a scientific discourse, and that automated systems must be trustworthy for both developers and users, I develop success criteria for earning that trust. I conclude by connecting this work to critical theory, arguing that truly defensible algorithmic decision-making must not only be explainable, but must also be held accountable for the power structures it enables and extends.
History
Date
- 2020-09-28
Degree Type
- Dissertation
Department
- Language Technologies Institute
Degree Name
- Doctor of Philosophy (PhD)