Explanations for Natural Language: From Theory to Practice
Understanding the reasoning process through explanations is a spontaneous, ubiquitous, and fundamental part of how we perceive the world around us. Scientific progress often relies on explanations to facilitate the discovery of hypotheses, identify applications, and detect and correct systematic errors. An in-depth study of explanation thus helps shed light on core cognitive issues such as learning, induction, and conceptual representation. Current NLP systems, despite significant advances, are usually treated as black boxes that offer little to no insight into how they reason. Learning with Explanations is an under-explored area in the natural language processing literature due to the lack of a unified theory.
In this work, we propose a theory of how models can incorporate explanations. As part of this, we present two promising approaches, with corresponding instantiations, for reliably incorporating explanations into NLP systems. We also show that explanations can not only help models on downstream tasks but also help humans improve at those tasks.
History
Date
- 2022-07-04
Degree Type
- Dissertation
Department
- Language Technologies Institute
Degree Name
- Doctor of Philosophy (PhD)