End-to-End Learning with Text & Knowledge Bases
Deep learning has been tremendously successful on tasks where the output prediction depends on a small, relatively clean input, such as a single image or a passage of text. But for many tasks, such as answering factoid questions or fact verification, the output depends on a broader context, or background knowledge, which goes beyond the input. Despite the best efforts to automatically create large knowledge bases, in most domains a majority of the information is only available in raw unstructured text. Harnessing this knowledge requires fine-grained language understanding at the document level, and scalable reasoning procedures to aggregate information across documents. While neural networks excel at the former, it is not clear how to scale them for the latter. On the other hand, symbolic representations such as knowledge graphs support scalable reasoning algorithms, but these are difficult to integrate with gradient-based learning.
This thesis develops methods which leverage the strength of both neural and symbolic approaches. Specifically, we augment raw text with symbolic structure about entities and their relations from a knowledge graph, and learn task-specific neural embeddings of the combined data structure. We also develop algorithms for doing multi-step reasoning over the embeddings in a differentiable manner, leading to end-to-end models for answering complex queries. Along the way we develop variants of recurrent and graph neural networks suited to modeling textual and multi-relational data, respectively, and use transfer learning to improve generalization. Through a series of experiments on factoid question answering, task-oriented dialogue, language modeling, and relation extraction, we show that our proposed models perform complex reasoning over rich fine-grained information.
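To make the idea of differentiable multi-step reasoning concrete, the sketch below shows one possible "soft relation-following" step over a toy knowledge graph: a probability distribution over entities is propagated along relation edges weighted by (learned) relation scores, so the whole multi-hop query can be trained end-to-end with gradients. This is an illustrative sketch only, not code from the thesis; the toy graph, entity indices, and relation names are hypothetical.

```python
# Illustrative sketch (not the thesis implementation): one differentiable
# "follow relation" step over a toy knowledge graph, the core operation
# behind multi-hop reasoning with soft entity distributions.
import numpy as np

# Toy graph: 4 entities, 2 relations.
# adj[r][i, j] = 1 if the triple (entity_i, relation_r, entity_j) holds.
num_entities = 4
adj = {
    "directed_by": np.array([[0, 1, 0, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0]], dtype=float),
    "born_in":     np.array([[0, 0, 0, 0],
                             [0, 0, 0, 1],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0]], dtype=float),
}

def follow(entity_dist, relation_weights):
    """Soft relation-following: propagate a distribution over entities
    along relation edges, weighted by relation scores. Every operation
    is a matrix product or sum, so gradients flow through it."""
    out = np.zeros(num_entities)
    for rel, w in relation_weights.items():
        out += w * (entity_dist @ adj[rel])
    total = out.sum()
    return out / total if total > 0 else out  # renormalize to a distribution

# "Where was the director of movie_0 born?" as two soft hops.
start = np.array([1.0, 0.0, 0.0, 0.0])                        # point mass on movie_0
hop1 = follow(start, {"directed_by": 0.9, "born_in": 0.1})    # mass moves to person_1
hop2 = follow(hop1, {"directed_by": 0.1, "born_in": 0.9})     # mass moves to city_3
print(hop2)  # answer distribution concentrates on entity 3
```

In the actual models described above, the hard 0/1 adjacency entries would be replaced by scores computed from neural embeddings of text and knowledge-graph elements, and the relation weights would be predicted from the question, but the propagation step has the same differentiable form.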
History
Date
- 2020-05-30
Degree Type
- Dissertation
Department
- Language Technologies Institute
Degree Name
- Doctor of Philosophy (PhD)