mshediva_phd_mld_2021.pdf (6.63 MB)

Principles of Learning in Multitask Settings: A Probabilistic Perspective

posted on 21.04.2022, 19:56, authored by Maruan Al-Shedivat

Today, machine learning is transitioning from research to widespread deployment. This transition requires algorithms that can learn from heterogeneous datasets and models that can operate in complex, often multitask settings. So, is there a set of principles we could follow when designing models and algorithms for such settings?

In this thesis, we approach this question from a probabilistic perspective, develop a declarative framework for representing, analyzing, and solving different multitask learning problems, and consider multiple case studies ranging from multi-agent games, to multilingual translation, to federated learning and personalization. The ideas presented in this thesis are organized as follows. First, we introduce our core probabilistic multitask modeling framework. Starting with a general definition of a learning task, we show how multiple related tasks can be assembled into and represented by a joint probabilistic model. We then define different notions of generalization in multitask settings and demonstrate how to derive practical learning algorithms and consistent objective functions that enable certain types of generalization using techniques from probabilistic learning and inference. Next, we illustrate our proposed framework through multiple concrete case studies. Each case study is an independent vignette that focuses on a particular domain and showcases the versatility of our framework. Not only do we reinterpret different problems from a probabilistic standpoint, but we also develop new learning algorithms and inference techniques that improve upon the current state of the art in each of the considered domains.




Degree Type

  • Machine Learning

Degree Name

  • Doctor of Philosophy (PhD)


Advisor(s)

  • Eric Xing