Carnegie Mellon University

Contextures: The Mechanism of Representation Learning

thesis
posted on 2025-06-25, 17:52, authored by Runtian Zhai

This dissertation establishes the contexture theory to mathematically characterize the mechanism of representation learning, also known as pretraining. Despite the remarkable empirical success of foundation models, it remains unclear what representations they learn and why those representations are useful for so many disparate downstream tasks. A scientific understanding of representation learning is critical, especially now that scaling up model size is producing diminishing returns and designing new pretraining methods is imperative for further progress.

Prior work analyzed different representation learning methods largely in isolation, whereas the contexture theory provides a unified framework for characterizing the representations these methods learn. The central argument is that a representation is learned from the association between the input X and a context variable A. We prove that if an encoder captures the maximum information of this association, in which case we say that the encoder learns the contexture, then it is optimal on the class of tasks that are compatible with the context. We also show that a context is most useful when the association between X and A is neither too strong nor too weak. An important implication of the contexture theory is that increasing the model size alone yields diminishing returns; further advancement requires better contexts.
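To make the central claim concrete, here is a minimal numerical sketch (not taken from the dissertation itself) of what "learning the contexture" means when both X and A are discrete: the association is summarized by a normalized joint-probability matrix, and its top singular vectors yield the encoder. The toy distribution, the dimension d, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy joint distribution P(x, a) over a discrete input X (4 values)
# and a discrete context variable A (3 values); entries sum to 1.
P_xa = np.array([
    [0.10, 0.05, 0.02],
    [0.08, 0.12, 0.03],
    [0.02, 0.10, 0.15],
    [0.05, 0.08, 0.20],
])
P_x = P_xa.sum(axis=1)  # marginal of X
P_a = P_xa.sum(axis=0)  # marginal of A

# Normalized association matrix: M[x, a] = P(x, a) / sqrt(P(x) P(a)).
# In the discrete case, its singular vectors play the role of the
# singular functions that define the contexture of (X, A).
M = P_xa / np.sqrt(np.outer(P_x, P_a))

U, s, Vt = np.linalg.svd(M)
# The top singular value is always 1, attained by constant functions;
# the next d left singular vectors (rescaled from the sqrt-measure
# representation back to functions of x) form a d-dimensional encoder
# that captures the maximum information of the X-A association.
d = 2
encoder = U[:, 1:1 + d] / np.sqrt(P_x)[:, None]
print("singular values:", np.round(s, 3))
print("encoder values at each x:\n", np.round(encoder, 3))
```

In this finite picture, the strength of the X–A association shows up in how quickly the singular values decay, which is one way to read the "neither too strong nor too weak" condition above.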

We demonstrate that many existing pretraining objectives can learn the contexture, including supervised learning, self-supervised learning, and generative models. Building on this, we introduce two general objectives, SVME and KISE, for learning the contexture. We also show how to mix multiple contexts, a simple way to create better contexts from existing ones. We then prove statistical learning bounds for representation learning and extend the framework to spectrally transformed kernel regression for semi-supervised learning. Finally, we discuss the effect of distribution shift from pretraining data to the downstream task.
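As one concrete instance of the extension mentioned above, the sketch below shows spectrally transformed kernel regression with a polynomial spectral transform s(λ) = λ^p, which can be realized as a matrix power of the base kernel computed over labeled and unlabeled points together. The RBF base kernel, the synthetic data, and the hyperparameters (gamma, p, lam) are illustrative assumptions, not values from the dissertation; the SVME and KISE objectives themselves are not sketched here.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian RBF base kernel; its spectrum is what gets transformed.
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))    # labeled points first, then unlabeled
n_l = 10                        # number of labeled points
y = np.sin(X[:n_l, 0])          # labels on the labeled subset only

# Base kernel over ALL points: the unlabeled data enter the method
# only through this matrix.
K = rbf_kernel(X, X)

# Polynomial spectral transform s(lambda) = lambda**p, realized as a
# normalized matrix power of K (eigenvectors are kept, eigenvalues
# are raised to the p-th power; the division is only a scale choice).
p = 3
K_s = np.linalg.matrix_power(K, p) / (len(X) ** (p - 1))

# Standard kernel ridge regression, but with the transformed kernel.
lam = 1e-2
alpha = np.linalg.solve(K_s[:n_l, :n_l] + lam * np.eye(n_l), y)
preds = K_s[:, :n_l] @ alpha    # predictions at labeled + unlabeled points
print("train residuals:", np.round(preds[:n_l] - y, 3))
```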

Funding

TAS::97 0400::TAS XRL: EXPLAINABLE REINFORCEMENT LEARNING FOR AI AUTONOMY

United States Department of the Air Force


BI-DIRECTIONAL HYBRID AI FOR ROBUST AUTONOMY

United States Department of the Air Force


History

Date

2025-04-25

Degree Type

  • Dissertation

Thesis Department

  • Computer Science

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

  • Pradeep Ravikumar
  • Zico Kolter
