Carnegie Mellon University

Causal Approaches Toward Computational Justice and Robust Generation: A Principled Path to Trustworthy AI

thesis
posted on 2025-10-24, 19:16 authored by Zeyu Tang
<p dir="ltr">The integration of artificial intelligence (AI) systems into our daily lives creates a practical imperative to comprehensively understand their expanding role and influence. As automated decision-making and generative AI become increasingly prevalent, we must systematically characterize their implications across individual, community, and societal levels while developing robust technical tools for evaluating and enhancing trustworthy AI. This dissertation presents research grounded in causal principles to address fundamental challenges in trustworthy AI. It is organized into two parts, each addressing a different set of research questions unified under the pursuit of trustworthy and responsible AI: (I) causal-principle-guided pursuits of procedural and structural justice, and (II) causal-principle-guided robust foundation model generation.</p><p dir="ltr">Part I is dedicated to our efforts to achieve procedural fairness and structural justice. We use “procedural and structural justice” to refer to fairness and justice considerations pertaining to the data generation processes themselves, as well as to a broader scope of investigation extending beyond any single prediction or decision-making algorithm. We investigate the theoretical attainability and optimality of a commonly used group-level fairness notion, demonstrating how data characteristics fundamentally influence fairness guarantees. Moving beyond instantaneous fairness, we propose a dynamic, long-term fairness goal that captures the interplay between decision-making and the underlying data-generating processes through causal modeling. We address disguised procedural unfairness by developing methods to decouple objectionable data-generating components from neutral ones. We also expand fairness analyses to incorporate social determinants alongside traditional sensitive attributes, and bring formal, quantitative rigor to characterizing the nontrivial roles of contextual environments, providing a more comprehensive perspective toward achieving structural justice.</p><p dir="ltr">Part II is dedicated to our work developing approaches for robust generation with foundation models. “Robust generation” encompasses the development of (components of) foundation models that produce reliable, high-quality outputs while maintaining consistency and resilience across diverse scenarios, ensuring that generated content remains trustworthy, unbiased, and aligned with intended objectives through principled mechanisms that address inherent limitations. We develop a causality-guided debiasing framework for large language models (LLMs) that formulates and utilizes the selection mechanisms through which social information affects decision-making, offering principled prompting strategies to debias LLM decisions. We also theoretically characterize an inherent shortcoming of purely autoregressive LLM decoding, and provide a practical solution that selectively refines generated content through a sliding reflection-window mechanism.</p>

History

Date

2025-09-17

Degree Type

  • Dissertation

Thesis Department

  • Philosophy

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Kun Zhang, Peter Spirtes