Rethinking the Safety Case for Risk-Aware Social Embodied Intelligence
Achieving real-world robot safety requires more than avoiding risk; it demands embracing and managing risk effectively. This thesis presents a safety case for risk-aware decision-making and behavior modeling in complex, multi-agent environments such as aviation and autonomous driving. We argue that safety stems from an agent's ability to anticipate uncertainty, reason about intent, and act within operational boundaries defined by prior knowledge, rules of the road, social context, and historical precedent.
To enable safe and interpretable decision-making, we integrate Monte Carlo Tree Search (MCTS) with logic specifications to instill rule adherence in learned policies for both single- and multi-agent settings. We develop symbolic rule-mining methods based on inductive logic programming, extracting interpretable constraints from both trajectories and crash reports. To address out-of-distribution risk, we propose a fusion framework that combines neural imitation learning with symbolic rule-based systems. Finally, to mitigate modeling risk, we combine retrieval-augmented generation (RAG) with crash reports for grounded action arbitration in complex settings.
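As a minimal illustration of the first of these ideas, the sketch below shows one way a logic specification can be enforced inside MCTS: rule-violating actions are masked at expansion time, so the search tree contains only compliant behavior. This is a toy, self-contained Python example under stated assumptions; satisfies_rules, the 1-D action space, and the goal-seeking rollout are hypothetical stand-ins, not the rules, dynamics, or learned policies developed in the thesis.

    import math
    import random

    def satisfies_rules(state, action):
        # Hypothetical logic-style constraint over (state, action); in the
        # thesis such rules would come from mined logic specifications.
        # Toy rule: never move left of position 0 (stay inside the corridor).
        return state + action >= 0

    class Node:
        def __init__(self, state, parent=None):
            self.state = state
            self.parent = parent
            self.children = {}   # action -> child Node
            self.visits = 0
            self.value = 0.0

    ACTIONS = [-1, 0, 1]         # toy 1-D action space

    def legal_actions(state):
        # The logic specification applied as a hard mask during expansion:
        # rule-violating actions are never added to the search tree.
        return [a for a in ACTIONS if satisfies_rules(state, a)]

    def ucb(child, parent_visits, c=1.4):
        # Upper-confidence bound for child selection.
        if child.visits == 0:
            return float("inf")
        return child.value / child.visits + c * math.sqrt(
            math.log(parent_visits) / child.visits)

    def rollout(state, depth=10):
        # Random rollout restricted to rule-satisfying actions; the reward
        # favors a goal position (here, 5) as a stand-in for a learned policy.
        for _ in range(depth):
            acts = legal_actions(state)
            if not acts:
                break
            state += random.choice(acts)
        return -abs(state - 5)

    def mcts(root_state, iters=500):
        root = Node(root_state)
        for _ in range(iters):
            node = root
            # Selection: descend via UCB through fully expanded nodes.
            while node.children and len(node.children) == len(legal_actions(node.state)):
                node = max(node.children.values(),
                           key=lambda ch: ucb(ch, node.visits))
            # Expansion: try one untried rule-satisfying action.
            untried = [a for a in legal_actions(node.state) if a not in node.children]
            if untried:
                a = random.choice(untried)
                child = Node(node.state + a, parent=node)
                node.children[a] = child
                node = child
            # Simulation and backpropagation.
            reward = rollout(node.state)
            while node is not None:
                node.visits += 1
                node.value += reward
                node = node.parent
        # Return the most-visited (and by construction rule-compliant) action.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

    print(mcts(0))

Because the mask is applied during expansion rather than as a post-hoc filter, rule compliance shapes the value estimates throughout the search; the thesis's actual integration with learned single- and multi-agent policies is richer than this sketch.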
To support learning from real-world behavior in aviation, we introduce three datasets: TrajAir, a social aerial navigation dataset; TartanAviation, a time-synced, multimodal dataset for intent inference; and Amelia-48, a large-scale airport surface-movement dataset spanning U.S. airports that enables predictive analytics in air traffic management.
Together, these contributions and the tools developed along the way enable autonomous systems to reason under uncertainty, incorporate diverse priors, and operate reliably in complex, real-world settings.
Date
- 2025-04-21
Degree Type
- Dissertation
Thesis Department
- Robotics Institute
Degree Name
- Doctor of Philosophy (PhD)