<p dir="ltr">Artificially intelligent (AI) agents are increasingly entrusted with making decisions in medium- to high-stakes domains such as healthcare, education, transportation, and cybersecurity. In these applications, agents make sequences of decisions that influence real-world outcomes. Reinforcement learning (RL) offers a natural and powerful framework for training such agents through experience. However, despite recent progress, key barriers to deployment and adoption remain. </p><p dir="ltr">First, a capable RL agent can behave in ways that violate human expectations. In collaborative or safety-critical settings, unintuitive actions can confuse users or even create new risks. For example, an autonomous vehicle that suddenly swerves to avoid an accident may still be perceived as unsafe. This perception can hinder adoption, even if the agent is a safer driver overall. Developing agents that exhibit intuitive behavior is therefore often a prerequisite for human-AI coordination and trust. Second, in safety-critical and regulated domains, the ability to explain and audit AI decisions is increasingly becoming a formal requirement. However, most RL agents make decisions using deep neural networks, which are challenging for people to understand. As a result, interpretable decision-making emerges as an important problem to address. Third, it is often challenging for designers to fully specify the range of desired behavior for an agent. Therefore, designers often rely on proxy goals expressed through fixed, simple reward functions. If this proxy goal is mis- or under-specified, agents can behave in ways that are misaligned with what people actually want. As a result, an important challenge is ensuring that agents are aligned with human intent, goals, and values. </p><p dir="ltr">These challenges all share a common theme: they arise because RL agents interact with or make decisions on behalf of humans in human environments. 
As a result, a key question for the future of AI is how to develop agents that operate well with people. This dissertation advances a human-centered approach to RL to build and investigate AI agents that are interpretable, intuitive, and aligned. Here, we present technical advances in designing and evaluating AI agents, addressing key research questions that emerge from human involvement. Toward the goal of intuitive behavior, we design the first RL agent to pass a navigation Turing test and investigate why people perceive its behavior as human-like. Turning toward the goal of interpretability, we identify and develop algorithms for two new dimensions of interpretability in RL: maintaining transparency in multi-agent decision-making and reducing the reliance on human annotations. Toward the goal of alignment, we contribute a new alignment framing (decision-making alignment) and introduce an algorithm that learns policies whose decision-making aligns with human preferences. Toward the goal of behavioral alignment, we contribute a benchmark and datasets for training and evaluating agents on fuzzy, underspecified tasks. We conclude this thesis by discussing how future work can leverage these ideas toward AI agents that support human flourishing.</p>