Carnegie Mellon University

Sample-Efficient Reinforcement Learning with Applications in Nuclear Fusion

Download (20.39 MB)
thesis
posted on 2024-01-19, 21:28 authored by Viraj Mehta

In many practical applications of reinforcement learning (RL), it is expensive to observe state transitions from the environment. In plasma control for nuclear fusion, the motivating example of this thesis, determining the next state for a given state-action pair requires querying an expensive transition function, which can entail many hours of computer simulation or dollars of scientific research. Such expensive data collection prohibits the application of standard RL algorithms, which usually require a large number of observations to learn. In this thesis, I address the problem of efficiently learning a policy from a relatively modest number of observations, motivated by the application of automated decision making and control to nuclear fusion. The first section presents four approaches developed to evaluate the prospective value of data in learning a good policy and discusses their performance, guarantees, and applications. These approaches address the problem through the lenses of information theory, decision theory, the optimistic value gap, and learning from comparative feedback; we apply this last method to reinforcement learning from human feedback for the alignment of large language models. The second section presents work that uses physical prior knowledge about the dynamics to learn an accurate model more quickly. Finally, I give an introduction to the problem setting of nuclear fusion, present recent work optimizing the design of plasma current rampdowns at the DIII-D tokamak, and discuss future applications of AI in fusion.

History

Date

2023-12-07

Degree Type

  • Dissertation

Department

  • Computer Science

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Jeff Schneider
