Carnegie Mellon University

Combining Deep Learning and Physics Models for Efficient and Robust Architectures

thesis
posted on 2023-01-09, 19:20, authored by Filipe de Avila Belbute-Peres

Over the last decade, deep learning methods have achieved success in diverse domains, becoming one of the most widely employed approaches in artificial intelligence. These recent successes have also motivated their application in physics domains, such as solving differential equations or predicting the motion of objects and the behavior of fluids.

The strengths of deep learning methods are their extreme flexibility, which allows complex dynamics to be learned directly from data, and their proven track record on unstructured, high-dimensional domains (such as image and video processing). However, these approaches also face some issues, such as difficulty in generalizing outside the training domain, large data requirements, and costly training. Traditional models of physics, on the other hand, have been developed to be universally valid within their domain of application (i.e., generalizable) and require little to no data for modelling.

In this thesis, we present methods for leveraging the strengths of both types of approaches by combining deep learning and physics models. This allows for the development of deep learning architectures that are more data-efficient and generalize more robustly than their standard, “physics-unaware” counterparts.

These methods fall under two broad categories: differentiable physics layers and physics-informed learning approaches. In the first group of methods, we embed full physics simulators as layers into deep learning models, fully constraining their outputs to match the underlying dynamics. By having these simulations be fully differentiable, we maintain the ability to train these systems end-to-end. We present the application of such methods to problems in rigid body and fluid dynamics.
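To make the idea of a differentiable physics layer concrete, the following is a minimal sketch (not the thesis code): a hypothetical encoder predicts the initial state of a damped spring from an observation vector, and a simple differentiable Euler integrator rolls out the trajectory. The spring system, constants, and network sizes are illustrative assumptions; the point is that, because every simulation step is an ordinary tensor operation, gradients flow from the trajectory loss back through the simulator into the encoder, so the system can be trained end-to-end while its output is constrained to follow the simulated dynamics.

```python
import torch
import torch.nn as nn

class SpringSimulatorLayer(nn.Module):
    """Differentiable simulator of a damped harmonic oscillator (illustrative example)."""
    def __init__(self, k=4.0, c=0.2, dt=0.05, steps=50):
        super().__init__()
        self.k, self.c, self.dt, self.steps = k, c, dt, steps

    def forward(self, state):            # state: (batch, 2) = [position, velocity]
        traj = []
        x, v = state[:, 0], state[:, 1]
        for _ in range(self.steps):      # semi-implicit Euler integration step
            a = -self.k * x - self.c * v
            v = v + self.dt * a
            x = x + self.dt * v
            traj.append(x)
        return torch.stack(traj, dim=1)  # (batch, steps) trajectory of positions

encoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
physics = SpringSimulatorLayer()

obs = torch.randn(16, 8)                 # dummy observations
target = torch.zeros(16, physics.steps)  # dummy target trajectories
pred = physics(encoder(obs))             # output constrained by the physics layer
loss = ((pred - target) ** 2).mean()
loss.backward()                          # gradients reach the encoder through the simulator
```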

Physics-informed learning methods provide information about the underlying physics in the form of physics-informed loss terms, which regularize the model’s outputs to be physically consistent. We present a method to improve these approaches in order to learn parameterized systems of differential equations more efficiently. We also analyze, theoretically and empirically, the use of sinusoidal neural networks to address known issues in physics-informed learning, such as spectral bias.
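As a minimal sketch of a physics-informed loss (an illustrative example, not the thesis's parameterized setting), the snippet below trains a small network u(t) to satisfy the ODE du/dt + u = 0 with u(0) = 1. The equation residual, computed with automatic differentiation at random collocation points, acts as the physics-informed loss term; torch.sin is used as the activation only to suggest a sinusoidal (SIREN-style) network. The specific equation, network width, and training loop are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class SinMLP(nn.Module):
    """Small MLP with sinusoidal activations (illustrative example)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.l1 = nn.Linear(1, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 1)

    def forward(self, t):
        return self.l3(torch.sin(self.l2(torch.sin(self.l1(t)))))

model = SinMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(2000):
    t = torch.rand(128, 1, requires_grad=True)             # collocation points in [0, 1]
    u = model(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    residual = du_dt + u                                    # residual of du/dt + u = 0
    t0 = torch.zeros(1, 1)
    loss = (residual ** 2).mean() + (model(t0) - 1.0).pow(2).mean()  # physics + initial condition
    opt.zero_grad()
    loss.backward()
    opt.step()
```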

Funding

Robert Bosch GmbH

Defense Advanced Research Projects Agency

History

Date

2022-09-27

Degree Type

  • Dissertation

Department

  • Computer Science

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Zico Kolter