Carnegie Mellon University

GPU Algorithms for Large-Scale Optimization

Thesis posted on 2020-11-05 by Benjamin Sauk

Data-driven modeling and optimization are both core elements of process systems engineering. When first-principles models are unavailable, empirical models can serve as surrogates for the underlying physics of a process. The advent of sophisticated sensors and increased digital storage capacity presents the chemical industry with vast quantities of data and the opportunity to build more realistic surrogate models. Despite the need to scale up data-driven modeling algorithms, there has been limited work on parallel computing techniques for model building. The goals of this dissertation are to develop efficient parallel algorithms for model building and to investigate parallel approaches for optimizing linear programming problems.

In Chapter 2, we review techniques for generating linear models and discuss using QR factorization to generate empirical models from tall and skinny matrices. We describe high-performance parallel implementations of the linear least squares problem solved with QR factorization. We find that state-of-the-art algorithms are not optimized for tall and skinny linear least squares problems, and we propose derivative-free optimization and simulation optimization to tune the performance of QR factorization routines in the MAGMA [93] library. Results demonstrate that solving linear least squares problems can be accelerated by using optimal tuning parameters identified through derivative-free optimization.
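
As a concrete illustration of the Chapter 2 setting, here is a minimal sketch of solving a tall and skinny least squares problem via thin QR factorization, with NumPy standing in for the GPU-accelerated MAGMA routines discussed in the dissertation:

```python
# Minimal sketch: solve min ||Ax - b||_2 for a tall and skinny A via a
# thin (reduced) QR factorization. NumPy stands in here for the
# GPU-accelerated MAGMA routines discussed in the dissertation.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100_000, 20                    # tall and skinny: many rows, few columns
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Reduced QR: A = QR with Q (m x n) orthonormal, R (n x n) upper triangular.
Q, R = np.linalg.qr(A, mode="reduced")
# The m x n problem reduces to the small triangular system R x = Q^T b.
x_ls = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_ls, x_true, atol=1e-3))
```

The reduced factorization turns the large m-by-n problem into an n-by-n triangular solve, which is why the tall and skinny shape (m much larger than n) makes QR attractive here.
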
In Chapter 3, we revisit the problem of algorithmic parameter tuning. We review the literature on auto-tuning techniques and propose a hybrid derivative-free optimization strategy for auto-tuning that combines local and global derivative-free optimization with a multi-armed bandit strategy. We implement our hybrid algorithm in an open-source tool, HybridTuner, furthering the development of sophisticated auto-tuners and helping others optimize code performance without exhaustive tuning.
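
To make the auto-tuning idea concrete, the toy loop below searches over a single hypothetical tuning parameter (a kernel block size) with a coarse global sweep followed by local refinement. Both the parameter and the runtime model are illustrative stand-ins, not the HybridTuner algorithm:

```python
# Toy derivative-free auto-tuning sketch: pick the tuning parameter that
# minimizes a black-box runtime. Illustrative only; not HybridTuner.
import random

def runtime(block_size: int) -> float:
    """Hypothetical black-box objective: a stand-in for timing a real
    GPU kernel. Toy surrogate with its optimum near block_size = 96."""
    return (block_size - 96) ** 2 / 1e4 + 0.5

candidates = [2 ** k for k in range(4, 11)]   # global phase: coarse sweep
best = min(candidates, key=runtime)

for _ in range(50):                           # local phase: small perturbations
    trial = max(1, best + random.choice([-8, 8]))
    if runtime(trial) < runtime(best):
        best = trial

print(f"best block size: {best}, runtime: {runtime(best):.4f}s")
```
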
After addressing how to efficiently generate linear models, we describe approaches for identifying a sparse set of linear regressors. In Chapter 4, we propose solving the best subset selection problem with backward stepwise elimination. We develop a theoretical guarantee on the accuracy of backward stepwise elimination using the supermodularity ratio and compare its efficacy against other subset selection strategies such as the lasso [166] and forward stepwise selection [58].
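
A minimal sketch of backward stepwise elimination as described above, written in plain NumPy; the dissertation's theoretical guarantees and implementation details are not reproduced here:

```python
# Backward stepwise elimination sketch: start from all regressors and
# repeatedly drop the column whose removal increases the residual sum of
# squares the least, until k columns remain.
import numpy as np

def rss(A, b, cols):
    """Residual sum of squares of the least squares fit on a column subset."""
    x, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
    r = b - A[:, cols] @ x
    return float(r @ r)

def backward_elimination(A, b, k):
    cols = list(range(A.shape[1]))
    while len(cols) > k:
        # Remove the column whose deletion hurts the fit the least.
        worst = min(cols, key=lambda j: rss(A, b, [c for c in cols if c != j]))
        cols.remove(worst)
    return cols

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A[:, [1, 4, 7]] @ np.array([2.0, -1.0, 3.0]) + 0.01 * rng.standard_normal(200)
print(backward_elimination(A, b, k=3))   # expect [1, 4, 7]
```
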
Finally, in Chapter 5, we discuss methods for parallelizing the simplex algorithm. We review previous approaches to solving linear programming problems and parallelize the block-LU update [70] with GPU computing. Our implementation solves quantile regression linear programming problems up to 4.03x faster than a sequential primal simplex algorithm, and we demonstrate that it is resistant to the numerical instability that can arise when solving quantile regression problems.
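
For reference, quantile regression can be posed as the linear program that Chapter 5 targets. The sketch below builds a small instance and hands it to SciPy's CPU solver rather than the dissertation's GPU block-LU simplex implementation:

```python
# Quantile regression as a linear program:
#   min  tau * 1'u + (1 - tau) * 1'v
#   s.t. A x + u - v = b,   u, v >= 0,   x free
# where u and v are the positive and negative parts of the residuals.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n, tau = 200, 3, 0.5
A = np.column_stack([np.ones(m), rng.standard_normal((m, n - 1))])
b = A @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(m)

c = np.concatenate([np.zeros(n), tau * np.ones(m), (1 - tau) * np.ones(m)])
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
bounds = [(None, None)] * n + [(0, None)] * (2 * m)

res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
print(res.x[:n])   # median (tau = 0.5) regression coefficients
```
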

History

Date

2020-07-23

Degree Type

  • Dissertation

Department

  • Chemical Engineering

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Nikolaos V. Sahinidis
