Carnegie Mellon University

Models and Efficient Algorithms for Convex Optimization under Uncertainty

thesis
posted on 2019-08-20, 20:23, authored by Hung Ho-Nguyen
Optimization is a key analytical technique used for quantitative decision-making in real-world problems. In practice, many situations call for decision-making in the face of incomplete knowledge and/or dynamic environments. Making high-quality decisions in these settings requires optimization techniques that are designed to account for uncertainty. Furthermore, as new technologies are developed, more complex, higher-dimensional optimization models become prevalent. This dissertation examines various models for optimization under uncertainty, as well as efficient algorithms for solving such models that remain scalable as the model size grows.
We study three models for optimization under uncertainty: robust optimization (RO), joint estimation-optimization (JEO), and joint prediction-optimization (JPO). Robust optimization accounts for inexact information by finding solutions that remain feasible under all perturbations of the inputs within a given uncertainty set. Joint estimation-optimization considers a dynamic setting where the inputs are updated over time as new data is collected and converge to some ideal input that is not revealed to the modeller. Joint prediction-optimization considers the use of a prediction model to obtain optimization inputs from side information, an approach that is widely used amongst practitioners.
The dissertation considers theoretical properties and algorithmic performance guarantees for these three models.
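For concreteness, a generic robust optimization problem can be written as follows (illustrative notation, not necessarily the exact formulation used in the thesis):

\[
\min_{x \in X} \; f(x) \quad \text{s.t.} \quad g(x, u) \le 0 \;\; \text{for all } u \in U,
\]

where \(U\) is the uncertainty set: a robust solution \(x\) must remain feasible for every realization \(u \in U\), rather than for a single nominal input.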
We first present a generic framework for deriving primal-dual algorithms for both RO and JEO. Previously, algorithms for these models were derived in an ad-hoc manner and analyzed on a case-by-case basis. Our framework views both optimization-under-uncertainty models through a common lens of saddle-point problems. By analyzing these saddle-point problems, we identify three quantities that directly bound the performance guarantees of the respective models, and we show how regret minimization techniques from online convex optimization can be used to control these three quantities. Thus, our framework allows us to transfer regret bounds for these quantities into performance guarantees for the associated algorithms. Since regret minimization algorithms from online convex optimization are key to our framework, we also examine these algorithms, and in particular derive improved regret bounds for RO and JEO in the presence of favourable structure such as strong convexity and smoothness. We show that a number of previous algorithms for both robust optimization and joint estimation-optimization can be derived from our unified framework. More importantly, the framework can be used to derive more efficient algorithms for both models in a principled manner. For robust optimization, our framework yields algorithms that drastically reduce the cost of iterative methods by replacing nominal oracles with cheaper first-order updates. For joint estimation-optimization, we derive algorithms for the non-smooth strongly convex setting, which has not been considered previously.
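The regret quantities referenced above follow the standard definition from online convex optimization (stated here for illustration, with generic convex losses \(f_t\) and decision set \(X\)):

\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in X} \sum_{t=1}^{T} f_t(x).
\]

Sublinear regret, i.e. \(\mathrm{Regret}_T / T \to 0\), is what allows such bounds to be transferred into convergence guarantees for the associated primal-dual algorithms.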
We demonstrate the use of our framework through two examples: robust quadratic programming with ellipsoidal uncertainty sets, and dynamic non-parametric choice model estimation. For robust quadratic programming, we analyze the trust-region subproblem (TRS). The TRS is the well-studied problem of minimizing a non-convex quadratic function over the unit ball, and it arises naturally in the context of robust quadratic constraints. We give a second-order-cone-based convexification of the TRS which, in contrast to previous work, remains in the space of the original variables.
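In generic notation (given here purely for illustration), the trust-region subproblem takes the form

\[
\min_{\|x\|_2 \le 1} \; x^\top Q x + 2\, c^\top x,
\]

where \(Q\) is a symmetric, possibly indefinite matrix, so the objective may be non-convex even though the feasible region is simply the unit ball.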
We then show how to apply this convexification to robust quadratic programming, and derive two efficient algorithms for it using our framework. We carry out a numerical study on robust portfolio optimization problems, and the numerical results show that our approach improves over previous approaches in the high-dimensional regime.

We frame dynamic non-parametric choice model estimation as an instance of JEO. A particular challenge in this setting is the high dimensionality of the resulting primal problem. Nevertheless, our generic primal-dual framework encompassing JEO applications is flexible enough to yield algorithms that bypass this high-dimensionality challenge. We test our approach for non-parametric choice estimation computationally, and highlight interesting trade-offs between data updating and convergence rates.

Finally, we give a joint analysis of prediction and optimization. A natural performance measure in this setting is the optimality gap. Unfortunately, it is difficult to tune prediction models directly with respect to this performance measure due to its non-convexity. We therefore characterize sufficient conditions under which the more common prediction performance measures arising in statistics and machine learning, such as the squared error, can be related to the true optimality-gap performance measure. We derive conditions on a performance measure that guarantee the optimality gap will be minimized, and give an explicit relationship between the squared error and the optimality gap. Such conditions allow practitioners to choose prediction methods for obtaining optimization parameters in a more judicious manner.
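As a sketch of the last point (with assumed notation, not taken verbatim from the thesis), if \(c\) denotes the true optimization input, \(\hat{c}\) its prediction, and \(x(\cdot)\) the corresponding optimal-solution map, the optimality gap incurred by acting on the prediction is

\[
\mathrm{gap}(\hat{c}) \;=\; f\big(x(\hat{c});\, c\big) \;-\; f\big(x(c);\, c\big) \;\ge\; 0,
\]

which is generally non-convex in the parameters of the prediction model, in contrast to surrogate measures such as the squared error \(\|\hat{c} - c\|^2\).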

History

Date

2019-05-18

Degree Type

  • Dissertation

Department

  • Tepper School of Business

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Fatma Kilinc-Karzan
