Nonlinear discrete optimization problems arise in many different disciplines, given the modeling versatility associated with nonlinear constraints and discrete decision variables. In Process Systems Engineering, such problems appear in applications ranging from optimal process design and synthesis, through process planning, scheduling, and control, to molecular design. Despite the many applications that arise from their versatile modeling capabilities, finding optimal solutions to these optimization problems is a challenging task, given the computational complexity associated with their solution. Designing novel algorithms and modeling these problems correctly are among the different ways to overcome this complexity. In particular, tackling these problems with the right combination of mathematical modeling and solution procedure is an efficient strategy to address them. The objective of this Thesis is to propose new solution and modeling methods for nonlinear discrete optimization problems, which lead to improvements with respect to existing solution approaches.

We initially pose discrete nonlinear optimization problems in the context of Mathematical Programming. The problems that we consider solving here can be classified as Mixed-Integer Nonlinear Programming (MINLP) problems. In Chapter 2 we provide a review of the different solution algorithms and existing software to deterministically solve a subclass of MINLP problems called convex MINLP. Among those algorithms, we consider the Outer-approximation (OA) method, which decomposes the MINLP into a Mixed-Integer Linear Programming (MILP) problem and a Nonlinear Programming (NLP) problem. We perform a large computational study comparing the performance of more than sixteen different solver implementations by solving over 350 convex MINLP problems from the benchmark library MINLPLib.
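The OA decomposition can be illustrated on a toy convex MINLP. The following is a minimal sketch, not the implementation studied in this Thesis: the example problem, the tolerance, and the enumeration of the single integer variable in the master problem are illustrative assumptions, and `scipy.optimize.linprog` stands in for a full MILP solver.

```python
# Toy OA sketch:  minimize x^2 + y^2  s.t.  x + y >= 3.7,
# with 0 <= x <= 5 continuous and y integer in {0, ..., 4}.
import numpy as np
from scipy.optimize import linprog

def nlp_subproblem(y):
    """Fix y and solve the convex NLP; this toy instance has the
    closed-form solution x* = max(0, 3.7 - y)."""
    x = max(0.0, 3.7 - y)
    return x, x**2 + y**2

def solve_oa(max_iter=20, tol=1e-4):
    cuts = []                # linearization points (xk, yk) of the objective
    ub, best, y = np.inf, None, 0   # arbitrary starting integer assignment
    for _ in range(max_iter):
        x, f = nlp_subproblem(y)     # NLP step: upper bound + new cut point
        if f < ub:
            ub, best = f, (x, y)
        cuts.append((x, y))
        # MILP master in epigraph form: min eta s.t.
        #   eta >= 2*xk*x + 2*yk*y - (xk^2 + yk^2)  for each cut,  x + y >= 3.7.
        # The integer variable is enumerated; the continuous (x, eta) part
        # is an LP solved with scipy.linprog (variables ordered [x, eta]).
        lb, y_next = np.inf, None
        for yc in range(5):
            A, b = [], []
            for xk, yk in cuts:      # rewritten as 2*xk*x - eta <= ...
                A.append([2 * xk, -1.0])
                b.append(xk**2 + yk**2 - 2 * yk * yc)
            A.append([-1.0, 0.0])    # x >= 3.7 - yc
            b.append(yc - 3.7)
            res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b,
                          bounds=[(0, 5), (None, None)])
            if res.success and res.fun < lb:
                lb, y_next = res.fun, yc
        if ub - lb <= tol:           # lower and upper bounds have met
            break
        y = y_next
    return best, ub

(best_x, best_y), obj = solve_oa()
# expected optimum near x = 1.7, y = 2, objective ~ 6.89
```

Each iteration tightens the MILP relaxation with a new gradient cut, so the lower bound rises monotonically toward the best NLP upper bound, which is the convergence argument OA relies on for convex problems.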
This large study allowed us to identify how the different solvers perform based on features of the problem to be solved. Chapter 3 presents the implementation of the feasibility pump algorithm in the commercial MINLP solver DICOPT. This algorithm is used as a preprocessing step to enhance the solver's ability to find feasible solutions early in the search for the optimal solution. The approach described and implemented in this chapter improved the solver's performance, becoming the default setting for DICOPT when solving convex MINLP problems. In Chapter 4 we propose a new algorithm for convex MINLP, the Center-cut algorithm.
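The Chebyshev center underlying the Center-cut algorithm, i.e., the center of the largest ball inscribed in a polyhedron {x : Ax <= b}, can itself be computed with a single LP: maximize r subject to a_i^T x + r*||a_i|| <= b_i for every row a_i. A minimal sketch, assuming scipy is available (the function name and the example polyhedron are illustrative, not part of the Thesis implementation):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Center and radius of the largest ball inscribed in {x : Ax <= b}."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    # Decision variables (x, r); minimize -r to maximize the radius.
    A_ub = np.hstack([A, norms])            # a_i^T x + ||a_i|| r <= b_i
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]            # center, radius

# Unit square 0 <= x1, x2 <= 1 as {x : Ax <= b}:
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
center, radius = chebyshev_center(A, b)     # center ~ (0.5, 0.5), radius ~ 0.5
```

Because the feasible region here is the polyhedral outer approximation of the nonlinear constraints, its Chebyshev center tends to lie deep inside the feasible set, which is why querying it helps locate feasible points quickly.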
Using a decomposition of the problem similar to OA, this algorithm relies on finding the Chebyshev center of the linear approximation of the nonlinear constraints. Besides being deterministic, in the sense that we provide convergence guarantees for it, the algorithm performs remarkably well at finding feasible solutions quickly. Chapter 5 presents the derivation of scaled quadratic underestimators for convex functions and their use in an OA framework, denoted Outer-approximation with quadratic cuts (OA-QCUT). Using those quadratic underestimators, the decomposition in OA then requires the solution of a Mixed-Integer Quadratically Constrained Programming (MIQCP) problem, which more closely underestimates the nonlinearities in convex MINLP problems, achieving a reduction in iterations when solving these problems. Chapter 6 then presents another modification of OA, where auxiliary Mixed-Integer
Quadratic Programming (MIQP) problems are solved at each iteration of OA. These auxiliary MIQP problems minimize a quadratic distance metric to the best-found solution of the algorithm while guaranteeing an improvement in the estimated objective function, hence stabilizing the OA method. These methods succeed in reducing the total number of iterations of OA at the expense of solving the auxiliary problems, which we prove need not be solved to optimality, leading to performance improvements of the OA method when solving convex MINLP problems. Chapter 7 generalizes the concepts of Chapter 6 by showing that the auxiliary mixed-integer problem can have any regularization objective function. We prove the convergence
guarantees of this method and show its equivalence to integrating a trust-region constraint into OA. This method is denoted Regularized Outer-approximation (ROA). We implemented these algorithms as part of the open-source Mixed-integer nonlinear decomposition toolbox for Pyomo - MindtPy, and tested them extensively on all convex MINLP problems in the library MINLPLib. The results suggest an improvement of the existing OA and LP/NLP