Online First-Order Framework for Robust Convex Optimization

Robust optimization (RO) has emerged as one of the leading paradigms for efficiently modeling parameter uncertainty. The recent connections between RO and problems in statistics and machine learning demand solving RO problems at ever larger scales. However, the traditional approaches for solving RO formulations, based on building and solving robust counterparts or …
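
The abstract above does not spell out the framework, so the following is only a minimal, hypothetical illustration of a first-order alternative to building a robust counterpart: for a toy min-max problem min_x max_{||u|| <= rho} ||Ax - (b + u)||^2, the decision variable takes projected gradient descent steps while the uncertainty takes projected gradient ascent steps. The problem data, step sizes, and radius rho are all assumptions for the demo, not taken from the paper.

```python
# Hypothetical sketch: alternating first-order updates for a small robust problem,
# min_x max_{||u|| <= rho} ||A x - (b + u)||^2, instead of forming a robust counterpart.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
rho = 0.5                       # radius of the uncertainty set (assumed)

x = np.zeros(5)
u = np.zeros(30)
eta_x, eta_u = 1e-3, 1e-2       # step sizes (assumed)

for _ in range(2000):
    r = A @ x - (b + u)
    x -= eta_x * (2 * A.T @ r)  # descent step for the decision variable
    u += eta_u * (-2 * r)       # ascent step for the adversarial perturbation
    nrm = np.linalg.norm(u)
    if nrm > rho:               # project u back onto the uncertainty ball
        u *= rho / nrm

print("robust solution estimate:", x)
```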

Robust optimization of noisy blackbox problems using the Mesh Adaptive Direct Search algorithm

Blackbox optimization problems are often contaminated with numerical noise, and direct search methods such as the Mesh Adaptive Direct Search (MADS) algorithm may get stuck at solutions artificially created by the noise. We propose a way to smooth the objective function of an unconstrained problem using previously evaluated function values, rather than resampling points. …
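
As a rough illustration of reusing cached evaluations (this is not the smoothing construction of the paper, whose details are truncated above), one can replace the raw noisy value at a point by a kernel-weighted average over all previously evaluated points. The blackbox, bandwidth h, and kernel below are assumptions for the sketch.

```python
# Sketch: smooth a noisy blackbox by averaging over a cache of past evaluations
# instead of resampling the same point several times.
import numpy as np

rng = np.random.default_rng(1)

def noisy_blackbox(x):
    return np.sum(x**2) + 0.05 * rng.standard_normal()

cache = []                          # list of (point, observed value) pairs

def smoothed(x, h=0.3):
    """Gaussian-kernel weighted average over the cache; h is an assumed bandwidth."""
    fx = noisy_blackbox(x)
    cache.append((np.array(x, float), fx))
    pts = np.array([p for p, _ in cache])
    vals = np.array([v for _, v in cache])
    w = np.exp(-np.sum((pts - x) ** 2, axis=1) / (2 * h**2))
    return np.sum(w * vals) / np.sum(w)

# usage: evaluate a few points near the optimum, then query the smoothed value
for _ in range(50):
    smoothed(rng.uniform(-0.2, 0.2, size=2))
print("smoothed value at origin:", smoothed(np.zeros(2)))
```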

Towards Simulation Based Mixed-Integer Optimization with Differential Equations

We propose a decomposition-based method for solving mixed-integer nonlinear optimization problems with “black-box” nonlinearities, which may arise, e.g., from differential equations or expensive simulation runs. The method alternately solves a mixed-integer linear master problem and a separation problem to iteratively refine the mixed-integer linear relaxation of the nonlinearity. We prove that …
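
To make the alternation between master and separation problems concrete, here is a deliberately simplified, continuous sketch: the "master" is a linear program over gradient cuts (an outer approximation of a convex blackbox g), and the "separation" step evaluates g at the master solution and adds a new cut when the relaxation is violated. The true method works with mixed-integer master problems; the blackbox g, the box bounds, and the finite-difference slopes are all assumptions of this demo.

```python
# Simplified master/separation loop: refine a cutting-plane relaxation of a
# convex blackbox g until the master value matches the blackbox value.
import numpy as np
from scipy.optimize import linprog

def g(x):                        # "blackbox" nonlinearity (e.g., from a simulation)
    return (x - 1.0) ** 2 + 0.5

def g_slope(x, eps=1e-6):        # finite-difference slope, since no derivatives are given
    return (g(x + eps) - g(x - eps)) / (2 * eps)

cuts = []                        # (slope, intercept) pairs: t >= slope*x + intercept
x_lo, x_hi = -2.0, 3.0
x_master = 0.0

for it in range(20):
    val, slope = g(x_master), g_slope(x_master)
    cuts.append((slope, val - slope * x_master))
    # master: minimize t over (x, t) subject to all cuts and the box on x
    A_ub = [[s, -1.0] for s, _ in cuts]
    b_ub = [-c for _, c in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(x_lo, x_hi), (None, None)])
    x_master, t_master = res.x
    if g(x_master) - t_master < 1e-6:    # separation: relaxation is (nearly) tight
        break

print("iterations:", it + 1, "x:", x_master, "g(x):", g(x_master))
```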

Algorithm XXX: SC-SR1: MATLAB Software for Solving Shape-Changing L-SR1 Trust-Region Subproblems

We present a MATLAB implementation of the shape-changing symmetric rank-one (SC-SR1) method that solves trust-region subproblems when a limited-memory symmetric rank-one (L-SR1) matrix is used in place of the true Hessian matrix. The method takes advantage of two shape-changing norms [4, 3] to decompose the trust-region subproblem into two separate problems. Using one of …
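
The package itself is MATLAB; the sketch below is a rough Python illustration of one building block such methods rely on: the compact L-SR1 representation B = B0 + Psi M^{-1} Psi^T and the small eigendecomposition obtained through a thin QR factorization of Psi. The trust-region solve and the shape-changing norms themselves are not shown, and the data (S, Y, gamma) is synthetic.

```python
# Sketch of the compact L-SR1 representation and its partial eigendecomposition.
import numpy as np

rng = np.random.default_rng(2)
n, m, gamma = 50, 5, 1.0                 # dimension, memory, B0 = gamma * I (assumed)
S = rng.standard_normal((n, m))          # stored steps s_i
Y = rng.standard_normal((n, m))          # stored gradient differences y_i

Psi = Y - gamma * S
SY = S.T @ Y
M = np.diag(np.diag(SY)) + np.tril(SY, -1) + np.tril(SY, -1).T - gamma * (S.T @ S)

# thin QR of Psi reduces the eigenproblem of B - gamma*I to an m-by-m problem
Q, R = np.linalg.qr(Psi)
small = R @ np.linalg.solve(M, R.T)
eigvals, U = np.linalg.eigh((small + small.T) / 2)

# eigenvalues of B are gamma (multiplicity n - m) together with gamma + eigvals
B = gamma * np.eye(n) + Psi @ np.linalg.solve(M, Psi.T)      # dense check only
full = np.sort(np.concatenate([gamma + eigvals, np.full(n - m, gamma)]))
print("max deviation from dense eigenvalues:",
      np.max(np.abs(full - np.sort(np.linalg.eigvalsh(B)))))
```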

A derivative-free trust-region augmented Lagrangian algorithm

We present a new derivative-free trust-region (DFTR) algorithm that uses an augmented Lagrangian method to solve general nonlinear constrained problems. No derivatives are used, either for the objective function or for the constraints. An augmented Lagrangian method, known as an effective tool for solving equality- and inequality-constrained optimization problems with derivatives, is …
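
A minimal sketch of the overall structure, assuming a single equality constraint and using Nelder-Mead as a stand-in for the paper's derivative-free trust-region subproblem solver: the outer loop updates the multiplier and penalty, and the inner solve never touches derivatives. The example problem, penalty schedule, and tolerances are assumptions.

```python
# Augmented Lagrangian outer loop with a derivative-free inner solver.
# Example: min x0 + x1  s.t.  x0^2 + x1^2 - 1 = 0  (solution near (-0.707, -0.707)).
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] + x[1]

def h(x):                                    # single equality constraint
    return x[0] ** 2 + x[1] ** 2 - 1.0

x, lam, mu = np.array([0.5, 0.5]), 0.0, 1.0
for k in range(15):
    def auglag(z):                           # augmented Lagrangian for current (lam, mu)
        return f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
    x = minimize(auglag, x, method="Nelder-Mead",
                 options={"xatol": 1e-8, "fatol": 1e-8}).x
    lam += mu * h(x)                         # first-order multiplier update
    mu *= 2.0                                # simple penalty increase (assumed schedule)

print("solution:", x, "constraint violation:", abs(h(x)))
```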

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case …
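
The following is a hedged sketch of the general idea only (not the specific framework analyzed in the paper): mini-batch gradients from an SFO drive an L-BFGS-style two-loop recursion, and curvature pairs are skipped when s^T y is too small. The data, batch size, memory, and step size are assumptions for the demo.

```python
# Stochastic quasi-Newton sketch: SFO gradients + limited-memory two-loop recursion.
import numpy as np
from collections import deque

rng = np.random.default_rng(3)
A_full = rng.standard_normal((1000, 20))
b_full = A_full @ rng.standard_normal(20) + 0.1 * rng.standard_normal(1000)

def sfo(x, batch=32):
    """Stochastic first-order oracle: mini-batch gradient of 0.5*||Ax - b||^2 / batch."""
    idx = rng.integers(0, A_full.shape[0], size=batch)
    A, b = A_full[idx], b_full[idx]
    return A.T @ (A @ x - b) / batch

def two_loop(grad, pairs):
    """Standard L-BFGS two-loop recursion applied to a stochastic gradient."""
    q, alphas = grad.copy(), []
    for s, y, rho in reversed(pairs):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y, _ = pairs[-1]
        q *= (s @ y) / (y @ y)               # initial Hessian scaling
    for (s, y, rho), a in zip(pairs, reversed(alphas)):
        q += (a - rho * (y @ q)) * s
    return q

x, pairs, eta = np.zeros(20), deque(maxlen=10), 0.05
g_old = sfo(x)
for t in range(500):
    d = two_loop(g_old, list(pairs))
    x_new = x - eta * d
    g_new = sfo(x_new)
    s, y = x_new - x, g_new - g_old
    if s @ y > 1e-8 * (s @ s):               # curvature safeguard: skip bad pairs
        pairs.append((s, y, 1.0 / (s @ y)))
    x, g_old = x_new, g_new

print("final stochastic gradient norm:", np.linalg.norm(sfo(x, batch=1000)))
```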

Solving Highly Detailed Gas Transport MINLPs: Block Separability and Penalty Alternating Direction Methods

Detailed modeling of gas transport problems leads to nonlinear and nonconvex mixed-integer optimization or feasibility models (MINLPs), because both the incorporation of the network's discrete controls and accurate physical and technical modeling are required to achieve practical solutions. Hence, ignoring certain parts of the physics model is not valid for …

Globally Convergent Levenberg-Marquardt Method for Phase Retrieval

In this paper, we consider a nonlinear least squares model for the phase retrieval problem. Since the Hessian matrix may not be positive definite and the Gauss-Newton (GN) matrix is singular at any optimal solution, we propose a modified Levenberg-Marquardt (LM) method, where the Hessian is replaced by the sum of the GN matrix and …
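
A simplified real-valued LM sketch for the least-squares model min 0.5 * sum_i ((a_i^T x)^2 - b_i)^2: each step solves (J^T J + mu I) d = -J^T r, i.e. the GN matrix plus a simple regularizer. The paper's modification chooses the regularization differently; here mu is adapted by a basic accept/reject rule, and the warm start near the true signal is only to keep the demo short.

```python
# Levenberg-Marquardt sketch for a real-valued phase retrieval model.
import numpy as np

rng = np.random.default_rng(4)
n, m = 10, 60
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = (A @ x_true) ** 2                        # noiseless magnitude-squared measurements

x = x_true + 0.3 * rng.standard_normal(n)    # warm start, only to keep the demo short
mu = 1.0
for k in range(100):
    Ax = A @ x
    r = Ax ** 2 - b                          # residuals
    J = 2 * Ax[:, None] * A                  # Jacobian of the residuals
    d = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ r)
    new_r = (A @ (x + d)) ** 2 - b
    if new_r @ new_r < r @ r:                # accept the step and relax the damping
        x, mu = x + d, max(mu * 0.5, 1e-8)
    else:                                    # reject and increase the damping
        mu *= 2.0

# success is measured up to the global sign ambiguity of phase retrieval
print("relative error:", min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
      / np.linalg.norm(x_true))
```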

A Multilevel Proximal Gradient Algorithm for a Class of Composite Optimization Problems

Composite optimization models consist of the minimization of the sum of a smooth (not necessarily convex) function and a non-smooth convex function. Such models arise in many applications where, in addition to the composite nature of the objective function, a hierarchy of models is readily available. It is common to take advantage of this hierarchy …
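
For reference, here is a minimal proximal gradient sketch of the composite model F(x) = f(x) + g(x) with smooth f and nonsmooth convex g; with g = lambda * ||x||_1 the prox step is soft-thresholding. The multilevel/hierarchical acceleration the paper studies is not shown, only the base iteration it builds on, and the data below is synthetic.

```python
# Base proximal gradient (ISTA) iteration for f(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the smooth part

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - b)                 # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)   # prox of the nonsmooth part

print("nonzeros recovered:", np.count_nonzero(np.abs(x) > 1e-3))
```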

Regularized monotonic regression

Monotonic (isotonic) Regression (MR) is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the MR formulated …
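
Since the abstract is truncated before the formulation, the sketch below shows one plausible (not the authors') way to regularize MR: add a quadratic penalty on successive differences, min 0.5*||x - y||^2 + 0.5*lam*||Dx||^2 subject to x nondecreasing, and solve it by projected gradient, where the projection onto monotone sequences is computed by the Pool Adjacent Violators algorithm. The data, lam, and iteration count are assumptions.

```python
# Regularized isotonic regression via projected gradient + PAVA projection.
import numpy as np

def pava(y):
    """Euclidean projection onto nondecreasing sequences (unit weights)."""
    blocks = []                              # [value, count] blocks
    for v in map(float, y):
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, c2 = blocks.pop()
            v1, c1 = blocks.pop()
            blocks.append([(v1 * c1 + v2 * c2) / (c1 + c2), c1 + c2])
    return np.concatenate([[v] * c for v, c in blocks])

rng = np.random.default_rng(6)
n, lam = 200, 5.0
y = np.linspace(0.0, 3.0, n) + 0.3 * rng.standard_normal(n)   # noisy monotone signal

x = pava(y)
step = 1.0 / (1.0 + 4.0 * lam)               # 1/L; the difference operator satisfies ||D^T D|| <= 4
for _ in range(300):
    d = np.diff(x)
    grad = (x - y) + lam * np.concatenate([[-d[0]], -np.diff(d), [d[-1]]])   # (x - y) + lam*D^T D x
    x = pava(x - step * grad)                # project back onto monotone sequences

print("fit is monotone:", np.all(np.diff(x) >= -1e-12))
```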