A progressive barrier derivative-free trust-region algorithm for constrained optimization

We study derivative-free constrained optimization problems and propose a trust-region method that builds linear or quadratic models around the best feasible and the best infeasible solutions found so far. These models are optimized within a trust region, and the progressive barrier methodology handles the constraints by progressively pushing the infeasible solutions toward the …
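
For intuition, a minimal Python sketch of the progressive barrier idea follows. The constraint-violation measure h, the threshold schedule, and the use of coordinate polling in place of the authors' trust-region model step are all illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def h(x, constraints):
        """Aggregate violation of constraints c(x) <= 0; h(x) = 0 iff x is feasible."""
        return sum(max(0.0, c(x)) ** 2 for c in constraints)

    def progressive_barrier_step(f, constraints, x_feas, x_infeas, h_max, delta):
        """One illustrative iteration: generate trial points around both
        incumbents within radius delta, then accept by the barrier rule."""
        candidates = []
        for center in (x_feas, x_infeas):
            if center is None:
                continue
            n = len(center)
            for d in np.vstack([np.eye(n), -np.eye(n)]):
                candidates.append(center + delta * d)
        for x in candidates:
            hx = h(x, constraints)
            if hx == 0.0 and (x_feas is None or f(x) < f(x_feas)):
                x_feas = x                       # new best feasible point
            elif 0.0 < hx < h_max and (x_infeas is None or f(x) < f(x_infeas)):
                x_infeas = x                     # improved infeasible point
        # progressively tighten the barrier, pushing iterates toward feasibility
        # (an illustrative update rule)
        if x_infeas is not None:
            h_max = min(h_max, h(x_infeas, constraints))
        return x_feas, x_infeas, h_max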

A Sparsity Preserving Convexification Procedure for Indefinite Quadratic Programs Arising in Direct Optimal Control

Quadratic programs (QP) with an indefinite Hessian matrix arise naturally in some direct optimal control methods, e.g., as subproblems in a sequential quadratic programming (SQP) scheme. Typically, the Hessian is approximated with a positive definite matrix to ensure a unique solution; such a procedure is called regularization. We present a novel regularization method tailored …
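
One simple regularization that already preserves sparsity is a Levenberg-style diagonal shift, since H + delta*I has the same off-diagonal pattern as H. The sketch below illustrates that general idea only; it is not the tailored method the paper proposes:

    import numpy as np

    def regularize_diagonal_shift(H, eps=1e-8):
        """Return H + delta*I with delta found by trial Cholesky so the
        result is positive definite. The diagonal shift leaves the
        sparsity pattern of H intact. Illustrative only."""
        n = H.shape[0]
        delta = 0.0
        while True:
            try:
                np.linalg.cholesky(H + (delta + eps) * np.eye(n))
                return H + (delta + eps) * np.eye(n)
            except np.linalg.LinAlgError:
                delta = max(2.0 * delta, 1e-6)  # grow the shift until PD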

A Second-Order Information-Based Gradient and Function Sampling Method for Nonconvex, Nonsmooth Optimization

This paper proposes a gradient and function sampling method that, under special circumstances, moves superlinearly toward a minimizer of a general class of nonsmooth, nonconvex functions. We present global and local convergence theory with illustrative examples that corroborate and elucidate the theoretical results obtained throughout the manuscript.
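
The basic first-order gradient sampling step that such methods build on can be sketched as follows; the sampling radius, the choice of QP solver, and the gradient oracle gradf are assumptions for illustration (the paper adds second-order information on top of this idea):

    import numpy as np
    from scipy.optimize import minimize

    def min_norm_element(G):
        """Minimum-norm vector in the convex hull of the columns of G,
        via min ||G @ lam||^2 s.t. lam >= 0, sum(lam) = 1."""
        m = G.shape[1]
        res = minimize(lambda lam: np.sum((G @ lam) ** 2),
                       np.full(m, 1.0 / m),
                       bounds=[(0.0, None)] * m,
                       constraints={'type': 'eq',
                                    'fun': lambda lam: lam.sum() - 1.0})
        return G @ res.x

    def gradient_sampling_direction(gradf, x, radius=1e-4, m=None):
        """Sample gradients in a ball around x and return the negated
        minimum-norm element of their convex hull, a descent direction."""
        n = len(x)
        m = m or 2 * n
        pts = [x] + [x + radius * np.random.randn(n) for _ in range(m)]
        G = np.column_stack([gradf(p) for p in pts])
        return -min_norm_element(G)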

Inexact Newton-Type Optimization with Iterated Sensitivities

This paper presents and analyzes an Inexact Newton-type optimization method based on Iterated Sensitivities (INIS). A particular class of Nonlinear Programming (NLP) problems is considered, where a subset of the variables is defined by nonlinear equality constraints. The proposed algorithm considers an arbitrary approximation for the Jacobian of these constraints. Unlike other inexact Newton methods, …
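
A schematic of the problem class, in my notation rather than necessarily the paper's:

    \min_{y,\,z} \; f(y, z)
    \quad \text{s.t.} \quad c(y, z) = 0, \qquad g(y, z) \ge 0,

where the equality constraints c(y, z) = 0 implicitly define the variable subset z, and the method works with an approximation M \approx \partial c / \partial z of that constraint Jacobian block instead of the exact Jacobian.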

On a conjecture in second-order optimality conditions

In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we deal with a conjecture formulated in [R. Andreani, J.M. Martinez, M.L. Schuverdt, “On second-order optimality conditions for nonlinear programming”, Optimization, 56:529–542, 2007], which states that whenever a …
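
For context, one classical form of the second-order necessary condition (valid, e.g., under the linear independence constraint qualification; standard notation, not quoted from the paper) reads:

    d^\top \nabla^2_{xx} L(x^*, \lambda)\, d \;\ge\; 0
    \quad \text{for all } d \in C(x^*),

where L is the Lagrangian, \lambda a Lagrange multiplier, and C(x^*) the critical cone at the local minimizer x^*. The difficulty such conjectures address is certifying a condition of this type with the single multiplier an algorithm actually returns.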

New error measures and methods for realizing protein graphs from distance data

The interval Distance Geometry Problem (iDGP) consists in finding a realization in R^K of a simple undirected graph G=(V,E) with nonnegative intervals assigned to the edges in such a way that, for each edge, the Euclidean distance between the realization of the adjacent vertices is within the edge interval bounds. Our aim is to determine …
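
The feasibility test behind the iDGP is simple to state. Here is a minimal Python check; the error measure shown (total interval violation) is one plausible choice for illustration, not necessarily one of the paper's new measures:

    import numpy as np

    def interval_violation(positions, edges):
        """positions: dict mapping vertex -> point in R^K.
        edges: iterable of (u, v, lower, upper) distance bounds.
        Returns the total amount by which realized distances leave
        their intervals (0.0 for a valid realization)."""
        total = 0.0
        for u, v, lo, hi in edges:
            d = np.linalg.norm(np.asarray(positions[u]) - np.asarray(positions[v]))
            total += max(0.0, lo - d) + max(0.0, d - hi)
        return total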

Optimization Methods for Large-Scale Machine Learning

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of …
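
The workhorse algorithm in this setting is stochastic gradient descent. A minimal sketch, where the gradient oracle, step size, and batching scheme are placeholders chosen for illustration:

    import numpy as np

    def sgd(grad_loss, w, data, lr=0.1, epochs=10, batch=32, seed=0):
        """Plain stochastic gradient descent: at each step, estimate the
        gradient of the empirical risk from a small random mini-batch."""
        rng = np.random.default_rng(seed)
        n = len(data)
        for _ in range(epochs):
            for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
                w = w - lr * grad_loss(w, [data[i] for i in idx])
        return w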

Solving Box-Constrained Nonconvex Quadratic Programs

We present effective computational techniques for solving nonconvex quadratic programs with box constraints (BoxQP). We first observe that cutting planes obtained from the Boolean Quadric Polytope (BQP) are computationally effective at reducing the optimality gap of BoxQP. We next show that the Chvátal-Gomory closure of the BQP is given by the odd-cycle inequalities even when …
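
Concretely, after scaling the box to [0,1]^n, BoxQP and the standard BQP-style lifting the cuts act on look as follows (standard formulation; the odd-cycle inequalities themselves are omitted here):

    \min_{x \in [0,1]^n} \; \tfrac{1}{2}\, x^\top Q x + c^\top x,

with products lifted as X_{ij} = x_i x_j and relaxed via the McCormick inequalities

    X_{ij} \le x_i, \qquad X_{ij} \le x_j, \qquad
    X_{ij} \ge x_i + x_j - 1, \qquad X_{ij} \ge 0.

The Boolean Quadric Polytope is the convex hull of the pairs (x, X) with binary x and X_{ij} = x_i x_j, and the odd-cycle inequalities are further valid inequalities for it.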

Order-based error for managing ensembles of surrogates in derivative-free optimization

We investigate surrogate-assisted strategies for derivative-free optimization using the mesh adaptive direct search (MADS) blackbox optimization algorithm. In particular, we build an ensemble of surrogate models to be used within the search step of MADS, and examine different methods for selecting the best model for a given problem at hand. To do so, we introduce …
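
An order-based error judges a surrogate by how well it ranks candidate points rather than by how accurately it predicts objective values. A minimal Python version of one plausible formalization (not necessarily the paper's exact definition):

    from itertools import combinations

    def order_error(true_vals, surrogate_vals):
        """Fraction of point pairs whose relative order the surrogate
        gets wrong; 0.0 means the surrogate ranks all points exactly
        as the true objective does."""
        pairs = list(combinations(range(len(true_vals)), 2))
        wrong = sum(
            (true_vals[i] < true_vals[j]) != (surrogate_vals[i] < surrogate_vals[j])
            for i, j in pairs
        )
        return wrong / len(pairs) if pairs else 0.0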

Kronecker Product Constraints for Semidefinite Optimization

We consider semidefinite optimization problems that include constraints that G(x) and H(x) are positive semidefinite (PSD), where the components of the symmetric matrices G(x) and H(x) are affine functions of an n-vector x. In such a case we obtain a new constraint that a matrix K(x,X) is PSD, where the components of K(x,X) are affine …
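
The linear-algebra fact in the background is standard: the Kronecker product of PSD matrices is PSD,

    G(x) \succeq 0 \;\text{ and }\; H(x) \succeq 0
    \;\Longrightarrow\; G(x) \otimes H(x) \succeq 0.

Since the entries of G(x) \otimes H(x) are quadratic in x, introducing lifted variables X_{ij} = x_i x_j (my notation) makes those entries affine in (x, X), which matches the abstract's description of the matrix K(x, X).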