On a conjecture in second-order optimality conditions

In this paper we deal with optimality conditions that can be verified by a nonlinear optimization algorithm, where only a single Lagrange multiplier is available. In particular, we address a conjecture formulated in [R. Andreani, J.M. Martinez, M.L. Schuverdt, “On second-order optimality conditions for nonlinear programming”, Optimization, 56:529–542, 2007], which states that whenever a …
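
For context, the flavor of condition at stake is the weak second-order necessary condition certified by a single multiplier: writing L(x, \lambda) = f(x) + \sum_i \lambda_i g_i(x) for the Lagrangian, a standard statement (in our generic notation; this is not the conjecture itself, which is truncated above) is

\[
  d^{\top} \nabla^2_{xx} L(x^{*}, \lambda) \, d \;\ge\; 0
  \qquad \text{for all } d \in C(x^{*}),
\]

where C(x^{*}) denotes the critical cone at the candidate point x^{*}.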

Optimization Methods for Large-Scale Machine Learning

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of …
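
As a minimal illustration of the stochastic methods that such a survey contrasts with classical batch methods, here is a hedged sketch of stochastic gradient descent on a least-squares objective; the function name, data layout, and step size are our illustrative choices, not the paper's:

    import numpy as np

    def sgd_least_squares(A, b, lr=0.01, epochs=10, seed=0):
        """Minimize (1/2n) * ||A w - b||^2 by sampling one row at a time."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                residual = A[i] @ w - b[i]   # scalar error on sample i
                w -= lr * residual * A[i]    # gradient of 0.5 * residual**2
        return w

Each update touches a single data row, which is what makes such methods attractive at machine-learning scale, at the price of noisier progress than a full-gradient step.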

New error measures and methods for realizing protein graphs from distance data

The interval Distance Geometry Problem (iDGP) consists in finding a realization in R^K of a simple undirected graph G=(V,E) with nonnegative intervals assigned to the edges, in such a way that, for each edge, the Euclidean distance between the realizations of the adjacent vertices lies within the edge interval bounds. Our aim is to determine …
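
In symbols, and keeping the abstract's notation, the iDGP asks for a map x : V \to R^K such that

\[
  L_{uv} \;\le\; \| x_u - x_v \|_2 \;\le\; U_{uv}
  \qquad \text{for every edge } \{u,v\} \in E,
\]

where [L_{uv}, U_{uv}] is the nonnegative interval assigned to the edge \{u,v\} (the interval symbols are ours).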

Solving Box-Constrained Nonconvex Quadratic Programs

We present effective computational techniques for solving nonconvex quadratic programs with box constraints (BoxQP). We first observe that cutting planes obtained from the Boolean Quadric Polytope (BQP) are computationally effective at reducing the optimality gap of BoxQP. We next show that the Chvátal–Gomory closure of the BQP is given by the odd-cycle inequalities even when …
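
For reference, BoxQP and its BQP connection can be written as follows (standard notation, ours). The problem is

\[
  \min_{x \in [0,1]^n} \; x^{\top} Q x + c^{\top} x,
\]

and lifting the products x_i x_j into variables Y_{ij} \approx x_i x_j links the relaxation to the Boolean Quadric Polytope, from which valid cuts such as the McCormick inequalities Y_{ij} \ge 0, \; Y_{ij} \ge x_i + x_j - 1, \; Y_{ij} \le x_i, \; Y_{ij} \le x_j can be separated.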

Order-based error for managing ensembles of surrogates in derivative-free optimization

We investigate surrogate-assisted strategies for derivative-free optimization using the mesh adaptive direct search (MADS) blackbox optimization algorithm. In particular, we build an ensemble of surrogate models to be used within the search step of MADS, and examine different methods for selecting the best model for the problem at hand. To do so, we introduce …
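
One natural instance of an order-based error, given purely as an illustration (the paper's exact definition is truncated above), scores a surrogate by how many pairwise comparisons it gets wrong; direct search methods like MADS only need the surrogate to rank candidate points correctly:

    def order_error(f_true, f_surr):
        """Fraction of point pairs whose relative order the surrogate reverses."""
        n = len(f_true)
        wrong = total = 0
        for i in range(n):
            for j in range(i + 1, n):
                total += 1
                if (f_true[i] < f_true[j]) != (f_surr[i] < f_surr[j]):
                    wrong += 1  # surrogate disagrees with the true ordering
        return wrong / total if total else 0.0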

Kronecker Product Constraints for Semidefinite Optimization

We consider semidefinite optimization problems that include constraints that G(x) and H(x) are positive semidefinite (PSD), where the components of the symmetric matrices G(x) and H(x) are affine functions of an n-vector x. In such a case we obtain a new constraint that a matrix K(x,X) is PSD, where the components of K(x,X) are affine …
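
The algebraic fact presumably underlying this construction (our reading, given the truncation) is that the Kronecker product preserves positive semidefiniteness:

\[
  G(x) \succeq 0 \ \text{ and } \ H(x) \succeq 0
  \;\Longrightarrow\;
  G(x) \otimes H(x) \succeq 0.
\]

Since the entries of G(x) \otimes H(x) are quadratic in x, replacing each product x_i x_j by an entry X_{ij} of a new matrix variable renders them affine in (x, X), which matches the stated form of K(x,X).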

The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM

We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among …
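
As a serial, synchronous sketch of the PALM-style block proximal-gradient update that SAPALM randomizes and parallelizes (the test problem, names, and step size below are our assumptions, not the authors' code):

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def palm_l1_least_squares(A, b, lam=0.1, iters=200):
        """Block proximal-gradient steps for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        n, d = A.shape
        x = np.zeros(d)
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
        for _ in range(iters):
            for j in range(d):                   # SAPALM instead draws blocks at random,
                grad_j = A[:, j] @ (A @ x - b)   # asynchronously, from possibly stale iterates
                x[j] = soft_threshold(x[j] - step * grad_j, step * lam)
        return x

The asynchronous variant lets workers update different blocks concurrently while tolerating bounded staleness in the iterates they read.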

Efficient Symmetric Hessian Propagation for Direct Optimal Control

Direct optimal control algorithms first discretize the continuous-time optimal control problem and then solve the resulting finite-dimensional optimization problem. If Newton-type optimization algorithms are used for solving the discretized problem, accurate first- as well as second-order sensitivity information needs to be computed. This article develops a novel approach for computing Hessian matrices …
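
To fix ideas, a generic direct transcription (our notation) reads

\[
  \min_{x_0,\dots,x_N,\; u_0,\dots,u_{N-1}} \;\sum_{k=0}^{N-1} \ell(x_k, u_k)
  \quad \text{s.t.} \quad x_{k+1} = F(x_k, u_k), \; k = 0, \dots, N-1,
\]

where F integrates the continuous-time dynamics over one interval. A Newton-type solver then needs the Jacobians of F (first-order sensitivities) as well as the multiplier-weighted second derivatives \lambda_{k+1}^{\top} \nabla^2 F(x_k, u_k) entering the Hessian of the Lagrangian (second-order sensitivities).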

Lifted Collocation Integrators for Direct Optimal Control in ACADO Toolkit

This paper presents a class of efficient Newton-type algorithms for solving the nonlinear programs (NLPs) arising from applying a direct collocation approach to continuous-time optimal control. The idea is based on an implicit lifting technique including a condensing and expansion step, such that the structure of each subproblem corresponds to that of the multiple …
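
For reference, the equations that a direct collocation approach embeds into the NLP are those of an implicit Runge-Kutta scheme (standard notation, not specific to ACADO): on each integration interval with step h, the stage values K_i satisfy

\[
  K_i = f\Big(x_k + h \sum_{j=1}^{q} a_{ij} K_j\Big), \quad i = 1, \dots, q,
  \qquad
  x_{k+1} = x_k + h \sum_{i=1}^{q} b_i K_i.
\]

Lifting keeps the stage values K_i as NLP variables; the condensing and expansion steps then eliminate and recover them between solver iterations.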

A note on the squared slack variables technique for nonlinear optimization

In constrained nonlinear optimization, squared slack variables can be used to transform a problem with inequality constraints into a problem containing only equality constraints. This reformulation is usually not considered in the modern literature, mainly because of possible numerical instabilities. However, this argument only concerns the development of algorithms, and nothing stops us in …
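
The reformulation in question is the classical one: each inequality gains a slack variable whose square absorbs the nonnegativity,

\[
  \min_{x} f(x) \ \ \text{s.t.} \ g_i(x) \le 0, \; i = 1, \dots, m
  \qquad \leadsto \qquad
  \min_{x,\,z} f(x) \ \ \text{s.t.} \ g_i(x) + z_i^2 = 0, \; i = 1, \dots, m,
\]

with the two feasible sets corresponding via z_i = \sqrt{-g_i(x)}.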