Exploiting Prior Function Evaluations in Derivative-Free Optimization

A derivative-free optimization (DFO) algorithm is presented. The distinguishing feature of the algorithm is that it allows for the use of function values that have been made available through prior runs of a DFO algorithm for solving prior related optimization problems. Applications in which sequences of related optimization problems are solved such that the proposed …
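The abstract is truncated above, but the core idea — warm-starting a DFO run with function values cached from a prior, related run — can be sketched generically. The coordinate_search routine below is a hypothetical illustration of a cache-aware DFO loop, not the paper's algorithm:

    def coordinate_search(f, x0, step=0.5, tol=1e-6, cache=None):
        # Simple coordinate-search DFO loop; 'cache' may be pre-filled with
        # (point, value) pairs saved from a prior run on a related problem.
        cache = dict(cache or {})

        def feval(x):
            key = tuple(x)
            if key not in cache:          # reuse a prior evaluation when available
                cache[key] = f(x)
            return cache[key]

        x, fx = list(x0), feval(x0)
        while step > tol:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    y = x.copy()
                    y[i] += d
                    if feval(y) < fx:
                        x, fx, improved = y, feval(y), True
                        break
            if not improved:
                step /= 2.0               # no progress: shrink the stencil
        return x, fx, cache

A second, related problem would then be started by passing in the cache returned from the first run, so that any point already evaluated is never recomputed.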

Using an Analytical Computational-Geometry Library to Model Nonoverlap and Boundary-Distance Constraints and their Application to Packing Poly-Bézier Shapes

In this paper we show how to model nonoverlap as well as uniform and nonuniform boundary-distance constraints between poly-Bézier shapes using an analytical computational-geometry library. We then use this capability to develop, implement, and analyze analytical-optimization solutions to minimum-area rectangular-boundary packing problems as well as minimum-area one- and two-dimensional puzzle-piece packing problems. In the process, we …
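The library itself is not shown in this excerpt; as background, a poly-Bézier shape is a chain of Bézier segments, and the basic primitive for any distance or overlap computation on such shapes is evaluating a segment. A minimal, self-contained sketch using de Casteljau's algorithm (a standard construction, not the paper's library):

    def de_casteljau(ctrl, t):
        # Evaluate a Bezier segment with control points 'ctrl' at t in [0, 1]
        # by repeated linear interpolation of consecutive control points.
        pts = list(ctrl)
        while len(pts) > 1:
            pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts[0]

    # Midpoint of a cubic segment:
    print(de_casteljau([(0, 0), (1, 2), (3, 2), (4, 0)], 0.5))  # (2.0, 1.5)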

OPM, a collection of Optimization Problems in Matlab

OPM is a small collection of CUTEst unconstrained and bound-constrained nonlinear optimization problems, which can be used in Matlab for testing optimization algorithms directly (i.e. without installing additional software).

Simple odd beta-cycle inequalities for binary polynomial optimization

We consider the multilinear polytope, which arises naturally in binary polynomial optimization. Del Pia and Di Gregorio introduced the class of odd beta-cycle inequalities valid for this polytope and showed that these generally have Chvátal rank 2 with respect to the standard relaxation and that, together with flower inequalities, they yield a perfect formulation for cycle …
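For context, the multilinear polytope of a hypergraph $G = (V, E)$ is commonly defined (following Del Pia and Khajavirad) as

\[
\mathrm{MP}_G = \operatorname{conv}\left\{ z \in \{0,1\}^{V \cup E} \;:\; z_e = \prod_{v \in e} z_v \ \text{ for all } e \in E \right\},
\]

so each multilinear monomial of the binary problem is linearized by a single variable $z_e$, and the inequalities discussed above are valid cuts for this convex hull.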

Parallel Strategies for Direct Multisearch

Direct Multisearch (DMS) is a class of derivative-free optimization algorithms suited for computing approximations to the complete Pareto front of a given multiobjective optimization problem. It has a well-supported convergence analysis, and simple implementations present good numerical performance, both on academic test sets and in real applications. Recently, this numerical performance was improved with …
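DMS maintains a list of feasible nondominated points whose image approximates the Pareto front. The dominance filter at the heart of such a list can be sketched as follows (a generic illustration; the paper's parallel strategies are not reproduced here):

    def dominates(fa, fb):
        # fa Pareto-dominates fb (minimization): no worse everywhere, better somewhere.
        return all(a <= b for a, b in zip(fa, fb)) and \
               any(a < b for a, b in zip(fa, fb))

    def nondominated(points):
        # Keep only the (x, f(x)) pairs whose objective vector no other point dominates.
        return [(x, fx) for x, fx in points
                if not any(dominates(fy, fx) for _, fy in points if fy is not fx)]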

Complexity of a Projected Newton-CG Method for Optimization with Bounds

This paper describes a method for solving smooth nonconvex minimization problems subject to bound constraints with good worst-case complexity and practical performance. The method contains elements of two existing methods: the classical gradient projection approach for bound-constrained optimization and a recently proposed Newton-conjugate gradient algorithm for unconstrained nonconvex optimization. Using a new definition of approximate …
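The gradient projection ingredient referred to above is classical: take a gradient step and project it back onto the box. A minimal sketch of that ingredient alone (the step size and iteration count are illustrative choices, and this omits the paper's Newton-CG component entirely):

    import numpy as np

    def project(x, lo, hi):
        # Euclidean projection onto the box [lo, hi].
        return np.minimum(np.maximum(x, lo), hi)

    def gradient_projection(grad, x, lo, hi, alpha=1e-2, iters=100):
        # Basic projected-gradient iteration for min f(x) s.t. lo <= x <= hi.
        for _ in range(iters):
            x = project(x - alpha * grad(x), lo, hi)
        return x

    # Example: minimize ||x - c||^2 over [0, 1]^2 with c outside the box.
    c = np.array([1.5, -0.5])
    x = gradient_projection(lambda x: 2 * (x - c), np.zeros(2), 0.0, 1.0)
    print(np.round(x, 3))  # converges to the projection of c, i.e. [1. 0.]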

Using first-order information in Direct Multisearch for multiobjective optimization

Derivatives are an important tool for single-objective optimization. In fact, it is commonly accepted that derivative-based methods perform better than derivative-free optimization approaches. In this work, we show that the same does not apply to multiobjective derivative-based optimization, when the goal is to compute an approximation to the complete Pareto front of …

On complexity and convergence of high-order coordinate descent algorithms

Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. Asymptotic convergence to high-order stationarity and worst-case evaluation complexity bounds for first-order stationarity are established. The computer work necessary to obtain first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computer work for getting first-order $\varepsilon$-stationarity with …
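To make the bound concrete, instantiating the exponent at small model orders $p$ gives

\[
p = 1:\; O(\varepsilon^{-2}), \qquad p = 2:\; O(\varepsilon^{-3/2}), \qquad p \to \infty:\; O(\varepsilon^{-1}),
\]

so higher-order models improve the worst-case rate, at the cost of more expensive block subproblems.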

High-order Evaluation Complexity of a Stochastic Adaptive Regularization Algorithm for Nonconvex Optimization Using Inexact Function Evaluations and Randomly Perturbed Derivatives

A stochastic adaptive regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing strong approximate minimizers of any order for inexpensively constrained smooth optimization problems. For an objective function with Lipschitz continuous p-th derivative in a convex neighbourhood of the feasible set and given an arbitrary optimality order q, it …
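The deterministic template behind adaptive regularization of order $p$ chooses the step $s$ at iterate $x_k$ by approximately minimizing the regularized Taylor model

\[
m_k(s) = \sum_{j=0}^{p} \frac{1}{j!} \nabla^j f(x_k)[s]^j + \frac{\sigma_k}{p+1} \|s\|^{p+1},
\]

where $\sigma_k > 0$ is the adaptive regularization weight. This is the standard AR$p$ model, stated here for context; in the stochastic variant studied in the paper, the derivatives and function values entering the model are available only inexactly.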

Approximate solution of system of equations arising in interior-point methods for bound-constrained optimization

The focus of this paper is interior-point methods for bound-constrained nonlinear optimization, where the systems of nonlinear equations that arise are solved with Newton’s method. There is a trade-off between solving Newton systems directly, which gives high-quality solutions, and solving many approximate Newton systems, which are computationally less expensive but give lower-quality solutions. …
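As context for this trade-off, for bounds of the simple form $x \ge 0$ a primal-dual interior-point method typically drives the perturbed optimality conditions

\[
\nabla f(x) - z = 0, \qquad XZe = \mu e, \qquad x > 0, \; z > 0,
\]

to convergence as $\mu \downarrow 0$, where $X = \operatorname{diag}(x)$ and $Z = \operatorname{diag}(z)$. Each Newton step on this system is the linear solve whose exact versus approximate solution is weighed here. (This is the textbook formulation; the paper's precise system is not shown in the excerpt.)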