Randomized Subspace Derivative-Free Optimization with Quadratic Models and Second-Order Convergence

We consider model-based derivative-free optimization (DFO) for large-scale problems, based on iterative minimization in random subspaces. We provide the first worst-case complexity bounds for such methods for convergence to approximate second-order critical points, and show that these bounds have significantly improved dimension dependence compared to standard full-space methods, provided low-accuracy solutions are desired and/or …
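
To illustrate the core mechanism (not the paper's algorithm, whose subspace distribution and model construction are more elaborate), here is a minimal Python sketch of one model-based step in a random subspace: a Gaussian sketch matrix maps a low-dimensional step back to the full space, so each iteration works with only p ≪ n variables. All names and parameters are illustrative.

```python
import numpy as np

def random_subspace_step(f, x, p=2, delta=0.1, rng=None):
    """One illustrative iteration: approximately minimize a model of f
    restricted to a random p-dimensional subspace of R^n (p << n)."""
    rng = rng or np.random.default_rng()
    n = x.size
    P = rng.standard_normal((n, p)) / np.sqrt(p)   # sketch: R^p -> R^n
    g = lambda s: f(x + P @ s)                     # reduced objective
    # Crude forward-difference gradient of the reduced objective.
    h, g0 = 1e-6, g(np.zeros(p))
    grad = np.array([(g(h * np.eye(p)[i]) - g0) / h for i in range(p)])
    # Cauchy-style step within a trust region of radius delta.
    s = -delta * grad / (np.linalg.norm(grad) + 1e-16)
    x_new = x + P @ s
    return x_new if f(x_new) < g0 else x           # simple acceptance test

# Usage: each step touches only p directions of a 100-dimensional problem.
x = random_subspace_step(lambda z: np.sum(z**2), np.ones(100))
```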

TRFD: A derivative-free trust-region method based on finite differences for composite nonsmooth optimization

In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form \(f(x)=h(F(x))\), where \(F\) is a black-box function assumed to have a Lipschitz continuous Jacobian, and \(h\) is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of \(F\) via forward finite …
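
The building block named in the abstract, a forward finite-difference Jacobian approximation, can be sketched as follows; the helper name and step size are illustrative, and TRFD itself embeds this inside a trust-region loop.

```python
import numpy as np

def forward_difference_jacobian(F, x, h=1e-8):
    """Approximate the Jacobian of a black-box F: R^n -> R^m by forward
    finite differences: one extra evaluation of F per coordinate of x."""
    F0 = np.asarray(F(x))
    J = np.empty((F0.size, x.size))
    for i in range(x.size):
        xi = x.copy()
        xi[i] += h
        J[:, i] = (np.asarray(F(xi)) - F0) / h
    return J

# Composite objective f(x) = h(F(x)) with h = ||.||_1 (known, convex, nonsmooth)
F = lambda x: np.array([x[0]**2 - 1.0, x[0] + x[1]])
f = lambda x: np.abs(F(x)).sum()
J = forward_difference_jacobian(F, np.array([1.0, 2.0]))
```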

Nonlinear Derivative-free Constrained Optimization with a Mixed Penalty-Logarithmic Barrier Approach and Direct Search

In this work, we propose the joint use of a mixed penalty-logarithmic barrier approach and generating set search for addressing nonlinearly constrained derivative-free optimization problems. A merit function is considered, wherein the set of inequality constraints is divided into two groups: one treated with a logarithmic barrier approach, and another, along with the equality constraints, …
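
As a rough illustration of such a merit function (the exact penalty form and the split of the constraints are assumptions here, not the paper's definition): a log barrier handles one group of inequalities, while the remaining inequalities and the equalities are penalized.

```python
import numpy as np

def merit(f, x, g_barrier, g_penalty, h_eq, mu, rho):
    """Hypothetical merit function: a logarithmic barrier on one group of
    inequalities (requires g(x) < 0 there), and a quadratic penalty on the
    remaining inequalities and on the equality constraints."""
    barrier = -mu * sum(np.log(-g(x)) for g in g_barrier)
    penalty = rho * (sum(max(g(x), 0.0) ** 2 for g in g_penalty)
                     + sum(h(x) ** 2 for h in h_eq))
    return f(x) + barrier + penalty
```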

Black-box Optimization Algorithms for Regularized Least-squares Problems

We consider the problem of optimizing the sum of a smooth, nonconvex function for which derivatives are unavailable, and a convex, nonsmooth function with an easy-to-evaluate proximal operator. Of particular focus is the case where the smooth part has a nonlinear least-squares structure. We adapt two existing approaches for derivative-free optimization of nonsmooth compositions of smooth …
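
A minimal sketch of the two ingredients: a cheap proximal operator (soft thresholding, the standard example for \(\ell_1\) regularization) combined with a finite-difference gradient estimate of the smooth black-box part. The step below is a generic derivative-free proximal-gradient iteration, not the paper's method.

```python
import numpy as np

def prox_l1(v, t):
    """Prox of t*||.||_1: soft thresholding, the classic cheap prox."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fd_gradient(f, x, h=1e-7):
    """Forward-difference gradient estimate of the smooth black-box part."""
    f0, g = f(x), np.empty_like(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += h
        g[i] = (f(xi) - f0) / h
    return g

# One proximal-gradient-style step on f(x) + lam * ||x||_1
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)   # stand-in for a least-squares term
x, step, lam = np.zeros(3), 0.5, 0.1
x = prox_l1(x - step * fd_gradient(f, x), step * lam)
```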

Model Construction for Convex-Constrained Derivative-Free Optimization

We develop a new approximation theory for linear and quadratic interpolation models, suitable for use in convex-constrained derivative-free optimization (DFO). Most existing model-based DFO methods for constrained problems assume the ability to construct sufficiently accurate approximations via interpolation, but the standard notions of accuracy (designed for unconstrained problems) may not be achievable by only sampling …
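
For reference, the standard unconstrained construction that the paper generalizes: a linear interpolation model \(m(s) = c + g^\top s\) fitted by solving a small linear system over the sample set. Names and sample choice below are illustrative.

```python
import numpy as np

def linear_interpolation_model(f, x0, Y):
    """Fit m(s) = c + g^T s so that m interpolates f at x0 and at x0 + y
    for every row y of Y (Y square and nonsingular)."""
    c = f(x0)
    rhs = np.array([f(x0 + y) - c for y in Y])
    g = np.linalg.solve(Y, rhs)   # rows of Y: interpolation directions
    return c, g

# Sampling along the coordinate directions in R^2
f = lambda x: x[0] ** 2 + 3.0 * x[1]
c, g = linear_interpolation_model(f, np.zeros(2), 0.1 * np.eye(2))
```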

An Inexact Restoration Direct Multisearch Filter Approach to Multiobjective Constrained Derivative-free Optimization

Direct Multisearch (DMS) is a well-established class of methods for multiobjective derivative-free optimization, where constraints are addressed by an extreme barrier approach, only evaluating feasible points. In this work, we propose a filter approach, combined with an inexact feasibility restoration step, to address constraints in the DMS framework. The filter approach treats feasibility as an …
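
The extreme barrier approach that DMS traditionally uses is simple to state: infeasible points are assigned the value \(+\infty\), so they never win a comparison. A minimal sketch, assuming the constraint convention \(c(x) \le 0\):

```python
import numpy as np

def extreme_barrier(f, x, constraints):
    """Extreme barrier: infeasible points get +inf, so the search only
    compares objective values among feasible points."""
    return f(x) if all(c(x) <= 0.0 for c in constraints) else np.inf
```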

Fidelity and interruption control for expensive constrained multi-fidelity blackbox optimization

This work introduces a novel blackbox optimization algorithm for computationally expensive constrained multi-fidelity problems. When applying a direct search method to such problems, the scarcity of feasible points may lead to numerous costly evaluations spent on infeasible points. Our proposed fidelity and interruption controlled optimization algorithm addresses this issue by leveraging multi-fidelity information, allowing for …
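
A hedged sketch of the interruption idea as described (the data structures and ordering are assumptions, not the paper's specification): evaluations proceed through fidelities of increasing cost, and stop early when a cheap fidelity already flags infeasibility.

```python
import numpy as np

def interruptible_eval(x, levels):
    """Evaluate x through fidelities of increasing cost; 'levels' is an
    ordered list of (constraint, objective) pairs (an assumed structure).
    Interrupt as soon as a cheap fidelity flags infeasibility."""
    for fidelity, (c, _) in enumerate(levels):
        if c(x) > 0.0:                  # infeasible at this fidelity:
            return fidelity, np.inf     # skip all costlier evaluations
    return len(levels) - 1, levels[-1][1](x)   # highest-fidelity objective
```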

The limitation of neural nets for approximation and optimization

We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems. While neural networks are widely used for machine learning tasks such as classification and regression, their application in solving optimization problems has been limited. Our study begins by determining the best activation function …

Full-low evaluation methods for bound and linearly constrained derivative-free optimization

Derivative-free optimization (DFO) consists in finding the best value of an objective function without relying on derivatives. To tackle such problems, one may build approximate derivatives, using, for instance, finite-difference estimates. One may also design algorithmic strategies that perform space exploration and seek improvement over the current point. The first type of strategy often provides …
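
The two strategy types mentioned can be caricatured in a few lines: a "full" step spends n extra evaluations on a finite-difference gradient, while a "low" step polls a fixed set of directions. The actual Full-Low Evaluation framework switches between such steps with safeguards; this is only a sketch with illustrative names.

```python
import numpy as np

def full_step(f, x, h=1e-7, alpha=0.1):
    """'Full' evaluation step: n extra evaluations buy a forward-difference
    gradient estimate; move along its negative direction."""
    f0, g = f(x), np.empty_like(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += h
        g[i] = (f(xi) - f0) / h
    x_new = x - alpha * g
    return x_new if f(x_new) < f0 else x

def low_step(f, x, alpha=0.1):
    """'Low' evaluation step: poll the coordinate directions and their
    negatives, accepting the first improving point (direct-search flavor)."""
    f0 = f(x)
    for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
        if f(x + alpha * d) < f0:
            return x + alpha * d
    return x
```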

Using orthogonally structured positive bases for constructing positive k-spanning sets with cosine measure guarantees

Positive spanning sets span a given vector space by nonnegative linear combinations of their elements. These have attracted significant attention in recent years, owing to their extensive use in derivative-free optimization. In this setting, the quality of a positive spanning set is assessed through its cosine measure, a geometric quantity that expresses how well such …
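
As a concrete illustration of the cosine measure \(\mathrm{cm}(D) = \min_{\|u\|=1} \max_{d \in D} \frac{u^\top d}{\|d\|}\), here is a Monte Carlo estimate (an upper bound on the true value, since the minimization over u is only sampled; exact computation requires a combinatorial search). Names are illustrative.

```python
import numpy as np

def cosine_measure_estimate(D, n_samples=100_000, rng=None):
    """Monte Carlo estimate of cm(D) = min_{||u||=1} max_{d in D} u.d/||d||.
    Sampling u only yields an upper bound on the true minimum."""
    rng = rng or np.random.default_rng()
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)   # unit directions
    U = rng.standard_normal((n_samples, D.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)       # unit test vectors
    return (U @ Dn.T).max(axis=1).min()

# Minimal positive spanning set of R^2: e1, e2, and -(e1 + e2)
D = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
print(cosine_measure_estimate(D))   # positive iff D positively spans R^2
```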