A derivative-free trust-region approach for Low Order-Value Optimization problems

The Low Order-Value Optimization (LOVO) problem consists of minimizing the minimum among a finite number of function values over a feasible set. LOVO has several practical applications, such as robust parameter estimation, protein alignment, and portfolio optimization. In this work, we are interested in the constrained nonlinear LOVO problem of minimizing the minimum between …
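
As a minimal illustration of the LOVO objective, the sketch below evaluates f(x) = min{f_1(x), f_2(x)} for two hypothetical quadratics; the component functions and the test point are placeholders, not the paper's formulation.

    import numpy as np

    # Hypothetical component functions (not from the paper).
    def f1(x):
        return np.sum((x - 1.0) ** 2)

    def f2(x):
        return np.sum((x + 1.0) ** 2)

    def lovo(x):
        # The low order-value function keeps only the smallest value,
        # so terms coming from outliers are ignored at the minimizer.
        return min(f1(x), f2(x))

    print(lovo(np.array([0.9, 1.1])))  # f1 is the active term near x = (1, 1)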

A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization

We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of \(\mathcal{O}\!\left(n\left(\frac{L}{\sigma}\epsilon\right)^{-2}\right)\) function evaluations for the method to reach an \(\left(\frac{L}{\sigma}\epsilon\right)\)-approximate stationary point, where \(n\) is the …
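
For context, a method of this kind replaces the true gradient with a forward finite-difference approximation, which costs \(n\) extra function evaluations per gradient and explains the factor \(n\) in the bound. The sketch below is a generic illustration of that building block; the step size h is a placeholder, not the method's prescribed choice.

    import numpy as np

    def fd_gradient(f, x, h=1e-7):
        # Forward differences: one evaluation at x plus n perturbed ones.
        fx = f(x)
        g = np.empty(x.size)
        for i in range(x.size):
            e = np.zeros(x.size)
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g

    print(fd_gradient(lambda x: np.sum(x ** 2), np.array([1.0, -2.0])))  # ~ [2, -4]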

A linesearch-based derivative-free method for noisy black-box problems

In this work we consider unconstrained optimization problems in which the objective function is known only through a zeroth-order stochastic oracle that returns an estimate of the true objective value. To solve these problems, we propose a derivative-free algorithm based on extrapolation techniques. Under reasonable assumptions, we prove convergence properties for the proposed algorithm. …
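
A minimal sketch of the extrapolation idea, under the assumption that the linesearch keeps doubling the step size while a sufficient-decrease test passes; the parameter gamma and the safeguard cap are illustrative, not the authors' choices.

    import numpy as np

    def extrapolating_linesearch(f, x, d, alpha=1.0, gamma=1e-4, alpha_max=1e6):
        fx = f(x)
        # Reject the direction if even the initial step gives no sufficient decrease.
        if f(x + alpha * d) > fx - gamma * alpha ** 2:
            return 0.0
        # Extrapolate: double the step while the decrease condition still holds.
        while alpha < alpha_max and f(x + 2 * alpha * d) <= fx - gamma * (2 * alpha) ** 2:
            alpha *= 2
        return alpha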

A Framework for Explainable Knowledge Generation with Expensive Sample Evaluations

Real-world problems often require complex modeling and computational effort to be addressed effectively. Relying solely on data-driven approaches without integrating physics-based models can result in limited predictive capabilities. Even advanced techniques such as deep learning may be impractical for decision-makers, owing to the lack of data and the difficulty of justifying and explaining results. In …

Optimistic Noise-Aware Sequential Quadratic Programming for Equality Constrained Optimization with Rank-Deficient Jacobians

We propose and analyze a sequential quadratic programming algorithm for minimizing a noisy nonlinear smooth function subject to noisy nonlinear smooth equality constraints. The algorithm uses a step decomposition strategy and, as a result, is robust to potential rank deficiency in the constraints, allows for two different step size strategies, and has an early stopping mechanism. …
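
A schematic of the step decomposition idea (not the paper's algorithm): a normal step reduces the linearized constraint violation through a least-squares solve, which stays well defined when the constraint Jacobian \(J\) is rank-deficient, and a tangential step moves within the null space of \(J\).

    import numpy as np

    def decomposed_step(g, J, c):
        # Normal step: minimum-norm least-squares solution of J v = -c,
        # well defined even when J is rank-deficient.
        v = -np.linalg.lstsq(J, c, rcond=None)[0]
        # Tangential step: project the negative gradient onto null(J).
        P = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J
        u = -P @ g
        return v + u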

Fully Adaptive Zeroth-Order Method for Minimizing Functions with Compressible Gradients

We propose an adaptive zeroth-order method for minimizing differentiable functions with \(L\)-Lipschitz continuous gradients. The method is designed to take advantage of the possible compressibility of the gradient of the objective function, but it does not require knowledge of the approximate sparsity level \(s\) or the Lipschitz constant \(L\) of the gradient. We show that …
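
A schematic of how compressibility can be exploited (with a fixed sparsity level for simplicity, whereas the actual method adapts and needs neither \(s\) nor \(L\)): estimate the gradient by finite differences, then hard-threshold it to its s largest entries.

    import numpy as np

    def sparse_zo_gradient(f, x, s, h=1e-6):
        fx = f(x)
        g = np.array([(f(x + h * e) - fx) / h for e in np.eye(x.size)])
        keep = np.argsort(np.abs(g))[-s:]  # indices of the s largest entries
        g_sparse = np.zeros(x.size)
        g_sparse[keep] = g[keep]           # hard thresholding
        return g_sparse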

Randomized Subspace Derivative-Free Optimization with Quadratic Models and Second-Order Convergence

We consider model-based derivative-free optimization (DFO) for large-scale problems, based on iterative minimization in random subspaces. We provide the first worst-case complexity bounds for such methods for convergence to approximate second-order critical points, and show that these bounds have significantly improved dimension dependence compared to standard full-space methods, provided low-accuracy solutions are desired and/or …
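
The core subspace trick, sketched under generic assumptions (Gaussian sketching, illustrative dimensions): draw a random matrix \(P\in\mathbb{R}^{n\times p}\) with \(p\ll n\) and build quadratic models for the reduced function \(s\mapsto f(x+Ps)\), so each trust-region subproblem lives in dimension \(p\) rather than \(n\).

    import numpy as np

    def reduced_function(f, x, p, rng=np.random.default_rng(0)):
        # Scaled Gaussian sketch; models are then fit in R^p instead of R^n.
        P = rng.standard_normal((x.size, p)) / np.sqrt(p)
        return P, (lambda s: f(x + P @ s))

    P, fhat = reduced_function(lambda x: np.sum(x ** 4), np.ones(100), p=5)
    print(fhat(np.zeros(5)))  # equals f(x) at s = 0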

TRFD: A Derivative-Free Trust-Region Method Based on Finite Differences for Composite Nonsmooth Optimization

In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form \(f(x)=h(F(x))\), where \(F\) is a black-box function assumed to have a Lipschitz continuous Jacobian, and \(h\) is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of \(F\) via forward finite …
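
A sketch of the forward-difference Jacobian approximation this relies on (with a generic step size delta, not the method's prescribed one); the resulting \(J\) then enters a model of the form \(h(F(x)+Js)\) inside the trust region.

    import numpy as np

    def fd_jacobian(F, x, delta=1e-7):
        # Forward differences: n extra evaluations of the black box F.
        Fx = F(x)
        J = np.empty((Fx.size, x.size))
        for j in range(x.size):
            e = np.zeros(x.size)
            e[j] = delta
            J[:, j] = (F(x + e) - Fx) / delta
        return J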

Nonlinear Derivative-free Constrained Optimization with a Mixed Penalty-Logarithmic Barrier Approach and Direct Search

In this work, we propose the joint use of a mixed penalty-logarithmic barrier approach and generating set search for addressing nonlinearly constrained derivative-free optimization problems. A merit function is considered, wherein the set of inequality constraints is divided into two groups: one treated with a logarithmic barrier approach, and the other, along with the equality constraints, …
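
A schematic of such a merit function (the split of the inequalities and the weights mu and rho are illustrative assumptions): a logarithmic barrier handles one group of inequality constraints, while a quadratic penalty handles the other group together with the equalities.

    import numpy as np

    def merit(f, g_bar, g_pen, h_eq, x, mu=1e-2, rho=1e2):
        # Barrier term: requires strict feasibility, g_bar(x) < 0 componentwise.
        barrier = -mu * np.sum(np.log(-g_bar(x)))
        # Penalty term: violated inequalities and all equality constraints.
        penalty = rho * (np.sum(np.maximum(g_pen(x), 0.0) ** 2)
                         + np.sum(h_eq(x) ** 2))
        return f(x) + barrier + penalty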

Black-box Optimization Algorithms for Regularized Least-squares Problems

We consider the problem of optimizing the sum of a smooth, nonconvex function for which derivatives are unavailable, and a convex, nonsmooth function with an easy-to-evaluate proximal operator. Of particular focus is the case where the smooth part has a nonlinear least-squares structure. We adapt two existing approaches for derivative-free optimization of nonsmooth compositions of smooth …
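
The "easy-to-evaluate proximal operator" is the key structural assumption; one concrete instance is the \(\ell_1\) regularizer \(\lambda\|x\|_1\), whose proximal operator is the soft-thresholding map sketched below.

    import numpy as np

    def prox_l1(v, lam):
        # prox of lam*||.||_1: shrink each entry toward zero by lam.
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    print(prox_l1(np.array([1.5, -0.2, 0.7]), 0.5))  # [1.0, -0.0, 0.2]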