Randomized Subspace Derivative-Free Optimization with Quadratic Models and Second-Order Convergence

We consider model-based derivative-free optimization (DFO) for large-scale problems, based on iterative minimization in random subspaces. We provide the first worst-case complexity bound for convergence of such methods to approximate second-order critical points, and show that the bound has significantly improved dimension dependence compared to standard full-space methods, provided low-accuracy solutions are desired and/or …
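As a rough illustration of the random-subspace idea, here is a hedged sketch of one iteration: draw a random low-dimensional subspace, build a quadratic model of the objective restricted to it by finite differences, and compare a Cauchy step with a negative-curvature step (the ingredient second-order convergence needs). All function names and parameter choices below are ours, not the paper's, and the paper's model construction and acceptance tests are more sophisticated.

```python
import numpy as np

def random_subspace_step(f, x, p=5, h=1e-4, delta=1.0):
    """One hypothetical iteration of random-subspace model-based DFO."""
    n = x.size
    P = np.random.randn(n, p) / np.sqrt(p)     # random sketch matrix
    fx = f(x)
    g = np.zeros(p)
    H = np.zeros((p, p))
    fplus = np.zeros(p)
    for i in range(p):                          # gradient + diagonal curvature
        fplus[i] = f(x + h * P[:, i])
        fminus = f(x - h * P[:, i])
        g[i] = (fplus[i] - fminus) / (2 * h)
        H[i, i] = (fplus[i] - 2 * fx + fminus) / h**2
    for i in range(p):                          # off-diagonal curvature
        for j in range(i + 1, p):
            fij = f(x + h * (P[:, i] + P[:, j]))
            H[i, j] = H[j, i] = (fij - fplus[i] - fplus[j] + fx) / h**2

    def m(s):                                   # subspace quadratic model
        return g @ s + 0.5 * s @ H @ s

    gn = np.linalg.norm(g)
    if gn > 0:                                  # Cauchy (steepest-descent) step
        gHg = g @ H @ g
        t = delta / gn if gHg <= 0 else min(delta / gn, gn**2 / gHg)
        s = -t * g
    else:
        s = np.zeros(p)
    w, V = np.linalg.eigh(H)
    if w[0] < 0:                                # negative-curvature step
        v = V[:, 0] if g @ V[:, 0] <= 0 else -V[:, 0]
        if m(delta * v) < m(s):
            s = delta * v
    return x + P @ s                            # trial point in full space
```

The dimension dependence enters through the sketch size p: the model costs O(p^2) evaluations per iteration rather than O(n^2).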

TRFD: A derivative-free trust-region method based on finite differences for composite nonsmooth optimization

In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form \(f(x)=h(F(x))\), where \(F\) is a black-box function assumed to have a Lipschitz continuous Jacobian, and \(h\) is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of \(F\) via forward finite …
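The forward finite-difference ingredient is standard and easy to state; below is a minimal sketch with a fixed increment h, whereas TRFD ties the increment to its trust-region machinery. The function name and defaults are ours.

```python
import numpy as np

def forward_difference_jacobian(F, x, h=1e-7):
    """Forward finite-difference approximation of the Jacobian of F at x.

    Costs one extra evaluation of F per variable (n + 1 evaluations total
    for an n-dimensional x).
    """
    Fx = np.asarray(F(x), dtype=float)
    n, m = x.size, Fx.size
    J = np.empty((m, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h                           # perturb one coordinate
        J[:, j] = (np.asarray(F(xp), dtype=float) - Fx) / h
    return J
```

With such a Jacobian estimate \(J_k\), a composite objective like \(f(x)=\|F(x)\|_1\) (i.e., \(h=\|\cdot\|_1\)) can be modeled by \(h(F(x_k)+J_k s)\) inside the trust region, which is the usual device for composite nonsmooth DFO.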

A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization

We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method utilizes two different stepsizes: one that exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates, and a second that scales the entire step. We prove that this …
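One natural way two stepsizes can enter an SQP step is through a normal/tangential decomposition; the display below is our illustrative reading, not necessarily the paper's exact splitting or stepsize rules:
\[
d_k = v_k + u_k, \qquad x_{k+1} = x_k + \alpha_k \left( v_k + \beta_k\, u_k \right),
\]
where \(u_k\) is the component of the SQP step that depends on the stochastic gradient estimate (and hence carries its variance), \(v_k\) is the remaining component determined by the deterministic constraints, \(\beta_k\) damps only the noisy component, and \(\alpha_k\) scales the entire step.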

Complexity results and active-set identification of a derivative-free method for bound-constrained problems

In this paper, we analyze a derivative-free line-search method designed for bound-constrained problems. Our analysis demonstrates that this method exhibits a worst-case complexity comparable to other derivative-free methods for unconstrained and linearly constrained problems. In particular, when minimizing a function with $n$ variables, we prove that at most ${\cal O}(n\epsilon^{-2})$ iterations are needed to …
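To convey the pattern behind an $n\epsilon^{-2}$-type count, here is a hedged sketch of a projected coordinate line search on a box: each sweep probes the $n$ coordinate directions and requires a sufficient decrease of order $\alpha^2$. The paper's actual expansion/extrapolation rules and stopping test are more refined; all names and parameters below are ours.

```python
import numpy as np

def df_linesearch_box(f, x, lo, hi, alpha0=1.0, gamma=1e-6, theta=0.5,
                      max_sweeps=200):
    """Derivative-free coordinate line search with projection onto the
    box [lo, hi] (lo, hi: arrays of lower/upper bounds)."""
    x = np.clip(x, lo, hi)
    fx = f(x)
    alpha = np.full(x.size, alpha0)          # one tentative stepsize per axis
    for _ in range(max_sweeps):
        for i in range(x.size):
            for sgn in (+1.0, -1.0):
                y = x.copy()
                y[i] = np.clip(x[i] + sgn * alpha[i], lo[i], hi[i])
                fy = f(y)
                if fy <= fx - gamma * alpha[i]**2:   # sufficient decrease
                    x, fx = y, fy
                    alpha[i] /= theta                # expand on success
                    break
            else:
                alpha[i] *= theta                    # shrink on failure
    return x, fx
```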

A worst-case complexity analysis for Riemannian non-monotone line-search methods

In this paper we deal with non-monotone line-search methods to minimize a smooth cost function on a Riemannian manifold. In particular, we study the number of iterations necessary for this class of algorithms to obtain $\epsilon$-approximate stationary points. Specifically, we prove that under a Lipschitz-type regularity condition on the pullbacks of the cost function to …
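To make the non-monotone line-search idea concrete on the simplest manifold, here is a hedged sketch on the unit sphere: the Armijo test is run against the maximum of the last few function values rather than the current one, and each trial point is obtained through a retraction, so the test effectively samples the pullback of the cost. The paper treats general Riemannian manifolds; all names and parameters below are ours.

```python
import numpy as np

def retract(x, v):
    """Retraction on the unit sphere: R_x(v) = (x + v) / ||x + v||."""
    y = x + v
    return y / np.linalg.norm(y)

def nonmonotone_descent_sphere(f, grad, x, n_iter=100, M=5,
                               c=1e-4, theta=0.5):
    """Non-monotone Armijo line search on the unit sphere."""
    x = x / np.linalg.norm(x)
    hist = [f(x)]
    for _ in range(n_iter):
        g = grad(x)
        g = g - (x @ g) * x                  # Riemannian gradient: project
        if np.linalg.norm(g) < 1e-10:        # onto the tangent space at x
            break
        fref = max(hist[-M:])                # non-monotone reference value
        t = 1.0
        y = retract(x, -t * g)
        while f(y) > fref - c * t * (g @ g) and t > 1e-12:
            t *= theta                       # backtrack on the pullback
            y = retract(x, -t * g)
        x = y
        hist.append(f(x))
    return x
```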

Worst-case evaluation complexity of a derivative-free quadratic regularization method

This short paper presents a derivative-free quadratic regularization method for unconstrained minimization of a smooth function with Lipschitz continuous gradient. At each iteration, trial points are computed by minimizing a quadratic regularization of a local model of the objective function. The models are based on forward finite-difference gradient approximations. By using a suitable acceptance condition …
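The mechanism is explicit enough to sketch: a forward finite-difference gradient $g$ defines the model $m(s) = f(x) + g^\top s + \frac{\sigma}{2}\|s\|^2$, whose minimizer is $s = -g/\sigma$. The acceptance test and $\sigma$-update below are generic placeholders, not the paper's exact conditions.

```python
import numpy as np

def dfqr(f, x, sigma=1.0, h=1e-7, n_iter=100, eta=0.1):
    """Derivative-free quadratic regularization sketch."""
    fx = f(x)
    for _ in range(n_iter):
        # Forward-difference gradient: n extra evaluations per iteration.
        g = np.array([(f(x + h * e) - fx) / h for e in np.eye(x.size)])
        s = -g / sigma                       # minimizer of the model
        ft = f(x + s)
        pred = -(g @ s) - 0.5 * sigma * (s @ s)  # = ||g||^2 / (2 sigma) >= 0
        if fx - ft >= eta * pred:            # sufficient decrease: accept
            x, fx = x + s, ft
            sigma = max(sigma / 2, 1e-8)     # relax the regularization
        else:
            sigma *= 2                       # reject: regularize more
    return x, fx
```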

An Adaptive Trust-Region Method Without Function Evaluations

In this paper we propose an adaptive trust-region method for smooth unconstrained optimization. The update rule for the trust-region radius relies only on gradient evaluations. Assuming that the gradient of the objective function is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations required by the proposed method to generate approximate …
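A hypothetical reading of how a radius can be managed without objective evaluations: tie it to a running Lipschitz estimate for the gradient, refined from observed gradient differences. The paper's actual update rule differs in its details; everything below is our illustration.

```python
import numpy as np

def tr_no_feval(grad, x, L=1.0, n_iter=100):
    """Gradient-only trust-region-style sketch: no f evaluations."""
    g = grad(x)
    for _ in range(n_iter):
        gn = np.linalg.norm(g)
        if gn < 1e-12:
            break
        delta = gn / L                       # gradient-only radius
        s = -(delta / gn) * g                # = -g / L, a Cauchy-like step
        g_new = grad(x + s)
        # Secant-style refinement of the Lipschitz estimate, damped so
        # that L changes by at most a factor of two per iteration.
        L_obs = np.linalg.norm(g_new - g) / np.linalg.norm(s)
        L = max(0.5 * L, min(2.0 * L, L_obs))
        x, g = x + s, g_new
    return x
```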

A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression

Nonlinear conjugate gradient methods are among the most popular techniques for solving continuous optimization problems. Although these schemes have long been studied from a global convergence standpoint, their worst-case complexity properties have yet to be fully understood, especially in the nonconvex setting. In particular, it is unclear whether nonlinear conjugate gradient methods possess better guarantees than …
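For orientation, here is a classical Polak-Ribiere+ variant with Armijo backtracking; it is one standard member of the family, not the specific scheme whose complexity the paper analyzes.

```python
import numpy as np

def ncg_pr_plus(f, grad, x, n_iter=200, c1=1e-4, theta=0.5):
    """Polak-Ribiere+ nonlinear conjugate gradient sketch."""
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if g @ d >= 0:                       # safeguard: restart with
            d = -g                           # steepest descent
        fx = f(x)
        t = 1.0
        while f(x + t * d) > fx + c1 * t * (g @ d) and t > 1e-12:
            t *= theta                       # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ formula
        d = -g_new + beta * d                # conjugate direction update
        x, g = x_new, g_new
    return x
```

The max(0, ·) truncation is the "+" in PR+: it prevents beta from going negative and is one of the standard devices for securing global convergence of the nonconvex scheme.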

Worst-Case Complexity of an SQP Method for Nonlinear Equality Constrained Stochastic Optimization

A worst-case complexity bound is proved for a sequential quadratic optimization (commonly known as SQP) algorithm that has been designed for solving optimization problems involving a stochastic objective function and deterministic nonlinear equality constraints. Barring additional terms that arise due to the adaptivity of the monotonically nonincreasing merit parameter sequence, the proved complexity bound is …

Model-Based Derivative-Free Methods for Convex-Constrained Optimization

We present a model-based derivative-free method for optimization subject to general convex constraints, which we assume are unrelaxable and accessed only through a projection operator that is cheap to evaluate. We prove global convergence and a worst-case complexity of $O(\epsilon^{-2})$ iterations and objective evaluations for nonconvex functions, matching results for the unconstrained case. We introduce …
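A hedged sketch of how a cheap projection operator can drive a step: the projected-gradient displacement $d = \mathrm{proj}(x - g) - x$ is both a standard criticality measure (the method is first-order critical when $\|d\| = 0$) and a feasible direction, since any convex combination of $x$ and $\mathrm{proj}(x - g)$ stays in the convex set. The Euclidean-ball projection below is only an example constraint; the paper's subproblem and criticality measure are more careful.

```python
import numpy as np

def proj_ball(y, radius=1.0):
    """Projection onto the Euclidean ball: one example of a cheap
    projection operator for a convex feasible set."""
    nrm = np.linalg.norm(y)
    return y if nrm <= radius else (radius / nrm) * y

def projected_model_step(g, x, delta, proj=proj_ball):
    """Projected-gradient step for a model gradient g, clipped to the
    trust-region radius delta; feasibility is preserved by convexity."""
    d = proj(x - g) - x                      # projected-gradient direction
    nd = np.linalg.norm(d)
    if nd == 0:
        return x                             # first-order critical point
    return x + min(1.0, delta / nd) * d
```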