Complexity results and active-set identification of a derivative-free method for bound-constrained problems

In this paper, we analyze a derivative-free line search method designed for bound-constrained problems. Our analysis demonstrates that this method exhibits a worst-case complexity comparable to other derivative-free methods for unconstrained and linearly constrained problems. In particular, when minimizing a function with $n$ variables, we prove that at most $\mathcal{O}(n\epsilon^{-2})$ iterations are needed to … Read more
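To make the setting concrete, the sketch below implements one simple flavour of derivative-free line search along coordinate directions with projection onto the bounds. The sufficient-decrease test, parameter names and stepsize update are illustrative assumptions, not the exact rules analyzed in the paper.

```python
import numpy as np

def df_coordinate_linesearch(f, x0, lb, ub, alpha0=1.0, gamma=1e-6,
                             theta=0.5, max_iter=1000):
    """Illustrative derivative-free line search along +/- coordinate directions
    with projection onto the box [lb, ub].  The acceptance test
    f(x_trial) <= f(x) - gamma*alpha^2 is an assumed sufficient-decrease rule."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    n = x.size
    alpha = np.full(n, alpha0)               # tentative stepsize per coordinate
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for sign in (1.0, -1.0):         # probe both coordinate directions
                d = np.zeros(n)
                d[i] = sign
                x_trial = np.clip(x + alpha[i] * d, lb, ub)
                f_trial = f(x_trial)
                if f_trial <= fx - gamma * alpha[i] ** 2:
                    x, fx = x_trial, f_trial
                    improved = True
                    break
            else:
                alpha[i] *= theta            # no decrease found: shrink stepsize
        if not improved and np.max(alpha) < 1e-10:
            break
    return x, fx

# Example: box-constrained quadratic whose solution is clipped at the upper bound.
f_quad = lambda x: np.sum((x - 2.0) ** 2)
x_star, f_star = df_coordinate_linesearch(f_quad, np.zeros(3), lb=-1.0, ub=1.5)
```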

Routing a fleet of unmanned aerial vehicles: a trajectory optimisation-based framework

We consider an aerial survey operation in which a fleet of unmanned aerial vehicles (UAVs) is required to visit several locations and then land in one of the available landing sites while optimising some performance criteria, subject to operational constraints and flight dynamics. We aim to minimise the maximum flight time of the UAVs. To … Read more
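In formula form, the objective sketched above is a min-max over the fleet; the symbols below ($K$ UAVs with flight times $T_k$) are illustrative placeholders rather than the paper's exact model:
\[
\min_{\text{routes, trajectories}} \; \max_{k \in \{1,\dots,K\}} T_k
\quad \text{s.t. every location is visited, each UAV lands at an available site, and the operational and flight-dynamics constraints hold.}
\]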

A Stochastic Objective-Function-Free Adaptive Regularization Method with Optimal Complexity

A fully stochastic second-order adaptive-regularization method for unconstrained nonconvex optimization is presented which never computes the objective-function value, yet achieves the optimal $\mathcal{O}(\epsilon^{-3/2})$ complexity bound for finding first-order critical points. The method is noise-tolerant, and the inexactness conditions required for convergence depend on the history of past steps. Applications to cases where derivative evaluation … Read more
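As a rough illustration of the objective-function-free idea (no $f$-evaluations, every step accepted), here is a minimal AR2-type sketch. The regularization update below, which grows with accumulated gradient norms in an Adagrad-like fashion, and the crude subproblem solver are assumptions made for illustration, not the method analyzed in the paper.

```python
import numpy as np

def ar2_step(g, H, sigma, iters=50):
    """Approximately minimize g^T s + 0.5 s^T H s + (sigma/3)||s||^3 via a
    crude fixed-point iteration on the shift lambda = sigma*||s||.
    Assumes the shifted Hessian stays nonsingular (illustration only)."""
    n = g.size
    s = -np.linalg.solve(H + 1e-8 * np.eye(n), g)
    for _ in range(iters):
        lam = sigma * np.linalg.norm(s)
        s_new = -np.linalg.solve(H + (lam + 1e-8) * np.eye(n), g)
        if np.linalg.norm(s_new - s) <= 1e-12:
            return s_new
        s = s_new
    return s

def offo_ar2(grad_f, hess_f, x0, sigma0=1.0, tol=1e-6, max_iter=200):
    """Objective-function-free outer loop: no f-values, every step accepted,
    regularization weight grows with accumulated squared gradient norms
    (an assumed Adagrad-like rule, purely for illustration)."""
    x = np.asarray(x0, dtype=float).copy()
    acc = 0.0
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= tol:
            break
        acc += np.linalg.norm(g) ** 2
        x = x + ar2_step(g, hess_f(x), sigma0 * np.sqrt(1.0 + acc))
    return x
```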

S2MPJ and CUTEst optimization problems for Matlab, Python and Julia

A new decoder for the SIF test problems of the CUTEst collection is described, which produces problem files allowing the computation of values and derivatives of the objective function and constraints of most CUTEst problems directly within “native” Matlab, Python or Julia, without any additional installation or interfacing with MEX files or Fortran programs. When … Read more
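A usage pattern of the kind this enables might look as follows in Python; the module, class and method names here are hypothetical placeholders (the actual S2MPJ interface may differ), the point being only that values and derivatives are obtained natively, with no MEX or Fortran layer.

```python
import numpy as np
# Hypothetical names throughout: "rosenbr" / ROSENBR and the fx/gx methods are
# placeholders for illustration, not the documented S2MPJ interface.
from rosenbr import ROSENBR     # a problem file assumed to be produced by the decoder

problem = ROSENBR()             # build the problem structure in pure Python
x = problem.x0                  # starting point taken from the SIF file
f_val = problem.fx(x)           # objective value
g_val = problem.gx(x)           # objective gradient
print(f_val, np.linalg.norm(g_val))
```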

A progressive decoupling algorithm for minimizing the difference of convex and weakly convex functions over a linear subspace

Decomposition and splitting techniques for optimization problems commonly depend strongly on convexity. Implementable splitting methods for nonconvex and nonsmooth optimization problems are scarce and often lack convergence guarantees. Among the few exceptions is the Progressive Decoupling Algorithm (PDA), which converges locally provided that convexity is elicitable. In this work, we furnish PDA with a descent … Read more
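For context, a common statement of the PDA iteration for minimizing $f$ over a linear subspace $S$, with proximal parameter $r$ and elicitation level $e < r$, reads (sign conventions and the handling of the elicitation level may differ from the paper):
\[
\hat{x}^{k} \in \operatorname*{argmin}_{x}\Big\{ f(x) - \langle y^{k}, x\rangle + \tfrac{r}{2}\,\|x - x^{k}\|^{2} \Big\},
\qquad
x^{k+1} = P_{S}\big(\hat{x}^{k}\big),
\qquad
y^{k+1} = y^{k} - (r - e)\, P_{S^{\perp}}\big(\hat{x}^{k}\big),
\]
where $P_S$ and $P_{S^{\perp}}$ denote the orthogonal projections onto $S$ and its orthogonal complement.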

Globally Convergent Derivative-Free Methods in Nonconvex Optimization with and without Noise

This paper addresses the study of nonconvex derivative-free optimization problems, where only information about either smooth objective functions or their noisy approximations is available. General derivative-free methods are proposed for minimizing differentiable (not necessarily convex) functions with globally Lipschitz continuous gradients, where the accuracy of the approximate gradients interacts with the stepsizes and the exact gradient values. … Read more
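One elementary way in which approximate-gradient accuracy can be coupled to the stepsizes is to tie the finite-difference interval to the step length, as in the sketch below; the specific coupling $h_k = t/(k+1)$ is an assumption for illustration, not the paper's rule.

```python
import numpy as np

def fd_gradient(f, x, h):
    """Forward-difference gradient estimate with differencing interval h."""
    fx, n = f(x), x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def df_gradient_descent(f, x0, L=1.0, tol=1e-6, max_iter=500):
    """Derivative-free gradient descent where the accuracy of the approximate
    gradient is coupled to the stepsize by shrinking the differencing interval
    over the iterations (illustrative coupling, not the paper's)."""
    x = np.asarray(x0, dtype=float).copy()
    t = 1.0 / L                          # stepsize for an L-smooth objective
    for k in range(max_iter):
        g = fd_gradient(f, x, h=t / (k + 1))
        if np.linalg.norm(g) <= tol:
            break
        x = x - t * g
    return x
```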

A four-operator splitting algorithm for nonconvex and nonsmooth optimization

In this work, we address a class of nonconvex nonsmooth optimization problems where the objective function is the sum of two smooth functions (one of which is proximable) and two nonsmooth functions (one proper, closed and proximable, and the other continuous and weakly concave). We introduce a new splitting algorithm that extends the Davis-Yin splitting … Read more
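For reference, the Davis-Yin three-operator splitting that the new four-operator scheme extends can be sketched as follows for $\min f+g+h$ with $h$ smooth; the fourth (weakly concave) term of the paper is not handled by this template.

```python
import numpy as np

def davis_yin(prox_f, prox_g, grad_h, z0, gamma, lam=1.0, max_iter=500, tol=1e-8):
    """Classical Davis-Yin splitting for min f(x) + g(x) + h(x), h smooth.
    prox_f(v, gamma) and prox_g(v, gamma) are the proximal maps of f and g."""
    z = np.asarray(z0, dtype=float).copy()
    x_f = z
    for _ in range(max_iter):
        x_g = prox_g(z, gamma)                                   # backward step on g
        x_f = prox_f(2 * x_g - z - gamma * grad_h(x_g), gamma)   # reflected step on f
        z_new = z + lam * (x_f - x_g)                            # relaxed update
        if np.linalg.norm(z_new - z) <= tol:
            return x_f
        z = z_new
    return x_f
```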

Complexity of Adagrad and other first-order methods for nonconvex optimization problems with bound constraints

A parametric class of trust-region algorithms for constrained nonconvex optimization is analyzed, where the objective function is never computed. By defining appropriate first-order stationarity criteria, we are able to extend the Adagrad method to the newly considered problem and retrieve the standard complexity rate of the projected gradient method that uses both the gradient and … Read more
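A textbook projected-Adagrad iteration for bound constraints looks like the sketch below; the coordinate-wise scaling and the projection onto the box follow the standard recipe and are not necessarily the exact variant analyzed above.

```python
import numpy as np

def projected_adagrad(grad_f, x0, lb, ub, eta=0.5, eps=1e-8, max_iter=1000):
    """Standard Adagrad scaling by accumulated squared gradients, with each
    iterate projected back onto the box [lb, ub] (illustrative sketch)."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    acc = np.zeros_like(x)                    # running sum of squared gradients
    for _ in range(max_iter):
        g = grad_f(x)
        acc += g ** 2
        x = np.clip(x - eta * g / (np.sqrt(acc) + eps), lb, ub)
    return x
```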

Composite optimization models via proximal gradient method with a novel enhanced adaptive stepsize

We first consider convex composite optimization models in which the gradient of the differentiable term is only locally Lipschitz continuous. The classical proximal gradient method is studied with our novel enhanced adaptive stepsize selection. To obtain convergence of the proposed algorithm, we establish a sufficient-decrease-type inequality associated with our new … Read more
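For orientation, a plain proximal gradient iteration with a simple backtracking stepsize (based on the standard descent-lemma test) is sketched below; the paper's enhanced adaptive stepsize rule is different and is not reproduced here.

```python
import numpy as np

def proximal_gradient(f, grad_f, prox_g, x0, t0=1.0, beta=0.5,
                      max_iter=500, tol=1e-8):
    """Proximal gradient for min f(x) + g(x), f differentiable and g proximable,
    with backtracking on the quadratic upper bound of f (descent lemma).
    prox_g(v, t) is the proximal map of g with parameter t."""
    x = np.asarray(x0, dtype=float).copy()
    t = t0
    for _ in range(max_iter):
        g, fx = grad_f(x), f(x)
        while True:
            x_new = prox_g(x - t * g, t)
            d = x_new - x
            if f(x_new) <= fx + g @ d + (d @ d) / (2 * t):   # sufficient decrease
                break
            t *= beta                                        # shrink the stepsize
        if np.linalg.norm(d) <= tol:
            return x_new
        x, t = x_new, t / beta                               # tentatively enlarge t
    return x
```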

Optimization without Retraction on the Random Generalized Stiefel Manifold

Optimization over the set of matrices \(X\) that satisfy \(X^\top B X = I_p\), referred to as the generalized Stiefel manifold, appears in many applications involving sampled covariance matrices such as the canonical correlation analysis (CCA), independent component analysis (ICA), and the generalized eigenvalue problem (GEVP). Solving these problems is typically done by iterative methods … Read more
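To make the constraint concrete: for a symmetric pencil $(A, B)$ with $B \succ 0$, the generalized eigenvectors returned by scipy.linalg.eigh are $B$-orthonormal, so any $p$ of them form a feasible point of the generalized Stiefel manifold. This only illustrates the feasibility condition, not the retraction-free method proposed in the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, p = 8, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2             # symmetric matrix
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)    # positive definite
w, V = eigh(A, B)                     # generalized eigenpairs of the pencil (A, B)
X = V[:, -p:]                         # p leading generalized eigenvectors
print(np.allclose(X.T @ B @ X, np.eye(p)))   # True: X satisfies X^T B X = I_p
```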