A Hybrid Gradient Method for Strictly Convex Quadratic Programming

In this paper, a reliable hybrid algorithm for solving convex quadratic minimization problems is presented. At each iteration, two points are computed: first, an auxiliary point $\dot{x}_k$ is generated by a gradient step with an optimal steplength; then, the next iterate $x_{k+1}$ is obtained as a weighted sum of $\dot{x}_k$ and the penultimate …
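
To make the iteration concrete, the sketch below implements one plausible reading of this scheme for $\min_x \tfrac12 x^\top A x - b^\top x$ with $A$ symmetric positive definite: a Cauchy (exact-steplength) gradient step followed by a fixed-weight combination with the penultimate iterate. The weight `beta` and its default value are illustrative assumptions, not the paper's rule.

```python
import numpy as np

def hybrid_gradient_qp(A, b, x0, beta=0.9, tol=1e-8, max_iter=1000):
    """Sketch of a hybrid gradient iteration for min 0.5 x'Ax - b'x, A SPD.
    The combination weight beta is an assumed, fixed parameter."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        g = A @ x - b                        # gradient of the quadratic
        if np.linalg.norm(g) <= tol:
            break
        alpha = (g @ g) / (g @ (A @ g))      # exact (Cauchy) steplength
        x_dot = x - alpha * g                # auxiliary gradient step
        x_next = beta * x_dot + (1 - beta) * x_prev   # mix with penultimate iterate
        x_prev, x = x, x_next
    return x
```

With `beta = 1` the scheme reduces to steepest descent with exact line search; values below one damp the characteristic zigzagging of that method.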

Strong Evaluation Complexity Bounds for Arbitrary-Order Optimization of Nonconvex Nonsmooth Composite Functions

We introduce the concept of strong high-order approximate minimizers for nonconvex optimization problems. These apply in both standard smooth and composite nonsmooth settings, and additionally allow convex or inexpensive constraints. An adaptive regularization algorithm is then proposed to find such approximate minimizers. Under suitable Lipschitz continuity assumptions, whenever the feasible set is convex, it is …

Modeling Hessian-vector products in nonlinear optimization: New Hessian-free methods

In this paper, we suggest two ways of constructing interpolation models for unconstrained smooth nonlinear optimization when Hessian-vector products are available. The main idea is to interpolate the objective function with a quadratic on a set of points around the current iterate while concurrently using the curvature information from products of the Hessian with appropriate …
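
As a rough illustration of the ingredients involved, the snippet below shows a standard way to obtain Hessian-vector products from gradient evaluations and to build a one-dimensional quadratic model along a direction from a single such product; the paper's interpolation models combine this kind of curvature information with function values at several sample points.

```python
import numpy as np

def hess_vec_fd(grad, x, v, eps=1e-6):
    """Central-difference approximation of the Hessian-vector product H(x) v,
    using only gradient evaluations (exact products can be substituted if available)."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

def model_along_direction(f, grad, x, d):
    """One-dimensional quadratic model q(t) = f(x) + t g'd + 0.5 t^2 d'Hd
    built from one Hessian-vector product along the direction d."""
    fx, g = f(x), grad(x)
    Hd = hess_vec_fd(grad, x, d)
    return lambda t: fx + t * (g @ d) + 0.5 * t ** 2 * (d @ Hd)
```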

Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization

Worst-case complexity guarantees for nonconvex optimization algorithms have been a topic of growing interest. Multiple frameworks that achieve the best known complexity bounds among a broad class of first- and second-order strategies have been proposed. These methods have often been designed primarily with complexity guarantees in mind and, as a result, represent a departure from …
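
For orientation, the inner solver at the heart of such methods is a truncated conjugate gradient iteration on the trust-region subproblem; a minimal Steihaug-Toint sketch is given below. The paper's algorithm augments this template with additional safeguards (for instance, careful handling of negative curvature directions) to obtain its complexity guarantees, which are not reproduced here.

```python
import numpy as np

def steihaug_cg(g, hess_vec, delta, tol=1e-6, max_iter=100):
    """Minimal Steihaug-Toint truncated CG for min_s g's + 0.5 s'Hs, ||s|| <= delta."""
    s, r = np.zeros_like(g), g.copy()
    if np.linalg.norm(r) <= tol:
        return s
    p = -r
    for _ in range(max_iter):
        Hp = hess_vec(p)
        pHp = p @ Hp
        if pHp <= 0:                          # negative curvature: go to the boundary
            return _to_boundary(s, p, delta)
        alpha = (r @ r) / pHp
        s_next = s + alpha * p
        if np.linalg.norm(s_next) >= delta:   # step leaves the region: stop on the boundary
            return _to_boundary(s, p, delta)
        r_next = r + alpha * Hp
        if np.linalg.norm(r_next) <= tol:
            return s_next
        p = -r_next + ((r_next @ r_next) / (r @ r)) * p
        s, r = s_next, r_next
    return s

def _to_boundary(s, p, delta):
    """Return s + tau*p with tau >= 0 such that ||s + tau*p|| = delta."""
    a, b, c = p @ p, 2.0 * (s @ p), s @ s - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return s + tau * p
```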

A Fully Stochastic Second-Order Trust Region Method

A stochastic second-order trust region method is proposed, which can be viewed as a second-order extension of the trust-region-ish (TRish) algorithm proposed by Curtis et al. [INFORMS J. Optim. 1(3) 200–220, 2019]. In each iteration, a search direction is computed by (approximately) solving a trust region subproblem defined by stochastic gradient and Hessian estimates. The …
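
As a toy illustration of one such iteration, the sketch below averages sampled gradient and Hessian estimates over a batch and then solves the resulting trust-region subproblem only to Cauchy-point accuracy. The sampling interface, the batch, and this crude inner solve are assumptions made for illustration; the TRish step-normalization rule is omitted.

```python
import numpy as np

def stochastic_tr_step(grad_sample, hess_sample, x, delta, batch):
    """One illustrative stochastic trust-region step: estimates from a sampled batch,
    subproblem min_s g's + 0.5 s'Hs, ||s|| <= delta solved by the Cauchy point."""
    g = np.mean([grad_sample(x, i) for i in batch], axis=0)   # stochastic gradient estimate
    H = np.mean([hess_sample(x, i) for i in batch], axis=0)   # stochastic Hessian estimate
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return x
    gHg = g @ (H @ g)
    # Cauchy point: minimize the model along -g inside the trust region
    tau = 1.0 if gHg <= 0.0 else min(1.0, gnorm ** 3 / (delta * gHg))
    return x - (tau * delta / gnorm) * g
```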

Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem

Cubic regularization methods have several favorable properties. In particular, under mild assumptions, they are globally convergent towards critical points at which the second-order necessary conditions are satisfied. Their adoption among practitioners, however, does not yet match the strong theoretical results. One of the reasons for this discrepancy may be the additional implementation complexity needed to solve the occurring …
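
For context, the subproblem in question is the cubic-regularized model that must be (approximately) minimized at every iteration; in standard notation it reads

$$\min_{s \in \mathbb{R}^n} \; g_k^\top s + \tfrac{1}{2}\, s^\top H_k s + \tfrac{\sigma_k}{3}\, \|s\|^3,$$

where $g_k$ and $H_k$ are the current gradient and (approximate) Hessian and $\sigma_k > 0$ is the adaptively chosen regularization weight. The generalized eigenvalue approach named in the title targets exactly this subproblem.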

Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates

We present a stochastic extension of the mesh adaptive direct search (MADS) algorithm originally developed for deterministic blackbox optimization. The algorithm, called StoMADS, considers the unconstrained optimization of an objective function f whose values can be computed only through a blackbox corrupted by some random noise following an unknown distribution. The proposed method is based …
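
To give a flavor of how noisy blackbox values can be compared, the sketch below runs a simple poll over the 2n coordinate directions and compares candidates through sample means of repeated evaluations. This is only a stand-in: StoMADS works on a mesh/frame construction and relies on probabilistic estimates with sufficient-decrease tests, neither of which is reproduced here, and the function and parameter names are assumptions.

```python
import numpy as np

def noisy_poll_step(f_noisy, x, mesh_size, n_samples=16):
    """Illustrative poll step for a noisy blackbox: evaluate each trial point
    n_samples times and compare sample means (a crude proxy for the
    probabilistic estimates used by StoMADS)."""
    estimate = lambda z: np.mean([f_noisy(z) for _ in range(n_samples)])
    n = len(x)
    best_x, best_val = x, estimate(x)
    for d in np.vstack([np.eye(n), -np.eye(n)]):     # simple positive spanning set
        trial = x + mesh_size * d
        val = estimate(trial)
        if val < best_val:
            best_x, best_val = trial, val
    return best_x, best_val
```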

Worst-case Complexity Bounds of Directional Direct-search Methods for Multiobjective Optimization

Direct Multisearch is a well-established class of algorithms suited for multiobjective derivative-free optimization. In this work, we analyze the worst-case complexity of this class of methods in its most general formulation for unconstrained optimization. Considering nonconvex smooth functions, we show that, to drive a given criticality measure below a specific positive threshold, Direct Multisearch takes …

Representation of the Pareto front for heterogeneous multi-objective optimization

Optimization problems with multiple objectives that are expensive, i.e. whose function evaluations are time-consuming, are difficult to solve. Finding at least one locally optimal solution is already a difficult task. If only one of the objective functions is expensive while the others are cheap, for instance given analytically, this can be used in …

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
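
For reference, on the strictly convex quadratic $f(x) = \tfrac12 x^\top A x - b^\top x$ with gradient $g_k = A x_k - b$, the two special instances mentioned use the classical steplengths

$$\alpha_k^{\mathrm{SD}} = \frac{g_k^\top g_k}{g_k^\top A g_k}, \qquad \alpha_k^{\mathrm{MG}} = \frac{g_k^\top A g_k}{g_k^\top A^2 g_k},$$

which minimize, respectively, $f(x_k - \alpha g_k)$ and $\|\nabla f(x_k - \alpha g_k)\|$ over $\alpha > 0$; the family analyzed in the paper contains both as special cases.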