Worst-case Complexity Bounds of Directional Direct-search Methods for Multiobjective Optimization

Direct Multisearch is a well-established class of algorithms suited for multiobjective derivative-free optimization. In this work, we analyze the worst-case complexity of this class of methods in its most general formulation for unconstrained optimization. Considering nonconvex smooth functions, we show that to drive a given criticality measure below a specific positive threshold, Direct Multisearch takes …
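As a rough illustration of the kind of method being analyzed, the following is a minimal sketch of a single Direct Multisearch poll step, not the authors' implementation: poll around the current point along the coordinate directions and their negatives, and keep the poll points that are not dominated by the current point. The function names and the success test are illustrative only; search steps, stepsize updates, and the criticality measure studied in the paper are omitted.

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and strictly better in one."""
    return np.all(fa <= fb) and np.any(fa < fb)

def poll_step(F, x, alpha):
    """F maps a point in R^n to a vector of objective values; alpha > 0 is the poll stepsize."""
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])  # coordinate poll set
    fx = F(x)
    polled = [(x + alpha * d, F(x + alpha * d)) for d in directions]
    # Poll points not dominated by the current iterate are candidates for the
    # list of nondominated points that Direct Multisearch maintains.
    return [p for p, fp in polled if not dominates(fx, fp)]
```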

Representation of the Pareto front for heterogeneous multi-objective optimization

Optimization problems with multiple expensive objectives, i.e., objectives whose function evaluations are time-consuming, are difficult to solve. Finding even one locally optimal solution is already a difficult task. In case only one of the objective functions is expensive while the others are cheap, for instance analytically given, this can be used in …

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family asymptotically zigzags between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
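For a convex quadratic $f(x) = \frac{1}{2} x^T A x - b^T x$, the two special instances named above admit closed-form stepsizes: the steepest descent (Cauchy) stepsize minimizes $f$ along the negative gradient, while the minimal gradient stepsize minimizes the norm of the next gradient. A hedged sketch, assuming a symmetric positive definite $A$ and not covering the paper's general family, is:

```python
import numpy as np

def gradient_method(A, b, x0, rule="sd", iters=50):
    """Gradient iterations x_{k+1} = x_k - alpha_k g_k on f(x) = 0.5 x^T A x - b^T x.

    rule = "sd": Cauchy stepsize   alpha = (g^T g) / (g^T A g)
    rule = "mg": minimal gradient  alpha = (g^T A g) / (g^T A^2 g)
    """
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = A @ x - b
        Ag = A @ g
        alpha = (g @ g) / (g @ Ag) if rule == "sd" else (g @ Ag) / (Ag @ Ag)
        x -= alpha * g
    return x
```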

Stabilized Barzilai-Borwein method

The Barzilai-Borwein (BB) method is a popular and efficient tool for solving large-scale unconstrained optimization problems. Its search direction is the same as that of the steepest descent (Cauchy) method, but its stepsize rule is different. Owing to this, it converges much faster than the Cauchy method. A feature of the BB method is that it …
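For concreteness, the two classical BB stepsizes are $\alpha_k^{BB1} = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1}$ and $\alpha_k^{BB2} = s_{k-1}^T y_{k-1} / y_{k-1}^T y_{k-1}$, with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$. The sketch below shows a plain, unstabilized BB gradient iteration under the assumption that $s^T y$ stays positive; it is illustrative and is not the stabilized method of the paper.

```python
import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, iters=200, variant="bb1"):
    """Plain BB gradient descent; assumes the curvature estimate s^T y remains positive."""
    x = x0.astype(float).copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if variant == "bb1" else (s @ y) / (y @ y)
        x, g = x_new, g_new
    return x
```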

Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization

We consider the Adaptive Regularization with Cubics (ARC) approach for solving nonconvex optimization problems and propose a new variant based on inexact Hessian information chosen dynamically. A theoretical analysis of the proposed procedure is given. The key property of the ARC framework, namely its optimal worst-case function/derivative evaluation bounds for reaching first- and second-order critical points, is …
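At the heart of ARC is the cubic model $m_k(s) = f(x_k) + g_k^T s + \frac{1}{2} s^T B_k s + \frac{\sigma_k}{3}\|s\|^3$, where $B_k$ is the (here possibly inexact) Hessian approximation and $\sigma_k$ the regularization parameter. The following is a minimal, hedged sketch of one iteration with a generic inner solver; the acceptance threshold and the $\sigma$ update constants are illustrative, and the paper's dynamic Hessian accuracy rules are not modeled.

```python
import numpy as np
from scipy.optimize import minimize

def arc_step(fun, x, f, g, B, sigma):
    """One ARC-style iteration: minimize the cubic model, then accept or reject the step."""
    def model(s):
        return f + g @ s + 0.5 * s @ (B @ s) + (sigma / 3.0) * np.linalg.norm(s) ** 3
    s = minimize(model, np.zeros_like(x)).x           # (inexact) model minimizer
    rho = (f - fun(x + s)) / max(f - model(s), 1e-16)  # actual vs. predicted reduction
    if rho >= 0.1:                                     # successful step
        return x + s, max(0.5 * sigma, 1e-8)
    return x, 2.0 * sigma                              # unsuccessful: increase sigma
```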

Hybrid methods for nonlinear least squares problems

This contribution contains a description and analysis of effective methods for minimization of the nonlinear least squares function $F(x) = (1/2) f^T(x) f(x)$, where $x \in R^n$ and $f \in R^m$, together with extensive computational tests and comparisons of the introduced methods. All hybrid methods are described in detail and their global convergence is proved …
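Hybrid methods of this kind typically combine the Gauss-Newton direction, which ignores the second-order term $\sum_i f_i(x) \nabla^2 f_i(x)$, with a (quasi-)Newton correction for large-residual problems. As a hedged illustration of the Gauss-Newton building block only, a damped step can be written as:

```python
import numpy as np

def gauss_newton_step(f, J, x, damping=0.0):
    """One damped Gauss-Newton step for F(x) = 0.5 f(x)^T f(x).

    f returns the residual vector of shape (m,), J the Jacobian of shape (m, n);
    damping > 0 gives a Levenberg-Marquardt-like regularization of the normal equations.
    """
    r = f(x)
    A = J(x)
    H = A.T @ A + damping * np.eye(A.shape[1])   # Gauss-Newton Hessian approximation
    return x - np.linalg.solve(H, A.T @ r)
```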

Numerical solution of generalized minimax problems

This contribution contains the description and investigation of four numerical methods for solving generalized minimax problems, which consist in the minimization of functions that are compositions of special smooth convex functions with maxima of smooth functions (the most important problem of this type is the sum of maxima of smooth functions). Section~1 is introductory. In …
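As a concrete instance of such a composite objective (illustrative only, not one of the paper's test problems), the sum-of-maxima case reads $F(x) = \sum_j \max_i f_{j,i}(x)$ with smooth $f_{j,i}$; a common device for handling the nonsmooth maxima is exponential (log-sum-exp) smoothing:

```python
import numpy as np

def sum_of_maxima(x, groups):
    """groups is a list of lists of smooth functions f_{j,i}; returns sum_j max_i f_{j,i}(x)."""
    return sum(max(f(x) for f in group) for group in groups)

def smoothed_sum_of_maxima(x, groups, mu=1e-2):
    """Log-sum-exp smoothing of each inner maximum, with smoothing parameter mu > 0."""
    return sum(mu * np.log(sum(np.exp(f(x) / mu) for f in group)) for group in groups)
```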

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization

In this paper, we analyze several methods for approximating gradients of noisy functions using only function values. These methods include finite differences, linear interpolation, Gaussian smoothing and smoothing on a unit sphere. The methods differ in the number of functions sampled, the choice of the sample points, and the way in which the gradient approximations …
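As a hedged sketch of two of these estimators, forward finite differences and Gaussian smoothing, using only function values of a possibly noisy $f$ (the stepsize and sample count below are illustrative defaults, not the tuned choices analyzed in the paper):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient estimate with stepsize h."""
    fx, g = f(x), np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = 1.0
        g[i] = (f(x + h * e) - fx) / h
    return g

def gaussian_smoothing_gradient(f, x, sigma=1e-2, samples=100, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed function."""
    rng = np.random.default_rng() if rng is None else rng
    fx, g = f(x), np.zeros(len(x))
    for _ in range(samples):
        u = rng.standard_normal(len(x))
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / samples
```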

A Class of Stochastic Variance Reduced Methods with an Adaptive Stepsize

Stochastic variance reduced methods have recently surged into prominence for solving large-scale optimization problems in the context of machine learning. Tan et al. first combined the stochastic variance reduced gradient (SVRG) method with the Barzilai-Borwein (BB) method to compute step sizes automatically, which performs well in practice. On this basis, …
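In SVRG-BB-type schemes, the stepsize for each outer epoch is obtained from a BB formula applied to consecutive full gradients, scaled by the inner-loop length. The sketch below is a hedged reconstruction along those lines; function and parameter names are illustrative, and stepsize safeguards are omitted.

```python
import numpy as np

def svrg_bb(grads, x0, epochs=10, inner=100, eta0=0.01, rng=None):
    """grads: list of per-sample gradient functions g_i(x); the full gradient is their mean."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(grads)
    x_tilde, eta = x0.astype(float).copy(), eta0
    prev_x = prev_g = None
    for _ in range(epochs):
        full_g = sum(g(x_tilde) for g in grads) / n
        if prev_x is not None:
            s, y = x_tilde - prev_x, full_g - prev_g
            eta = (s @ s) / (inner * abs(s @ y))      # BB1-type stepsize for this epoch
        prev_x, prev_g = x_tilde.copy(), full_g
        x = x_tilde.copy()
        for _ in range(inner):
            i = rng.integers(n)
            x = x - eta * (grads[i](x) - grads[i](x_tilde) + full_g)  # SVRG update
        x_tilde = x
    return x_tilde
```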

On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces

We show that the minimizers of regularized quadratic functions restricted to their natural Krylov spaces increase in Euclidean norm as the spaces expand.

Citation: Technical Report RAL-TR-2019-005, STFC-Rutherford Appleton Laboratory, Oxfordshire, England, April 5, 2019
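A small numerical check of this monotonicity can be set up as follows. This is a hedged sketch using the quadratically regularized objective $q(x) = \frac{1}{2} x^T A x - b^T x + \frac{\sigma}{2}\|x\|^2$ with $A + \sigma I$ assumed positive definite and an explicitly orthonormalized Krylov basis; the choice of regularizer and the linear algebra are illustrative, not those of the report.

```python
import numpy as np

def krylov_minimizer_norms(A, b, sigma, kmax):
    """Norms of the minimizers of q over the growing Krylov spaces K_k(A, b)."""
    norms, vecs = [], []
    v = b / np.linalg.norm(b)
    for k in range(kmax):
        vecs.append(v)
        Q, _ = np.linalg.qr(np.array(vecs).T)        # orthonormal basis of K_{k+1}(A, b)
        H = Q.T @ A @ Q + sigma * np.eye(k + 1)      # reduced regularized Hessian
        y = np.linalg.solve(H, Q.T @ b)              # minimizer within the subspace
        norms.append(np.linalg.norm(Q @ y))
        v = A @ v                                    # next (unnormalized) Krylov vector
    return norms                                     # expected to be nondecreasing
```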