A Fully Stochastic Second-Order Trust Region Method

A stochastic second-order trust region method is proposed, which can be viewed as a second-order extension of the trust-region-ish (TRish) algorithm of Curtis et al. [INFORMS J. Optim. 1(3) 200–220, 2019]. In each iteration, a search direction is computed by (approximately) solving a trust region subproblem defined by stochastic gradient and Hessian estimates. The …
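
The teaser stops short of the algorithmic details, but the shape of one such iteration is standard. Below is a minimal sketch in Python/NumPy, assuming (for illustration only) that the subproblem is handled by the classical Cauchy-point formula rather than whatever inexact solver the paper actually analyzes; the function name and interface are hypothetical.

    import numpy as np

    def stochastic_tr_cauchy_step(x, g, H, radius):
        """One illustrative trust-region step from stochastic estimates.

        g, H   -- stochastic gradient and Hessian estimates at x
        radius -- current trust-region radius

        The quadratic model m(s) = g^T s + 0.5 s^T H s is minimized
        along -g inside the ball ||s|| <= radius (the Cauchy point),
        a cheap stand-in for an approximate subproblem solver such as
        truncated CG.
        """
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            return x
        gHg = g @ H @ g
        if gHg <= 0.0:
            tau = 1.0  # nonpositive curvature along -g: step to the boundary
        else:
            tau = min(gnorm**3 / (radius * gHg), 1.0)
        return x - tau * (radius / gnorm) * g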

Solving Large Scale Cubic Regularization by a Generalized Eigenvalue Problem

Cubic Regularization methods have several favorable properties. In particular, under mild assumptions, they are globally convergent to critical points at which the second-order necessary conditions are satisfied. Their adoption among practitioners, however, does not yet match the strong theoretical results. One of the reasons for this discrepancy may be the additional implementation complexity needed to solve the occurring …
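
The subproblem in question is min_s $g^T s + \frac{1}{2} s^T H s + \frac{\sigma}{3}\|s\|^3$. As a point of reference (and emphatically not the paper's large-scale generalized-eigenvalue approach), here is a small-n sketch that solves it by a dense eigendecomposition plus bisection on the secular equation $\sigma\|s(\lambda)\| = \lambda$ with $s(\lambda) = -(H+\lambda I)^{-1}g$; the so-called hard case is ignored for brevity, and all names are illustrative.

    import numpy as np

    def cubic_subproblem_dense(g, H, sigma, tol=1e-10, max_iter=100):
        """Small-n reference solver for the cubic-regularization subproblem."""
        w, Q = np.linalg.eigh(H)          # H = Q diag(w) Q^T, w ascending
        gq = Q.T @ g                      # gradient in the eigenbasis

        def snorm(lam):                   # ||s(lam)|| via the eigenbasis
            return np.linalg.norm(gq / (w + lam))

        lo = max(0.0, -w[0]) + 1e-12      # lam must exceed max(0, -lambda_min)
        hi = lo + np.sqrt(sigma * np.linalg.norm(g)) + tol
        for _ in range(max_iter):         # sigma*||s(lam)|| - lam is decreasing
            lam = 0.5 * (lo + hi)
            if sigma * snorm(lam) > lam:
                lo = lam
            else:
                hi = lam
            if hi - lo < tol:
                break
        lam = 0.5 * (lo + hi)
        return Q @ (-gq / (w + lam))      # s = -(H + lam*I)^{-1} g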

Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates

We present a stochastic extension of the mesh adaptive direct search (MADS) algorithm originally developed for deterministic blackbox optimization. The algorithm, called StoMADS, considers the unconstrained optimization of an objective function f whose values can be computed only through a blackbox corrupted by random noise following an unknown distribution. The proposed method is based …
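
StoMADS itself relies on carefully constructed probabilistic estimates and a sufficient-decrease condition tied to the frame size; the toy sketch below only conveys the flavor, using plain sample averaging over a coordinate poll set. The names and the specific accept/refine rules here are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def noisy_poll_search(noisy_f, x0, iters=200, samples=30, step0=1.0, gamma=1e-3):
        """Toy direct search on a noisy blackbox, for intuition only.

        noisy_f(x) returns f(x) + noise.  Each point is estimated by
        averaging `samples` evaluations, and a poll step is accepted
        only under sufficient decrease gamma*step^2, loosely mimicking
        how StoMADS couples estimates with the mesh/frame sizes.
        """
        def est(x):
            return float(np.mean([noisy_f(x) for _ in range(samples)]))

        x, step = np.asarray(x0, float), step0
        fx = est(x)
        for _ in range(iters):
            improved = False
            for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):  # poll set
                y = x + step * d
                fy = est(y)
                if fy < fx - gamma * step**2:       # sufficient decrease
                    x, fx, improved = y, fy, True
                    break
            step = 2.0 * step if improved else 0.5 * step  # mesh update
        return x

For instance, noisy_poll_search(lambda x: float(x @ x) + 0.01 * np.random.randn(), [2.0, -1.0]) drives the iterate toward the origin despite the noise.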

Worst-case Complexity Bounds of Directional Direct-search Methods for Multiobjective Optimization

Direct Multisearch is a well-established class of algorithms suited for multiobjective derivative-free optimization. In this work, we analyze the worst-case complexity of this class of methods in its most general formulation for unconstrained optimization. Considering nonconvex smooth functions, we show that, to drive a given criticality measure below a specific positive threshold, Direct Multisearch takes …
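
The teaser does not say which criticality measure is meant; a standard choice for smooth multiobjective problems, and the natural analogue of the gradient norm, is $\mu(x) := -\min_{\|d\| \le 1} \max_{1 \le i \le m} \nabla f_i(x)^T d$, which is nonnegative and vanishes exactly at Pareto critical points.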

Representation of the Pareto front for heterogeneous multi-objective optimization

Optimization problems with multiple expensive objectives, i.e., objectives whose function evaluations are time-consuming, are difficult to solve. Finding at least one locally optimal solution is already a difficult task. If only one of the objective functions is expensive while the others are cheap, for instance given analytically, this asymmetry can be used in …

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
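
For a strictly convex quadratic $f(x) = \frac{1}{2} x^T A x - b^T x$, the two named members of the family admit closed-form stepsizes, which makes the zigzag phenomenon easy to reproduce. A minimal sketch, assuming A symmetric positive definite (names illustrative):

    import numpy as np

    def gradient_family_quadratic(A, b, x0, rule="sd", iters=50):
        """Gradient method on f(x) = 0.5 x^T A x - b^T x with two
        classical stepsizes from the family discussed above:
          sd: alpha = g^T g / g^T A g      (steepest descent / Cauchy)
          mg: alpha = g^T A g / g^T A^2 g  (minimal gradient)
        The iterates typically end up zigzagging in the span of the
        eigenvectors of the extreme eigenvalues, the behavior the
        paper studies.
        """
        x = np.asarray(x0, float)
        for _ in range(iters):
            g = A @ x - b
            Ag = A @ g
            alpha = (g @ g) / (g @ Ag) if rule == "sd" else (g @ Ag) / (Ag @ Ag)
            x = x - alpha * g
        return x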

Stabilized Barzilai-Borwein method

The Barzilai-Borwein (BB) method is a popular and efficient tool for solving large-scale unconstrained optimization problems. Its search direction is the same as for the steepest descent (Cauchy) method, but its stepsize rule is different. Owing to this, it converges much faster than the Cauchy method. A feature of the BB method is that it …
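
Concretely, with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$, the first BB stepsize is $\alpha_k = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1}$. A minimal sketch of the resulting (nonmonotone) iteration, with an illustrative safeguard for nonpositive $s^T y$:

    import numpy as np

    def bb_gradient(grad, x0, alpha0=1e-4, iters=100):
        """Barzilai-Borwein gradient method (BB1 stepsize), sketch.

        The direction is -grad(x) exactly as in steepest descent; only
        the stepsize rule differs.  Note the method is nonmonotone: f
        may increase on some iterations, which is the stability issue
        that motivates stabilized variants.
        """
        x_prev = np.asarray(x0, float)
        g_prev = grad(x_prev)
        x = x_prev - alpha0 * g_prev     # bootstrap with a small first step
        for _ in range(iters):
            g = grad(x)
            s, y = x - x_prev, g - g_prev
            sy = s @ y
            alpha = (s @ s) / sy if sy > 0 else alpha0  # safeguard
            x_prev, g_prev = x, g
            x = x - alpha * g
        return x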

Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization

We consider the Adaptive Regularization with Cubics (ARC) approach for solving nonconvex optimization problems and propose a new variant based on inexact Hessian information chosen dynamically. A theoretical analysis of the proposed procedure is given. The key property of the ARC framework, namely its optimal worst-case function/derivative evaluation bounds for reaching first- and second-order critical points, is …
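
In the finite-sum setting $f = \frac{1}{n}\sum_{i=1}^{n} f_i$, the inexact Hessian is typically a subsampled one. The sketch below shows the cubic model built from such a sample; the dynamic rule that ties the sample size to the required accuracy (the heart of the paper) is omitted, and all names are illustrative.

    import numpy as np

    def subsampled_cubic_model(g, hess_i, x, sigma, n, sample_size, rng):
        """Cubic model with an inexact, subsampled Hessian.

        hess_i(x, i) is assumed to return the Hessian of the i-th term
        of f = (1/n) * sum_i f_i; g is the (possibly exact) gradient.
        Returns m(s) = g^T s + 0.5 s^T B s + (sigma/3) ||s||^3.
        """
        idx = rng.choice(n, size=sample_size, replace=False)
        B = sum(hess_i(x, i) for i in idx) / sample_size  # inexact B_k
        return lambda s: g @ s + 0.5 * s @ B @ s + (sigma / 3) * np.linalg.norm(s)**3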

Hybrid methods for nonlinear least squares problems

This contribution contains a description and analysis of effective methods for minimization of the nonlinear least squares function $F(x) = (1/2) f^T(x) f(x)$, where $x \in \mathbb{R}^n$ and $f \in \mathbb{R}^m$, together with extensive computational tests and comparisons of the introduced methods. All hybrid methods are described in detail and their global convergence is proved …
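
The Gauss-Newton ingredient of such hybrids solves $(J^T J)\,d = -J^T f$ for the step (here with Levenberg-Marquardt damping for robustness); the quasi-Newton ingredient and the switching rule that make the method "hybrid" are omitted. A minimal sketch, with illustrative names:

    import numpy as np

    def gauss_newton_lm(f, J, x0, lam=1e-3, iters=50):
        """Damped Gauss-Newton iteration for F(x) = 0.5 f(x)^T f(x).

        f(x) returns the residual vector, J(x) its Jacobian.  Each
        iteration solves (J^T J + lam*I) d = -J^T f, which is efficient
        for small-residual problems; hybrid methods fall back on
        quasi-Newton steps when residuals are large.
        """
        x = np.asarray(x0, float)
        for _ in range(iters):
            r, Jx = f(x), J(x)
            d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), -Jx.T @ r)
            x = x + d
        return x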

Numerical solution of generalized minimax problems

This contribution contains the description and investigation of four numerical methods for solving generalized minimax problems, i.e., the minimization of functions that are compositions of special smooth convex functions with maxima of smooth functions (the most important problem of this type is the minimization of a sum of maxima of smooth functions). Section~1 is introductory. In …
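
For concreteness, the sum-of-maxima instance mentioned above is $F(x) = \sum_{k=1}^{K} \max_{i \in I_k} f_i(x)$ with smooth $f_i$ and finite index sets $I_k$; the general case replaces the outer sum by a smooth convex function of the individual maxima.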