Properties of the block BFGS update and its application to the limited-memory block BNS method for unconstrained minimization

A block version of the BFGS variable metric update formula and its modifications are investigated. Although this formula satisfies the quasi-Newton conditions with all of the difference vectors used and, for quadratic objective functions, gives the best improvement of convergence in a certain sense, for general functions it does not … Read more
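
For orientation (this is background, not part of the truncated abstract), the block quasi-Newton conditions referred to here require the updated matrix to satisfy the secant equation for every stored difference pair simultaneously. With $S_k$ and $Y_k$ collecting the recent step and gradient-difference vectors as columns, they read

$$
B_{k+1} S_k = Y_k, \qquad S_k = [\, s_{k-m+1}, \dots, s_k \,], \qquad Y_k = [\, y_{k-m+1}, \dots, y_k \,],
$$

where $s_i = x_{i+1} - x_i$ and $y_i = \nabla f(x_{i+1}) - \nabla f(x_i)$; the exact notation and block size used in the paper may differ.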

A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis

The adaptive regularization framework using cubics (ARC) has emerged as an alternative to line-search and trust-region algorithms for smooth nonconvex optimization, with optimal complexity amongst second-order methods. In this paper, we propose and analyze the use of an iteration-dependent scaled norm in this framework. With such a scaled norm, the obtained method … Read more
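
For context, a generic ARC model with an iteration-dependent scaled norm (the particular scaling matrix studied in the paper is not visible in this excerpt) has the form

$$
m_k(s) = f(x_k) + \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T B_k s + \tfrac{\sigma_k}{3}\, \|s\|_{M_k}^3,
\qquad \|s\|_{M_k} = \sqrt{s^T M_k s},
$$

where $B_k$ is a (possibly approximate) Hessian, $\sigma_k > 0$ is the regularization parameter, and $M_k$ is a symmetric positive definite scaling matrix that may change from iteration to iteration.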

On the use of the energy norm in trust-region and adaptive cubic regularization subproblems

We consider solving unconstrained optimization problems by means of two popular globalization techniques: trust-region (TR) algorithms and the adaptive regularization framework using cubics (ARC). Both techniques require the solution of a so-called “subproblem” in which a trial step is computed by solving an optimization problem involving an approximation of the objective function, called “the model”. The … Read more
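
As a reminder of the two subproblems in question (stated generically; the paper’s precise use of the energy norm is only hinted at in the excerpt), with model $m_k(s) \approx f(x_k + s)$:

$$
\text{TR:}\;\; \min_{s}\; m_k(s) \;\;\text{s.t.}\;\; \|s\| \le \Delta_k,
\qquad
\text{ARC:}\;\; \min_{s}\; m_k(s) + \tfrac{\sigma_k}{3}\, \|s\|^3 .
$$

When the model Hessian $B_k$ is symmetric positive definite, the energy norm $\|s\|_{B_k} = \sqrt{s^T B_k s}$ is one natural candidate for the norm appearing in both subproblems.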

Exploiting Negative Curvature in Deterministic and Stochastic Optimization

This paper addresses the question of whether it can be beneficial for an optimization algorithm to follow directions of negative curvature. Although some prior work has established convergence results for algorithms that integrate both descent and negative curvature directions, there has not yet been numerical evidence showing that such methods offer significant performance improvements. In … Read more
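
As a purely illustrative sketch (not the algorithms studied in the paper), one common way to obtain a direction of negative curvature is from the smallest eigenpair of the Hessian:

```python
import numpy as np

def negative_curvature_or_descent(grad, hess, tol=1e-8):
    """Generic illustration: return a direction of negative curvature when the
    Hessian has a sufficiently negative eigenvalue, otherwise steepest descent."""
    eigvals, eigvecs = np.linalg.eigh(hess)      # eigenvalues in ascending order
    lam_min, v = eigvals[0], eigvecs[:, 0]
    if lam_min < -tol:
        # v satisfies v^T H v = lam_min < 0; choose the sign that is non-ascent.
        return v if grad @ v <= 0 else -v
    return -grad
```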

Quadratic regularization with cubic descent for unconstrained optimization

Cubic-regularization and trust-region methods with worst-case first-order complexity $O(\varepsilon^{-3/2})$ and worst-case second-order complexity $O(\varepsilon^{-3})$ have been developed in the last few years. In this paper, it is proved that the same complexities are achieved by a quadratic regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction-based descent. … Read more
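
Written out generically (the constants and model used in the paper are not shown in this excerpt), the two ingredients named above are a quadratically regularized model step and a cubic sufficient-descent test:

$$
s_k \approx \arg\min_{s}\; \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T B_k s + \tfrac{\sigma_k}{2}\, \|s\|^2,
\qquad
f(x_k + s_k) \le f(x_k) - \alpha\, \|s_k\|^3
$$

for some fixed $\alpha > 0$; when the descent test fails, the regularization parameter $\sigma_k$ is increased and the step recomputed.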

Block BFGS Methods

We introduce a quasi-Newton method with block updates called Block BFGS. We show that this method, performed with inexact Armijo-Wolfe line searches, converges globally and superlinearly under the same convexity assumptions as BFGS. We also show that Block BFGS is globally convergent to a stationary point when applied to non-convex functions with bounded Hessian, and … Read more
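
For reference, the classical block form of the BFGS update (a standard formula; the excerpt does not show which variant the paper adopts) replaces the usual rank-two correction by a correction built from blocks $S_k$ and $Y_k$ of recent step and gradient-difference vectors:

$$
B_{k+1} = B_k - B_k S_k\, (S_k^T B_k S_k)^{-1} S_k^T B_k + Y_k\, (S_k^T Y_k)^{-1} Y_k^T .
$$

When $S_k^T Y_k$ is symmetric, as happens for quadratic objectives, this update satisfies the block secant condition $B_{k+1} S_k = Y_k$.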

Improved Damped Quasi-Newton Methods for Unconstrained Optimization

Recently, Al-Baali (2014) extended the damped technique in Powell's (1978) modified BFGS method for the Lagrange functions of constrained optimization to the Broyden family of quasi-Newton methods for unconstrained optimization. Appropriate choices for the damped parameter, which maintain the global and superlinear convergence properties of these methods on convex functions and correct the Hessian approximations … Read more
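
As background only, Powell's (1978) damping that the abstract builds on can be sketched as follows, with the conventional threshold 0.2 (the extended damped-parameter choices proposed in the paper are not reproduced here):

```python
import numpy as np

def powell_damped_y(B, s, y, sigma=0.2):
    """Powell's damping: blend y with B s so that the curvature condition
    s^T y_hat >= sigma * s^T B s holds, keeping the BFGS update well defined.
    Generic sketch; not the damped-parameter rule analyzed in the paper."""
    sBs = s @ B @ s
    sy = s @ y
    theta = 1.0 if sy >= sigma * sBs else (1.0 - sigma) * sBs / (sBs - sy)
    return theta * y + (1.0 - theta) * (B @ s)
```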

Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method

We propose a limited-memory steepest descent (LMSD) method for solving unconstrained optimization problems. As a steepest descent method, the step computation in each iteration requires only the evaluation of the gradient of the objective function and the calculation of a scalar step size. When employed to solve certain convex problems, our method reduces to … Read more
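
For orientation, the best-known scalar step sizes of this kind are the Barzilai-Borwein choices (shown only as a familiar special case; the limited-memory step sizes of the paper are computed from several stored gradients):

$$
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}},
$$

with $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$, and iteration $x_{k+1} = x_k - \alpha_k \nabla f(x_k)$.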

A Trust Region Algorithm with a Worst-Case Iteration Complexity of ${\cal O}(\epsilon^{-3/2})$ for Nonconvex Optimization

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\bar\epsilon \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\bar\epsilon]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for … Read more

On efficiency of nonmonotone Armijo-type line searches

Monotonicity and nonmonotonicity play a key role in studying the global convergence and efficiency of iterative schemes in nonlinear optimization, where both globally convergent and computationally efficient schemes are sought. This paper discusses some features of descent schemes and the motivation behind nonmonotone strategies, and investigates the efficiency of an Armijo-type … Read more
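
A minimal sketch of a standard nonmonotone Armijo test of Grippo-Lampariello-Lucidi type (the specific condition analyzed in the paper is not visible in this excerpt): sufficient decrease is measured against the maximum of the most recent function values rather than the current one.

```python
def nonmonotone_armijo(f, x, d, grad_dot_d, f_history, c=1e-4, beta=0.5,
                       alpha=1.0, max_backtracks=50):
    """Backtracking search accepting alpha once
    f(x + alpha d) <= max(recent f values) + c * alpha * grad^T d.
    Generic sketch; f_history holds the last few objective values."""
    f_ref = max(f_history)              # nonmonotone reference value
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + c * alpha * grad_dot_d:
            return alpha
        alpha *= beta                   # backtrack
    return None                         # no acceptable step within the budget
```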