Simple examples for the failure of Newton’s method with line search for strictly convex minimization

In this paper, two simple examples of a twice continuously differentiable strictly convex function $f$ are presented for which Newton’s method with a line search converges to a point where the gradient of $f$ is not zero. The first example uses a line search based on the Wolfe conditions. For the second example, some strictly convex … Read more
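
The counterexamples themselves are not reproduced in this summary, but the method under discussion is easy to state. Below is a minimal sketch of damped Newton's method with a backtracking Armijo line search (the paper's first example uses the Wolfe conditions instead); the name `newton_line_search` and the strictly convex test function are illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_line_search(f, grad, hess, x0, tol=1e-8, max_iter=100,
                       c1=1e-4, backtrack=0.5):
    """Damped Newton's method with a simple Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)            # Newton direction
        fx, slope = f(x), g @ d                     # slope < 0 for convex f
        alpha = 1.0
        while f(x + alpha * d) > fx + c1 * alpha * slope:
            alpha *= backtrack                      # Armijo sufficient decrease
        x = x + alpha * d
    return x

# Strictly convex test function with a positive-definite Hessian everywhere.
f    = lambda x: np.exp(x[0]) + np.exp(-x[0]) + x[1] ** 2
grad = lambda x: np.array([np.exp(x[0]) - np.exp(-x[0]), 2 * x[1]])
hess = lambda x: np.diag([np.exp(x[0]) + np.exp(-x[0]), 2.0])
print(newton_line_search(f, grad, hess, np.array([2.0, -3.0])))  # ≈ (0, 0)
```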

Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method

We propose a limited memory steepest descent (LMSD) method for solving unconstrained optimization problems. As a steepest descent method, the step computation in each iteration requires only the evaluation of a gradient of the objective function and the calculation of a scalar step size. When employed to solve certain convex problems, our method reduces to … Read more
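
For concreteness, here is a minimal steepest-descent sketch in which the scalar step size is the classical Barzilai-Borwein two-point step, safeguarded when the curvature estimate $s^\top y$ is nonpositive. The LMSD method of the paper generalizes this idea; the names and the quadratic test problem below are illustrative only.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=200):
    """Gradient descent with a Barzilai-Borwein scalar step size (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3                         # small safeguarded first step
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 step alpha = s's / s'y, safeguarded against nonpositive curvature.
        alpha = (s @ s) / sy if sy > 1e-12 else 1e-3
        x, g = x_new, g_new
    return x

# Convex quadratic test: f(x) = 0.5 x'Ax - b'x, so grad f(x) = Ax - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, np.zeros(2))
print(x_star, np.linalg.solve(A, b))
```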

A Trust Region Algorithm with a Worst-Case Iteration Complexity of ${\cal O}(\epsilon^{-3/2})$ for Nonconvex Optimization

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\bar\epsilon \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\bar\epsilon]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for … Read more
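
As a point of reference, the sketch below implements a textbook trust-region loop with a Cauchy-point step and the usual ratio test for accepting steps and updating the radius. It is a generic scheme without the $\mathcal{O}(\epsilon^{-3/2})$ guarantee of the paper's algorithm; all names and the Rosenbrock test problem are illustrative.

```python
import numpy as np

def trust_region_cauchy(f, grad, hess, x0, delta=1.0, max_iter=1000, tol=1e-6):
    """Textbook trust-region loop using the Cauchy (steepest-descent) point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm ** 3 / (delta * gBg))
        p = -tau * (delta / gnorm) * g                # Cauchy step, ||p|| <= delta
        pred = -(g @ p + 0.5 * p @ B @ p)             # model (predicted) decrease
        rho = (f(x) - f(x + p)) / pred                # actual / predicted decrease
        if rho < 0.25:
            delta *= 0.25                             # poor model: shrink radius
        elif rho > 0.75 and tau == 1.0:
            delta *= 2.0                              # good full step: expand radius
        if rho > 1e-4:
            x = x + p                                 # accept the trial step
    return x

# Nonconvex test: the Rosenbrock function (Cauchy-point steps converge slowly
# here; the run merely exercises the trust-region mechanics).
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0] ** 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
x_final = trust_region_cauchy(f, grad, hess, np.array([-1.2, 1.0]), max_iter=5000)
print(x_final, np.linalg.norm(grad(x_final)))
```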

On the worst case performance of the steepest descent algorithm for quadratic functions

The existing choices for the step lengths used by the classical steepest descent algorithm for minimizing a convex quadratic function require in the worst case $\mathcal{O}(C\log(1/\epsilon))$ iterations to achieve a precision $\epsilon$, where $C$ is the Hessian condition number. We show how to construct a sequence of step … Read more
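
The classical method referred to here is easy to write down: for $f(x) = \tfrac{1}{2} x^\top A x - b^\top x$ the exact (Cauchy) step length along the negative gradient is $\alpha_k = g_k^\top g_k / g_k^\top A g_k$. Below is a minimal sketch with an illustrative ill-conditioned test matrix; the paper studies alternative step-length sequences, which are not shown here.

```python
import numpy as np

def exact_steepest_descent(A, b, x0, n_iter=1000):
    """Classical steepest descent for f(x) = 0.5 x'Ax - b'x with the exact
    (Cauchy) step length alpha_k = g'g / g'Ag."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = A @ x - b                    # gradient of the quadratic
        if g @ g < 1e-24:
            break
        alpha = (g @ g) / (g @ A @ g)    # exact minimizer along -g
        x = x - alpha * g
    return x

A = np.diag([1.0, 10.0, 100.0])          # Hessian condition number C = 100
b = np.ones(3)
print(exact_steepest_descent(A, b, np.zeros(3)), np.linalg.solve(A, b))
```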

On efficiency of nonmonotone Armijo-type line searches

Monotonicity and nonmonotonicity play a key role in studying the global convergence and the efficiency of iterative schemes in nonlinear optimization, where globally convergent and computationally efficient schemes are sought. This paper addresses some features of descent schemes and the motivation behind nonmonotone strategies, and investigates the efficiency of an Armijo-type … Read more
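
As an illustration of the nonmonotone idea, the sketch below replaces the usual Armijo reference value $f(x_k)$ with the maximum of the last few objective values (the Grippo-Lampariello-Lucidi rule). It is a generic example, not the specific Armijo-type rule analyzed in the paper, and the names and test problem are illustrative.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, grad, x0, memory=10, c1=1e-4, backtrack=0.5,
                       max_iter=500, tol=1e-8):
    """Gradient descent with a nonmonotone Armijo-type line search: the
    sufficient-decrease test is taken against the maximum of the last
    `memory` objective values rather than the current one."""
    x = np.asarray(x0, dtype=float)
    recent = deque([f(x)], maxlen=memory)   # recent objective values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        f_ref = max(recent)                 # nonmonotone reference value
        alpha = 1.0
        while f(x + alpha * d) > f_ref + c1 * alpha * (g @ d):
            alpha *= backtrack
        x = x + alpha * d
        recent.append(f(x))
    return x

# Convex quadratic test: f(x) = 0.5 x'Ax - b'x.
A = np.array([[4.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(nonmonotone_armijo(lambda x: 0.5 * x @ A @ x - b @ x,
                         lambda x: A @ x - b, np.zeros(2)))
```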

A proximal point algorithm for DC functions on Hadamard manifolds

An extension of a proximal point algorithm for the difference of two convex functions is presented in the context of Riemannian manifolds of nonpositive sectional curvature. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the function (not necessarily convex) under consideration, even if … Read more
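
For intuition, here is a flat (Euclidean) sketch of a proximal point / DCA-style iteration for a DC function $f = g - h$ with $g$, $h$ convex: each step linearizes $h$ at the current point and solves a convex proximal subproblem. The manifold machinery of the paper is omitted, the subproblem is solved numerically for simplicity, and the one-dimensional example $f(x) = x^4 - x^2$ is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def prox_dca(g, grad_h, x0, lam=1.0, n_iter=50):
    """Proximal point iteration for a DC function f = g - h (g, h convex) in
    the Euclidean setting: linearize h at the current point and solve the
    resulting convex proximal subproblem numerically."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        gh = grad_h(x)
        sub = lambda z, xk=x, gh=gh: (g(z) - gh @ (z - xk)
                                      + (0.5 / lam) * np.sum((z - xk) ** 2))
        x = minimize(sub, x, method='BFGS').x      # convex subproblem
    return x

# DC example: f(x) = x^4 - x^2, with g(x) = x^4 and h(x) = x^2 both convex;
# the nonzero critical points are x = ±1/sqrt(2) ≈ ±0.7071.
g      = lambda z: np.sum(z ** 4)
grad_h = lambda z: 2.0 * z
print(prox_dca(g, grad_h, np.array([2.0])))        # ≈ [0.7071]
```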

An improved algorithm for L2-Lp minimization problem

In this paper we consider a class of non-Lipschitz and nonconvex minimization problems which generalize the L2-Lp minimization problem. We propose an iterative algorithm that determines the next iterate based on the local convexity, concavity, and sparsity at its current position. We show that our algorithm finds an $\epsilon$-KKT point within $O(\log(1/\epsilon))$ iterations. The same result is also … Read more
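
The authors' algorithm itself is not reproduced in this summary. As background, a common baseline for the smoothed L2-Lp model $\min_x \|Ax-b\|^2 + \lambda \sum_i (x_i^2+\varepsilon)^{p/2}$ is iteratively reweighted least squares, sketched below; the data, parameter choices, and the IRLS scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def irls_l2_lp(A, b, lam=0.1, p=0.5, eps=1e-4, n_iter=100):
    """Iteratively reweighted least squares for the smoothed L2-Lp model
        min_x ||Ax - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2),   0 < p < 1.
    Generic majorize-minimize baseline; each iteration solves a weighted
    ridge-type linear system."""
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # least-squares start
    for _ in range(n_iter):
        w = p * (x ** 2 + eps) ** (p / 2 - 1)      # majorization weights
        x = np.linalg.solve(2 * AtA + lam * np.diag(w), 2 * Atb)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:4] = [2.0, -1.5, 1.0, 0.8]
b = A @ x_true
print(np.round(irls_l2_lp(A, b), 3)[:8])           # leading entries of the iterate
```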

A regularized limited-memory BFGS method for unconstrained minimization problems

The limited-memory BFGS (L-BFGS) algorithm is a popular method for solving large-scale unconstrained minimization problems. Since L-BFGS conducts a line search based on the Wolfe conditions, it may require many function evaluations for ill-posed problems. To overcome this difficulty, we propose a method that combines L-BFGS with the regularized Newton method. The computational cost for a … Read more
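
For reference, the core of standard L-BFGS is the two-loop recursion that applies the limited-memory inverse-Hessian approximation to the current gradient. A minimal sketch follows; it does not include the regularization proposed in the paper, and the tiny usage example is illustrative.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H_k g, where H_k is the
    limited-memory inverse-Hessian approximation built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = grad_{i+1} - grad_i."""
    q = g.copy()
    history = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        history.append((a, rho, s, y))
    if s_list:                                   # initial scaling H_0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(history):       # second loop, oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# Minimal usage: one stored pair on the quadratic f(x) = 0.5 x'Ax.
A = np.diag([1.0, 10.0])
x0, x1 = np.array([1.0, 1.0]), np.array([0.9, 0.5])
s, y = x1 - x0, A @ x1 - A @ x0
print(lbfgs_direction(A @ x1, [s], [y]))         # quasi-Newton search direction
```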

Robust Block Coordinate Descent

In this paper we present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that its performance is more robust when applied to highly nonseparable or ill-conditioned problems. We call the method Robust Coordinate Descent (RCD). At … Read more
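
As a baseline, the sketch below shows plain randomized block coordinate descent for a smooth convex function: each iteration updates one randomly chosen block with a gradient step scaled by that block's Lipschitz constant. The curvature information used by RCD is not included, and the block structure and test problem are illustrative.

```python
import numpy as np

def randomized_block_cd(grad_block, lipschitz, blocks, x0, n_iter=2000, seed=0):
    """Randomized block coordinate descent for a smooth convex function:
    pick one block of variables uniformly at random and take a gradient step
    on that block with step size 1 / (block Lipschitz constant)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        i = rng.integers(len(blocks))            # pick a block uniformly
        idx = blocks[i]
        x[idx] -= grad_block(x, idx) / lipschitz[i]
    return x

# Quadratic test: f(x) = 0.5 x'Ax - b'x split into two blocks of variables.
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])
b = np.ones(4)
blocks = [np.array([0, 1]), np.array([2, 3])]
lip = [np.linalg.eigvalsh(A[np.ix_(blk, blk)]).max() for blk in blocks]
grad_block = lambda x, idx: (A @ x - b)[idx]     # block of the full gradient
print(randomized_block_cd(grad_block, lip, blocks, np.zeros(4)),
      np.linalg.solve(A, b))
```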

A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid of a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy … Read more
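
One building block of gradient sampling is the computation of a search direction as the negative of the minimum-norm element of the convex hull of gradients sampled near the current point, which amounts to a small quadratic program. The sketch below solves that QP with a general-purpose solver and uses an illustrative nonsmooth test function; it shows only this GS ingredient, not the BFGS-GS hybrid of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_direction(grad, x, radius=1e-1, n_samples=10, seed=0):
    """Gradient sampling (GS) step: sample gradients at randomly perturbed
    points near x and return the negative of the minimum-norm element of
    their convex hull, obtained by solving a small QP over the simplex."""
    rng = np.random.default_rng(seed)
    pts = x + radius * rng.uniform(-1, 1, size=(n_samples, x.size))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])          # sampled gradients
    m = G.shape[0]
    obj = lambda w: np.sum((G.T @ w) ** 2)                     # ||sum_i w_i g_i||^2
    res = minimize(obj, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}])
    return -(G.T @ res.x)                                      # descent direction

# Nonsmooth convex example: f(x) = |x_1| + 2|x_2|.
grad = lambda z: np.array([np.sign(z[0]) if z[0] else 1.0,
                           2 * np.sign(z[1]) if z[1] else 2.0])
print(gradient_sampling_direction(grad, np.array([0.3, -0.2])))
# ≈ [-1., 2.] here, since all sampled gradients coincide near this point.
```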