Concise Complexity Analyses for Trust-Region Methods

Concise complexity analyses are presented for simple trust region algorithms for solving unconstrained optimization problems. In contrast to a traditional trust region algorithm, the algorithms considered in this paper require certain control over the choice of trust region radius after any successful iteration. The analyses highlight the essential algorithm components required to obtain certain complexity …
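
For context, a generic trust-region iteration accepts a trial step $s_k$ and updates the radius $\Delta_k$ from the ratio of actual to predicted reduction in a local model $m_k$. The scheme below is a standard textbook update with assumed parameters $0 < \eta < 1$ and $0 < \gamma_{\mathrm{dec}} < 1 < \gamma_{\mathrm{inc}}$, shown only as a baseline and not the specific radius control analyzed in the paper:

$$\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(0) - m_k(s_k)}, \qquad \Delta_{k+1} = \begin{cases} \gamma_{\mathrm{inc}}\,\Delta_k & \text{if } \rho_k \ge \eta \ \text{(successful iteration)}, \\ \gamma_{\mathrm{dec}}\,\Delta_k & \text{otherwise}. \end{cases}$$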

Regional Complexity Analysis of Algorithms for Nonconvex Smooth Optimization

A strategy is proposed for characterizing the worst-case performance of algorithms for solving nonconvex smooth optimization problems. Contemporary analyses characterize worst-case performance by providing, under certain assumptions on an objective function, an upper bound on the number of iterations (or function or derivative evaluations) required until a $p$th-order stationarity condition is approximately satisfied. This arguably …
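
As an illustration of the stationarity conditions referred to above, approximate first- and second-order stationarity ($p = 1, 2$) for a twice-differentiable $f$ is commonly formalized, with tolerances $\epsilon_1, \epsilon_2 > 0$, as

$$\|\nabla f(x_k)\| \le \epsilon_1 \qquad \text{and} \qquad \lambda_{\min}\big(\nabla^2 f(x_k)\big) \ge -\epsilon_2.$$

These are the usual benchmark conditions; the paper's regional analysis measures worst-case effort differently.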

A structured quasi-Newton algorithm for optimizing with incomplete Hessian information

We present a structured quasi-Newton algorithm for unconstrained optimization problems in which some second-order derivatives or Hessian terms are unavailable. We provide a formal derivation of the well-known BFGS secant update formula, adapted to approximate only the missing Hessian terms, and we propose a line-search quasi-Newton algorithm based on a modification of the Wolfe conditions that converges to first-order …
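
For reference, the classical (full-Hessian) BFGS secant update that the structured variant builds on is

$$B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{s_k^{\top} y_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),$$

which enforces the secant condition $B_{k+1} s_k = y_k$. The paper's structured update, approximating only the missing Hessian terms, modifies this formula and is not reproduced here.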

An Alternating Minimization Method for Matrix Completion Problem

In this paper, we focus on solving the matrix completion problem arising in applications in information theory, statistics, engineering, and other fields. The matrix completion problem, however, involves nonconvex rank constraints, which make this type of problem difficult to handle. Traditional approaches use a nuclear norm surrogate to replace the rank constraints. The relaxed model …
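
A common low-rank factorization formulation on which alternating minimization operates (an illustrative model, not necessarily the exact one used in the paper) is

$$\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}} \ \tfrac{1}{2}\,\big\| P_{\Omega}\big(U V^{\top} - M\big) \big\|_F^2,$$

where $P_{\Omega}$ keeps only the observed entries indexed by $\Omega$. One then alternates between minimizing over $U$ with $V$ fixed and over $V$ with $U$ fixed, each subproblem being a linear least-squares problem.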

Subsampled Inexact Newton methods for minimizing large sums of convex functions

This paper deals with the minimization of a large sum of convex functions by Inexact Newton (IN) methods employing subsampled Hessian approximations. The Conjugate Gradient method is used to compute the inexact Newton step, and global convergence is enforced by a nonmonotone line search procedure. The aim is to obtain methods with affordable costs and fast …
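
Concretely, for $f(x) = \frac{1}{N}\sum_{i=1}^{N} f_i(x)$, a subsampled Hessian and an inexact Newton step take the following generic form (a sketch with an assumed forcing parameter $\eta_k$, not the paper's precise conditions):

$$\nabla^2 f_{S_k}(x_k) = \frac{1}{|S_k|} \sum_{i \in S_k} \nabla^2 f_i(x_k), \qquad \big\| \nabla^2 f_{S_k}(x_k)\, s_k + \nabla f(x_k) \big\| \le \eta_k\, \|\nabla f(x_k)\|,$$

where $S_k \subseteq \{1,\dots,N\}$ is the sample and the step $s_k$ satisfying the residual condition is computed by the Conjugate Gradient method.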

A Stochastic Trust Region Algorithm Based on Careful Step Normalization

An algorithm is proposed for solving stochastic and finite sum minimization problems. Based on a trust region methodology, the algorithm employs normalized steps, at least as long as the norms of the stochastic gradient estimates are within a specified interval. The complete algorithm—which dynamically chooses whether or not to employ normalized steps—is proved to have …
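
The step normalization in question can be sketched schematically: with a stochastic gradient estimate $g_k$, a step-size (trust-region radius) $\alpha_k$, and an assumed interval $[\ell, u]$,

$$x_{k+1} = x_k - \alpha_k\, \frac{g_k}{\|g_k\|} \quad \text{when } \|g_k\| \in [\ell, u],$$

while an unnormalized stochastic-gradient-type step is taken otherwise. This is a simplified illustration of the dynamic choice, not the paper's exact rule.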

On global minimizers of quadratic functions with cubic regularization

In this paper, we analyze some theoretical properties of the problem of minimizing a quadratic function with a cubic regularization term, a problem arising in many methods for unconstrained and constrained optimization proposed in recent years. First, we show that, given any stationary point that is not a global solution, it is possible …
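
The problem under study has the generic form (notation assumed here for illustration)

$$\min_{s \in \mathbb{R}^n} \ q(s) = c^{\top} s + \tfrac{1}{2}\, s^{\top} Q s + \tfrac{\sigma}{3}\, \|s\|^3, \qquad \sigma > 0,$$

with $Q$ symmetric; this cubic-regularized quadratic model is the subproblem solved at each iteration of cubic regularization (ARC-type) methods.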

Block Coordinate Descent Almost Surely Converges to a Stationary Point Satisfying the Second-order Necessary Condition

Given a non-convex, twice continuously differentiable cost function with Lipschitz continuous gradient, we prove that the block coordinate gradient descent, block mirror descent, and proximal block coordinate descent methods all converge to stationary points satisfying the second-order necessary condition, almost surely with random initialization. All our results rely on the center-stable manifold theorem …
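
As a point of reference, the block coordinate gradient descent update mentioned above can be written, for a partition of $x$ into blocks $x^{(1)},\dots,x^{(m)}$ with assumed block-wise Lipschitz constants $L_i$ (an illustrative cyclic form, not necessarily the paper's exact variant), as

$$x^{(i)}_{k+1} = x^{(i)}_k - \frac{1}{L_i}\, \nabla_i f\big(x^{(1)}_{k+1}, \dots, x^{(i-1)}_{k+1}, x^{(i)}_k, \dots, x^{(m)}_k\big), \qquad i = 1, \dots, m,$$

where the second-order necessary condition at a stationary point $x^*$ is $\nabla^2 f(x^*) \succeq 0$.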

Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers

Many optimization algorithms converge to stationary points. When the underlying problem is nonconvex, they may get trapped at local minimizers and occasionally stagnate near saddle points. We propose the Run-and-Inspect Method, which adds an “inspect” phase to existing algorithms to help them escape non-global stationary points. The inspection samples a set of points in a …
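
The R-local minimizers referred to in the title are, roughly, points that are globally optimal within a ball of radius $R$ (a paraphrase of the standard notion, not quoted from the paper):

$$f(\bar{x}) \le f(x) \quad \text{for all } x \ \text{with} \ \|x - \bar{x}\| \le R.$$

The inspect phase samples points around the current iterate in search of a lower objective value before the point is accepted as a candidate solution.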

On the use of third-order models with fourth-order regularization for unconstrained optimization

In a recent paper, it was shown that, for the smooth unconstrained optimization problem, worst-case evaluation complexity $O(\epsilon^{-(p+1)/p})$ may be obtained by means of algorithms that employ sequential approximate minimizations of $p$th-order Taylor models plus $(p+1)$th-order regularization terms. The aforementioned result, which assumes Lipschitz continuity of the $p$th partial derivatives, generalizes …
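
The regularized model referred to above, for the case $p = 3$ considered in this work, is the third-order Taylor expansion of $f$ at $x_k$ plus a fourth-order regularization term,

$$m_k(s) = f(x_k) + \sum_{j=1}^{3} \frac{1}{j!}\, \nabla^j f(x_k)[s]^j + \frac{\sigma_k}{4}\, \|s\|^4,$$

where $\nabla^j f(x_k)[s]^j$ denotes the $j$th derivative tensor applied $j$ times to $s$ and $\sigma_k > 0$ is the regularization parameter (the precise form of the regularization coefficient may differ in the paper).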