Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

The adaptive cubic regularization algorithms described in Cartis, Gould & Toint (2009, 2010) for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy … Read more
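
To fix ideas about the method being analysed, here is a bare-bones sketch of an adaptive cubic regularization loop: build the cubic model, take an approximate model minimizer as the step, and adjust the regularization weight by the usual actual-versus-predicted decrease test. It is a toy with assumed constants (the 0.1 acceptance threshold, the halving/doubling of sigma), not the authors' algorithm; in particular the subproblem is handed to a generic minimizer rather than the Lanczos-based solver of the papers.

```python
import numpy as np
from scipy.optimize import minimize

def arc_sketch(f, grad, hess, x0, sigma=1.0, eps=1e-6, max_iter=200):
    """Bare-bones adaptive cubic regularization loop (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) <= eps:
            break
        # Cubic model m(s) = f(x) + g's + s'Bs/2 + (sigma/3)||s||^3.
        def m(s):
            return f(x) + g @ s + 0.5 * s @ B @ s \
                   + (sigma / 3.0) * np.linalg.norm(s) ** 3
        s = minimize(m, -g / (np.linalg.norm(B) + sigma)).x  # approximate subproblem solve
        rho = (f(x) - f(x + s)) / max(f(x) - m(s), 1e-16)    # actual vs predicted decrease
        if rho > 0.1:                  # successful: accept step, relax regularization
            x, sigma = x + s, max(sigma / 2.0, 1e-8)
        else:                          # unsuccessful: reject step, increase regularization
            sigma *= 2.0
    return x
```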

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The (optimal) worst-case function/gradient-evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente … Read more
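
The finite-difference variants replace exact gradients by function-value differences, each gradient estimate costing n extra function evaluations; these extra evaluations are what the extended complexity bounds account for. A minimal forward-difference estimator is sketched below (the step size h is an arbitrary illustrative choice; the analysis ties it to the target accuracy):

```python
import numpy as np

def fd_gradient(f, x, h=1e-8):
    """Forward-difference gradient estimate: costs n extra f-evaluations per call."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```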

On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization

It is shown that the steepest descent and Newton’s methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(epsilon^{-2}) to drive the norm of the gradient below epsilon. This shows that the upper bound of O(epsilon^{-2}) evaluations known for the steepest descent … Read more
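
The termination test in question is driving the gradient norm below epsilon. A minimal steepest-descent loop with a backtracking (Armijo) line search makes the counted quantity concrete: every f-evaluation inside the backtracking loop and every gradient call contributes to the O(epsilon^{-2}) worst-case tally (the constants below are illustrative, not from the paper):

```python
import numpy as np

def steepest_descent(f, grad, x0, eps=1e-4, max_iter=100_000):
    """Steepest descent, run until ||grad f(x)|| <= eps (illustrative constants)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # the epsilon-accuracy test from the abstract
            break
        alpha = 1.0
        while f(x - alpha * g) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5               # backtrack until Armijo decrease holds
        x = x - alpha * g
    return x
```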

Compressed Sensing: How sharp is the RIP?

Consider a measurement matrix A of size n×N, with n < N, y a signal in R^N, and b = Ay the observed measurement of the vector y. From knowledge of (b,A), compressed sensing seeks to recover the k-sparse x, k < n, which minimizes ||b-Ax||. Using various methods of analysis — convex polytopes, geometric … Read more
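
For concreteness, here is a toy instance of this setup: a Gaussian A with the common 1/sqrt(n) normalization, a random k-sparse y, and b = Ay. The dimensions are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, k = 50, 200, 5                           # n < N measurements, k-sparse signal
A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian matrix, a standard RIP example
y = np.zeros(N)
y[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
b = A @ y                                      # observed measurements
```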

Phase Transitions for Greedy Sparse Approximation Algorithms

A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven, using the ubiquitous Restricted Isometry Property (RIP) [9], to have optimal-order uniform recovery guarantees. However, it is … Read more
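
Orthogonal Matching Pursuit is one canonical greedy scheme of the kind analysed through the RIP; the abstract does not single out a specific algorithm, so this sketch is only a representative:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with b ~ Ax."""
    support, r = [], b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))  # column most correlated with residual
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)  # refit on support
        r = b - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x
```

On the toy instance sketched earlier, omp(A, b, k) typically recovers y exactly when k is small relative to n.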

An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in … Read more
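
In the constrained setting the trial step must remain in the convex domain. The simplest feasibility-preserving device, shown here for a box, is Euclidean projection; the paper treats general convex constraints and a projection-based first-order criticality measure, so this clip is only a stand-in:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]: a stand-in for the
    general convex feasible domain treated in the paper."""
    return np.clip(x, lo, hi)

# One feasibility-preserving device: project a trial step back onto the set,
# e.g. x_trial = project_box(x + s, lo, hi).
```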

Adaptive cubic overestimation methods for unconstrained optimization

An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization is proposed, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, Univ. of Cambridge), an algorithm by Nesterov & Polyak (Math. Programming 108(1), 2006, pp 177-205) and a proposal by Weiser, Deuflhard & Erdmann (Optim. Methods Softw. 22(3), 2007, … Read more
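
The cubic model that gives the method its name, in the standard notation of this literature (g_k the gradient at x_k, B_k a symmetric Hessian approximation, sigma_k > 0 the adaptive weight), is

    m_k(s) = f(x_k) + s^T g_k + (1/2) s^T B_k s + (sigma_k/3) ||s||^3,

and each iteration approximately minimizes m_k over s, adjusting sigma_k according to how well the model predicted the actual decrease in f.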

Finding a point in the relative interior of a polyhedron

A new initialization or 'Phase I' strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points — an all-pervasive assumption in interior point method theory that implies the existence of the central path … Read more
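
A much cruder 'Phase I' device than the one proposed helps fix ideas: for an inequality-described polyhedron {x : Ax <= b}, maximize the uniform slack t in Ax + t*1 <= b; a positive optimum certifies a strictly interior point. This only finds an interior point of a full-dimensional set and says nothing about the central path, which is the paper's actual target, so it is purely illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def strictly_interior_point(A, b):
    """Toy Phase I: maximize t subject to Ax + t*1 <= b (illustrative only)."""
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # minimize -t, i.e. maximize t
    A_ub = np.hstack([A, np.ones((m, 1))])
    bounds = [(None, None)] * n + [(None, 1.0)]   # free x; cap t to keep the LP bounded
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=bounds)
    return res.x[:n], res.x[n]                    # candidate point and its slack t
```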

On the Convergence of a Primal-Dual Second-Order Corrector Interior Point Algorithm for Linear Programming

The Primal-Dual Second-Order Corrector (PDSOC) algorithm that we investigate computes, on each iteration, a corrector direction in addition to the direction of the standard primal-dual path-following interior point method (Kojima et al., 1989) for Linear Programming (LP), in an attempt to improve performance. The corrector is multiplied by the square of the stepsize in … Read more
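
The distinctive feature described above, the corrector entering with the square of the stepsize, reduces to a one-line update. The notation here is assumed for illustration only: dx and ds are the predictor directions, dx_c and ds_c the correctors.

```python
def pdsoc_update(x, s, dx, ds, dx_c, ds_c, alpha):
    """Schematic PDSOC step: predictor scaled by alpha, corrector by alpha**2."""
    return x + alpha * dx + alpha ** 2 * dx_c, s + alpha * ds + alpha ** 2 * ds_c
```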