A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems

In this paper, we describe and establish the iteration-complexity of two accelerated composite gradient (ACG) variants for solving a smooth nonconvex composite optimization problem whose objective function is the sum of a nonconvex differentiable function f with a Lipschitz continuous gradient and a simple nonsmooth closed convex function h. When f is convex, the first ACG … Read more
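For orientation, here is a minimal sketch of a generic FISTA-type accelerated proximal gradient iteration for min f(x) + h(x). This is the textbook scheme, not the paper's specific ACG variants; the stepsize 1/L, the momentum rule, and the l1 example below are illustrative assumptions.

```python
import numpy as np

def fista(grad_f, prox_h, L, x0, iters=100):
    """Generic FISTA-style accelerated proximal gradient sketch.

    grad_f : gradient of the smooth part f (assumed L-Lipschitz)
    prox_h : prox of the simple convex part h,
             prox_h(v, t) = argmin_u h(u) + ||u - v||^2 / (2 t)
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_h(y - grad_f(y) / L, 1.0 / L)      # proximal gradient step at y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: f(x) = 0.5 ||Ax - b||^2, h = lam * ||x||_1 (soft-thresholding prox)
A = np.random.randn(20, 50); b = np.random.randn(20); lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
L = np.linalg.norm(A, 2) ** 2                           # Lipschitz constant of grad_f
x_star = fista(grad_f, prox_h, L, np.zeros(50))
```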

General Convergence Rates Follow From Specialized Rates Assuming Growth Bounds

In the analysis of first-order methods, assuming a quadratic growth bound (a generalization of strong convexity) often enables a much stronger convergence analysis. Hence the analysis is done twice: once for the general case and once for the growth-bounded case. We give a meta-theorem for deriving general convergence rates from those assuming … Read more
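For reference, the quadratic growth bound in question is commonly stated as follows; the constant μ and the solution set X* are generic notation, not quantities taken from the paper.

```latex
% Quadratic growth bound: a generalization of strong convexity.
f(x) - f^* \;\ge\; \frac{\mu}{2}\,\operatorname{dist}(x, X^*)^2 \qquad \text{for all } x,
% where X^* is the solution set and f^* the optimal value. Strong convexity
% implies this, but not conversely: f(x) = \max(|x|-1, 0)^2 satisfies it with
% \mu = 2 and X^* = [-1, 1] while not being strongly convex.
```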

Line search and convergence in bound-constrained optimization

The first part of this paper discusses convergence properties of a new line search method for the optimization of continuously differentiable functions with Lipschitz continuous gradient. Apart from the gradient at the current best point, the line search uses function values only. After deriving properties of the new, in general curved, line search, global convergence … Read more
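For context, here is a minimal Armijo-type backtracking sketch in the same spirit: beyond the gradient at the current point (used once, in the sufficient-decrease test), it evaluates function values only. The paper's curved line search is more elaborate, so treat the parameters below as illustrative assumptions.

```python
import numpy as np

def backtracking_line_search(f, x, g, d, alpha0=1.0, c=1e-4, shrink=0.5, max_tries=50):
    """Armijo backtracking: uses only function values along the ray x + alpha*d,
    plus the gradient g = grad f(x) at the current point for the decrease test.
    d must be a descent direction, e.g. d = -g."""
    fx = f(x)
    slope = c * np.dot(g, d)          # required decrease per unit step (negative)
    alpha = alpha0
    for _ in range(max_tries):
        if f(x + alpha * d) <= fx + alpha * slope:
            return alpha              # sufficient decrease achieved
        alpha *= shrink               # otherwise shrink the step
    return alpha                      # fall back to the last (tiny) step
```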

The condition number of a function relative to a set

The condition number of a differentiable convex function, namely the ratio of its smoothness to strong convexity constants, is closely tied to fundamental properties of the function. In particular, the condition number of a convex quadratic function is the square of the aspect ratio of a canonical ellipsoid associated with the function. Furthermore, the condition … Read more
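In standard notation, the ratio the abstract refers to, together with the classical quadratic example (generic symbols, not quoted from the paper):

```latex
\kappa(f) \;=\; \frac{L}{\mu}, \qquad \mu I \preceq \nabla^2 f(x) \preceq L I .
% For a convex quadratic f(x) = \tfrac12 x^\top A x with A \succ 0, the
% ellipsoid \{x : x^\top A x \le 1\} has semi-axes 1/\sqrt{\lambda_i(A)}, so
% \kappa(f) = \lambda_{\max}(A)/\lambda_{\min}(A), the square of its aspect ratio.
```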

A unified framework for Bregman proximal methods: subgradient, gradient, and accelerated gradient schemes

We provide a unified framework for analyzing the convergence of Bregman proximal first-order algorithms for convex minimization. Our framework hinges on properties of the convex conjugate and gives novel proofs of the convergence rates of the Bregman proximal subgradient, Bregman proximal gradient, and a new accelerated Bregman proximal gradient algorithm under fairly general and mild … Read more
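The basic Bregman proximal gradient update, in common notation with kernel φ and Bregman distance D_φ (the stepsize t_k and all symbols here are generic assumptions, not the paper's):

```latex
D_\varphi(u, x) \;=\; \varphi(u) - \varphi(x) - \langle \nabla\varphi(x),\, u - x \rangle,
\qquad
x_{k+1} \;=\; \operatorname*{argmin}_{u}\,
\Big\{ \langle \nabla f(x_k),\, u \rangle + h(u) + \tfrac{1}{t_k}\, D_\varphi(u, x_k) \Big\}.
% With \varphi = \tfrac12 \|\cdot\|^2 this is the usual (Euclidean) proximal
% gradient step; replacing \nabla f(x_k) by a subgradient of f gives the
% Bregman proximal subgradient scheme.
```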

An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems

This paper proposes an efficient adaptive variant of the quadratic penalty accelerated inexact proximal point (QP-AIPP) method introduced earlier by the authors. Both the QP-AIPP method and its variant solve linearly constrained nonconvex composite optimization problems using a quadratic penalty approach in which the generated penalized subproblems are solved by a variant of the underlying AIPP … Read more
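The quadratic penalty reformulation such methods use, in generic form (the penalty parameter c and the constraint Ax = b are standard notation, assumed rather than quoted from the paper):

```latex
\min_x\; f(x) + h(x) \;\;\text{s.t.}\;\; Ax = b
\quad\longrightarrow\quad
\min_x\; f(x) + h(x) + \frac{c}{2}\,\|Ax - b\|^2 .
% Each penalized subproblem is itself a smooth nonconvex composite problem,
% so it can be handed to an AIPP-type inner solver while c is driven upward.
```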

A Doubly Accelerated Inexact Proximal Point Method for Nonconvex Composite Optimization Problems

This paper describes and establishes the iteration-complexity of a doubly accelerated inexact proximal point (D-AIPP) method for solving the nonconvex composite minimization problem whose objective function is of the form f+h, where f is a (possibly nonconvex) differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with bounded domain. D-AIPP … Read more
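The proximal point subproblem underlying such schemes, in generic notation (the stepsize λ is illustrative; the doubly accelerated details of D-AIPP are not shown):

```latex
x_{k+1} \;\approx\; \operatorname*{argmin}_{u}\,
\Big\{ f(u) + h(u) + \frac{1}{2\lambda}\, \|u - x_k\|^2 \Big\} .
% If f is m-weakly convex (e.g. \nabla f is m-Lipschitz), choosing \lambda < 1/m
% makes each subproblem strongly convex, so it can be solved inexactly by an
% accelerated composite gradient method.
```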

On the complexity of solving feasibility problems

We consider feasibility problems defined by a set of constraints that exhibit gradient Hölder continuity, plus additional constraints defined by the affordability of obtaining approximate minimizers of quadratic models over the associated feasible set. Each iteration of the method introduced in this paper involves the approximate minimization of a two-norm regularized quadratic subject to the … Read more
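A generic form of the regularized quadratic subproblem described above (g, B, σ, and Ω stand for a model gradient, a model Hessian, a regularization weight, and the associated feasible set; all notation here is assumed rather than taken from the paper):

```latex
\min_{s}\; g^\top s + \tfrac12\, s^\top B s + \sigma\,\|s\|^2
\quad \text{s.t.} \quad x + s \in \Omega ,
% solved approximately at each iteration; \|\cdot\| is the Euclidean (two-)norm.
```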

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations

Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms, Nesterov’s accelerated gradient method for strongly convex functions (NAG-SC) and Polyak’s heavy-ball method, we study an alternative limiting process that yields high-resolution ODEs. We … Read more
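The distinction the abstract draws can be summarized in the notation of the high-resolution ODE literature (s the stepsize, μ the strong convexity parameter; this is a hedged summary of that literature, not the paper's exact statement):

```latex
% Heavy-ball:
\ddot{X} + 2\sqrt{\mu}\,\dot{X} + (1 + \sqrt{\mu s})\,\nabla f(X) = 0 .
% NAG-SC, with the additional gradient-correction term \sqrt{s}\,\nabla^2 f(X)\dot{X}:
\ddot{X} + 2\sqrt{\mu}\,\dot{X} + \sqrt{s}\,\nabla^2 f(X)\,\dot{X}
  + (1 + \sqrt{\mu s})\,\nabla f(X) = 0 .
% The O(\sqrt{s}) correction term is what the low-resolution (s -> 0) limit
% loses, which is why that limit cannot tell the two algorithms apart.
```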

An Inexact First-order Method for Constrained Nonlinear Optimization

The primary focus of this paper is on designing inexact first-order methods for solving large-scale constrained nonlinear optimization problems. By controlling the inexactness of the subproblem solution, we can significantly reduce the computational cost needed for each iteration. A penalty parameter updating strategy during the subproblem solve enables the algorithm to automatically detect infeasibility. Global … Read more
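A generic exact-penalty subproblem of the kind such methods solve inexactly (penalty parameter ρ, constraints c(x) = 0 with Jacobian J_k, model Hessian H_k; this form is standard in the literature and assumed rather than quoted from the paper):

```latex
\min_{d}\;\; \rho \big( \nabla f(x_k)^\top d + \tfrac12\, d^\top H_k d \big)
\;+\; \big\| c(x_k) + J_k d \big\|_1 .
% Decreasing \rho when even the linearized constraint violation cannot be
% reduced is one standard mechanism for detecting infeasibility.
```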