A Doubly Accelerated Inexact Proximal Point Method for Nonconvex Composite Optimization Problems

This paper describes and establishes the iteration-complexity of a doubly accelerated inexact proximal point (D-AIPP) method for solving the nonconvex composite minimization problem whose objective function is of the form f + h, where f is a (possibly nonconvex) differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with bounded domain. D-AIPP …
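
The abstract is cut off above, but the outer structure such methods share is easy to sketch. Below is a minimal, illustrative Python sketch of a generic accelerated inexact proximal point loop for min f(x) + h(x), not the D-AIPP method itself: each outer step inexactly solves a strongly convex proximal subproblem with FISTA. The handles `grad_f` and `prox_h` (taking a point and a stepsize), the weak-convexity modulus `m` of f, and the gradient Lipschitz constant `M` are assumed inputs.

```python
import numpy as np

def inexact_prox_point(grad_f, prox_h, x0, m, M, outer_iters=100,
                       inner_iters=200, tol=1e-6):
    """Generic inexact proximal point loop for min f(x) + h(x), with f
    m-weakly convex and grad f M-Lipschitz (illustrative, not D-AIPP)."""
    lam = 1.0 / (2.0 * m)      # 1/lam > m makes each subproblem strongly convex
    L = M + 1.0 / lam          # Lipschitz constant of the subproblem's smooth part
    xc = x0.copy()
    for _ in range(outer_iters):
        # FISTA on the subproblem: min_x f(x) + h(x) + ||x - xc||^2 / (2*lam)
        x = y = xc.copy()
        t = 1.0
        for _ in range(inner_iters):
            g = grad_f(y) + (y - xc) / lam
            x_new = prox_h(y - g / L, 1.0 / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        if np.linalg.norm(x - xc) / lam < tol:   # prox residual as stationarity proxy
            break
        xc = x
    return xc
```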

On the complexity of solving feasibility problems

We consider feasibility problems defined by a set of constraints that exhibit gradient Hölder continuity, plus additional constraints defined by the affordability of obtaining approximate minimizers of quadratic models onto the associated feasible set. Each iteration of the method introduced in this paper involves the approximate minimization of a two-norm regularized quadratic subject to the …
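
As a rough illustration of the kind of subproblem described, here is a hedged Python sketch of one inner step: projected gradient on a two-norm regularized quadratic model over a convex feasible set. The model m(s) = g·s + ½ sᵀHs + σ‖s‖², the `project` handle, and all constants are illustrative stand-ins, not the paper's actual subproblem solver.

```python
import numpy as np

def regularized_model_step(g, H, sigma, project, steps=50):
    """Approximately minimize m(s) = g.s + 0.5*s'Hs + sigma*||s||^2 over a
    convex feasible set via projected gradient (H assumed symmetric)."""
    s = np.zeros_like(g)
    L = np.linalg.norm(H, 2) + 2.0 * sigma     # curvature bound -> safe stepsize 1/L
    for _ in range(steps):
        grad_m = g + H @ s + 2.0 * sigma * s
        s = project(s - grad_m / L)            # project back onto the feasible set
    return s
```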

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations

Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms—Nesterov’s accelerated gradient method for strongly convex functions (NAG-SC) and Polyak’s heavy-ball method—we study an alternative limiting process that yields high-resolution ODEs. We …
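
For context, the ODEs in question take roughly the following forms in this line of work, with step size $s$ and strong-convexity modulus $\mu$; these are quoted from memory, so treat them as a sketch. Both methods share the low-resolution ODE, while the high-resolution ODEs differ by a gradient-correction term:

\[
\ddot{X} + 2\sqrt{\mu}\,\dot{X} + \nabla f(X) = 0 \qquad \text{(low-resolution, both methods)}
\]
\[
\ddot{X} + 2\sqrt{\mu}\,\dot{X} + (1+\sqrt{\mu s})\,\nabla f(X) = 0 \qquad \text{(heavy-ball, high-resolution)}
\]
\[
\ddot{X} + 2\sqrt{\mu}\,\dot{X} + \sqrt{s}\,\nabla^{2} f(X)\,\dot{X} + (1+\sqrt{\mu s})\,\nabla f(X) = 0 \qquad \text{(NAG-SC, high-resolution)}
\]

The term $\sqrt{s}\,\nabla^{2} f(X)\,\dot{X}$ is what distinguishes NAG-SC from heavy-ball, and it vanishes in the $s \to 0$ limit.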

An Inexact First-order Method for Constrained Nonlinear Optimization

The primary focus of this paper is on designing inexact first-order methods for solving large-scale constrained nonlinear optimization problems. By controlling the inexactness of the subproblem solution, we can significantly reduce the computational cost needed for each iteration. A penalty parameter updating strategy during the subproblem solve enables the algorithm to automatically detect infeasibility. Global …
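
The abstract does not spell out the update rule, but a heavily simplified, hypothetical version of "increase the penalty when the subproblem cannot reduce infeasibility enough, and flag infeasibility when the penalty saturates" might look like the sketch below; the rule, interface, and constants are all illustrative assumptions, not the paper's strategy.

```python
def penalty_update(rho, model_infeas_reduction, infeas, rho_max=1e8,
                   growth=10.0, theta=0.5):
    """Hypothetical exact-penalty update: grow rho when the subproblem's
    (linearized) infeasibility reduction falls short; treat a saturated
    penalty with stagnant infeasibility as evidence of infeasibility."""
    if model_infeas_reduction < theta * infeas and rho < rho_max:
        rho = min(growth * rho, rho_max)
    declared_infeasible = (rho >= rho_max and model_infeas_reduction <= 1e-12)
    return rho, declared_infeasible
```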

The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems

In this work, we show that for linearly constrained optimization problems the primal-dual hybrid gradient algorithm, analyzed by Chambolle and Pock [3], can be written as an entirely primal algorithm. This allows us to prove convergence of the iterates even in the degenerate cases when the linear system is inconsistent or when the strong duality …
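
For reference, here is a hedged Python sketch of the PDHG recursion specialized to min_x f(x) subject to Ax = b. With g the indicator of {b}, the dual prox reduces to the affine shift y ↦ y − σb, which gives the explicit dual update below; the paper's primal-only rewriting of this recursion is not reproduced here. `prox_f(v, t)` is an assumed handle for the prox of f.

```python
import numpy as np

def pdhg_linear(prox_f, A, b, x0, tau, sigma, iters=1000):
    """PDHG for min f(x) s.t. Ax = b; requires tau*sigma*||A||^2 <= 1."""
    x = x0.copy()
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        x_new = prox_f(x - tau * (A.T @ y), tau)       # primal prox step
        y = y + sigma * (A @ (2.0 * x_new - x) - b)    # extrapolated dual step
        x = x_new
    return x, y
```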

Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed using two previous iterates as an approximation of the local Lipschitz constant without running a linesearch. Thus, each iteration of the method requires only one evaluation of a monotone operator $F$ and of the proximal mapping of $g$. …
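
A minimal Python sketch of the fixed-stepsize variant of this scheme (often called GRAAL) is below; the adaptive stepsize computed from two previous iterates, which is the abstract's main point, is replaced here by the constant choice λ = φ/(2L) under an assumed global Lipschitz constant L for $F$, with `prox_g(v, t)` an assumed prox handle.

```python
import numpy as np

def graal_fixed(F, prox_g, x0, L, iters=1000):
    """Golden ratio algorithm (fixed-step sketch) for the VI: find x with
    <F(x), y - x> + g(y) - g(x) >= 0 for all y."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0      # golden ratio
    lam = phi / (2.0 * L)                 # fixed stepsize (adaptive in the paper)
    x, x_bar = x0.copy(), x0.copy()
    for _ in range(iters):
        x_bar = ((phi - 1.0) * x + x_bar) / phi   # convex combination anchor
        x = prox_g(x_bar - lam * F(x), lam)       # single F evaluation per iteration
    return x
```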

A Simple Nearly-Optimal Restart Scheme For Speeding-Up First-Order Methods

We present a simple scheme for restarting first-order methods for convex optimization problems. Restarts are made based only on achieving specified decreases in objective values, the specified amounts being the same for all optimization problems. Unlike existing restart schemes, the scheme makes no attempt to learn parameter values characterizing the structure of an optimization problem, …
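
Schematically, a restart-on-objective-decrease wrapper looks like the sketch below. This only loosely mirrors the scheme in the abstract, which achieves near-optimality by combining several decrease amounts; the `make_fom` interface, returning a fresh step function with clean momentum state, is an assumption for illustration.

```python
def restart_on_decrease(make_fom, f, x0, delta, max_iters=10000):
    """Restart a momentum-based first-order method each time the objective
    has dropped by a fixed amount delta since the last restart (schematic)."""
    x = x0
    step = make_fom(x0)        # stateful one-iteration step of the base method
    anchor = f(x0)
    for _ in range(max_iters):
        x = step(x)
        if anchor - f(x) >= delta:   # specified decrease achieved
            step = make_fom(x)       # restart: wipe momentum/history
            anchor = f(x)
    return x
```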

Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs

This paper analyzes the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the objective function is of the form f + h where f is a differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with a bounded domain. …
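
The outer structure is a standard quadratic penalty loop; a hedged Python sketch is below, with `solve_penalized(c, x_init)` standing in for the accelerated inexact proximal point subsolver applied to the penalized composite problem. Parameter choices are illustrative.

```python
import numpy as np

def quadratic_penalty(solve_penalized, A, b, x0, c0=1.0, growth=10.0,
                      feas_tol=1e-6, rounds=20):
    """Quadratic penalty loop for min f(x) + h(x) s.t. Ax = b: each round
    approximately solves min f(x) + h(x) + (c/2)||Ax - b||^2, then grows c
    until the feasibility residual is small (schematic)."""
    x, c = x0, c0
    for _ in range(rounds):
        x = solve_penalized(c, x)              # warm-started inner solve
        if np.linalg.norm(A @ x - b) <= feas_tol:
            break
        c *= growth
    return x
```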

On the Convergence Rate of the Halpern-Iteration

In this work, we give a tight estimate of the rate of convergence for the Halpern-Iteration for approximating a fixed point of a nonexpansive mapping in a Hilbert space. Specifically, we prove that the norm of the residuals is upper bounded by the distance of the initial iterate to the closest fixed point divided by …
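
For concreteness, the iteration being analyzed is the classical anchored scheme below (a minimal Python sketch; the anchoring weights used here are one standard schedule, and `T` is an assumed handle for the nonexpansive map).

```python
import numpy as np

def halpern(T, x0, iters=1000):
    """Halpern iteration x_{k+1} = lam*x0 + (1 - lam)*T(x_k) for a
    nonexpansive T; the residual ||x_k - T(x_k)|| is the quantity whose
    convergence rate the paper bounds."""
    x = x0.copy()
    for k in range(iters):
        lam = 1.0 / (k + 2)                  # anchoring weight, one common schedule
        x = lam * x0 + (1.0 - lam) * T(x)    # pull back toward the anchor x0
    return x
```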

A Level-set Method For Convex Optimization with a Feasible Solution Path

Large-scale constrained convex optimization problems arise in several application domains. First-order methods are good candidates to tackle such problems due to their low iteration complexity and memory requirement. The level-set framework extends the applicability of first-order methods to tackle problems with complicated convex objectives and constraint sets. Current methods based on this framework either rely …
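
As a rough sketch of the framework being extended: the level-set approach turns min f(x) s.t. g(x) ≤ 0 into root finding on the value function v(r) = min_x max{f(x) − r, g(x)}, which is nonincreasing in r with the optimal value as its root. The bisection below is a deliberately crude stand-in for the more refined root-finding updates such methods actually use; `subsolve(r)` is an assumed approximate oracle for v(r).

```python
def level_set_bisection(subsolve, r_lo, r_hi, tol=1e-6):
    """Level-set outer loop via bisection on r, assuming v(r_lo) > 0 and
    v(r_hi) <= 0 at the start (schematic)."""
    while r_hi - r_lo > tol:
        r = 0.5 * (r_lo + r_hi)
        _, v_r = subsolve(r)       # approximately evaluate v(r) = min_x max(f(x)-r, g(x))
        if v_r > 0.0:
            r_lo = r               # r is below the optimal value
        else:
            r_hi = r               # r is at or above the optimal value
    return 0.5 * (r_lo + r_hi)
```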