Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs

This paper analyzes the iteration-complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the objective function is of the form f + h, where f is a differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with a bounded domain. … Read more
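
For orientation, a problem of this kind and its quadratic penalty subproblem can be sketched as follows (the linear constraint Ax = b and penalty parameter c > 0 are generic notation, not taken from the paper):

\[
\min_{x}\ f(x) + h(x)\ \ \text{s.t.}\ \ Ax = b
\qquad\leadsto\qquad
\min_{x}\ f(x) + h(x) + \frac{c}{2}\,\|Ax - b\|^2,
\]

where the penalized problem is solved inexactly by an accelerated proximal point scheme while c is driven upward.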

Inexact scalarization proximal methods for multiobjective quasiconvex minimization on Hadamard manifold

In this paper we naturally extend the scalarization proximal point method for multiobjective unconstrained minimization problems, proposed by Apolinario et al. (2016), from Euclidean spaces to Hadamard manifolds, for locally Lipschitz and quasiconvex vector objective functions. Moreover, we present a convergence analysis, under some mild assumptions on the multiobjective function, for two inexact variants of … Read more
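
As a rough illustration only (not the authors' precise scheme), a scalarized proximal step for a vector objective F on a Hadamard manifold M with Riemannian distance d can take the form

\[
x^{k+1} \in \operatorname{arg\,min}_{x \in M}\ \Big\{ \langle F(x), z \rangle + \frac{\alpha_k}{2}\, d^2(x, x^k) \Big\},
\]

where z is a fixed strictly positive weight vector that scalarizes F and \alpha_k > 0 is a proximal parameter.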

On Relaxation of Some Customized Proximal Point Algorithms for Convex Minimization: From Variational Inequality Perspective

The proximal point algorithm (PPA) is a fundamental method for convex programming. When the PPA is applied to solve linearly constrained convex problems, we may prefer to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA can be reduced and, in most cases, its subproblems even admit closed-form … Read more
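
For context, the customized proximal step replaces the usual squared Euclidean regularizer with a metric induced by a positive-definite matrix G (a standard construction, sketched here in generic notation):

\[
w^{k+1} = \operatorname{arg\,min}_{w}\ \Big\{ \theta(w) + \frac{1}{2}\,\|w - w^k\|_G^2 \Big\},
\qquad \|v\|_G^2 := v^\top G v,
\]

and a structure-exploiting choice of G is what allows the subproblem to be solved cheaply, often in closed form.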

A proximal-Newton method for unconstrained convex optimization in Hilbert spaces

We propose and study the iteration-complexity of a proximal-Newton method for finding approximate solutions of the problem of minimizing a twice continuously differentiable convex function on a (possibly infinite-dimensional) Hilbert space. We prove global convergence rates for obtaining approximate solutions in terms of function/gradient values. Our main results follow from an iteration-complexity study of … Read more
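
A minimal sketch of one proximal-Newton step, where f is the twice continuously differentiable objective and \lambda_k > 0 is an illustrative regularization weight:

\[
x^{k+1} = \operatorname{arg\,min}_{x}\ \Big\{ \langle \nabla f(x^k), x - x^k \rangle + \tfrac{1}{2}\,\langle \nabla^2 f(x^k)(x - x^k),\, x - x^k \rangle + \tfrac{\lambda_k}{2}\,\|x - x^k\|^2 \Big\}.
\]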

Positive-Indefinite Proximal Augmented Lagrangian Method and its Application to Full Jacobian Splitting for Multi-block Separable Convex Minimization Problems

The augmented Lagrangian method (ALM) is fundamental for solving convex programming problems with linear constraints. The proximal version of ALM, which regularizes ALM’s subproblem over the primal variable at each iteration by an additional positive-definite quadratic proximal term, has been well studied in the literature. In this paper, we show that it is not necessary … Read more
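
For reference, for min \theta(x) s.t. Ax = b, the proximal ALM iteration with proximal weighting matrix P reads (standard form; the paper's point is that P need not be positive definite):

\[
x^{k+1} = \operatorname{arg\,min}_{x}\ \Big\{ \theta(x) - \langle \lambda^k, Ax - b \rangle + \frac{\beta}{2}\,\|Ax - b\|^2 + \frac{1}{2}\,\|x - x^k\|_P^2 \Big\},
\qquad
\lambda^{k+1} = \lambda^k - \beta\,(Ax^{k+1} - b).
\]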

New analysis of linear convergence of gradient-type methods via unifying error bound conditions

The subject of linear convergence of gradient-type methods for non-strongly convex optimization has been widely studied by introducing several notions as sufficient conditions. Influential examples include the error bound property, the restricted strongly convex property, the quadratic growth property, and the Kurdyka-{\L}ojasiewicz property. In this paper, we first define a group of error bound conditions … Read more
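
Two of these conditions can be stated concretely for a smooth function f with optimal value f^* and solution set X^* (standard definitions, recalled here for orientation):

\[
\text{quadratic growth:}\quad f(x) - f^* \ \ge\ \frac{\mu}{2}\,\operatorname{dist}^2(x, X^*),
\qquad
\text{Polyak-{\L}ojasiewicz:}\quad \|\nabla f(x)\|^2 \ \ge\ 2\mu\,\big(f(x) - f^*\big),
\]

the latter being the smooth Kurdyka-{\L}ojasiewicz property with exponent 1/2.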

Exact Worst-case Performance of First-order Methods for Composite Convex Optimization

We provide a framework for computing the exact worst-case performance of any algorithm belonging to a broad class of oracle-based first-order methods for composite convex optimization, including those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps. We simultaneously obtain tight worst-case guarantees and explicit instances of optimization problems on which the algorithm reaches this … Read more
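
Schematically, such a worst-case computation is a performance estimation problem: maximize the accuracy measure after N iterations over all admissible instances (a conceptual sketch, not the paper's exact formulation):

\[
\max_{f \in \mathcal{F},\ x^0}\ \Big\{ f(x^N) - f(x^\star) \ :\ x^1, \dots, x^N \text{ generated by the method},\ \|x^0 - x^\star\| \le R \Big\},
\]

which becomes a finite-dimensional problem once the function class \mathcal{F} is replaced by exact interpolation conditions on the iterates.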

A Generalized Proximal Linearized Algorithm for DC Functions with Application to the Optimal Size of the Firm Problem

A proximal linearized algorithm with a quasi-distance as the regularization term for minimizing a DC function (difference of two convex functions) is proposed. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the function under consideration, even if the minimizations are performed inexactly at … Read more
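
For illustration, writing the objective as f = g - h with g, h convex and letting q denote the quasi-distance, one step of a proximal linearized scheme of this type can be sketched as (notation assumed here, not quoted from the paper)

\[
x^{k+1} \in \operatorname{arg\,min}_{x}\ \Big\{ g(x) - \langle w^k, x - x^k \rangle + \lambda_k\, q^2(x^k, x) \Big\},
\qquad w^k \in \partial h(x^k),
\]

so the concave part is linearized at the current iterate and the quasi-distance replaces the usual squared norm.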

Error bounds, quadratic growth, and linear convergence of proximal methods

We show that the error bound property, postulating that the step lengths of the proximal gradient method linearly bound the distance to the solution set, is equivalent to a standard quadratic growth condition. We exploit this equivalence in an analysis of asymptotic linear convergence of the proximal gradient algorithm for structured problems, which lack … Read more
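
Concretely, for a composite objective F = f + g with smooth f and proximable g, the step length is the displacement of the proximal gradient step, and the two conditions in question are (standard statements, recalled for orientation):

\[
x^+ = \operatorname{prox}_{t g}\big(x - t\,\nabla f(x)\big),
\qquad
\operatorname{dist}(x, X^*) \le \gamma\,\|x^+ - x\|
\ \ \text{(error bound)},
\qquad
F(x) \ge F^* + \frac{\mu}{2}\,\operatorname{dist}^2(x, X^*)
\ \ \text{(quadratic growth)},
\]

and the paper makes precise the sense in which these two properties are equivalent.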

A multiplier method with a class of penalty functions for convex programming

We consider a class of augmented Lagrangian methods for solving convex programming problems with inequality constraints. This class involves a family of penalty functions and specific values of parameters $p,q,\tilde y \in R$ and $c>0$. The penalty family includes the classical modified barrier and the exponential function. The associated proximal method for solving the dual … Read more
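
A classical member of such a family is the exponential penalty, for which the augmented Lagrangian for min f(x) s.t. $g_i(x) \le 0$ and the multiplier update read (a standard example, not necessarily the paper's exact parametrization):

\[
L_c(x, y) = f(x) + \frac{1}{c} \sum_{i=1}^{m} y_i \big( e^{\,c\, g_i(x)} - 1 \big),
\qquad
y_i^{k+1} = y_i^{k}\, e^{\,c\, g_i(x^{k+1})},
\]

whose dual iteration is a proximal point method with an entropy-like distance.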