Algebraic rules for quadratic regularization of Newton’s method

In this work we propose a class of quasi-Newton methods to minimize a twice differentiable function with Lipschitz continuous Hessian. These methods are based on the quadratic regularization of Newton's method, with explicit algebraic rules for computing the regularizing parameter. The convergence properties of this class of methods are analysed. We show that if the …
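
As a rough sketch of the underlying step (not the paper's actual rules, which sit behind the truncation above), a quadratically regularized Newton method damps the Hessian with a multiple of the identity. The rule sigma_rule below, tying the parameter to the gradient norm and the Hessian's Lipschitz constant L, is an illustrative assumption drawn from the regularized-Newton literature; all names are hypothetical.

    import numpy as np

    def sigma_rule(g, L):
        # Illustrative explicit rule (an assumption, not the paper's):
        # regularization proportional to sqrt(L * ||gradient||).
        return np.sqrt(L * np.linalg.norm(g))

    def regularized_newton(grad, hess, x, L, tol=1e-8, iters=100):
        # Quadratic regularization of Newton's method: solve
        # (H + sigma * I) d = -g in place of the pure Newton system H d = -g.
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break
            H = hess(x)
            sigma = sigma_rule(g, L)
            x = x + np.linalg.solve(H + sigma * np.eye(len(x)), -g)
        return x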

An adaptive accelerated first-order method for convex optimization

This paper presents a new accelerated variant of Nesterov's method for solving composite convex optimization problems, in which certain acceleration parameters are adaptively (and aggressively) chosen so as to substantially improve practical performance compared to existing accelerated variants, while at the same time preserving the optimal iteration-complexity these methods share. Computational results are …
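
For orientation, here is a minimal sketch of an accelerated (Nesterov/FISTA-type) method for the composite problem min f(x) + g(x); the backtracking rule on the Lipschitz estimate is a generic adaptive choice standing in for the paper's more aggressive parameter selection, and all names are illustrative.

    import numpy as np

    def accelerated_prox_grad(grad_f, prox_g, f, x0, L0=1.0, eta=2.0, iters=100):
        # FISTA-style accelerated proximal gradient with backtracking
        # on the Lipschitz estimate L (a generic stand-in for the
        # paper's adaptive acceleration parameters).
        x, y, t, L = x0, x0, 1.0, L0
        for _ in range(iters):
            g = grad_f(y)
            while True:
                x_new = prox_g(y - g / L, 1.0 / L)
                d = x_new - y
                # Accept once the quadratic upper model majorizes f.
                if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                    break
                L *= eta
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x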

A first-order block-decomposition method for solving two-easy-block structured semidefinite programs

In this paper, we consider a first-order block-decomposition method for minimizing the sum of a convex differentiable function with Lipschitz continuous gradient and two other proper closed convex (possibly nonsmooth) functions with easily computable resolvents. The method contains two ingredients that are important from a computational point of view, namely an adaptive choice of stepsize for …
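
To fix terminology: the model problem is min_x f(x) + g_1(x) + g_2(x), and a function g has an "easily computable resolvent" when the proximal map

    (I + λ ∂g)^{-1}(z) = prox_{λg}(z) = argmin_u { g(u) + (1/2λ) ||u − z||² }

admits a cheap closed form or subroutine. For semidefinite programs this is typically a projection onto the positive semidefinite cone, computable via an eigendecomposition.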

A class of Fejer convergent algorithms, approximate resolvents and the Hybrid Proximal-Extragradient method

A new framework for analyzing Fejer convergent algorithms is presented. Using this framework, we define a very general class of Fejer convergent algorithms and establish its convergence properties. We also introduce a new definition of approximations of resolvents which preserve some useful features of the exact resolvent, and use this concept to present a unifying …
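
For reference, a sequence (x_k) is Fejer convergent with respect to a nonempty set S when no iterate moves farther from any point of S:

    ||x_{k+1} − y|| ≤ ||x_k − y||   for all y in S and all k ≥ 0.

A standard consequence, which frameworks of this kind exploit, is that such a sequence is bounded, and if one of its cluster points lies in S then the whole sequence converges to that point.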

A robust Kantorovich’s theorem on inexact Newton method with relative residual error tolerance

We prove that, under semi-local assumptions, the inexact Newton method with a fixed relative residual error tolerance converges Q-linearly to a zero of the non-linear operator under consideration. Using this result, we show that the Newton method for minimizing a self-concordant function, or for finding a zero of an analytic function, can be implemented with a …
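
A minimal sketch of the iteration in the minimization setting with a positive definite Hessian: each Newton system is solved only until the residual falls below a fixed fraction theta of the gradient norm. The hand-rolled truncated conjugate gradient inner solver and all names are illustrative assumptions.

    import numpy as np

    def truncated_cg(A, b, theta):
        # Conjugate gradient on A s = b, stopped once the residual
        # drops below theta * ||b|| (the fixed relative tolerance).
        s = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        target = theta * np.linalg.norm(b)
        while np.linalg.norm(r) > target:
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            s = s + alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return s

    def inexact_newton(grad, hess, x, theta=0.5, tol=1e-8, iters=50):
        # Inexact Newton: any step s with
        # ||hess(x) s + grad(x)|| <= theta * ||grad(x)|| is accepted.
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break
            x = x + truncated_cg(hess(x), -g, theta)
        return x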

A quadratically convergent Newton method for vector optimization

We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Grana Drummond and Svaiter for multicriteria optimization, which in turn extends the classical Newton method for scalar optimization. The steplength is chosen by …
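
In the multicriteria case with objectives f_1, …, f_m (the Fliege–Grana Drummond–Svaiter setting being extended here), the Newton direction at x minimizes the worst-case second-order model over the components:

    s(x) = argmin_s  max_{1 ≤ i ≤ m}  ( ∇f_i(x)ᵀ s + (1/2) sᵀ ∇²f_i(x) s ).

For a general closed convex pointed cone, the maximum over components is replaced, roughly speaking, by a maximum over a compact set of generators of the dual cone.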

Implementation of a block-decomposition algorithm for solving large-scale conic semidefinite programming problems

In this paper, we consider block-decomposition first-order methods for solving large-scale conic semidefinite programming problems. Several ingredients are introduced to speed up the method in its pure form, such as: an aggressive choice of stepsize for performing the extragradient step; use of scaled inner products in the primal and dual spaces; dynamic update of the scaled …

An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and its Implications to Second-Order Methods

This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) method. Iteration-complexity results are established for the A-HPE method, as well as for a special version of it in which a large stepsize condition is imposed. Two specific implementations of the A-HPE method are …
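
For context, the building block is the HPE relative-error step: for a maximal monotone operator T, a tolerance σ in [0, 1), and a stepsize λ_k > 0, one finds a triple (ỹ_k, v_k, ε_k) with v_k in the ε_k-enlargement T^{ε_k}(ỹ_k) such that

    ||λ_k v_k + ỹ_k − x_k||² + 2 λ_k ε_k ≤ σ² ||ỹ_k − x_k||²,

and then sets x_{k+1} = x_k − λ_k v_k. The large stepsize condition mentioned above additionally bounds λ_k from below, roughly requiring λ_k ||ỹ_k − x_k|| ≥ θ for a fixed θ > 0.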

Iteration-Complexity of a Newton Proximal Extragradient Method for Monotone Variational Inequalities and Inclusion Problems

In a recent paper by Monteiro and Svaiter, a hybrid proximal extragradient framework was used to study the iteration-complexity of a first-order (or, in the context of optimization, second-order) method for solving monotone nonlinear equations. The purpose of this paper is to extend this analysis to study a prox-type first-order method for monotone smooth …

Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods

In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences under the assumption that the function f satisfies the Kurdyka-Lojasiewicz inequality. This assumption allows us to cover a …
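
Concretely, the two abstract conditions are, for constants a, b > 0:

    (sufficient decrease)  f(x_{k+1}) + a ||x_{k+1} − x_k||² ≤ f(x_k);
    (relative error)       there exists w_{k+1} in ∂f(x_{k+1}) with ||w_{k+1}|| ≤ b ||x_{k+1} − x_k||.

Together with a mild continuity condition, these guarantee that a bounded sequence produced by the method converges to a critical point of f whenever f satisfies the Kurdyka-Lojasiewicz inequality; proximal, forward-backward, and regularized Gauss-Seidel iterations can each be checked against these two conditions.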