Gradient Sliding for Composite Optimization

In this paper we consider a class of composite optimization problems whose objective function is the sum of a general smooth component and a nonsmooth component, together with a relatively simple nonsmooth term. We present a new class of first-order methods, namely the gradient sliding algorithms, which can skip the computation of the gradient for …
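As a rough illustration of the problem class described above (a sketch based only on the abstract, with notation of our own choosing; the paper's precise assumptions may differ), the objective can be pictured as
\[
\min_{x \in X} \; \Psi(x) := f(x) + h(x) + \chi(x),
\]
where $f$ is smooth with Lipschitz-continuous gradient, $h$ is a general nonsmooth convex component, and $\chi$ is a relatively simple nonsmooth term (an $\ell_1$ penalty, say) whose proximal mapping is cheap to evaluate. The symbols $\Psi$, $f$, $h$, $\chi$ and $X$ are illustrative and not taken from the paper.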

An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems

We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. Convergence is obtained provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is fulfilled, for instance, by semi-algebraic functions. …
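For orientation only, here is a minimal Python sketch of one inertial forward-backward-forward (Tseng-type) step for minimizing g + f with g smooth and f prox-friendly. The names grad_g, prox_f, step and beta are illustrative choices rather than the paper's notation, and this is not the authors' exact scheme or step-size rule.

    import numpy as np

    def prox_l1(v, t):
        # proximal map of t*||.||_1 (soft-thresholding), used here as an example of f
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def inertial_fbf_step(x, x_prev, grad_g, prox_f, step, beta):
        w = x + beta * (x - x_prev)                 # inertial / memory term
        y = prox_f(w - step * grad_g(w), step)      # forward-backward step
        return y + step * (grad_g(w) - grad_g(y))   # second (correcting) forward step

A single update would then read x_next = inertial_fbf_step(x, x_prev, grad_g, prox_l1, step=1e-2, beta=0.3), with grad_g supplied by the user.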

A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy …
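As background (standard textbook formulas, not the specific hybrid rules of this paper), the two ingredients can be summarized as follows: the GS search direction at $x_k$ is the negative of the minimum-norm element of the convex hull of gradients sampled near $x_k$, and the BFGS model is maintained through the usual inverse-Hessian update,
\[
d_k = -\operatorname*{arg\,min}\Big\{\|g\|_2 : g \in \operatorname{conv}\{\nabla f(x_{k,1}),\dots,\nabla f(x_{k,m})\}\Big\},
\qquad
H_{k+1} = \big(I - \rho_k s_k y_k^{\top}\big) H_k \big(I - \rho_k y_k s_k^{\top}\big) + \rho_k s_k s_k^{\top},
\]
where the $x_{k,i}$ are points sampled in a small ball around $x_k$ at which $f$ is differentiable, $s_k = x_{k+1}-x_k$, $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$, and $\rho_k = 1/(y_k^{\top} s_k)$.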

A proximal multiplier method for separable convex minimization

In this paper, we propose an inexact proximal multiplier method using proximal distances for solving convex minimization problems with a separable structure. The proposed method unifies the work of Chen and Teboulle (PCPM method), Kyono and Fukushima (NPCPMM) and Auslender and Teboulle (EPDM), and extends the convergence properties to a class of phi-divergence distances. We …
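For readers unfamiliar with this family of methods, a schematic predictor-corrector proximal multiplier iteration for $\min\{f(x)+g(z): Ax = z\}$ looks roughly as follows (an illustrative template only, written with the usual quadratic proximal terms; the method of this paper replaces them with general proximal distances and allows inexact subproblem solves):
\[
\begin{aligned}
p^{k} &= \lambda^{k} + \gamma\,(Ax^{k} - z^{k}), \\
x^{k+1} &\approx \operatorname*{arg\,min}_{x}\; f(x) + \langle p^{k}, Ax\rangle + \tfrac{1}{2\gamma}\,\|x - x^{k}\|^{2}, \\
z^{k+1} &\approx \operatorname*{arg\,min}_{z}\; g(z) - \langle p^{k}, z\rangle + \tfrac{1}{2\gamma}\,\|z - z^{k}\|^{2}, \\
\lambda^{k+1} &= \lambda^{k} + \gamma\,(Ax^{k+1} - z^{k+1}).
\end{aligned}
\]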

Calmness of linear programs under perturbations of all data: characterization and modulus

This paper provides operative point-based formulas (only involving the nominal data, and not data in a neighborhood) for computing or estimating the calmness modulus of the optimal set (argmin) mapping in linear optimization under uniqueness of nominal optimal solutions. Our analysis is developed in two different parametric settings. First, in the framework of canonical perturbations …
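To fix terminology (a standard definition, stated here in generic notation rather than the paper's): the optimal-set mapping $S$, sending a parameter $p$ to the set of optimal solutions of the corresponding LP, is calm at $(\bar{p},\bar{x})$ with $\bar{x}\in S(\bar{p})$ if there exist $\kappa \ge 0$ and neighborhoods $U$ of $\bar{p}$ and $V$ of $\bar{x}$ such that
\[
\operatorname{dist}\big(x,\,S(\bar{p})\big) \;\le\; \kappa\,\|p - \bar{p}\| \qquad \text{for all } p \in U \text{ and all } x \in S(p) \cap V,
\]
and the calmness modulus is the infimum of the constants $\kappa$ for which this holds.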

A Novel Unified Approach to Invariance in Control

In this paper, we propose a novel, unified, general approach to investigate sufficient and necessary conditions under which four types of convex sets (polyhedra, polyhedral cones, ellipsoids and Lorenz cones) are invariant sets for a linear continuous or discrete dynamical system. In proving invariance of ellipsoids and Lorenz cones for discrete systems, instead of the …
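As a concrete instance of what such invariance conditions look like (a standard linear-algebra criterion, not the paper's new derivation technique), the ellipsoid E = {x : x^T Q x <= 1} with Q symmetric positive definite is invariant for the discrete system x_{k+1} = A x_k exactly when A^T Q A is dominated by Q in the Loewner order, which is easy to check numerically:

    import numpy as np

    def ellipsoid_invariant_discrete(A, Q, tol=1e-10):
        # E = {x : x^T Q x <= 1} is invariant for x_{k+1} = A x_k
        # iff Q - A^T Q A is positive semidefinite
        M = Q - A.T @ Q @ A
        return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -tol))

    A = np.array([[0.5, 0.1],
                  [0.0, 0.8]])
    print(ellipsoid_invariant_discrete(A, np.eye(2)))   # True: the unit ball is invariant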

Application of the Strictly Contractive Peaceman-Rachford Splitting Method to Multi-block Separable Convex Programming

Recently, a strictly contractive Peaceman-Rachford splitting method (SC-PRSM) was proposed to solve a convex minimization model with linear constraints and a separable objective function which is the sum of two functions without coupled variables. We show by an example that the SC-PRSM cannot be directly extended to the case where the objective function is …
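For context, the two-block scheme referred to in the abstract, written in generic notation (the multi-block setting studied in the paper is more involved): SC-PRSM applied to $\min\{f(x)+g(y): Ax+By=b\}$ with augmented Lagrangian $\mathcal{L}_{\beta}$ and contraction factor $\alpha\in(0,1)$ iterates
\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; \mathcal{L}_{\beta}(x, y^{k}, \lambda^{k}), \\
\lambda^{k+\frac12} &= \lambda^{k} - \alpha\beta\,(Ax^{k+1} + By^{k} - b), \\
y^{k+1} &= \operatorname*{arg\,min}_{y}\; \mathcal{L}_{\beta}(x^{k+1}, y, \lambda^{k+\frac12}), \\
\lambda^{k+1} &= \lambda^{k+\frac12} - \alpha\beta\,(Ax^{k+1} + By^{k+1} - b),
\end{aligned}
\]
where $\mathcal{L}_{\beta}(x,y,\lambda) = f(x)+g(y) - \langle \lambda,\, Ax+By-b\rangle + \tfrac{\beta}{2}\|Ax+By-b\|^{2}$.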

A Second-Order Method for Compressed Sensing Problems with Coherent and Redundant Dictionaries

In this paper we are interested in the solution of Compressed Sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. CS problems of this type are convex with a non-smooth and non-separable regularization term; therefore, a specialized solver is required. We propose a primal-dual Newton Conjugate Gradients (pdNCG) method. …
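A representative formulation of this problem class (written with generic symbols; the paper's exact model and parameters may differ) is the $\ell_1$-analysis problem
\[
\min_{x}\; \tau\,\|W^{*}x\|_{1} \;+\; \tfrac12\,\|Ax - b\|_{2}^{2},
\]
where $A$ is the sensing matrix, $b$ the vector of measurements, $W$ a coherent and redundant dictionary, and $\tau > 0$ a regularization parameter. The $\ell_1$ term is non-smooth and, because $W$ is redundant, non-separable in $x$, which is why the abstract calls for a specialized solver.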

Primal-dual regularized SQP and SQCQP type methods for convex programming and their complexity analysis

This paper presents and studies the iteration-complexity of two new inexact variants of Rockafellar’s proximal method of multipliers (PMM) for solving convex programming (CP) problems with a finite number of functional inequality constraints. In contrast to the first variant, which solves convex quadratic programming (QP) subproblems at every iteration, the second one solves convex constrained …
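To recall the underlying scheme (Rockafellar's proximal method of multipliers in a standard textbook form, with generic symbols; the paper's inexact variants modify how the subproblems are solved), for $\min\{f(x): g_i(x)\le 0,\ i=1,\dots,m\}$ one iterates
\[
\begin{aligned}
x^{k+1} &\approx \operatorname*{arg\,min}_{x}\; \Big\{\, \mathcal{L}_{c}(x, \lambda^{k}) + \tfrac{1}{2c}\,\|x - x^{k}\|^{2} \Big\}, \\
\lambda_i^{k+1} &= \max\!\big(0,\; \lambda_i^{k} + c\, g_i(x^{k+1})\big), \qquad i = 1,\dots,m,
\end{aligned}
\]
where $\mathcal{L}_{c}$ is the augmented Lagrangian for the inequality constraints and $c > 0$ is a penalty/proximal parameter.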

An Inequality-Constrained SQP Method for Eigenvalue Optimization

We consider a problem in eigenvalue optimization, in particular finding a local minimizer of the spectral abscissa – the value of a parameter that results in the smallest magnitude of the largest real part of the spectrum of a matrix system. This is an important problem for the stabilization of control systems. Many …
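As a small illustration of the objective involved (generic notation, not the paper's algorithm): the spectral abscissa of a matrix is the largest real part of its eigenvalues, and stabilization problems minimize it over design parameters, for instance over a hypothetical feedback gain K entering a closed-loop matrix A + B K C:

    import numpy as np

    def spectral_abscissa(M):
        # largest real part of the eigenvalues of M; the system dx/dt = M x is
        # asymptotically stable when this value is negative
        return float(np.max(np.linalg.eigvals(M).real))

    def objective(K, A, B, C):
        # nonsmooth, nonconvex function of the design parameter K
        return spectral_abscissa(A + B @ K @ C)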