Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies which jointly bring into play the primal and the dual problems is however … Read more
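
As a quick reminder of the setting (standard background, not specific to this overview), the problems targeted by primal-dual splitting methods are typically of the composite form below, and the algorithms operate on the associated saddle-point problem rather than on the primal or the dual alone:

    \min_x\; f(x) + g(Kx)                                        (primal)
    \max_y\; -f^*(-K^*y) - g^*(y)                                (dual)
    \min_x \max_y\; f(x) + \langle Kx, y \rangle - g^*(y)        (saddle point)

where f and g are proper lower semicontinuous convex functions, K is a linear operator, and f^*, g^* denote the Fenchel conjugates; each iteration of such schemes typically only requires the proximity operators of f and g and applications of K and its adjoint.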

Convergence rate analysis of several splitting schemes

Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems that are built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable algorithms, which often obtain nearly state-of-the-art performance. … Read more
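
To make the "process the simple pieces individually" idea concrete, here is a minimal sketch (a plain forward-backward split, assuming f smooth and g proximable; it is an illustration, not one of the specific schemes analyzed in the paper):

    import numpy as np

    def forward_backward(grad_f, prox_g, x0, step, n_iter=500):
        """Forward-backward splitting for min_x f(x) + g(x):
        a gradient (forward) step on f, then a proximal (backward) step on g."""
        x = x0.copy()
        for _ in range(n_iter):
            x = prox_g(x - step * grad_f(x), step)
        return x

    # Example: LASSO, min_x 0.5*||Ax - b||^2 + lam*||x||_1
    rng = np.random.default_rng(0)
    A, b, lam = rng.normal(size=(40, 100)), rng.normal(size=40), 0.1
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)  # soft-thresholding
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the Lipschitz constant of grad_f
    x_hat = forward_backward(grad_f, prox_g, np.zeros(100), step)

Each piece is touched only through an operation that is cheap on its own, which is what makes such schemes easy to implement and to parallelize over many terms.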

Splitting methods with variable metric for KL functions

We study the convergence of general abstract descent methods applied to a lower semicontinuous nonconvex function f that satisfies the Kurdyka-Łojasiewicz inequality in a Hilbert space. We prove that any precompact sequence converges to a critical point of f and obtain new convergence rates both for the values and the iterates. The analysis covers alternating … Read more
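
For reference, the standard ingredients behind such analyses (stated here in their usual textbook form, which may differ in detail from the paper's exact assumptions) are a sufficient-decrease condition, a relative-error condition on the subgradients, and the Kurdyka-Łojasiewicz (KL) inequality:

    f(x_{k+1}) + a\,\|x_{k+1} - x_k\|^2 \le f(x_k)                                      (sufficient decrease)
    \exists\, w_{k+1} \in \partial f(x_{k+1}): \ \|w_{k+1}\| \le b\,\|x_{k+1} - x_k\|    (relative error)
    \varphi'\big(f(x) - f(\bar x)\big)\,\mathrm{dist}\big(0, \partial f(x)\big) \ge 1    (KL inequality near a critical point \bar x)

with constants a, b > 0 and a concave desingularizing function \varphi vanishing at 0; the shape of \varphi (e.g. \varphi(s) = c\,s^{1-\theta}) is what dictates the convergence rates of the values and the iterates.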

A dynamic gradient approach to Pareto optimization with nonsmooth nonconvex objective functions

In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Our approach is in line with a previous work that considered the case of convex differentiable objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain … Read more
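
As a reminder of the regularization invoked here (standard definitions, not the paper's notation): for a maximally monotone operator A, such as A = \partial f, and \lambda > 0,

    J_{\lambda A} = (I + \lambda A)^{-1}, \qquad  A_\lambda = \frac{1}{\lambda}\,(I - J_{\lambda A}),

and when A = \partial f with f convex, J_{\lambda \partial f} = \mathrm{prox}_{\lambda f} while A_\lambda = \nabla f_\lambda is the gradient of the Moreau envelope f_\lambda(x) = \min_y \{ f(y) + \tfrac{1}{2\lambda}\|x - y\|^2 \}. The Yosida approximation is single-valued and (1/\lambda)-Lipschitz, which is what allows the nonsmooth dynamics to be treated as a well-posed differential system.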

Exact duality in semidefinite programming based on elementary reformulations

In semidefinite programming (SDP), unlike in linear programming, Farkas’ lemma may fail to prove infeasibility. Here we obtain an exact, short certificate of infeasibility in SDP by an elementary approach: we reformulate any equality constrained semidefinite system using only elementary row operations and rotations. When the system is infeasible, the infeasibility of the reformulated system … Read more
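
A standard two-dimensional illustration of this failure (textbook material, not an example from the paper): consider the system

    \langle A_1, X \rangle = 0, \quad \langle A_2, X \rangle = 2, \quad X \succeq 0,
    \text{with } A_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

Since X \succeq 0 and X_{11} = 0 force X_{12} = 0, we get \langle A_2, X \rangle = 2X_{12} = 0 \ne 2, so the system is infeasible. A Farkas-type certificate would be some y with y_1 A_1 + y_2 A_2 \preceq 0 and 0\cdot y_1 + 2 y_2 > 0; but y_1 A_1 + y_2 A_2 \preceq 0 forces y_2 = 0, so no such certificate exists, which is exactly the gap that exact certificates via elementary reformulations aim to close.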

Gradient Sliding for Composite Optimization

We consider in this paper a class of composite optimization problems whose objective function is given by the summation of a general smooth component and a general nonsmooth component, together with a relatively simple nonsmooth term. We present a new class of first-order methods, namely the gradient sliding algorithms, which can skip the computation of the gradient for … Read more
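
In symbols, the problem class is of the form (notation illustrative, not necessarily the paper's):

    \min_{x \in X}\; \Psi(x) := f(x) + h(x) + \chi(x),

with f convex and smooth (Lipschitz-continuous gradient), h convex but general nonsmooth, and \chi a relatively simple nonsmooth term (e.g. one with an inexpensive proximal mapping). The "sliding" refers to running several cheap (sub)gradient steps on h between two evaluations of \nabla f, so that gradient computations of the expensive smooth part can be skipped from time to time.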

An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems

We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. The convergence is obtained provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is for instance fulfilled for semi-algebraic functions. Article … Read more
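
Schematically, and with step sizes and inertial parameters left unspecified (a simplified sketch, not the exact rules analyzed in the article), one iteration of an inertial forward-backward-forward step for minimizing f + g, with f smooth and g nonsmooth, reads

    z_k = x_k + \beta_k (x_k - x_{k-1})                                        (inertial extrapolation)
    y_k \in \mathrm{prox}_{\gamma_k g}\big(z_k - \gamma_k \nabla f(z_k)\big)   (forward-backward step)
    x_{k+1} = y_k + \gamma_k \big(\nabla f(z_k) - \nabla f(y_k)\big)           (second forward correction)

The "memory" lies in the term x_k - x_{k-1}, and the extra forward correction is what distinguishes Tseng-type schemes from plain forward-backward iterations.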

A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy … Read more
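
To give a feel for the gradient-sampling ingredient, here is a bare-bones sketch of a single GS direction computation (a simplified illustration, not the hybrid BFGS-GS algorithm of the paper; the sampling radius, the number of samples, and the use of SciPy's SLSQP solver for the small quadratic program are arbitrary choices):

    import numpy as np
    from scipy.optimize import minimize

    def gradient_sampling_direction(grad_fn, x, radius=1e-4, n_samples=10, rng=None):
        """One gradient-sampling step: sample gradients near x and return the
        negative of the least-norm point in their convex hull."""
        rng = np.random.default_rng() if rng is None else rng
        pts = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
        G = np.vstack([grad_fn(x)] + [grad_fn(p) for p in pts])   # stacked gradients, shape (m, n)

        m = G.shape[0]
        # min ||G^T w||^2  s.t.  w >= 0, sum(w) = 1   (least-norm convex combination)
        res = minimize(lambda w: np.sum((G.T @ w) ** 2),
                       np.full(m, 1.0 / m),
                       bounds=[(0.0, None)] * m,
                       constraints=({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},),
                       method="SLSQP")
        return -(G.T @ res.x)   # approximate steepest-descent direction

    # Toy usage on f(x) = ||x||_1, whose gradient exists almost everywhere
    d = gradient_sampling_direction(np.sign, np.array([0.3, -0.2, 0.05]))

Sampling gradients from a neighborhood is what lets the method cope with points where the objective is not differentiable, while the BFGS part of the hybrid supplies fast progress toward the vicinity of a stationary point.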

A proximal multiplier method for separable convex minimization

In this paper, we propose an inexact proximal multiplier method using proximal distances for solving convex minimization problems with a separable structure. The proposed method unifies the work of Chen and Teboulle (PCPM method), Kyono and Fukushima (NPCPMM), and Auslender and Teboulle (EPDM), and extends the convergence properties for a class of phi-divergence distances. We … Read more
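
The separable structure in question is typically of the form (illustrative notation):

    \min_{x, z}\; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = b,

with f and g convex; proximal multiplier methods alternate proximal subproblems in x and in z built from the Lagrangian L(x, z, y) = f(x) + g(z) + \langle y, Ax + Bz - b \rangle together with a multiplier update in y, and in the methods unified here the quadratic proximal term is replaced by a generalized proximal distance (e.g. a phi-divergence).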

Calmness of linear programs under perturbations of all data: characterization and modulus

This paper provides operative point-based formulas (only involving the nominal data, and not data in a neighborhood) for computing or estimating the calmness modulus of the optimal set (argmin) mapping in linear optimization under uniqueness of nominal optimal solutions. Our analysis is developed in two different parametric settings. First, in the framework of canonical perturbations … Read more
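
For reference, the property being quantified is the standard one: the argmin mapping S (sending the problem data p to its optimal set) is said to be calm at (\bar p, \bar x), with \bar x \in S(\bar p), if there exist \kappa \ge 0 and neighborhoods U of \bar x and V of \bar p such that

    \mathrm{dist}\big(x, S(\bar p)\big) \le \kappa\, d(p, \bar p) \quad \text{for all } p \in V \text{ and } x \in S(p) \cap U,

and the calmness modulus is the infimum of all such constants \kappa; the point-based formulas of the paper compute or estimate this modulus directly from the nominal data (\bar p, \bar x).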