On the iterate convergence of descent methods for convex optimization

We study the iterate convergence of strong descent algorithms applied to convex functions. We assume that the function satisfies a very simple growth condition around its minimizers, and then show that the trajectory described by the iterates generated by any such method has finite length, which proves that the sequence of iterates converges. … Read more
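
A minimal numerical illustration of the finite-length property (a toy quadratic with quadratic growth, not the paper's general framework): gradient descent's cumulated step lengths stay bounded, so the iterates converge.

    import numpy as np

    # Toy setting: f(x) = 0.5 * x^T A x is convex and satisfies a quadratic
    # growth condition around its minimizer x* = 0. We run gradient descent
    # and accumulate the trajectory length sum_k ||x_{k+1} - x_k||.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite

    def grad(x):
        return A @ x                          # gradient of f

    x = np.array([5.0, -4.0])
    step = 0.2                                # below 2 / lambda_max(A)
    path_length = 0.0
    for _ in range(200):
        x_new = x - step * grad(x)
        path_length += np.linalg.norm(x_new - x)
        x = x_new

    print("final iterate:", x)                # close to the minimizer 0
    print("trajectory length:", path_length)  # finite, so the iterates converge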

Matrix monotonicity and self-concordance: how to handle quantum entropy in optimization problems

Let $g$ be a continuously differentiable function whose derivative is matrix monotone on the positive semi-axis. Such a function induces a function $\phi(x) = \mathrm{tr}(g(x))$ on the cone of squares of an arbitrary Euclidean Jordan algebra. We show that $\phi(x) - \ln\det(x)$ is a self-concordant function on the interior of the cone. We also show that … Read more
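
A sketch of how $\phi$ can be evaluated on the symmetric-matrix cone, assuming the concrete choice $g(t) = t\ln t$ (the quantum-entropy case; its derivative $\ln t + 1$ is matrix monotone on the positive semi-axis). The snippet only evaluates the candidate barrier $\phi(x) - \ln\det(x)$ spectrally; it does not verify self-concordance.

    import numpy as np

    def barrier(X):
        # Evaluate tr(g(X)) - ln det(X) spectrally, with g(t) = t * log(t).
        w = np.linalg.eigvalsh(X)             # eigenvalues of symmetric X
        assert np.all(w > 0), "X must lie in the interior of the PSD cone"
        return np.sum(w * np.log(w)) - np.sum(np.log(w))

    X = np.array([[2.0, 0.5], [0.5, 1.0]])
    print(barrier(X))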

Convergence analysis of sampling-based decomposition methods for risk-averse multistage stochastic convex programs

We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. … Read more
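
A toy Benders-style sampled cut for a one-dimensional convex recourse function (an illustrative stand-in, not the paper's risk-averse multistage formula):

    import numpy as np

    # Recourse Q(x) = E[c * max(d - x, 0)]: expected shortfall cost of holding
    # x units against a random demand d. A subgradient at x0 is -c * P(d > x0);
    # both the cut value and the slope are estimated from the same sample.
    rng = np.random.default_rng(0)
    c, x0 = 3.0, 1.5
    demand = rng.exponential(scale=2.0, size=10_000)

    value = c * np.mean(np.maximum(demand - x0, 0.0))   # Q(x0) estimate
    slope = -c * np.mean(demand > x0)                   # subgradient estimate

    def cut(x):
        # Outer linearization: Q(x) >= value + slope * (x - x0), up to sampling error.
        return value + slope * (x - x0)

    print(cut(1.5), cut(3.0))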

Block-wise Alternating Direction Method of Multipliers for Multiple-block Convex Programming and Beyond

The alternating direction method of multipliers (ADMM) is a benchmark for solving a linearly constrained convex minimization model with a two-block separable objective function; it has been shown that its direct extension to the multiple-block case, where the objective function is the sum of more than two functions, is not necessarily convergent. For the … Read more
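
For reference, a minimal sketch of the convergent two-block case that block-wise grouping reduces to, on a lasso-type instance (illustrative, not the paper's scheme):

    import numpy as np

    # Two-block ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x - z = 0.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)
    lam, rho = 0.5, 1.0

    x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)
    M = np.linalg.inv(A.T @ A + rho * np.eye(10))        # cached x-update solve
    for _ in range(200):
        x = M @ (A.T @ b + rho * (z - u))                # x-block: ridge-type solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                                    # scaled dual update

    print(np.round(x, 3))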

Self Equivalence of the Alternating Direction Method of Multipliers

The alternating direction method of multipliers (ADM or ADMM) breaks a complex optimization problem into much simpler subproblems. The ADM algorithms are typically short and easy to implement, yet exhibit (nearly) state-of-the-art performance for large-scale optimization problems. To apply ADM, we first reformulate a given problem into the “ADM-ready” form, so the final algorithm depends … Read more
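
As a generic illustration of why several “ADM-ready” forms can coexist (a standard reformulation, not taken from the paper): a problem of the form $\min_x f(x) + g(Ax)$ can be split by introducing an auxiliary variable $z$,
$$\min_{x,z}\; f(x) + g(z) \quad \text{subject to} \quad Ax - z = 0,$$
but one may instead split the dual problem or exchange the roles of $f$ and $g$, and each choice leads to a different final ADM algorithm.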

Hypotheses testing on the optimal values of several risk-neutral or risk-averse convex stochastic programs and application to hypotheses testing on several risk measure values

Given an arbitrary number of risk-averse or risk-neutral convex stochastic programs, we study hypothesis testing problems that aim to compare the optimal values of these stochastic programs on the basis of samples of the underlying random vectors. We propose non-asymptotic tests based on confidence intervals on the optimal values of the stochastic programs obtained using the … Read more
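
A hedged sketch of the interval-comparison idea on two toy stochastic programs (bootstrap intervals are used here purely for illustration; the paper's tests are non-asymptotic and also cover risk-averse objectives):

    import numpy as np

    rng = np.random.default_rng(2)

    def saa_opt_value(sample):
        # Toy program min_x E[(x - xi)^2]: the SAA optimal value is the
        # sample variance (attained at the sample mean).
        return np.var(sample)

    def conf_int(sample, n_boot=500, alpha=0.05):
        vals = [saa_opt_value(rng.choice(sample, size=sample.size))
                for _ in range(n_boot)]
        return np.quantile(vals, [alpha / 2, 1 - alpha / 2])

    s1 = rng.normal(0.0, 1.0, 400)
    s2 = rng.normal(0.0, 1.5, 400)
    lo1, hi1 = conf_int(s1)
    lo2, hi2 = conf_int(s2)
    # Declare the optimal values different only if the intervals are disjoint.
    print("reject equality:", hi1 < lo2 or hi2 < lo1)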

Local Linear Convergence of Forward–Backward under Partial Smoothness

In this paper, we consider the Forward–Backward proximal splitting algorithm for minimizing the sum of two proper closed convex functions, one of which has a Lipschitz-continuous gradient while the other is partly smooth relative to an active manifold $\mathcal{M}$. We propose a unified framework in which we show that the Forward–Backward (i) correctly identifies the … Read more
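
A minimal Forward–Backward (proximal gradient) sketch on a lasso instance, where $g = \lambda\|\cdot\|_1$ is partly smooth relative to the manifold of vectors with fixed support; finite identification shows up as the support of the iterates freezing:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((40, 15))
    x_true = np.zeros(15); x_true[2] = 2.0
    b = A @ x_true + 0.1 * rng.standard_normal(40)
    lam = 0.5
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of grad f

    x = np.zeros(15)
    for _ in range(300):
        v = x - step * (A.T @ (A @ x - b))       # forward (gradient) step on f
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # backward (prox) step on g

    print("identified support:", np.flatnonzero(x))  # freezes after finitely many iterations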

Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions

Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusion and convex optimization problems built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable algorithms, which often attain nearly state-of-the-art performance. … Read more
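
A sketch of the relaxed Peaceman–Rachford iteration on an assumed toy problem (relaxation $\theta = 1$ gives Douglas–Rachford, $\theta = 2$ the full Peaceman–Rachford step; the strong convexity of $f$ below is the kind of regularity under which faster rates are proved):

    import numpy as np

    # min f(x) + g(x), f(x) = 0.5*||x - a||^2 (strongly convex, smooth),
    # g = lam*||.||_1; both proximal maps are explicit.
    a = np.array([2.0, -0.3, 1.0])
    lam, gamma, theta = 0.5, 1.0, 1.5

    prox_f = lambda z: (z + gamma * a) / (1.0 + gamma)
    prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

    z = np.zeros(3)
    for _ in range(100):
        x = prox_f(z)
        y = prox_g(2 * x - z)     # reflect through f, then prox of g
        z = z + theta * (y - x)   # theta = 1: Douglas-Rachford; theta = 2: Peaceman-Rachford

    print(np.round(prox_f(z), 3)) # converges to the soft-thresholding of a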

A Class of Randomized Primal-Dual Algorithms for Distributed Optimization

Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [Combettes, Pesquet, 2014], several variants of block-coordinate primal-dual algorithms are designed to solve a wide array of monotone inclusion problems. These methods rely on a sweep of blocks of variables that are activated at each iteration according to a random rule, … Read more
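
A hedged sketch of the random block-activation idea (a plain block-coordinate forward-backward step, not the preconditioned primal-dual variants of the paper): at each iteration only one randomly drawn block of variables is updated.

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((60, 20))
    b = rng.standard_normal(60)
    lam = 0.3
    blocks = [np.arange(i, i + 5) for i in range(0, 20, 5)]   # four blocks of 5
    step = 1.0 / np.linalg.norm(A, 2) ** 2

    x = np.zeros(20)
    for _ in range(2000):
        blk = blocks[rng.integers(len(blocks))]        # random activation rule
        g = A[:, blk].T @ (A @ x - b)                  # partial gradient of 0.5*||Ax-b||^2
        v = x[blk] - step * g
        x[blk] = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # block prox

    print(np.round(x, 2))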

Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring into play the primal and the dual problems is, however, … Read more
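
One classical primal-dual scheme of the kind surveyed, sketched on a small assumed denoising problem (a Chambolle–Pock-style iteration; all problem data below are illustrative):

    import numpy as np

    # min_x 0.5*||x - a||^2 + lam*||D x||_1 with D the forward-difference
    # operator; the dual variable y is attached to D x, and the prox of the
    # conjugate of lam*||.||_1 is a clip onto [-lam, lam].
    a = np.array([0.0, 0.1, 2.0, 2.1, 2.0, 0.2])
    lam = 1.0
    n = a.size
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
    L = np.linalg.norm(D, 2)
    tau = sigma = 0.9 / L                      # sigma * tau * ||D||^2 < 1

    x = a.copy(); x_bar = x.copy(); y = np.zeros(n - 1)
    for _ in range(500):
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)       # dual step + prox of f*
        x_new = (x - tau * (D.T @ y) + tau * a) / (1 + tau)   # prox of tau * g
        x_bar = 2 * x_new - x                                  # extrapolation
        x = x_new

    print(np.round(x, 2))                      # roughly piecewise constant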