Block-wise Alternating Direction Method of Multipliers with Gaussian Back Substitution for Multiple-block Convex Programming

We consider the linearly constrained convex minimization model with a separable objective function that is the sum of $m$ functions without coupled variables, and discuss how to design an efficient algorithm based on the fundamental technique of splitting the augmented Lagrangian method (ALM). Our focus is the specific big-data scenario where $m$ is huge. A …
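The paper's block-wise scheme with Gaussian back substitution is not spelled out in this excerpt, but the underlying idea of splitting the augmented Lagrangian is easy to illustrate. Below is a minimal Python sketch of the classical two-block ADMM applied to a lasso-type problem; the function names and parameter choices (lam, rho, n_iter) are illustrative assumptions, not the paper's method, which adds a back substitution correction to handle more than two blocks.

import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding: the proximal map of kappa * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam=1.0, rho=1.0, n_iter=200):
    # Two-block ADMM for: minimize 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x - z = 0.
    # Each sweep minimizes the augmented Lagrangian over x, then over z,
    # then takes a dual ascent step on the multiplier (scaled form u).
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # x-subproblem matrix; factor once in practice
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # x-block: smooth quadratic piece
        z = soft_threshold(x + u, lam / rho)         # z-block: nonsmooth l1 piece
        u += x - z                                   # multiplier update
    return x

For instance, admm_lasso(np.random.randn(50, 20), np.random.randn(50)) returns an approximate lasso solution; the splitting isolates the quadratic and the nonsmooth terms into subproblems that are each cheap to solve.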

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An …
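The truncated abstract omits the update formulas, so the following Python sketch shows one plausible instantiation of an SPDC-style iteration for ridge-regularized least squares, where both proximal maps have closed forms. The step sizes sigma, tau and the extrapolation parameter theta are illustrative assumptions; the paper derives specific choices with convergence guarantees.

import numpy as np

def spdc_ridge(A, b, lam=0.1, sigma=0.1, tau=0.1, theta=0.9, n_epochs=10, seed=0):
    # Sketch for: min_x (1/n) sum_i 0.5*(a_i^T x - b_i)^2 + 0.5*lam*||x||^2,
    # in saddle form min_x max_y (1/n) sum_i [ y_i <a_i, x> - phi_i*(y_i) ] + g(x),
    # with phi_i*(y) = 0.5*y^2 + b_i*y and g(x) = 0.5*lam*||x||^2.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d); xbar = x.copy(); y = np.zeros(n)
    u = A.T @ y / n                        # running average u = (1/n) sum_i y_i a_i
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        # Dual coordinate step: prox of sigma*phi_i* at y_i + sigma*<a_i, xbar>.
        y_new = (y[i] + sigma * (A[i] @ xbar - b[i])) / (1.0 + sigma)
        dy = y_new - y[i]; y[i] = y_new
        # Primal step: prox of tau*g, here a simple rescaling since g is quadratic.
        x_new = (x - tau * (u + dy * A[i])) / (1.0 + tau * lam)
        xbar = x_new + theta * (x_new - x)  # extrapolation, as in primal-dual schemes
        u += dy * A[i] / n                  # maintain the dual average incrementally
        x = x_new
    return x

Only one row a_i of the data matrix is touched per iteration, which is what makes the per-iteration cost of coordinate-type primal-dual methods so low.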

On the iterate convergence of descent methods for convex optimization

We study the iterate convergence of strong descent algorithms applied to convex functions. We assume that the function satisfies a very simple growth condition around its minimizers, and then show that the trajectory described by the iterates generated by any such method has finite length, which proves that the sequence of iterates converges. Citation: Federal University …
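The excerpt does not state the growth condition precisely; a typical condition of this kind, given here only as an illustrative assumption, requires the function to grow at least like a power of the distance to the solution set:
\[
f(x) \;\ge\; f^{*} + c\,\operatorname{dist}(x, X^{*})^{p}
\qquad \text{for all } x \text{ in a neighborhood of } X^{*},
\]
where $f^{*}$ is the optimal value, $X^{*}$ is the set of minimizers, and $c>0$, $p\ge 1$ are constants.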

Inertial primal-dual algorithms for structured convex optimization

The primal-dual algorithm recently proposed by Chambolle \& Pock (abbreviated as CPA) for structured convex optimization is very efficient and popular. It was shown by Chambolle \& Pock in \cite{CP11} and also by Shefi \& Teboulle in \cite{ST14} that CPA and variants are closely related to preconditioned versions of the popular alternating direction method of …
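For reference, the basic CPA iteration from \cite{CP11} for problems of the form $\min_x G(x) + F(Kx)$ reads
\[
y^{k+1} = \operatorname{prox}_{\sigma F^{*}}\!\big(y^{k} + \sigma K \bar{x}^{k}\big),
\qquad
x^{k+1} = \operatorname{prox}_{\tau G}\!\big(x^{k} - \tau K^{*} y^{k+1}\big),
\qquad
\bar{x}^{k+1} = x^{k+1} + \theta\big(x^{k+1} - x^{k}\big),
\]
with step sizes satisfying $\sigma\tau\|K\|^{2} < 1$ and extrapolation parameter $\theta \in [0,1]$. The inertial variants considered here add momentum terms built from previous iterates on top of this scheme.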

On the ergodic convergence rates of a first-order primal-dual algorithm

We revisit the proofs of convergence for a first-order primal-dual algorithm for convex optimization which we studied a few years ago. In particular, we prove rates of convergence for a more general version, with simpler proofs and more complete results.
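To fix ideas (the exact statements are in the article), the rates in question concern the ergodic averages of the iterates, for which the primal-dual gap typically decays at rate $O(1/N)$:
\[
X^{N} = \frac{1}{N}\sum_{k=1}^{N} x^{k},
\qquad
Y^{N} = \frac{1}{N}\sum_{k=1}^{N} y^{k},
\qquad
\mathcal{G}\big(X^{N}, Y^{N}\big) \le \frac{C}{N},
\]
where $\mathcal{G}$ denotes a suitable primal-dual gap function and $C$ depends on the initial distances to a saddle point.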

Normally admissible stratifications and calculation of normal cones to a finite union of polyhedral sets

This paper considers computation of Fréchet and limiting normal cones to a finite union of polyhedra. To this end, we introduce a new concept of normally admissible stratification, which is convenient for calculations of such cones, and provide its basic properties. We further derive formulas for the above mentioned cones and compare our approach to …
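A standard fact behind this line of work: at a point $\bar{x}$ in a finite union of closed sets, the Fréchet normal cone is the intersection of the Fréchet normal cones of the pieces containing $\bar{x}$,
\[
\widehat{N}\Big(\bar{x};\, \bigcup_{i=1}^{s} C_{i}\Big)
= \bigcap_{i \in I(\bar{x})} \widehat{N}(\bar{x};\, C_{i}),
\qquad
I(\bar{x}) = \{\, i : \bar{x} \in C_{i} \,\},
\]
whereas the limiting normal cone admits no such simple formula in general, which is where a suitable stratification of the union becomes useful.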

On a new class of matrix support functionals with applications

A new class of matrix support functionals is presented which establishes a connection between optimal value functions for quadratic optimization problems, the matrix-fractional function, the pseudo matrix-fractional function, and the nuclear norm. The support function is based on the graph of the product of a matrix with its transpose. Closed form expressions for the support …
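For orientation, the matrix-fractional function referred to above is, up to the normalization used in the paper, the jointly convex function
\[
\gamma(X, V) \;=\; \tfrac{1}{2}\,\operatorname{tr}\!\big(X^{\top} V^{-1} X\big),
\qquad V \succ 0,
\]
extended by closure to positive semidefinite $V$ (with $V^{-1}$ replaced by the pseudoinverse and the range condition $\operatorname{range}(X) \subseteq \operatorname{range}(V)$).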

Matrix monotonicity and self-concordance: how to handle quantum entropy in optimization problems

Let $g$ be a continuously differentiable function whose derivative is matrix monotone on the positive semi-axis. Such a function induces a function $\phi(x)=\mathrm{tr}(g(x))$ on the cone of squares of an arbitrary Euclidean Jordan algebra. We show that $\phi(x)-\ln\det(x)$ is a self-concordant function on the interior of the cone. We also show that …
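Recall the standard definition being invoked: a three-times differentiable convex function $\varphi$ on an open convex domain is self-concordant if
\[
\big| D^{3}\varphi(x)[h,h,h] \big|
\;\le\; 2\,\big( D^{2}\varphi(x)[h,h] \big)^{3/2}
\qquad \text{for all } x \in \operatorname{dom}\varphi \text{ and all directions } h.
\]
Self-concordance of $\phi(x)-\ln\det(x)$ is what typically makes barriers built from such functions, including quantum-entropy-type terms, amenable to interior-point methods.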

How the augmented Lagrangian algorithm can deal with an infeasible convex quadratic optimization problem

This paper analyses the behavior of the augmented Lagrangian algorithm when it deals with an infeasible convex quadratic optimization problem. It is shown that the algorithm finds a point that, on the one hand, satisfies the constraints shifted by the smallest possible shift that makes them feasible and, on the other hand, minimizes the objective …
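In symbols (a sketch; the inequality form of the constraints and the norm measuring the shift are assumptions of this illustration), for an infeasible problem $\min\{\tfrac{1}{2}x^{\top}Qx + q^{\top}x : Ax \le b\}$ the point found solves the nearest feasible instance:
\[
\bar{s} \in \arg\min\big\{\, \|s\| \;:\; \{x : Ax \le b + s\} \neq \emptyset \,\big\},
\qquad
\min\big\{\, \tfrac{1}{2}x^{\top}Qx + q^{\top}x \;:\; Ax \le b + \bar{s} \,\big\}.
\]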

On the Sublinear Convergence Rate of Multi-Block ADMM

The alternating direction method of multipliers (ADMM) is widely used in solving structured convex optimization problems. Despite its success in practice, the convergence of the standard ADMM for minimizing the sum of $N$ ($N\geq 3$) convex functions whose variables are linked by linear constraints remained unclear for a long time. Recently, Chen …
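The standard multi-block ADMM in question is the natural Gauss-Seidel extension of the two-block scheme: for $\min\{\sum_{i=1}^{N} f_i(x_i) : \sum_{i=1}^{N} A_i x_i = b\}$ with augmented Lagrangian $\mathcal{L}_{\beta}$ and penalty $\beta > 0$, each sweep updates
\[
x_{i}^{k+1} = \arg\min_{x_i}\;
\mathcal{L}_{\beta}\big(x_{1}^{k+1}, \ldots, x_{i-1}^{k+1},\, x_{i},\, x_{i+1}^{k}, \ldots, x_{N}^{k};\, \lambda^{k}\big),
\qquad i = 1, \ldots, N,
\]
\[
\lambda^{k+1} = \lambda^{k} - \beta\Big(\sum_{i=1}^{N} A_{i} x_{i}^{k+1} - b\Big).
\]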