Faster Lagrangian-based methods: a unified prediction-correction framework

Motivated by the prediction-correction framework constructed by He and Yuan [SIAM J. Numer. Anal. 50: 700-709, 2012], we propose a unified prediction-correction framework to accelerate Lagrangian-based methods. More precisely, for strongly convex optimization, the general linearized Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with a step size for the Lagrange multiplier not … Read more
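
As a rough illustration of the prediction-correction template (not the specific methods analyzed in the paper), the sketch below runs a proximally regularized Lagrangian predictor step followed by a relaxation-type correction on a toy equality-constrained problem; the parameters r, s, and alpha are illustrative choices, not values from the paper.

```python
# Toy saddle point: L(x, mu) = 0.5*x**2 + mu*(x - 1),
# i.e. minimize 0.5*x**2 subject to x = 1; solution (x*, mu*) = (1, -1).
x, mu = 0.0, 0.0
r, s, alpha = 1.0, 1.0, 0.9   # proximal weight, dual step, correction factor

for k in range(200):
    # Prediction: a proximally regularized primal step, then a dual step.
    x_pred = (r * x - mu) / (1.0 + r)   # argmin_x L(x, mu) + (r/2)(x - x_k)^2
    mu_pred = mu + s * (x_pred - 1.0)
    # Correction: relax the iterate toward the predictor.
    x += alpha * (x_pred - x)
    mu += alpha * (mu_pred - mu)

print(x, mu)   # approx (1.0, -1.0)
```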

Global Complexity Bound of a Proximal ADMM for Linearly-Constrained Nonseparable Nonconvex Composite Programming

This paper proposes and analyzes a dampened proximal alternating direction method of multipliers (DP.ADMM) for solving linearly-constrained nonconvex optimization problems where the smooth part of the objective function is nonseparable. Each iteration of DP.ADMM consists of: (i) a sequence of partial proximal augmented Lagrangian (AL) updates, (ii) an under-relaxed Lagrange multiplier update, and (iii) a … Read more
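
A minimal sketch of the first two ingredients named above, on a convex two-block toy problem: alternating augmented Lagrangian block updates followed by an under-relaxed multiplier step. The proximal terms and the nonconvex analysis of DP.ADMM are omitted here; rho and theta are illustrative values.

```python
# Toy two-block problem: min 0.5*(x-3)**2 + 0.5*(y+1)**2  s.t.  x + y = 1.
# Closed-form solution: (x*, y*, mu*) = (2.5, -1.5, 0.5).
x, y, mu = 0.0, 0.0, 0.0
rho, theta = 1.0, 0.5          # penalty parameter, under-relaxation in (0, 1)

for k in range(300):
    # Augmented Lagrangian block updates, one block at a time.
    x = (3.0 - mu + rho * (1.0 - y)) / (1.0 + rho)
    y = (-1.0 - mu + rho * (1.0 - x)) / (1.0 + rho)
    # Under-relaxed (dampened) multiplier update.
    mu += theta * rho * (x + y - 1.0)

print(x, y, mu)   # approx (2.5, -1.5, 0.5)
```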

Inertial-relaxed splitting for composite monotone inclusions

In a spirit similar to the extension of the proximal point method developed by Alves et al. [alvegm20], we propose in this work an inertial-relaxed primal-dual splitting method to decompose the minimization of the sum of three convex functions, one of them smooth, with a general coupling subspace. A … Read more
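
For orientation, here is a minimal sketch of a generic inertial-relaxed proximal iteration (not the paper's primal-dual splitting): an inertial extrapolation followed by a relaxed proximal step, on a function whose prox is available in closed form. The parameters lam, beta, and rho are illustrative.

```python
import numpy as np

# Inertial-relaxed proximal point iteration on a simple convex function
# f(x) = 0.5 * ||x - c||^2, whose prox has the closed form below.
c = np.array([2.0, -1.0])
lam, beta, rho = 1.0, 0.3, 0.8   # prox step, inertia, relaxation (illustrative)

def prox_f(v, lam):
    # prox_{lam f}(v) = argmin_x f(x) + (1/(2*lam)) * ||x - v||^2
    return (v + lam * c) / (1.0 + lam)

x_prev = np.zeros(2)
x = np.zeros(2)
for k in range(100):
    y = x + beta * (x - x_prev)                          # inertial extrapolation
    x_prev, x = x, (1 - rho) * y + rho * prox_f(y, lam)  # relaxed prox step

print(x)   # approx c = [2, -1]
```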

Dual descent ALM and ADMM

Classical primal-dual algorithms attempt to solve $\max_{\mu}\min_{x} \mathcal{L}(x,\mu)$ by alternately minimizing over the primal variable $x$ through primal descent and maximizing over the dual variable $\mu$ through dual ascent. However, when $\mathcal{L}(x,\mu)$ is highly nonconvex with complex constraints in $x$, the minimization over $x$ may not achieve global optimality, and hence the dual ascent step … Read more
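
The classical scheme described above can be made concrete on a convex toy problem where the inner minimization is exact; under nonconvexity, this inner step is precisely what can fail. A minimal sketch, with an assumed quadratic objective and a single linear constraint:

```python
import numpy as np

# Classical primal descent / dual ascent on L(x, mu) = f(x) + mu * (a@x - b)
# with f(x) = 0.5 * ||x||^2 (convex toy; the inner minimization is exact here).
a, b = np.array([1.0, 2.0]), 3.0
mu, s = 0.0, 0.2                      # dual variable, dual step size

for k in range(200):
    # Primal step: x(mu) = argmin_x L(x, mu) = -mu * a  (closed form here).
    x = -mu * a
    # Dual ascent on the (concave) dual function: its gradient is the residual.
    mu += s * (a @ x - b)

print(x, mu)   # KKT point: mu = -3/5, x = [0.6, 1.2], with a@x = b
```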

A Framework of Inertial Alternating Direction Method of Multipliers for Non-Convex Non-Smooth Optimization

In this paper, we propose an algorithmic framework, dubbed inertial alternating direction method of multipliers (iADMM), for solving a class of nonconvex nonsmooth multiblock composite optimization problems with linear constraints. Our framework employs the general majorization-minimization (MM) principle to update each block of variables so as to not only unify the convergence analysis of previous … Read more
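
To illustrate the MM principle that drives each block update, here is a single-block sketch (not the multiblock iADMM itself): an inertial extrapolation followed by minimization of a quadratic majorizer of a smooth term, which reduces to a gradient step. The constants L and beta are illustrative.

```python
import numpy as np

# One inertial majorization-minimization (MM) update: extrapolate, then
# minimize a quadratic majorizer of the smooth term at the extrapolation.
c = np.array([1.0, 2.0, 3.0])
L, beta = 1.0, 0.4             # Lipschitz constant of grad f, inertial weight

def grad_f(x):                 # f(x) = 0.5 * ||x - c||^2, L-smooth with L = 1
    return x - c

x_prev = np.zeros(3)
x = np.zeros(3)
for k in range(100):
    z = x + beta * (x - x_prev)            # inertial extrapolation point
    # Majorizer at z: f(z) + <grad_f(z), u - z> + (L/2) * ||u - z||^2,
    # minimized in closed form by a gradient step from z.
    x_prev, x = x, z - grad_f(z) / L

print(x)   # approx c
```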

Decomposition Methods for Global Solutions of Mixed-Integer Linear Programs

This paper introduces two decomposition-based methods for two-block mixed-integer linear programs (MILPs), which aim to exploit the separable structure of the original problem by solving a sequence of lower-dimensional MILPs. The first method is based on the $\ell_1$-augmented Lagrangian method (ALM), and the second one is based on a modified alternating direction method of … Read more
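
A minimal sketch of the $\ell_1$-ALM idea on a tiny binary program, with brute-force enumeration standing in for the lower-dimensional MILP subproblems; rho and sigma are illustrative, and the multiplier is kept in $[-\rho, \rho]$ so the $\ell_1$ penalty remains exact.

```python
import itertools
import numpy as np

# Tiny binary program: min c@x  s.t.  A@x = b,  x in {0,1}^n.
c = np.array([1.0, 2.0, -1.0, 3.0])
A = np.array([[1.0, 1.0, 1.0, 1.0]])
b = np.array([2.0])
rho, sigma = 10.0, 1.0        # l1 penalty weight, dual step (illustrative)
mu = np.zeros(1)

def l1_al(x, mu):
    # l1-augmented Lagrangian: c@x + mu@(A@x - b) + rho * ||A@x - b||_1
    r = A @ x - b
    return c @ x + mu @ r + rho * np.abs(r).sum()

for k in range(20):
    # Inner problem: minimize the l1-augmented Lagrangian over binaries.
    # (Brute force stands in for the MILP subproblem solves.)
    x = min((np.array(v, dtype=float) for v in itertools.product([0, 1], repeat=4)),
            key=lambda v: l1_al(v, mu))
    # Multiplier update, projected onto [-rho, rho] to keep the penalty exact.
    mu = np.clip(mu + sigma * (A @ x - b), -rho, rho)

print(x, mu)   # x = [1, 0, 1, 0] is feasible and optimal (cost 0)
```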

ADMM and inexact ALM: the QP case

Embedding randomization procedures in the Alternating Direction Method of Multipliers (ADMM) has recently attracted increasing interest as a remedy for the fact that the direct $n$-block generalization of ADMM ($n \geq 3$) is not necessarily convergent. Even though, in practice, the introduction of such techniques could mitigate the diverging behaviour of the … Read more
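
As a sketch of one such randomization procedure (a randomly permuted block-update order, one plausible instance rather than the paper's specific scheme), consider a separable three-block QP with a single coupling constraint:

```python
import numpy as np

# Randomly permuted 3-block ADMM on a separable QP:
#   min sum_i 0.5*(x_i - a_i)^2   s.t.  x_1 + x_2 + x_3 = b.
# Closed-form solution: mu* = (sum(a) - b)/3 = 1, x* = a - mu* = [0, 1, 2].
rng = np.random.default_rng(0)
a, b, rho = np.array([1.0, 2.0, 3.0]), 3.0, 1.0
x, mu = np.zeros(3), 0.0

for k in range(200):
    for i in rng.permutation(3):          # fresh random update order each sweep
        s_rest = x.sum() - x[i]           # current value of the other blocks
        x[i] = (a[i] - mu + rho * (b - s_rest)) / (1.0 + rho)
    mu += rho * (x.sum() - b)             # standard multiplier update

print(x, mu)   # approx x = [0, 1, 2], mu = 1
```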

A Distributed and Secure Algorithm for Computing Dominant SVD Based on Projection Splitting

In this paper, we propose and study a distributed and secure algorithm for computing dominant (or truncated) singular value decompositions (SVD) of large and distributed data matrices. We consider the scenario where each node privately holds a subset of columns and only exchanges “safe” information with other nodes in a collaborative effort to calculate a … Read more
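
For intuition about exchanging only "safe" aggregates, here is a generic distributed block power iteration for the dominant SVD (not the paper's projection-splitting algorithm): each node shares only the small products formed from its column block, never the raw columns themselves.

```python
import numpy as np

# Dominant SVD by distributed block power iteration: each "node" holds a
# column block A_i and exchanges only the n-by-k products A_i @ (A_i.T @ U).
rng = np.random.default_rng(1)
blocks = [rng.standard_normal((50, 20)) for _ in range(4)]   # 4 nodes
k = 3
U = np.linalg.qr(rng.standard_normal((50, k)))[0]

for it in range(100):
    # Each node contributes its local term of (A A^T) U = sum_i A_i A_i^T U.
    S = sum(Ai @ (Ai.T @ U) for Ai in blocks)
    U = np.linalg.qr(S)[0]               # re-orthonormalize the k-dim basis

A = np.hstack(blocks)
sigma = np.linalg.norm(A.T @ U, axis=0)  # leading singular value estimates
print(np.sort(sigma)[::-1])
print(np.linalg.svd(A, compute_uv=False)[:k])  # compare with the true values
```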

Faster Lagrangian-Based Methods in Convex Optimization

In this paper, we aim at unifying, simplifying, and improving the convergence rate analysis of Lagrangian-based methods for convex optimization problems. We first introduce the notion of nice primal algorithmic map, which plays a central role in the unification and in the simplification of the analysis of all Lagrangian-based methods. Equipped with a nice primal … Read more

Minimization of L1 over L2 for sparse signal recovery with convergence guarantee

The ratio of the $L_1$ and $L_2$ norms, denoted by $L_1/L_2$, has become attractive due to its scale-invariance when approximating the $L_0$ norm to promote sparsity. In this paper, we incorporate the $L_1/L_2$ formalism into an unconstrained model in order to deal with both noiseless and noisy observations. To design an efficient algorithm, we derive … Read more
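
A quick numerical illustration of the scale-invariance that motivates the $L_1/L_2$ model, together with the gradient of the ratio away from the kinks of the $L_1$ norm (both elementary facts, not the paper's algorithm):

```python
import numpy as np

# The L1/L2 ratio is scale-invariant, unlike the L1 norm alone.
def l1_over_l2(x):
    return np.abs(x).sum() / np.linalg.norm(x)

x = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
print(l1_over_l2(x), l1_over_l2(10.0 * x))   # identical values

# Gradient of L1/L2 where no component of x is zero (quotient rule):
#   grad = sign(x)/||x||_2 - (||x||_1 / ||x||_2^3) * x
def grad_l1_over_l2(x):
    n1, n2 = np.abs(x).sum(), np.linalg.norm(x)
    return np.sign(x) / n2 - n1 * x / n2**3
```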