Error Bound and Isocost Imply Linear Convergence of DCA-based Algorithms to D-stationarity

We consider a class of structured nonsmooth difference-of-convex minimization problems, in which the objective can be written as the difference of two possibly nonsmooth convex functions, the second one being the maximum of finitely many smooth convex functions. We propose two proximal difference-of-convex algorithms with extrapolation for potential acceleration of the convergence to a weak/standard d-stationary …
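The truncated abstract already fixes the problem class; written out (in notation chosen here for illustration), it is
\[
  \min_{x \in \mathbb{R}^n} \; F(x) := g(x) - \max_{1 \le i \le m} h_i(x),
\]
where $g$ is convex and possibly nonsmooth and each $h_i$ is convex and smooth. A generic proximal DCA step with extrapolation, given only as a sketch (the momentum parameter $\beta_k$, the proximal weight $\mu > 0$, and the rule for selecting the active index $i_k$ are assumptions here, not the paper's exact scheme), reads
\[
  y^k = x^k + \beta_k (x^k - x^{k-1}), \qquad
  x^{k+1} \in \arg\min_{x} \Big\{ g(x) - \nabla h_{i_k}(x^k)^{\top} x + \tfrac{\mu}{2}\|x - y^k\|^2 \Big\},
  \qquad i_k \in \arg\max_{i} h_i(x^k).
\]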

On the Linear Convergence to Weak/Standard D-stationary Points of DCA-based Algorithms for Structured Nonsmooth DC Programming

We consider a class of structured nonsmooth difference-of-convex minimization problems. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave part. Our focus is on algorithms that compute a (weak or standard) d(irectional)-stationary point, as advocated in a recent work of Pang et al. …
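For readers meeting the terminology for the first time, the stationarity notions can be stated roughly as follows (a paraphrase of the standard definitions, not a quotation of the paper): for $F(x) = g(x) - \max_{1 \le i \le m} h_i(x)$ minimized over a convex set $X$, with active index set $\mathcal{A}(\bar{x}) = \{ i : h_i(\bar{x}) = \max_j h_j(\bar{x}) \}$, the point $\bar{x}$ is d-stationary when
\[
  0 \in \partial g(\bar{x}) - \nabla h_i(\bar{x}) + \mathcal{N}_X(\bar{x}) \quad \text{for every } i \in \mathcal{A}(\bar{x}),
\]
which is equivalent to the directional condition $F'(\bar{x}; x - \bar{x}) \ge 0$ for all $x \in X$, while a weak d-stationary point only requires the inclusion to hold for some $i \in \mathcal{A}(\bar{x})$.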

Minimization of L1 over L2 for sparse signal recovery with convergence guarantee

The ratio of the $L_1$ and $L_2$ norms, denoted by $L_1/L_2$, is attractive as a scale-invariant surrogate of the $L_0$ norm for promoting sparsity. In this paper, we incorporate the $L_1/L_2$ formalism into an unconstrained model in order to deal with both noiseless and noisy observations. To design an efficient algorithm, we derive …
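Scale invariance here means that $\|cx\|_1/\|cx\|_2 = \|x\|_1/\|x\|_2$ for every $c \ne 0$, so the ratio, unlike the $L_1$ norm alone, does not shrink when the signal is rescaled. The truncated abstract does not spell out the unconstrained model; one natural way to pose it (an assumption here, with $\rho > 0$ a fidelity weight, not necessarily the paper's exact formulation) is
\[
  \min_{x \ne 0} \; \frac{\|x\|_1}{\|x\|_2} + \frac{\rho}{2}\|Ax - b\|_2^2,
\]
which handles noisy data directly and recovers the equality-constrained setting $Ax = b$ in the limit of large $\rho$.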

On the Linear Convergence of Difference-of-convex Algorithms for Nonsmooth DC Programming

In this paper we consider the linear convergence of algorithms for minimizing difference-of-convex functions with convex constraints. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave component. Our focus is on algorithms that compute (weak and standard) d(irectional)-stationary points …
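As background, the classical DCA for $F = g - h$ with $g, h$ convex linearizes the concave part at the current iterate; a minimal sketch (the constraint set $X$ and the subgradient selection are kept generic) is
\[
  \xi^k \in \partial h(x^k), \qquad x^{k+1} \in \arg\min_{x \in X} \; g(x) - \langle \xi^k, x \rangle .
\]
When $h = \max_{1 \le i \le m} h_i$ with smooth $h_i$, the subgradients available at $x^k$ include the gradients $\nabla h_i(x^k)$ of the active pieces, and, broadly speaking, how the algorithm chooses among the (nearly) active pieces is what separates methods that reach weak d-stationary points from those that reach standard d-stationary points.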

On Glowinski’s Open Question on the Alternating Direction Method of Multipliers

The alternating direction method of multipliers (ADMM) was proposed by Glowinski and Marrocco in 1975, and it has been widely used in a broad spectrum of areas, especially in some sparsity-driven application domains. In 1982, Fortin and Glowinski suggested enlarging the range of the step size for updating the dual variable in ADMM from …
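For context, the two-block setting in which this dual step size appears can be sketched as follows (generic notation, not taken from the paper): for $\min\, f(x) + g(y)$ subject to $Ax + By = b$, with penalty $\beta > 0$, dual step size factor $\gamma > 0$, and augmented Lagrangian $\mathcal{L}_\beta(x,y,\lambda) = f(x) + g(y) - \lambda^{\top}(Ax + By - b) + \tfrac{\beta}{2}\|Ax + By - b\|^2$, one ADMM iteration is
\[
  x^{k+1} \in \arg\min_x \mathcal{L}_\beta(x, y^k, \lambda^k), \quad
  y^{k+1} \in \arg\min_y \mathcal{L}_\beta(x^{k+1}, y, \lambda^k), \quad
  \lambda^{k+1} = \lambda^k - \gamma\beta\,(Ax^{k+1} + By^{k+1} - b);
\]
the question addressed here is how large the factor $\gamma$ may be taken while convergence is retained.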

Convergence rate and iteration complexity of the alternating direction method of multipliers with a substitution procedure for separable convex programming

Recently, in [17] we showed for the first time that the Douglas-Rachford alternating direction method of multipliers (ADMM) can be combined with a Gaussian back substitution procedure for solving a convex minimization model with a general separable structure. This paper is a further study of theoretical aspects of this theme. We first derive a general algorithmic framework …

Alternating Direction Method with Gaussian Back Substitution for Separable Convex Programming

We consider linearly constrained separable convex programming problems whose objective function is separable into m individual convex functions without coupled variables. The alternating direction method (ADM) has been well studied in the literature for the special case m=2, but the convergence of its extension to the general case m>=3 is still open. In this paper, …
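Written out (with the constraint sets $\mathcal{X}_i$ added here as the usual, but assumed, ingredient), the model in question is
\[
  \min \; \sum_{i=1}^{m} \theta_i(x_i) \quad \text{s.t.} \quad \sum_{i=1}^{m} A_i x_i = b, \qquad x_i \in \mathcal{X}_i, \; i = 1, \dots, m,
\]
with each $\theta_i$ convex; the Gaussian back substitution acts as a correction step appended to the straightforward m-block alternating sweep, which is what allows a convergence guarantee to be established.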

A splitting method for separable convex programming with linking linear constraints

We consider the separable convex programming problem with linking linear constraints, where the objective function is the sum of m individual functions without coupled variables. The special case with m=2 has been well studied in the literature and some algorithms are very influential, e.g., the alternating direction method. The research for …

Recovering low-rank and sparse components of matrices from incomplete and noisy observations

Many applications arising in a variety of fields can be modeled as the task of recovering the low-rank and sparse components of a given matrix. Recently, it has been discovered that this NP-hard task can be accomplished well, both theoretically and numerically, by heuristically solving a convex relaxation problem in which the widely acknowledged nuclear norm and …
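The truncated abstract stops before the relaxation is stated; a commonly used convex model for recovering a low-rank component $L$ and a sparse component $S$ from incomplete, noisy observations of a matrix $M$ (given here only as an illustration, not necessarily the paper's exact formulation) is
\[
  \min_{L,\,S} \; \|L\|_* + \tau \|S\|_1 \quad \text{s.t.} \quad \|P_\Omega(L + S - M)\|_F \le \delta,
\]
where $\|\cdot\|_*$ is the nuclear norm, $\tau > 0$ balances the two terms, $\Omega$ indexes the observed entries, $P_\Omega$ is the projection onto those entries, and $\delta$ reflects the noise level.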

Alternating direction algorithms for total variation deconvolution in image reconstruction

Image restoration and reconstruction from blurry and noisy observations is known to be ill-posed. To stabilize the recovery, total variation (TV) regularization was introduced by Rudin, Osher and Fatemi in \cite{LIR92}; it has demonstrated superiority in preserving image edges. However, the nondifferentiability of TV makes the underlying optimization problems difficult to solve. In this paper, …
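For reference, a standard discrete TV-regularized deconvolution model behind such methods (the blur operator $K$, observed image $f$, and weight $\mu > 0$ are generic symbols, not necessarily the paper's notation) is
\[
  \min_{u} \; \sum_{i} \|D_i u\|_2 + \frac{\mu}{2}\|Ku - f\|_2^2,
\]
where $D_i u$ denotes the discrete gradient of $u$ at pixel $i$; alternating direction schemes typically handle the nonsmooth TV term by introducing auxiliary variables $w_i = D_i u$ and splitting the resulting subproblems.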