A globally and linearly convergent PGM for zero-norm regularized quadratic optimization with sphere constraint

This paper is concerned with the zero-norm regularized quadratic optimization with a sphere constraint, which has an important application in sparse eigenvalue problems. For this class of nonconvex and nonsmooth optimization problems, we establish the Kurdyka–Łojasiewicz (KL) property of exponent 1/2 for the extended-valued objective function and develop a globally and linearly convergent proximal gradient method … Read more
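
The truncated abstract does not spell out the model or the proximal step, so the following is only a minimal sketch. It assumes the illustrative formulation $\min_{\|x\|_2=1} -x^\top Ax+\lambda\|x\|_0$ with a symmetric matrix $A$ (a sparse eigenvalue surrogate) and a textbook proximal gradient iteration with a fixed step size; the function names and step-size rule below are made up for the illustration and are not taken from the paper.

```python
import numpy as np

def prox_l0_sphere(y, t, lam):
    """Proximal map of t*(lam*||x||_0 + indicator of the unit sphere) at y.

    Solves  min_{||x||_2 = 1}  (1/(2t))*||x - y||^2 + lam*||x||_0  by noting that
    for a fixed support size k the best support is the k largest |y_i| and the
    best point is the normalized restriction of y to that support.
    """
    n = y.size
    idx = np.argsort(-np.abs(y))                 # indices sorted by |y_i|, descending
    cum = np.cumsum(y[idx] ** 2)                 # ||y_S||^2 for the top-k supports
    scores = lam * np.arange(1, n + 1) - np.sqrt(cum) / t   # objective up to constants
    k = int(np.argmin(scores)) + 1
    x = np.zeros(n)
    ys = y[idx[:k]]
    nrm = np.linalg.norm(ys)
    if nrm == 0.0:                               # degenerate case y = 0: any 1-sparse unit vector
        x[idx[0]] = 1.0
    else:
        x[idx[:k]] = ys / nrm
    return x

def pgm_sparse_eigen(A, lam, max_iter=500, tol=1e-8):
    """Proximal gradient iteration for  min_{||x||_2=1} -x^T A x + lam*||x||_0  (A symmetric)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    t = 1.0 / (2.0 * np.linalg.norm(A, 2))       # step 1/L, L = Lipschitz constant of the gradient
    for _ in range(max_iter):
        grad = -2.0 * A @ x                      # gradient of the smooth part -x^T A x
        x_new = prox_l0_sphere(x - t * grad, t, lam)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

The proximal map costs only a sort, so each iteration is dominated by the matrix-vector product, and every iterate stays exactly on the unit sphere.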

GEP-MSCRA for computing the group zero-norm regularized least squares estimator

This paper concerns the group zero-norm regularized least squares estimator which, in terms of the variational characterization of the zero-norm, can be obtained from a mathematical program with equilibrium constraints (MPEC). By developing the global exact penalty for the MPEC, this estimator is shown to arise from an exact penalization problem that not only … Read more
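
For readers unfamiliar with the variational characterization mentioned above, one standard form of it is recalled here, written for the plain zero-norm and, by analogy, for groups; the paper's exact group formulation may differ in details, and $A$, $b$, $\lambda$, $J_l$ below are illustrative symbols only:
\[
\|z\|_0=\min_{w\in\mathbb{R}^n}\Big\{\langle e,\,e-w\rangle \;:\; \langle w,\,|z|\rangle=0,\ 0\le w\le e\Big\},
\]
where $e$ is the vector of ones and $|z|$ the componentwise absolute value: each $w_i$ is forced to $0$ on the support of $z$ and can be set to $1$ elsewhere. With a group partition $J_1,\dots,J_m$ of the coordinates, replacing $|z_i|$ by $\|x_{J_l}\|$ turns the group zero-norm regularized least squares problem
\[
\min_{x}\ \tfrac{1}{2}\|Ax-b\|^2+\lambda\sum_{l=1}^{m}\mathbb{I}\big(\|x_{J_l}\|\neq 0\big)
\]
into an equivalent MPEC in $(x,w)$ whose complementarity constraint is $\sum_{l=1}^{m} w_l\,\|x_{J_l}\|=0$; the exact penalty moves this constraint into the objective.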

Error bounds for rank constrained optimization problems and applications

This paper is concerned with the rank constrained optimization problem whose feasible set is the intersection of the rank constraint set $\mathcal{R}=\!\big\{X\in\mathbb{X}\ |\ {\rm rank}(X)\le \kappa\big\}$ and a closed convex set $\Omega$. We establish local (global) Lipschitzian-type error bounds for estimating the distance from any $X\in \Omega$ ($X\in\mathbb{X}$) to the feasible set and … Read more
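
As a small computational aside (not the error bounds themselves, which are the contribution of the paper), the distance from a matrix to the rank constraint set $\mathcal{R}$ alone has a closed form via the Eckart–Young theorem, which is what makes bounds of this type usable in practice. The sketch below assumes the Frobenius norm and uses NumPy; the function name is made up for the illustration.

```python
import numpy as np

def dist_to_rank_set(X, kappa):
    """Frobenius distance from X to {Z : rank(Z) <= kappa} and the nearest point.

    By the Eckart-Young theorem, the nearest rank-<=kappa matrix is obtained by
    truncating the SVD, and the distance is the norm of the discarded singular values.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_proj = (U[:, :kappa] * s[:kappa]) @ Vt[:kappa, :]   # best rank-<=kappa approximation
    dist = np.linalg.norm(s[kappa:])                      # tail singular values
    return dist, X_proj
```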

Exact penalty decomposition method for zero-norm minimization based on MPEC formulation

We reformulate the zero-norm minimization problem as an equivalent mathematical program with equilibrium constraints and establish that its penalty problem, induced by adding the complementarity constraint to the objective, is exact. Then, by the special structure of the exact penalty problem, we propose a decomposition method that can seek a global optimal solution of the … Read more
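
To make the reformulation concrete, for a generic closed convex constraint set $\Omega$ (illustrative; the paper states its own assumptions), the zero-norm problem, its MPEC reformulation via the variational characterization of $\|\cdot\|_0$, and the penalty problem obtained by adding the complementarity constraint to the objective read
\[
\min_{x\in\Omega}\ \|x\|_0
\quad\Longleftrightarrow\quad
\min_{x\in\Omega,\ 0\le w\le e}\ \langle e,\,e-w\rangle\ \ \text{s.t.}\ \ \langle w,\,|x|\rangle=0,
\]
\[
\min_{x\in\Omega,\ 0\le w\le e}\ \langle e,\,e-w\rangle+\rho\,\langle w,\,|x|\rangle,\qquad \rho>0.
\]
For fixed $x$ the $w$-block separates and has the closed-form solution $w_i=1$ if $\rho\,|x_i|<1$ and $w_i=0$ otherwise, which is what makes an alternating decomposition over $(x,w)$ natural; the exactness of the penalty for all $\rho$ above a threshold is the result established in the paper.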

A corrected semi-proximal ADMM for multi-block convex optimization and its application to DNN-SDPs

In this paper we propose a corrected semi-proximal ADMM (alternating direction method of multipliers) for general $p$-block $(p\!\ge 3)$ convex optimization problems with linear constraints, aiming to resolve the dilemma that almost all existing modified versions of the directly extended ADMM, although equipped with convergence guarantees, often perform substantially worse than the directly extended … Read more
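
For context (standard notation only; the correction step itself is the content of the paper and is not reproduced here), the problem class and the directly extended semi-proximal ADMM that the corrected method builds on can be written as
\[
\min_{x_1,\dots,x_p}\ \sum_{i=1}^{p} f_i(x_i)
\quad\text{s.t.}\quad \sum_{i=1}^{p} A_i x_i=b,
\qquad f_i\ \text{closed proper convex},
\]
with augmented Lagrangian
\[
L_\sigma(x_1,\dots,x_p;\lambda)=\sum_{i=1}^{p} f_i(x_i)+\Big\langle \lambda,\ \sum_{i=1}^{p}A_i x_i-b\Big\rangle+\frac{\sigma}{2}\Big\|\sum_{i=1}^{p}A_i x_i-b\Big\|^2 .
\]
The directly extended scheme sweeps the blocks in Gauss–Seidel order and then updates the multiplier,
\[
x_i^{k+1}\in\arg\min_{x_i}\ L_\sigma\big(x_1^{k+1},\dots,x_{i-1}^{k+1},x_i,x_{i+1}^{k},\dots,x_p^{k};\lambda^{k}\big)+\tfrac{1}{2}\|x_i-x_i^{k}\|_{\mathcal{T}_i}^{2},\qquad
\lambda^{k+1}=\lambda^{k}+\tau\sigma\Big(\sum_{i=1}^{p}A_i x_i^{k+1}-b\Big),
\]
where the semi-proximal terms with $\mathcal{T}_i\succeq 0$ regularize the subproblems and $\tau$ is the dual step length. This direct extension is known to be possibly divergent for $p\ge 3$; the corrected scheme of the paper adds a correction step to this update to guarantee convergence.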

Approximation of rank function and its application to the nearest low-rank correlation matrix

The rank function ${\rm rank}(\cdot)$ is neither continuous nor convex, which brings much difficulty to solving rank minimization problems. In this paper, we provide a unified framework for constructing approximation functions of ${\rm rank}(\cdot)$ and study their favorable properties. In particular, with two families of approximation functions, we propose a convex relaxation method for the … Read more
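
To fix ideas, the target application in its simplest form, together with one member of a commonly used approximation family (whether this particular family coincides with either of the two families used in the paper is not stated in the truncated abstract), reads
\[
\min_{X\in\mathbb{S}^{n}}\ \tfrac{1}{2}\|X-C\|_F^{2}
\quad\text{s.t.}\quad X\succeq 0,\ \ X_{ii}=1\ (i=1,\dots,n),\ \ {\rm rank}(X)\le r,
\]
for a given symmetric data matrix $C$ and target rank $r$, with
\[
{\rm rank}(X)\ \approx\ \phi_{\varepsilon}(X)=\sum_{i=1}^{n}\frac{\sigma_i(X)}{\sigma_i(X)+\varepsilon},\qquad \varepsilon>0,
\]
which increases to ${\rm rank}(X)$ as $\varepsilon\downarrow 0$, since each ratio tends to $1$ on the positive singular values and vanishes on the zero ones.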