Convergence Study on the Proximal Alternating Direction Method with Larger Step Size

The alternating direction method of multipliers (ADMM) is a popular method for separable convex programming with linear constraints, and the proximal ADMM is an important variant of it. Previous studies show that the relaxation factor $\gamma\in (0, \frac{1+\sqrt{5}}{2})$ introduced by Fortin and Glowinski for the ADMM is also valid for the proximal ADMM. In this paper, we …
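For orientation, a common way to write a proximal ADMM for $\min_{x,y}\{f(x)+g(y): Ax+By=b\}$, in our own (assumed) notation rather than the paper's, is

$$\begin{aligned} x^{k+1} &= \operatorname*{arg\,min}_x\; \mathcal{L}_\beta(x, y^k, \lambda^k) + \tfrac{1}{2}\|x-x^k\|_R^2,\\ y^{k+1} &= \operatorname*{arg\,min}_y\; \mathcal{L}_\beta(x^{k+1}, y, \lambda^k) + \tfrac{1}{2}\|y-y^k\|_S^2,\\ \lambda^{k+1} &= \lambda^k - \gamma\beta\,(Ax^{k+1}+By^{k+1}-b), \end{aligned}$$

where $\mathcal{L}_\beta$ is the augmented Lagrangian with penalty $\beta>0$, $R,S\succeq 0$ are proximal matrices, and $\gamma$ is the Fortin–Glowinski relaxation factor drawn from the interval above.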

Computing Weighted Analytic Center for Linear Matrix Inequalities Using Infeasible Newton’s Method

We study the problem of computing the weighted analytic center for a system of linear matrix inequality constraints. The problem can be solved using the standard Newton's method. However, this approach requires that a starting point in the interior of the feasible region be given, or that a Phase I problem be solved. We address the problem …
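For reference, the weighted analytic center admits a standard formulation (our notation; the abstract is truncated): given linear matrix inequalities $A^{(j)}(x)=A_0^{(j)}+x_1A_1^{(j)}+\cdots+x_nA_n^{(j)}\succ 0$ for $j=1,\dots,q$ and weights $\omega_j>0$, it is the minimizer

$$x_{ac}(\omega)=\operatorname*{arg\,min}_x\; -\sum_{j=1}^{q}\omega_j\log\det A^{(j)}(x),$$

and an infeasible-start Newton's method, as the title suggests, works toward this minimizer without demanding an interior starting point.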

On Nonconvex Decentralized Gradient Descent

Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for \emph{convex} consensus optimization. However, our understanding of their behavior for \emph{nonconvex} consensus optimization is more limited. When we lose convexity, we cannot expect our algorithms to always return global solutions, though they sometimes still do. Somewhat surprisingly, …
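As a concrete illustration, here is a minimal sketch of decentralized gradient descent (DGD) with a fixed step size, the kind of algorithm whose nonconvex behavior is at issue here. The ring topology, Metropolis-style mixing matrix, step size, and double-well local losses are our own illustrative assumptions, not taken from the paper:

```python
# Minimal DGD sketch: each agent mixes with its neighbors, then takes a
# gradient step on its private (nonconvex) local loss.
import numpy as np

n, d = 4, 2                        # number of agents, variable dimension
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))  # each agent's private data

def local_grad(i, x):
    # gradient of the nonconvex local loss f_i(x) = sum_j (x_j^2 - t_ij)^2
    return 4.0 * x * (x**2 - targets[i])

# Doubly stochastic mixing matrix for a 4-agent ring
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

X = rng.normal(size=(n, d))        # row i holds agent i's local copy
alpha = 0.01                       # fixed step size
for _ in range(500):
    G = np.array([local_grad(i, X[i]) for i in range(n)])
    X = W @ X - alpha * G          # average with neighbors, then descend

print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```

With a fixed step size the local copies only reach approximate consensus, which is exactly the kind of behavior a convergence analysis for nonconvex DGD has to quantify.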

A Parameterized Proximal Point Algorithm for Separable Convex Optimization

In this paper, we develop a Parameterized Proximal Point Algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case $O(1/t)$ convergence rate, where $t$ is the iteration number. By properly choosing the algorithm parameters, numerical experiments …
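For orientation, the classical proximal point update for minimizing a convex function $\theta$ reads

$$x^{k+1} = \operatorname*{arg\,min}_x \left\{ \theta(x) + \tfrac{1}{2}\|x - x^k\|_M^2 \right\},$$

where $M \succ 0$ defines the proximal metric; a parameterized PPA of the kind described here can be read as choosing a structured, parameter-dependent $M$ so that the subproblems decouple across the separable blocks. This reading is our gloss on the truncated abstract, not a statement of the paper's exact scheme.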

A TVSCAD approach for image deblurring with impulsive noise

We consider the image deblurring problem in the presence of impulsive noise. It is known that \emph{total variation} (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For high-level impulsive noise, TVL1 works poorly. The reason is that all data, both …
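For background, the TVL1 model mentioned above is

$$\min_u \; \mathrm{TV}(u) + \mu \|Ku - f\|_1,$$

where $K$ is the blurring operator, $f$ the observed image, and $\mu > 0$ a fidelity weight. A TVSCAD approach, as the title suggests, would replace the $\ell_1$ data-fitting term with the nonconvex SCAD penalty $\sum_i \phi_{a,\lambda}(|(Ku-f)_i|)$, where in the standard Fan–Li form

$$\phi_{a,\lambda}(t) = \begin{cases} \lambda t, & 0 \le t \le \lambda,\\ \dfrac{2a\lambda t - t^2 - \lambda^2}{2(a-1)}, & \lambda < t \le a\lambda,\\ \dfrac{(a+1)\lambda^2}{2}, & t > a\lambda, \end{cases} \qquad a > 2,$$

so that large residuals (the outliers produced by impulsive noise) incur only a constant penalty rather than a linearly growing one. The exact model is our inference from the title and the truncated abstract.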

Convergence rate bounds for a proximal ADMM with over-relaxation stepsize parameter for solving nonconvex linearly constrained problems

This paper establishes convergence rate bounds for a variant of the proximal alternating direction method of multipliers (ADMM) for solving nonconvex linearly constrained optimization problems. The variant of the proximal ADMM allows the inclusion of an over-relaxation stepsize parameter belonging to the interval (0,2). To the best of our knowledge, all related papers in the …
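One common way to realize such a stepsize, and our reading of the truncated abstract rather than a quotation of the paper's scheme, is to scale the multiplier update of the proximal ADMM shown under the first entry above by $\theta \in (0,2)$ in place of $\gamma$:

$$\lambda^{k+1} = \lambda^k - \theta\beta\,(Ax^{k+1} + By^{k+1} - b).$$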

Convex Optimization with ALADIN

This paper presents novel convergence results for the Augmented Lagrangian based Alternating Direction Inexact Newton method (ALADIN) in the context of distributed convex optimization. It is shown that ALADIN converges for a large class of convex optimization problems from any starting point to minimizers without needing line-search or other globalization routines. Under additional regularity assumptions, …

Asynchronous Parallel Algorithms for Nonconvex Big-Data Optimization. Part I: Model and Convergence

We propose a novel asynchronous parallel algorithmic framework for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed framework hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of modern computational architectures and asynchronous …
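Schematically, an asynchronous successive convex approximation update of the kind the abstract describes can be written as (our notation, assumed rather than taken from the paper)

$$x_i^{k+1} = x_i^k + \gamma\left(\hat{x}_i\!\left(x^{k-\mathbf{d}^k}\right) - x_i^k\right),$$

where a worker updates a randomly selected block $i$, $\hat{x}_i(\cdot)$ minimizes a strongly convex surrogate of the objective built at the delayed iterate $x^{k-\mathbf{d}^k}$, and the random delay vector $\mathbf{d}^k$ is precisely what the probabilistic model must capture.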

Asynchronous Parallel Algorithms for Nonconvex Big-Data Optimization. Part II: Complexity and Numerical Results

We present complexity and numerical results for a new asynchronous parallel algorithmic method for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed method hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of …

Communication-Efficient Algorithms for Decentralized and Stochastic Optimization

We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralized primal-dual …
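The underlying setting, stated generically (notation assumed), is

$$\min_{x \in X} \; \sum_{i=1}^{m} f_i(x),$$

where only agent $i$ can evaluate (sub)gradients of its private $f_i$ and agents may exchange information only with their neighbors in the network; communication efficiency then means solving this problem with as few neighbor-to-neighbor exchange rounds as possible per gradient or sample evaluation.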