Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis

Despite the rich literature, the linear convergence of the alternating direction method of multipliers (ADMM) has not been fully understood even for the convex case. For example, the linear convergence of ADMM can be empirically observed in a wide range of applications, while existing theoretical results seem to be too stringent to be satisfied or too …
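For reference, the following is a minimal sketch of the classical two-block ADMM in scaled form, applied to a hypothetical lasso instance (minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x - z = 0). The data, penalty parameter, and stopping rule are illustrative only and are not taken from the paper; the empirically fast (often linear) decay of the residuals is the behavior the abstract refers to.

```python
import numpy as np

# Illustrative two-block ADMM (scaled form) for a hypothetical lasso instance:
#   minimize 0.5*||A x - b||^2 + lam*||z||_1   subject to  x - z = 0.
rng = np.random.default_rng(0)
m, n, lam, rho = 60, 20, 0.1, 1.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)                                   # scaled dual variable
AtA_rhoI = A.T @ A + rho * np.eye(n)              # fixed across iterations
Atb = A.T @ b

for k in range(200):
    x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))               # x-update (ridge step)
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update (soft-threshold)
    u = u + x - z                                                     # dual update
    if np.linalg.norm(x - z) < 1e-10:             # primal residual typically decays fast
        break
```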

On the Linear Convergence of Difference-of-convex Algorithms for Nonsmooth DC Programming

In this paper we consider the linear convergence of algorithms for minimizing difference-of-convex functions with convex constraints. We allow nonsmoothness in both the convex and concave components of the objective function, with a finite max structure in the concave component. Our focus is on algorithms that compute (weak and standard) d(irectional)-stationary points …
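To make the setting concrete, here is a minimal sketch of a basic difference-of-convex iteration (linearize the concave part at an active piece of the max, then solve the convex subproblem) on a hypothetical one-dimensional example. It is not one of the specific algorithms analyzed in the paper; the instance and starting point are illustrative only.

```python
import numpy as np

# Toy DC program:  minimize f(x) = g(x) - h(x) over x in [-2, 2],
# with g(x) = x^2 (convex, smooth) and h(x) = max(x, -x) = |x|,
# so the concave part has the finite-max structure mentioned in the abstract.
def pieces(x): return np.array([x, -x])           # smooth pieces h_i of h = max_i h_i
def grad_pieces(): return np.array([1.0, -1.0])   # their derivatives

x = 1.7                                           # hypothetical starting point
for k in range(20):
    i = int(np.argmax(pieces(x)))                 # active piece -> subgradient of h
    s = grad_pieces()[i]
    # Convex subproblem: minimize x^2 - s*x over [-2, 2]; closed form here.
    x_new = np.clip(s / 2.0, -2.0, 2.0)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

print(x)   # converges to a d-stationary point (0.5 or -0.5, depending on the start)
```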

The condition of a function relative to a polytope

The condition number of a smooth convex function, namely the ratio of its smoothness to strong convexity constants, is closely tied to fundamental properties of the function. In particular, the condition number of a quadratic convex function is precisely the square of the diameter-to-width ratio of a canonical ellipsoid associated to the function. Furthermore, the …
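The quoted relationship can be checked numerically: for a quadratic f(x) = 0.5*x'Qx with Q symmetric positive definite, the condition number lambda_max(Q)/lambda_min(Q) equals the squared diameter-to-width ratio of the level-set ellipsoid {x : x'Qx <= 1}. The instance below is hypothetical.

```python
import numpy as np

# Numerical check of the claim in the abstract for a random SPD quadratic.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
Q = M @ M.T + 0.5 * np.eye(4)

eig = np.linalg.eigvalsh(Q)                  # ascending eigenvalues
kappa = eig[-1] / eig[0]                     # condition number L/mu

# The ellipsoid {x : x^T Q x <= 1} has semi-axes 1/sqrt(lambda_i),
# so diameter = 2/sqrt(lambda_min) and width = 2/sqrt(lambda_max).
diameter = 2.0 / np.sqrt(eig[0])
width = 2.0 / np.sqrt(eig[-1])

print(kappa, (diameter / width) ** 2)        # the two numbers agree
```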

Relaxing kink qualifications and proving convergence rates in piecewise smooth optimization

In the paper [9] we derived first-order (KKT) and second-order (SSC) optimality conditions for functions defined by evaluation programs involving smooth elementals and absolute values. In that analysis, a key assumption on the local piecewise linearization was the Linear Independence Kink Qualification (LIKQ), a generalization of the Linear Independence Constraint Qualification (LICQ) …
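For readers unfamiliar with LIKQ, a hypothetical toy illustration (not taken from [9]): for f(x) = |x1 - x2| + |x1 + x2|, LIKQ at a point asks that the gradients of the absolute-value arguments ("switching functions") that vanish there be linearly independent, which reduces to a rank check.

```python
import numpy as np

# Hypothetical abs-value evaluation program: f(x) = |x1 - x2| + |x1 + x2|,
# with switching functions s1(x) = x1 - x2 and s2(x) = x1 + x2.
def switching(x):
    return np.array([x[0] - x[1], x[0] + x[1]])

def switching_jacobian(x):
    return np.array([[1.0, -1.0],
                     [1.0,  1.0]])

x0 = np.zeros(2)
active = np.isclose(switching(x0), 0.0)           # both kinks are active at the origin
J_active = switching_jacobian(x0)[active]
likq_holds = np.linalg.matrix_rank(J_active) == J_active.shape[0]
print(likq_holds)                                  # True: the active gradients are independent
```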

A One-Parameter Family of Middle Proximal ADMM for Constrained Separable Convex Optimization

This work is devoted to studying a family of Middle Proximal Alternating Direction Methods of Multipliers (MP-ADM) for solving multi-block constrained separable convex optimization. This one-parameter family of MP-ADM combines both Jacobian and Gauss-Seidel types of alternating direction updates, and proximal point techniques are applied only to the middle subproblems to promote convergence. We …
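The excerpt does not state the MP-ADM updates themselves, so the following is only a generic sketch of a three-block Gauss-Seidel-type splitting in which a proximal term is added to the middle subproblem alone, on a hypothetical quadratic instance; it should not be read as the paper's one-parameter family.

```python
import numpy as np

# Generic three-block splitting with a proximal term only on the middle block:
#   minimize  sum_i 0.5*||x_i - c_i||^2   subject to  A_1 x_1 + A_2 x_2 + A_3 x_3 = b.
rng = np.random.default_rng(2)
n, m, rho, tau = 5, 3, 1.0, 1.0
A = [rng.standard_normal((m, n)) for _ in range(3)]
c = [rng.standard_normal(n) for _ in range(3)]
b = rng.standard_normal(m)

x = [np.zeros(n) for _ in range(3)]
u = np.zeros(m)                                   # scaled multiplier

def solve_block(i, r, prox_center=None):
    """Minimize 0.5||xi-ci||^2 + (rho/2)||Ai xi + r||^2 (+ (tau/2)||xi - prox_center||^2)."""
    H = np.eye(n) + rho * A[i].T @ A[i]
    rhs = c[i] - rho * A[i].T @ r
    if prox_center is not None:
        H += tau * np.eye(n)
        rhs += tau * prox_center
    return np.linalg.solve(H, rhs)

for k in range(300):
    x[0] = solve_block(0, A[1] @ x[1] + A[2] @ x[2] - b + u)
    x[1] = solve_block(1, A[0] @ x[0] + A[2] @ x[2] - b + u, prox_center=x[1])  # middle block
    x[2] = solve_block(2, A[0] @ x[0] + A[1] @ x[1] - b + u)
    u = u + A[0] @ x[0] + A[1] @ x[1] + A[2] @ x[2] - b
```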

On Glowinski’s Open Question of Alternating Direction Method of Multipliers

The alternating direction method of multipliers (ADMM) was proposed by Glowinski and Marrocco in 1975, and it has since been widely used in a broad spectrum of areas, especially in some sparsity-driven application domains. In 1982, Fortin and Glowinski suggested enlarging the range of the step size for updating the dual variable in ADMM from …

Distributed Block-diagonal Approximation Methods for Regularized Empirical Risk Minimization

Designing distributed algorithms for empirical risk minimization (ERM) has become an active research topic in recent years because of the practical need to deal with huge volumes of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Our method …

Linear Convergence of Proximal Incremental Aggregated Gradient Methods under Quadratic Growth Condition

Under the strong convexity assumption, several recent works have studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a nonsmooth convex function. In this paper, under the quadratic growth condition, a strictly weaker condition than strong convexity, …
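For context, here is a minimal serial sketch of a PIAG-style iteration: one component gradient is refreshed per step, and a proximal step is taken on the aggregate. The lasso-type objective below is a hypothetical example that is commonly cited as satisfying a quadratic growth (error bound) property without being strongly convex; the asynchronous delays and step-size analysis of the paper are not reproduced.

```python
import numpy as np

# Serial PIAG sketch for  F(x) = sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1.
rng = np.random.default_rng(3)
N, m, n, lam = 10, 5, 8, 0.05
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

def grad_i(i, x):                      # gradient of the i-th smooth component
    return A[i].T @ (A[i] @ x - b[i])

def prox_l1(v, t):                     # proximal map of t*lam*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

L = sum(np.linalg.norm(Ai, 2) ** 2 for Ai in A)   # crude bound on the aggregated Lipschitz constant
alpha = 1.0 / L

x = np.zeros(n)
y = [grad_i(i, x) for i in range(N)]   # table of stored component gradients
agg = sum(y)                           # their running aggregate

for k in range(2000):
    i = k % N                          # cyclic component selection
    new_gi = grad_i(i, x)
    agg += new_gi - y[i]               # refresh the aggregate with one new component gradient
    y[i] = new_gi
    x = prox_l1(x - alpha * agg, alpha)
```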

Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server

This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate and give explicit expressions for step-size choices …

On the convergence of a regularized Jacobi algorithm for convex optimization

In this paper we consider the regularized version of the Jacobi algorithm, a block coordinate descent method for convex optimization with a differentiable objective function and block-separable constraints that has recently been proposed in the literature. Under certain regularity assumptions on the objective function, this algorithm has been shown to satisfy the so-called sufficient decrease condition, …
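A minimal sketch of a regularized Jacobi step on a hypothetical box-constrained quadratic: every (here scalar) block is updated in parallel from the same iterate by minimizing the objective in that block plus a quadratic regularization term, and the regularization constant is taken large enough that the separable model majorizes the objective. This only illustrates the iteration type, not the algorithm exactly as analyzed in the paper.

```python
import numpy as np

# Regularized Jacobi iteration (blocks of size one) for
#   minimize 0.5*x^T Q x - q^T x   subject to  l <= x <= u.
rng = np.random.default_rng(4)
n = 6
M = rng.standard_normal((n, n))
Q = M @ M.T + 0.1 * np.eye(n)           # convex quadratic objective
q = rng.standard_normal(n)
l, u = -np.ones(n), np.ones(n)
c_reg = np.linalg.norm(Q, 2)            # large enough for the separable model to majorize f

x = np.zeros(n)
for k in range(500):
    r = Q @ x - np.diag(Q) * x - q      # coupling terms from the other blocks, held fixed
    # Each coordinate solves, in parallel:
    #   min_{l_i <= x_i <= u_i}  0.5*Q_ii*x_i^2 + r_i*x_i + (c_reg/2)*(x_i - x_i^k)^2
    x_new = np.clip((c_reg * x - r) / (np.diag(Q) + c_reg), l, u)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
```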