Alternating Direction Method with Gaussian Back Substitution for Separable Convex Programming

We consider linearly constrained separable convex programming whose objective function is the sum of $m$ individual convex functions with no coupled variables. The alternating direction method (ADM) has been well studied in the literature for the special case $m=2$, but the convergence of the extension of ADM to the general case $m\ge 3$ is still open. In this paper, …
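
For context, a minimal sketch of the classical two-block ($m=2$) ADM iteration that this line of work builds on, applied to a toy quadratic instance so that every subproblem has a closed form; the data A1, A2, b, p, q and the penalty beta below are illustrative, not from the paper:

```python
import numpy as np

# Toy instance: min 0.5||x1 - p||^2 + 0.5||x2 - q||^2  s.t.  A1 x1 + A2 x2 = b
rng = np.random.default_rng(0)
n1, n2, m = 4, 3, 2
A1, A2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))
p, q, b = rng.standard_normal(n1), rng.standard_normal(n2), rng.standard_normal(m)

beta = 1.0                                  # augmented Lagrangian penalty
x1, x2, lam = np.zeros(n1), np.zeros(n2), np.zeros(m)

for k in range(200):
    # x1-step: minimize the augmented Lagrangian over x1 with (x2, lam) fixed
    x1 = np.linalg.solve(np.eye(n1) + beta * A1.T @ A1,
                         p - A1.T @ lam - beta * A1.T @ (A2 @ x2 - b))
    # x2-step: same, using the freshly updated x1 (Gauss-Seidel order)
    x2 = np.linalg.solve(np.eye(n2) + beta * A2.T @ A2,
                         q - A2.T @ lam - beta * A2.T @ (A1 @ x1 - b))
    # multiplier update
    lam = lam + beta * (A1 @ x1 + A2 @ x2 - b)

print("constraint residual:", np.linalg.norm(A1 @ x1 + A2 @ x2 - b))
```

For $m\ge 3$ blocks, the analogous Gauss-Seidel sweep loses this convergence guarantee; the Gaussian back substitution of the title is a correction step applied after such a sweep.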

Inexact Dynamic Bundle Methods

We give a proximal bundle method for minimizing a convex function $f$ over $\mathbb{R}_+^n$. It requires evaluating $f$ and its subgradients with a possibly unknown accuracy $\epsilon\ge0$, and maintains a set of free variables $I$ to simplify its prox subproblems. The method asymptotically finds points that are $\epsilon$-optimal. In Lagrangian relaxation of convex programs, it …
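
For orientation, the prox subproblem at the heart of a generic proximal bundle method in this setting has the following shape (a sketch with a standard cutting-plane model built from the inexact values $f_{y^j}\approx f(y^j)$ and subgradients $g^j$; the paper's dynamic variant additionally restricts attention to the free variables in $I$):

$$ x^{k+1} \in \arg\min_{x\in\mathbb{R}_+^n}\ \Bigl\{ \max_{j\in J_k}\bigl\{ f_{y^j} + \langle g^j, x - y^j\rangle \bigr\} + \frac{1}{2t_k}\|x - \hat{x}^k\|^2 \Bigr\}, $$

where $\hat{x}^k$ is the current stability center and $t_k>0$ a stepsize.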

A Monotone+Skew Splitting Model for Composite Monotone Inclusions in Duality

The principle underlying this paper is the basic observation that the problem of simultaneously solving a large class of composite monotone inclusions and their duals can be reduced to that of finding a zero of the sum of a maximally monotone operator and a linear skew-adjoint operator. An algorithmic framework is developed for solving this …
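
A representative instance of this reduction (a sketch, not the paper's most general setting): solving the primal inclusion $0\in Ax + L^*B(Lx)$ jointly with its dual amounts to finding a zero, in the pair $(x,v)$, of

$$ (x,v) \ \mapsto\ \bigl(Ax \times B^{-1}v\bigr) \ +\ \bigl(L^*v,\,-Lx\bigr), $$

where $A$ and $B$ are maximally monotone and $L$ is linear and bounded; the first summand is maximally monotone and the second is linear and skew-adjoint.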

A Parallel Inertial Proximal Optimization Method

The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximal monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm that includes inertial parameters, and we develop parallel versions to deal with the case of a sum of an arbitrary number of …
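
As a point of reference, here is a minimal sketch of a Douglas-Rachford iteration with a heavy-ball style inertial term on the toy problem $\min_x \|x\|_1 + \frac{1}{2}\|x-b\|^2$; the inertia coefficient alpha and the way it enters the update are illustrative assumptions, not the scheme analyzed in the paper:

```python
import numpy as np

def prox_l1(z, gamma):
    # proximal operator of gamma*||.||_1 (soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def prox_quad(z, gamma, b):
    # proximal operator of gamma*0.5*||. - b||^2
    return (z + gamma * b) / (1.0 + gamma)

b = np.array([3.0, -0.2, 1.5, 0.0])
gamma, alpha = 1.0, 0.3          # step size and (assumed) inertia coefficient
z = np.zeros_like(b)
z_prev = z.copy()

for k in range(100):
    w = z + alpha * (z - z_prev)            # inertial extrapolation
    z_prev = z
    x = prox_l1(w, gamma)                   # resolvent of the first operator
    y = prox_quad(2.0 * x - w, gamma, b)    # evaluated at the reflected point
    z = w + (y - x)                         # Douglas-Rachford update

print("solution:", prox_l1(z, gamma))      # expected: soft-threshold of b at 1
```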

A contraction method with implementable proximal regularization for linearly constrained convex programming

The proximal point algorithm (PPA) is classical, and it is implicit in the sense that the resulting proximal subproblems may be as difficult as the original problem. In this paper, we show that with appropriate choices of the proximal parameters, applying PPA to linearly constrained convex programming can result in easy proximal subproblems. …
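
To illustrate the idea (a sketch under standard assumptions, not necessarily the paper's exact scheme): for $\min\{f(x) : Ax=b\}$, applying PPA to the primal-dual pair $u=(x,\lambda)$ with a block proximal matrix such as

$$ G = \begin{pmatrix} rI & A^\top \\ A & sI \end{pmatrix}, \qquad rs > \|A^\top A\|, $$

keeps $G$ positive definite while making the subproblems implementable: the $\lambda^{k+1}$ terms cancel in the $x$-row of the proximal system, so the $x$-update is a proximal step on $f$ alone (taken at the known $\lambda^k$), and the $\lambda$-update is then explicit.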

Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

The adaptive cubic regularization algorithms described in Cartis, Gould & Toint (2009, 2010) for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy …
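
To make the setting concrete, a minimal sketch of an adaptive cubic regularization loop on a smooth convex toy function; the constants (eta, and the factors 0.5 and 2 for updating sigma) are common illustrative choices, not tuned values from the cited papers:

```python
import numpy as np

c = np.array([1.0, -2.0, 0.5])
f    = lambda x: 0.25 * np.sum(x**4) + 0.5 * np.sum((x - c)**2)   # convex
grad = lambda x: x**3 + (x - c)
hess = lambda x: np.diag(3.0 * x**2) + np.eye(len(c))

def cubic_step(g, H, sigma):
    # The minimizer of g.s + 0.5*s.H.s + (sigma/3)*||s||^3 satisfies
    # (H + lam*I) s = -g with lam = sigma*||s||; bisect on that scalar equation.
    lmin = float(np.linalg.eigvalsh(H).min())
    phi = lambda lam: sigma * np.linalg.norm(
        np.linalg.solve(H + lam * np.eye(len(g)), -g)) - lam
    lo, hi = max(0.0, -lmin) + 1e-12, max(1.0, -lmin) + 1.0
    while phi(hi) > 0.0:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0.0 else (lo, mid)
    return np.linalg.solve(H + hi * np.eye(len(g)), -g)

x, sigma, eta = np.zeros(3), 1.0, 0.1
for k in range(50):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    s = cubic_step(g, H, sigma)
    decrease = -(g @ s + 0.5 * s @ H @ s + sigma / 3.0 * np.linalg.norm(s)**3)
    rho = (f(x) - f(x + s)) / decrease
    if rho >= eta:
        x, sigma = x + s, max(0.5 * sigma, 1e-8)   # success: accept, relax sigma
    else:
        sigma *= 2.0                               # failure: keep x, inflate sigma

print("gradient norm at exit:", np.linalg.norm(grad(x)))
```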

Bundle-type methods uniformly optimal for smooth and nonsmooth convex optimization

The bundle-level method and certain of its variants are known to exhibit the optimal rate of convergence, i.e., ${\cal O}(1/\sqrt{t})$, and also excellent practical performance for solving general nonsmooth convex programming (CP) problems. However, this rate of convergence is significantly worse than the optimal one for solving smooth CP problems, i.e., ${\cal O}(1/t^2)$. In this paper, …
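
For reference, one iteration of the classical bundle-level method looks as follows (a sketch; $\hat f_k$ is the cutting-plane model built from past subgradients, $\underline f_k = \min_{x\in X}\hat f_k(x)$ a lower bound, $\bar f_k = \min_{i\le k} f(x_i)$ the best value so far, and $\lambda\in(0,1)$ a fixed level parameter):

$$ \ell_k = (1-\lambda)\,\underline f_k + \lambda\,\bar f_k, \qquad x_{k+1} = \operatorname*{argmin}_{x\in X}\bigl\{ \|x - x_k\|^2 \ :\ \hat f_k(x) \le \ell_k \bigr\}. $$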

On the acceleration of augmented Lagrangian method for linearly constrained optimization

The classical augmented Lagrangian method (ALM) plays a fundamental role in the algorithmic development of constrained optimization. In this paper, we mainly show that Nesterov’s influential acceleration techniques can be applied to ALM, yielding an accelerated ALM whose iteration complexity is $O(1/k^2)$ for linearly constrained convex programming. As a by-product, we also readily show that …
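
A minimal sketch of the flavor of such a scheme on a toy equality-constrained quadratic; the FISTA-style extrapolation of the multiplier sequence with weights t_k is one illustrative reading of "applying Nesterov's technique to ALM", not the paper's exact algorithm:

```python
import numpy as np

# Toy problem: min 0.5||x - p||^2  s.t.  Ax = b
rng = np.random.default_rng(1)
m, n = 2, 5
A, p = rng.standard_normal((m, n)), rng.standard_normal(n)
b = A @ rng.standard_normal(n)               # feasible by construction

beta, t = 1.0, 1.0
lam = lam_hat = np.zeros(m)
M = np.eye(n) + beta * A.T @ A               # the x-subproblem is a linear solve

for k in range(300):
    # ALM step taken at the extrapolated multiplier lam_hat
    x = np.linalg.solve(M, p - A.T @ lam_hat + beta * A.T @ b)
    lam_new = lam_hat + beta * (A @ x - b)
    # Nesterov/FISTA-style extrapolation on the multipliers (illustrative)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    lam_hat = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)
    lam, t = lam_new, t_new

print("feasibility residual:", np.linalg.norm(A @ x - b))
```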

An accelerated inexact proximal point algorithm for convex minimization

The proximal point algorithm (PPA) is classical and popular in the optimization community. In practice, inexact PPAs, which solve the involved proximal subproblems approximately subject to certain inexactness criteria, are what is truly implementable. In this paper, we first propose an inexact PPA with a new inexactness criterion for solving convex minimization, and show that the …
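
To fix ideas, a sketch of an inexact PPA for a smooth convex $f$ in which each proximal subproblem is solved by gradient descent until a simple summable-tolerance test holds; this Rockafellar-type criterion is a classical stand-in, not the new criterion proposed in the paper:

```python
import numpy as np

c = np.array([2.0, -1.0, 0.5])
grad_f = lambda x: np.tanh(x - c)    # f(x) = sum_i log cosh(x_i - c_i), L_f = 1

rho = 1.0                            # proximal parameter
x = np.zeros(3)
for k in range(30):
    tol = 1.0 / (k + 1) ** 2         # summable inexactness tolerances
    y = x.copy()
    step = 1.0 / (1.0 + 1.0 / rho)   # 1 / L_phi for the inner objective
    # inner loop: minimize phi(y) = f(y) + (1/(2*rho))*||y - x||^2 approximately
    while True:
        g = grad_f(y) + (y - x) / rho
        if np.linalg.norm(g) <= tol:          # subproblem solved to accuracy tol
            break
        y -= step * g
    x = y                                     # outer (inexact) PPA update

print("x:", x, " ||grad f||:", np.linalg.norm(grad_f(x)))
```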

Information-theoretic lower bounds on the oracle complexity of convex optimization

Relative to the large literature on upper bounds on the complexity of convex optimization, less attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic …
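
As an example of the kind of statement at stake, the classical bound for stochastic first-order oracles on convex Lipschitz problems reads: any method issuing $T$ oracle queries must satisfy

$$ \sup_{f}\ \mathbb{E}\bigl[f(x_T) - \min_{x\in X} f(x)\bigr] \ \ge\ c\,\frac{LR}{\sqrt{T}} $$

over $L$-Lipschitz convex $f$ on a domain $X$ of radius $R$, for a universal constant $c>0$; equivalently, $\Omega(1/\epsilon^2)$ queries are needed for $\epsilon$-accuracy. This classical bound is stated here only as context for the type of question the paper studies.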