Regularized nonlinear acceleration

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the …
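
To make the idea concrete, here is a minimal NumPy sketch of this style of extrapolation: successive residuals form a small regularized linear system whose solution gives the averaging weights. The function name, the regularization constant `lam`, and the gradient-descent example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-8):
    """Combine iterates xs[0], ..., xs[k] into a single estimate of the
    optimum via a regularized nonlinear average: the weights come from
    a small regularized linear system built on successive residuals."""
    X = np.column_stack(xs)               # d x (k+1) matrix of iterates
    R = X[:, 1:] - X[:, :-1]              # d x k matrix of residuals
    RtR = R.T @ R
    RtR /= np.linalg.norm(RtR, 2)         # normalize so lam is scale-free
    k = RtR.shape[0]
    z = np.linalg.solve(RtR + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                       # weights, normalized to sum to 1
    return X[:, :-1] @ c                  # weighted average of the iterates

# Example: extrapolating plain gradient-descent iterates on least squares.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A.T @ A, 2)

x, xs = np.zeros(20), []
for _ in range(10):
    xs.append(x.copy())
    x = x - step * grad(x)
xs.append(x.copy())

print(np.linalg.norm(grad(x)), np.linalg.norm(grad(rna_extrapolate(xs))))
```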

Faster Alternating Direction Method of Multipliers with a Worst-case O(1/n^2) Convergence Rate

The alternating direction method of multipliers (ADMM) is widely used for various convex programming models with separable structures arising in many scientific computing areas. The ADMM’s worst-case O(1/n) convergence rate measured by the iteration complexity has been established in the literature when its penalty parameter is a constant, where n is the iteration …
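
For context, the sketch below shows the baseline constant-penalty ADMM that the abstract's O(1/n) rate refers to, specialized to the lasso. The problem instance, names, and parameter values are illustrative assumptions; the paper's accelerated O(1/n^2) variant is not reproduced here.

```python
import numpy as np

def admm_lasso(A, b, lam, beta=1.0, iters=200):
    """Standard ADMM for the lasso
        min_x 0.5 * ||A x - b||^2 + lam * ||z||_1   s.t.  x - z = 0,
    with a constant penalty parameter beta."""
    d = A.shape[1]
    x = z = u = np.zeros(d)
    L = np.linalg.cholesky(A.T @ A + beta * np.eye(d))  # factor once
    for _ in range(iters):
        # x-update: solve the fixed linear system via the Cholesky factor.
        rhs = A.T @ b + beta * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding (the prox of the l1 norm).
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / beta, 0.0)
        # Multiplier (scaled dual) update.
        u = u + x - z
    return z
```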

Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. …
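
The inner-outer construction can be sketched generically: the outer loop repeatedly calls an inner solver on a strongly-convexified proximal subproblem and then applies a Nesterov-style extrapolation. The wrapper below illustrates that pattern under assumptions: the momentum constant is the generic choice for such schemes, `inner_solve` stands in for a few epochs of proximal SDCA, and none of the names are from the paper.

```python
import numpy as np

def accelerated_outer_loop(inner_solve, w0, kappa, mu, outer_iters=20):
    """Generic inner-outer acceleration.  At each outer step, the inner
    solver approximately minimizes the proximal subproblem
        P(w) + kappa/2 * ||w - v||^2
    centered at the extrapolated point v; a Nesterov-style momentum step
    then produces the next center.  mu is the strong-convexity modulus
    of the original objective P."""
    q = mu / (mu + kappa)
    beta = (1.0 - np.sqrt(q)) / (1.0 + np.sqrt(q))   # momentum coefficient
    w_prev = w = v = w0.copy()
    for _ in range(outer_iters):
        w_prev, w = w, inner_solve(v, w)   # inexact proximal step (e.g. SDCA epochs)
        v = w + beta * (w - w_prev)        # extrapolation
    return w
```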

Acceleration and Stabilization Techniques for Column Generation Applied to Capacitated Resource Management Problems

This research presents an efficient method for solving a broad class of large-scale capacitated resource management problems by introducing a new formulation and decomposition. A heuristic called Likelihood of Assignment is used not only to find high-quality initial integer feasible solutions, but also to guide the Branch-and-Price (B&P) algorithm toward stabilization. Although Column …
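
As background, column generation alternates between a restricted master LP and a pricing subproblem that searches for a column with negative reduced cost; stabilization techniques in this literature typically aim to damp the oscillation of the dual prices this loop produces. Below is a minimal, illustrative instance for the classical cutting-stock LP, using SciPy's `linprog` and a knapsack pricing DP; all names and data are assumptions, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def price_knapsack(y, widths, W):
    """Pricing subproblem: an unbounded integer knapsack solved by DP.
    Returns the most valuable cutting pattern under dual prices y."""
    best = [0.0] * (W + 1)
    item = [-1] * (W + 1)                 # -1 means "waste one unit of width"
    for c in range(1, W + 1):
        best[c], item[c] = best[c - 1], -1
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + y[i] > best[c]:
                best[c], item[c] = best[c - w] + y[i], i
    pattern, c = np.zeros(len(widths)), W
    while c > 0:                          # backtrack to recover the pattern
        if item[c] == -1:
            c -= 1
        else:
            pattern[item[c]] += 1
            c -= widths[item[c]]
    return pattern, best[W]

def column_generation(widths, demand, W, max_iters=50):
    """Cutting-stock LP: minimize the number of rolls of width W cut,
    meeting the demand for each ordered width."""
    n = len(widths)
    patterns = [np.eye(n)[i] * (W // widths[i]) for i in range(n)]  # trivial start
    for _ in range(max_iters):
        A = np.column_stack(patterns)
        # Restricted master: demands A x >= d, written as -A x <= -d.
        res = linprog(np.ones(A.shape[1]), A_ub=-A,
                      b_ub=-np.asarray(demand, float), method="highs")
        y = -res.ineqlin.marginals        # dual prices of the demand rows
        pattern, value = price_knapsack(y, widths, W)
        if value <= 1.0 + 1e-9:           # no pattern has negative reduced cost
            break
        patterns.append(pattern)
    return res, patterns

res, _ = column_generation(widths=[3, 5, 7], demand=[20, 11, 9], W=16)
print(res.fun)                            # LP bound on the number of rolls
```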

On the acceleration of augmented Lagrangian method for linearly constrained optimization

The classical augmented Lagrangian method (ALM) plays a fundamental role in the algorithmic development of constrained optimization. In this paper, we show that Nesterov’s influential acceleration techniques can be applied to the ALM, yielding an accelerated ALM whose iteration complexity is O(1/k^2) for linearly constrained convex programming. As a by-product, we also show that …
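
To make the structure concrete, here is a schematic of an ALM whose multiplier sequence is extrapolated with a FISTA-type momentum term. The subproblem oracle `argmin_L`, the parameter names, and the specific momentum sequence are assumptions for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def accelerated_alm(argmin_L, A, b, lam0, beta=1.0, iters=100):
    """ALM for  min f(x)  s.t.  A x = b, with a FISTA-type extrapolation
    of the multiplier sequence.  argmin_L(lam) is assumed to return a
    minimizer of the augmented Lagrangian
        f(x) + lam @ (A x - b) + beta/2 * ||A x - b||^2."""
    lam = lam_prev = lam0.copy()
    t = 1.0
    for _ in range(iters):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        lam_hat = lam + ((t - 1.0) / t_next) * (lam - lam_prev)  # extrapolate
        x = argmin_L(lam_hat)                                    # primal subproblem
        lam_prev, lam = lam, lam_hat + beta * (A @ x - b)        # multiplier step
        t = t_next
    return x, lam
```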

An accelerated inexact proximal point algorithm for convex minimization

The proximal point algorithm (PPA) is a classical and popular method in the optimization community. In practice, inexact PPAs, which solve the proximal subproblems approximately subject to certain inexactness criteria, are the truly implementable versions. In this paper, we first propose an inexact PPA with a new inexactness criterion for solving convex minimization, and show that the …
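
The flavor of an inexact PPA can be conveyed in a few lines: each proximal subproblem is solved only up to a tolerance, and the tolerance is tightened across outer iterations. The criterion below (a geometrically shrinking bound on the subproblem's gradient norm) is one simple instance chosen for illustration, not the new criterion proposed in the paper.

```python
import numpy as np

def inexact_ppa(grad_f, x0, lam=1.0, outer_iters=50, eps0=1.0, rho=0.5):
    """Inexact proximal point algorithm: each subproblem
        min_x f(x) + 1/(2*lam) * ||x - x_k||^2
    is solved only until its gradient norm drops below eps_k, and the
    tolerance eps_k shrinks geometrically across outer iterations."""
    x, eps = x0.copy(), eps0
    for _ in range(outer_iters):
        y = x.copy()
        for _ in range(1000):                    # inner solver: gradient descent
            g = grad_f(y) + (y - x) / lam        # gradient of the subproblem
            if np.linalg.norm(g) <= eps:         # the inexactness criterion
                break
            y = y - (lam / (1.0 + lam)) * g      # illustrative step size
        x = y                                    # accept the approximate prox point
        eps *= rho                               # tighten the criterion
    return x
```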