COMPUTATIONAL COMPLEXITY OF INEXACT GRADIENT AUGMENTED LAGRANGIAN METHODS: APPLICATION TO CONSTRAINED MPC

We study the computational complexity certification of inexact gradient augmented Lagrangian methods for solving convex optimization problems with complicated constraints. We solve the augmented Lagrangian dual problem that arises from the relaxation of the complicating constraints with gradient and fast gradient methods based on inexact first-order information. Moreover, since the exact solution of the augmented … Read more
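The dual update described in this abstract, with the inner augmented Lagrangian minimization carried out inexactly by a fixed budget of gradient steps, can be sketched on a toy equality-constrained problem. This is a minimal illustration, not the certified method of the paper; the function name and all parameter values are our own choices.

```python
import numpy as np

def augmented_lagrangian(a, rho=10.0, outer=50, inner=20, step=0.05):
    """Dual ascent on the augmented Lagrangian of
    min 0.5*||x||^2  s.t.  a^T x = 1,
    where the inner minimization in x is solved inexactly by a fixed
    number of gradient steps (an inexact first-order oracle)."""
    x = np.zeros(a.size)
    lam = 0.0
    for _ in range(outer):
        # inexact inner solve: a fixed budget of gradient steps on x
        for _ in range(inner):
            grad = x + (lam + rho * (a @ x - 1.0)) * a
            x -= step * grad
        # multiplier (dual gradient) update
        lam += rho * (a @ x - 1.0)
    return x, lam
```

With $a = (1, 1)$ the iterates approach the optimizer $x^\star = (0.5, 0.5)$ and multiplier $\lambda^\star = -0.5$; the inexactness of the inner solve is what the paper's complexity certification has to account for.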

Convergence rate and iteration complexity on the alternating direction method of multipliers with a substitution procedure for separable convex programming

Recently, in [17] we showed for the first time that the Douglas-Rachford alternating direction method of multipliers (ADMM) can be combined with a Gaussian back substitution procedure for solving a convex minimization model with a general separable structure. This paper is a further study of the theoretical aspects of this theme. We first derive a general algorithmic framework … Read more
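For context, the basic two-block ADMM that underlies this line of work can be sketched on a small $\ell_1$-regularized least-squares model; the Gaussian back substitution procedure of [17] concerns separable models with more than two blocks and is not reproduced here. The function name and parameter values are our own.

```python
import numpy as np

def admm_lasso(A, b, mu=0.1, rho=1.0, iters=200):
    """Classic two-block ADMM for
    min 0.5*||A x - b||^2 + mu*||z||_1  s.t.  x - z = 0,
    in scaled-multiplier form (u is the scaled dual variable)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))             # x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0.0)  # soft threshold
        u += x - z                                              # dual update
    return z
```

On the scalar instance $\min 0.5(x-1)^2 + 0.1|x|$ the iterates converge to the known minimizer $x^\star = 0.9$.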

Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions … Read more
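With block size one the method reduces to randomized coordinate descent; a minimal sketch for a smooth strongly convex quadratic (dropping the nonsmooth block-separable term), with our own naming, is:

```python
import numpy as np

def rbcd(Q, b, iters=5000, seed=0):
    """Randomized coordinate descent for min 0.5*x^T Q x - b^T x
    with Q symmetric positive definite.  Each iteration picks a
    coordinate i uniformly at random and takes an exact minimizing
    step along it, using the coordinate-wise Lipschitz constant
    L_i = Q[i, i]."""
    rng = np.random.default_rng(seed)
    x = np.zeros(b.size)
    for _ in range(iters):
        i = rng.integers(b.size)
        g_i = Q[i] @ x - b[i]       # partial derivative w.r.t. x_i
        x[i] -= g_i / Q[i, i]       # step of length 1/L_i
    return x
```

For strongly convex objectives such as this one, the abstract's $O(\tfrac{n}{\epsilon}\log\tfrac{1}{\rho})$ bound improves to a linear (geometric) rate, which is what the seeded run above exhibits.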

A Simple Variant of the Mizuno-Todd-Ye Predictor-Corrector Algorithm and its Objective-Function-Free Complexity

In this paper, we propose a simple variant of the Mizuno-Todd-Ye predictor-corrector algorithm for linear programming (LP). Our variant executes a natural finite-termination procedure at each iteration and is easy to implement. Our algorithm admits an objective-function-free polynomial-time complexity when it is applied to LPs whose dual feasible region … Read more
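For orientation, a bare-bones primal-dual path-following iteration for an LP in standard form looks as follows. This is a plain damped-Newton scheme with a fixed centering parameter, not the Mizuno-Todd-Ye predictor-corrector variant of the paper; all names and parameter values are our own.

```python
import numpy as np

def ipm_lp(A, b, c, sigma=0.2, iters=40):
    """Primal-dual path-following method for min c^T x, A x = b, x >= 0.
    Each iteration takes one Newton step on the perturbed KKT system
    x_i * s_i = sigma * mu, damped to keep (x, s) strictly positive."""
    m, n = A.shape
    x = np.ones(n); y = np.zeros(m); s = np.ones(n)
    for _ in range(iters):
        mu = x @ s / n
        # residuals of the perturbed KKT conditions
        r_p = A @ x - b
        r_d = A.T @ y + s - c
        r_c = x * s - sigma * mu
        # assemble and solve the full Newton system in (dx, dy, ds)
        K = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        d = np.linalg.solve(K, -np.concatenate([r_p, r_d, r_c]))
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # fraction-to-the-boundary step keeping x and s positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x += alpha * dx; y += alpha * dy; s += alpha * ds
    return x, y, s
```

On the tiny LP $\min x_1 + 2x_2$ s.t. $x_1 + x_2 = 1$, $x \ge 0$, the iterates approach the vertex $x^\star = (1, 0)$ with dual $y^\star = 1$. The predictor-corrector refinement alternates affine-scaling ($\sigma = 0$) and centering ($\sigma = 1$) steps inside controlled neighborhoods of the central path.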

Enlarging Neighborhoods of Interior-Point Algorithms for Linear Programming via Least Values of Proximity Measure Functions

It is well known that a wide-neighborhood interior-point algorithm for linear programming performs much better in practice than its small-neighborhood counterparts. In this paper, we provide a unified way to enlarge the neighborhoods of predictor-corrector interior-point algorithms for linear programming. We prove that our methods not only enlarge the neighborhoods but also retain the so-far … Read more

Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems

We consider an augmented Lagrangian algorithm for minimizing a convex quadratic function subject to linear inequality constraints. Linear optimization is an important particular instance of this problem. We show that, provided the augmentation parameter is large enough, the constraint value converges globally linearly to zero. This property is viewed as a consequence of the … Read more
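The flavor of the result can be illustrated on a toy bound-constrained QP: an augmented Lagrangian outer loop handles the inequality through the projected multiplier update $\lambda \leftarrow \max(0, \lambda - \rho x)$, and the constraint violation decays linearly in the outer iterations. This sketch, with our own naming and parameters, is illustrative only.

```python
import numpy as np

def al_bound_qp(c, rho=5.0, outer=30, inner=50, step=0.1):
    """Augmented Lagrangian method for the convex QP
    min 0.5*||x - c||^2  s.t.  x >= 0.
    The inequality enters through the clipped term max(0, lam - rho*x);
    the projected multiplier update keeps lam >= 0."""
    x = np.zeros(c.size)
    lam = np.zeros(c.size)
    for _ in range(outer):
        # inner minimization in x by gradient steps
        for _ in range(inner):
            grad = (x - c) - np.maximum(0.0, lam - rho * x)
            x -= step * grad
        # projected multiplier update for the inequality constraint
        lam = np.maximum(0.0, lam - rho * x)
    return x, lam
```

For $c = (-1, 2)$ the iterates approach the projection $x^\star = (0, 2)$ with multiplier $\lambda^\star = (1, 0)$; the active component's violation shrinks by roughly a factor $1/(1+\rho)$ per outer iteration, the linear rate the abstract refers to.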