A Parallel Inertial Proximal Optimization Method

The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximal monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm that includes inertia parameters, and we develop parallel versions to handle the case of a sum of an arbitrary number of …
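
For orientation, here is a minimal sketch of the classical (non-inertial) Douglas-Rachford iteration, specialized to minimizing f(x) + g(x) with f an l1 term and g a least-squares term; the problem instance, step sizes, and iteration counts are illustrative assumptions, and the inertial and parallel variants proposed in the paper modify this basic recursion.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal map of t*||.||_1 (componentwise shrinkage).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def douglas_rachford(A, b, gamma=1.0, lam=1.0, iters=200):
        # Minimize ||x||_1 + 0.5*||Ax - b||^2 by Douglas-Rachford splitting.
        n = A.shape[1]
        M = np.linalg.inv(gamma * A.T @ A + np.eye(n))  # least-squares prox matrix
        z = np.zeros(n)
        for _ in range(iters):
            x = soft_threshold(z, gamma)                # x_k = prox_{gamma f}(z_k)
            y = M @ (2 * x - z + gamma * A.T @ b)       # y_k = prox_{gamma g}(2 x_k - z_k)
            z = z + lam * (y - x)                       # z_{k+1} = z_k + lam (y_k - x_k)
        return x

The relaxation parameter lam may be taken in (0, 2); lam = 1 recovers the standard recursion.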

On the acceleration of augmented Lagrangian method for linearly constrained optimization

The classical augmented Lagrangian method (ALM) plays a fundamental role in the algorithmic development of constrained optimization. In this paper, we mainly show that Nesterov's influential acceleration techniques can be applied to accelerate ALM, thus yielding an accelerated ALM whose iteration-complexity is O(1/k^2) for linearly constrained convex programming. As a by-product, we also easily show that …
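
As a rough illustration of the mechanism, the following sketch grafts FISTA-style extrapolation onto the multiplier sequence of ALM for an equality-constrained quadratic program; the extrapolation rule and all parameter choices are conventional assumptions made for illustration, not the paper's exact scheme.

    import numpy as np

    def accelerated_alm(Q, c, A, b, rho=1.0, iters=100):
        # ALM for min 0.5 x'Qx + c'x  s.t.  Ax = b, with Nesterov-style
        # extrapolation of the multiplier iterates (illustrative only).
        m = A.shape[0]
        lam = np.zeros(m)
        lam_hat = np.zeros(m)
        t = 1.0
        K = Q + rho * A.T @ A              # Hessian of the augmented Lagrangian
        for _ in range(iters):
            # x-update: exact minimization at the extrapolated multiplier.
            x = np.linalg.solve(K, -c - A.T @ lam_hat + rho * A.T @ b)
            lam_new = lam_hat + rho * (A @ x - b)        # multiplier ascent
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            lam_hat = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)
            lam, t = lam_new, t_new
        return x, lam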

Iteration-complexity of block-decomposition algorithms and the alternating minimization augmented Lagrangian method

In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure, and we introduce a framework of block-decomposition prox-type algorithms for solving it that allows each of the single-block proximal subproblems to be solved in an …
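
To fix ideas, here is a sketch of a two-block alternating minimization of the augmented Lagrangian (the scheme named in the title), instantiated on min 0.5 x'Qx + c'x + mu*||z||_1 subject to x - z = 0; the instance and parameters are illustrative assumptions, and the paper's framework also covers inexact block solves.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal map of t*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def two_block_alm(Q, c, mu, rho=1.0, iters=200):
        # Alternately minimize the augmented Lagrangian of
        #   min 0.5 x'Qx + c'x + mu*||z||_1  s.t.  x - z = 0
        # over the x-block and the z-block, then update the multiplier.
        n = len(c)
        z = np.zeros(n)
        y = np.zeros(n)
        K = Q + rho * np.eye(n)
        for _ in range(iters):
            x = np.linalg.solve(K, -c - y + rho * z)    # x-block subproblem
            z = soft_threshold(x + y / rho, mu / rho)   # z-block: shrinkage
            y = y + rho * (x - z)                       # multiplier update
        return x, z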

A Practical Relative Error Criterion for Augmented Lagrangians

This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical in the sense that it requires only information that is ordinarily readily available, such as the gradient (or a subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for …
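
To convey the flavor of a relative test, the sketch below stops each inner minimization once the augmented Lagrangian gradient norm falls below a fixed fraction sigma of the current constraint violation; this particular rule, the gradient-descent inner solver, and all names are illustrative assumptions, and the paper's actual criterion is more refined.

    import numpy as np

    def inexact_alm(grad_f, A, b, x0, rho=1.0, sigma=0.5,
                    outer=50, step=1e-3, inner_cap=1000):
        # ALM for min f(x) s.t. Ax = b with a relative stopping rule:
        # the inner solve ends once ||grad_x L|| <= sigma * ||Ax - b||.
        x = np.asarray(x0, dtype=float).copy()
        lam = np.zeros(A.shape[0])
        for _ in range(outer):
            for _ in range(inner_cap):
                r = A @ x - b
                g = grad_f(x) + A.T @ (lam + rho * r)
                if np.linalg.norm(g) <= sigma * np.linalg.norm(r):
                    break                      # subproblem solved "well enough"
                x = x - step * g               # plain gradient step on the AL
            lam = lam + rho * (A @ x - b)      # multiplier update
        return x, lam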

A splitting method for separate convex programming with linking linear constraints

We consider the separable convex programming problem with linking linear constraints, where the objective function is the sum of m individual functions without crossed variables. The special case with m = 2 has been well studied in the literature, and some algorithms are very influential, e.g., the alternating direction method. The research for …
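
For notation only, a naive Jacobi-type extension to m blocks looks as follows: every block is updated simultaneously against the augmented Lagrangian with the other blocks frozen, and the multiplier is then advanced. Such a direct extension need not converge, which is precisely what splitting methods in this line of work are designed to repair; the constraint pattern and the prox interface here are illustrative assumptions.

    import numpy as np

    def jacobi_splitting(proxes, b, rho=1.0, iters=300):
        # Illustrative Jacobi-type splitting for
        #   min sum_i f_i(x_i)  s.t.  x_1 + ... + x_m = b,
        # where proxes[i](v, t) evaluates prox_{t*f_i}(v).
        m, n = len(proxes), len(b)
        X = np.zeros((m, n))
        y = np.zeros(n)
        for _ in range(iters):
            s = X.sum(axis=0)
            X = np.stack([proxes[i](b - s + X[i] - y / rho, 1.0 / rho)
                          for i in range(m)])          # all blocks in parallel
            y = y + rho * (X.sum(axis=0) - b)          # multiplier update
        return X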

Derivative-free methods for nonlinear programming with general lower-level constraints

Augmented Lagrangian methods for derivative-free continuous optimization with constraints are introduced in this paper. The algorithms inherit the convergence results obtained by Andreani, Birgin, Martínez and Schuverdt for the case in which analytic derivatives exist and are available. In particular, feasible limit points satisfy KKT conditions under the Constant Positive Linear Dependence (CPLD) constraint qualification. …

Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization

At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem with some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, usual convergence theories do not contemplate the possibility that the subproblem solver terminates without satisfying the theoretical stopping conditions. However, …
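
For context, the standard monotone baseline that such papers modify can be sketched as follows: the box-constrained subproblem is handed to an inner solver, and the penalty parameter is enlarged whenever feasibility has not improved enough. The update factors, tolerances, and the choice of L-BFGS-B as inner solver are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def alm_outer(f, grad_f, c, jac_c, x0, bounds,
                  rho=10.0, gamma=10.0, tau=0.5, outer=30):
        # Safeguarded ALM outer loop for min f(x) s.t. c(x) = 0, x in [l, u].
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(c(x)))
        prev_viol = np.inf
        for _ in range(outer):
            def L(z):                      # augmented Lagrangian value
                r = c(z)
                return f(z) + lam @ r + 0.5 * rho * (r @ r)
            def gL(z):                     # and its gradient
                r = c(z)
                return grad_f(z) + jac_c(z).T @ (lam + rho * r)
            x = minimize(L, x, jac=gL, method="L-BFGS-B", bounds=bounds).x
            r = c(x)
            lam = lam + rho * r            # first-order multiplier update
            viol = np.linalg.norm(r, np.inf)
            if viol > tau * prev_viol:     # insufficient feasibility progress:
                rho *= gamma               # enlarge the penalty (monotonically)
            prev_viol = viol
        return x, lam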

A Unified Approach for Minimizing Composite Norms

We propose a first-order augmented Lagrangian algorithm (FALC) to solve the composite norm minimization problem min |sigma(F(X) - G)|_alpha + |C(X) - d|_beta subject to A(X) - b in Q, where sigma(X) denotes the vector of singular values of X, the matrix norm |sigma(X)|_alpha denotes either the Frobenius, the nuclear, or the L2-operator norm of X, and the vector norm |.|_beta …

A First-Order Augmented Lagrangian Method for Compressed Sensing

We propose a First-order Augmented Lagrangian algorithm (FAL) for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of L1-regularized least squares sub-problems. These sub-problems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to “shrinkage” or constrained “shrinkage”. We show that FAL …
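
The "shrinkage" referred to above is the proximal map of the l1 norm; a minimal proximal gradient loop for one L1-regularized least squares subproblem might look like this (the step size rule and instance are illustrative, and the paper's memory scheme is not reproduced here).

    import numpy as np

    def soft_threshold(v, t):
        # "Shrinkage": the proximal map of t*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def l1_ls(A, b, mu, iters=500):
        # Proximal gradient for min mu*||x||_1 + 0.5*||Ax - b||^2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft_threshold(x - step * A.T @ (A @ x - b), step * mu)
        return x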

Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization

The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to …
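
The "sufficiently simple" subproblems alluded to here typically reduce to singular value thresholding, the matrix analogue of shrinkage. Below is a sketch of that building block and of one linearized proximal step, written for min mu*||X||_* + h(X) with smooth h; the names and step size are illustrative assumptions.

    import numpy as np

    def svt(M, t):
        # Singular value thresholding: proximal map of t*||.||_* .
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

    def linearized_step(X, grad_h, tau, mu):
        # One linearized (proximal gradient) step: the smooth term h is
        # linearized at X, so the subproblem becomes a single SVT.
        return svt(X - tau * grad_h, tau * mu)

For matrix completion, for example, grad_h would be the masked residual mask * (X - M_obs), with tau at most the reciprocal of the gradient's Lipschitz constant.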