Indefinite linearized augmented Lagrangian method for convex programming with linear inequality constraints

The augmented Lagrangian method (ALM) is a benchmark for convex optimization problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. However, much less attention has been paid to ALM for efficiently solving the linearly inequality-constrained convex minimization model. In this paper, …
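
For orientation, here is a minimal sketch of a classical ALM loop in Python for $\min \tfrac12\|x-c\|^2$ subject to $Ax \le b$. The data, penalty $\beta$, and inner gradient solver are illustrative choices, and the paper's indefinite linearized proximal term is not reproduced; the point is the inequality-aware multiplier update $\lambda \leftarrow \max(0, \lambda + \beta(Ax - b))$.

```python
import numpy as np

# Minimal classical-ALM sketch for  min 0.5*||x - c||^2  s.t.  A x <= b.
# Data, penalty beta, and the inner gradient solver are illustrative;
# the paper's indefinite linearized proximal term is not reproduced.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1.0
c = rng.standard_normal(n)
beta = 1.0

x = np.zeros(n)
lam = np.zeros(m)                  # multipliers of the inequalities
for _ in range(200):
    # x-step: a few gradient iterations on the augmented Lagrangian
    for _ in range(50):
        slack = np.maximum(0.0, A @ x - b + lam / beta)
        grad = (x - c) + beta * A.T @ slack
        x -= 0.05 * grad
    # multiplier step; the max keeps lam >= 0 for inequalities
    lam = np.maximum(0.0, lam + beta * (A @ x - b))

print("max constraint violation:", np.maximum(0.0, A @ x - b).max())
```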

Analysis of the BFGS Method with Errors

The classical convergence analysis of quasi-Newton methods assumes that the function and gradients employed at each iteration are exact. In this paper, we consider the case when there are (bounded) errors in both computations and establish conditions under which a slight modification of the BFGS algorithm with an Armijo-Wolfe line search converges to a neighborhood …
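
As a rough illustration of the setting, the sketch below runs BFGS with artificially noisy gradients and skips the inverse-Hessian update whenever the curvature $s^\top y$ is too small. This cautious rule is one simple safeguard, not the paper's exact modification, and the test function, noise level, and Armijo-only backtracking are assumptions made for the example.

```python
import numpy as np

# BFGS with noisy gradients; skipping the update when s^T y is tiny is
# one simple stabilisation, not the paper's exact modification.
rng = np.random.default_rng(1)

def f(x):
    return 0.5 * x @ x + np.sin(x[0])

def grad(x, noise=1e-3):
    g = x + np.array([np.cos(x[0]), 0.0])
    return g + noise * rng.standard_normal(2)   # bounded-style error

x = np.array([2.0, -1.5])
H = np.eye(2)                        # inverse-Hessian approximation
g = grad(x)
for _ in range(50):
    p = -H @ g                       # quasi-Newton direction
    t = 1.0
    for _ in range(30):              # capped Armijo backtracking
        if f(x + t * p) <= f(x) + 1e-4 * t * (g @ p):
            break
        t *= 0.5
    x_new = x + t * p
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 1e-8 * (s @ s):       # skip update if noise broke curvature
        rho = 1.0 / (s @ y)
        V = np.eye(2) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)
    x, g = x_new, g_new
print("final gradient norm (noisy):", np.linalg.norm(g))
```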

Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems subject to convex constraints

Given a twice-continuously differentiable vector-valued function $r(x)$, a local minimizer of $\|r(x)\|_2$ within a convex set is sought. We propose and analyse tensor-Newton methods, in which $r(x)$ is replaced locally by its second-order Taylor approximation. Convergence is controlled by regularization of various orders. We establish global convergence to a constrained first-order critical point of $\|r(x)\|_2$, …
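
Concretely, one way to write the tensor-Newton model (with the regularization order $p$ and weight $\sigma_k$ as generic placeholders for the "various orders" above) is: at the iterate $x_k$, choose the step $s$ to decrease

\[ m_k(s) \;=\; \big\| r(x_k) + J(x_k)\,s + \tfrac{1}{2} H(x_k)[s,s] \big\|_2 \;+\; \frac{\sigma_k}{p}\,\|s\|_2^p, \]

where $J(x_k)$ is the Jacobian of $r$ and $H(x_k)[s,s]$ collects its second derivatives, i.e. the bracketed term is the second-order Taylor approximation of $r$ about $x_k$.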

Asymptotic results of Stochastic Decomposition for Two-stage Stochastic Quadratic Programming

This paper presents stochastic decomposition (SD) algorithms for two classes of stochastic programming problems: 1) two-stage stochastic quadratic-linear programming (SQLP) in which a quadratic program defines the objective function in the first stage and a linear program defines the value function in the second stage; 2) two-stage stochastic quadratic-quadratic programming (SQQP) which has quadratic programming …
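
In symbols, the SQLP class has the following shape (the notation is illustrative and the SD cut construction is omitted):

\[ \min_{x \in X}\; c^\top x + \tfrac12\, x^\top Q x + \mathbb{E}\big[h(x,\tilde{\omega})\big], \qquad h(x,\omega) \;=\; \min_{y}\;\big\{\, d^\top y \;:\; D y \,\ge\, \xi(\omega) - C(\omega)\,x \,\big\}, \]

so the first stage carries the quadratic term and each second-stage value is a linear program; in the SQQP class the second-stage objective is quadratic as well.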

Block Coordinate Proximal Gradient Method for Nonconvex Optimization Problems: Convergence Analysis

We propose a block coordinate proximal gradient method for a composite minimization problem with two nonconvex function components in the objective while only one of them is assumed to be differentiable. Under some per-block Lipschitz-like conditions based on Bregman distance, but without the global Lipschitz continuity of the gradient of the differentiable function, we prove …
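
For intuition, a two-block instance of this update pattern might look like the sketch below. The smooth coupling term, the $\ell_1$ prox, and the fixed step sizes are illustrative stand-ins (and the toy problem is convex), not the paper's Bregman-based per-block conditions.

```python
import numpy as np

# Toy two-block proximal gradient: a smooth coupling term plus a
# separable l1 term; all model choices here are illustrative.
def soft(v, t):                              # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
x, y = rng.standard_normal(4), rng.standard_normal(4)
L = np.linalg.norm(M, 2) ** 2 + 1.0          # conservative block Lipschitz bound

for _ in range(300):
    gx = M.T @ (M @ x - y) + x               # grad_x of 0.5||Mx-y||^2 + 0.5||x||^2
    x = soft(x - gx / L, 0.1 / L)            # prox step on block x
    gy = (y - M @ x) + y                     # grad_y of 0.5||Mx-y||^2 + 0.5||y||^2
    y = soft(y - gy / L, 0.1 / L)            # prox step on block y

obj = (0.5 * np.linalg.norm(M @ x - y) ** 2 + 0.5 * (x @ x + y @ y)
       + 0.1 * (np.abs(x).sum() + np.abs(y).sum()))
print("final objective:", obj)
```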

Convergence Analysis of Sample Average Approximation of Two-stage Stochastic Generalized Equations

A solution of two-stage stochastic generalized equations is a pair: a first stage solution which is independent of the realization of the random data and a second stage solution which is a function of the random variables. This paper studies convergence of the sample average approximation of two-stage stochastic nonlinear generalized equations. In particular, an exponential rate …
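
Schematically, and with illustrative notation, such a problem asks for a pair $(x, y(\cdot))$ satisfying

\[ 0 \,\in\, \mathbb{E}\big[\Phi(x, y(\tilde\xi), \tilde\xi)\big] + \Gamma_1(x), \qquad 0 \,\in\, \Psi(x, y(\xi), \xi) + \Gamma_2\big(y(\xi)\big) \ \text{for a.e. } \xi, \]

and the sample average approximation replaces the expectation by the mean over samples $\xi^1,\dots,\xi^N$, introducing one second-stage unknown per sample.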

The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates

We propose two numerical algorithms for minimizing the sum of a smooth function and the composition of a nonsmooth function with a linear operator in the fully nonconvex setting. The iterative schemes are formulated in the spirit of the proximal and, respectively, proximal linearized alternating direction method of multipliers. The proximal terms are introduced through …
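
For orientation, the plain convex ADMM template that such schemes build on can be sketched as follows; $f(x)=\tfrac12\|x-c\|^2$, $g=0.2\,\|\cdot\|_1$, and the penalty $\rho$ are assumptions made for the example, and the paper's proximal terms and nonconvex rate analysis are not reproduced.

```python
import numpy as np

# Plain ADMM template for  min f(x) + g(z)  s.t.  A x = z,
# with f(x) = 0.5*||x - c||^2 and g = 0.2*||.||_1 (illustrative).
def soft(v, t):                              # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
c = rng.standard_normal(4)
rho = 2.0
x, z, u = np.zeros(4), np.zeros(6), np.zeros(6)   # u is the scaled dual

AtA = A.T @ A
for _ in range(200):
    # x-step: minimize 0.5||x-c||^2 + (rho/2)||Ax - z + u||^2
    x = np.linalg.solve(np.eye(4) + rho * AtA, c + rho * A.T @ (z - u))
    z = soft(A @ x + u, 0.2 / rho)                # z-step: prox of g
    u = u + A @ x - z                             # scaled dual update
print("residual ||Ax - z||:", np.linalg.norm(A @ x - z))
```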

A projection algorithm based on KKT conditions for convex quadratic semidefinite programming with nonnegative constraints

The dual form of the convex quadratic semidefinite programming (CQSDP) problem with nonnegative constraints is a 4-block separable convex optimization problem. It is known that the directly extended 4-block alternating direction method of multipliers (ADMM4d) is very efficient for solving the dual, but its convergence is not guaranteed. In this paper, we reformulate the dual as a …
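
Two projections that KKT-based methods for this problem class typically lean on are easy to state; the sketch below shows them in isolation and does not reproduce the paper's algorithm.

```python
import numpy as np

# Building blocks (illustrative): projection onto the PSD cone via
# eigenvalue clipping, and onto the nonnegative cone entrywise.
def proj_psd(X):
    w, V = np.linalg.eigh((X + X.T) / 2)     # symmetrize, then eigendecompose
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

def proj_nonneg(X):
    return np.maximum(X, 0.0)                # entrywise clipping

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 5))
print("smallest eigenvalue after proj_psd:",
      np.linalg.eigvalsh(proj_psd(X)).min())
```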

Convergent Prediction-Correction-based ADMM for multi-block separable convex programming

The direct extension of the classic alternating direction method of multipliers (ADMMe) to the multi-block separable convex optimization problem is not necessarily convergent, though it often performs very well in practice. In order to preserve the numerical advantages of ADMMe and obtain convergence, many modified ADMM variants have been proposed by correcting the output of ADMMe or …
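
The generic prediction-correction template behind many such fixes reads

\[ \tilde{w}^{\,k} \;=\; \mathrm{ADMMe}(w^k) \ \text{(prediction)}, \qquad w^{k+1} \;=\; w^k - \alpha\, M\,\big(w^k - \tilde{w}^{\,k}\big) \ \text{(correction)}, \]

where $w$ collects the primal blocks and the multiplier; the correction matrix $M$ and step length $\alpha$ depend on the particular scheme and are left generic here.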

On the Optimal Proximal Parameter of an ADMM-like Splitting Method for Separable Convex Programming

An ADMM-based splitting method is proposed in [11] for solving convex minimization problems with linear constraints and multi-block separable objective functions, but a relatively large proximal parameter is required to theoretically ensure convergence. In this paper, we further study this method and find its optimal (smallest) proximal parameter. For succinctness, we focus on the …
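
In generic form (the notation is illustrative and not necessarily that of [11]), the proximal parameter $\tau$ enters the per-block subproblem as

\[ x_i^{k+1} \,\in\, \arg\min_{x_i}\; \mathcal{L}_{\beta}\big(x_1^{k+1},\dots,x_{i-1}^{k+1}, x_i, x_{i+1}^{k},\dots,x_m^{k};\, \lambda^k\big) \;+\; \frac{\tau}{2}\,\big\|x_i - x_i^{k}\big\|^2, \]

where $\mathcal{L}_\beta$ is the augmented Lagrangian. A larger $\tau$ makes convergence easier to guarantee but damps the steps, which is why the smallest admissible value is of practical interest.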