Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems subject to convex constraints

Given a twice-continuously differentiable vector-valued function $r(x)$, a local minimizer of $\|r(x)\|_2$ within a convex set is sought. We propose and analyse tensor-Newton methods, in which $r(x)$ is replaced locally by its second-order Taylor approximation. Convergence is controlled by regularization of various orders. We establish global convergence to a constrained first-order critical point of $\|r(x)\|_2$, … Read more
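As a rough illustration (not the authors' algorithm), the sketch below takes one unconstrained tensor-Newton step in Python: each residual is replaced by its second-order Taylor model and the step minimizes the regularized sum of squares. The cubic regularizer, the inner solver, and all names are assumptions, and the convex constraint set is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of one regularized tensor-Newton step (illustrative only).
# Each residual r_i is replaced by its second-order Taylor model
#   t_i(s) = r_i(x) + J_i s + 0.5 s^T H_i s,
# and the step minimizes 0.5*||t(s)||^2 + (sigma/p)*||s||^p (here p = 3).
def tensor_newton_step(r, J, H, x, sigma=1.0, p=3):
    rx, Jx, Hx = r(x), J(x), H(x)  # residuals, Jacobian, per-residual Hessians

    def model(s):
        t = rx + Jx @ s + 0.5 * np.einsum('ijk,j,k->i', Hx, s, s)
        return 0.5 * t @ t + (sigma / p) * np.linalg.norm(s) ** p

    res = minimize(model, np.zeros_like(x))   # generic inner solve (assumed)
    return x + res.x

# Toy residual: r(x) = (x0^2 - 1, x0*x1).
r = lambda x: np.array([x[0]**2 - 1.0, x[0] * x[1]])
J = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
H = lambda x: np.array([[[2.0, 0.0], [0.0, 0.0]],
                        [[0.0, 1.0], [1.0, 0.0]]])
print(tensor_newton_step(r, J, H, np.array([0.5, 0.5])))
```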

Asymptotic results of Stochastic Decomposition for Two-stage Stochastic Quadratic Programming

This paper presents stochastic decomposition (SD) algorithms for two classes of stochastic programming problems: 1) two-stage stochastic quadratic-linear programming (SQLP), in which a quadratic program defines the first-stage objective and a linear program defines the second-stage value function; 2) two-stage stochastic quadratic-quadratic programming (SQQP), which has quadratic programming … Read more
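To fix ideas, here is a minimal Python illustration of the two-stage SQLP structure only: a quadratic first-stage objective plus the sample mean of second-stage linear-program values. A plain sample average stands in for SD's incremental approximation, and all problem data are invented.

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Two-stage SQLP structure (not the SD algorithm itself): quadratic first
# stage, LP recourse in the second stage, expectation replaced by a sample
# mean over made-up scenarios.
rng = np.random.default_rng(0)
Q, c = np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([-1.0, -1.0])
T, W, d = np.eye(2), np.eye(2), np.array([1.0, 1.0])
xis = rng.normal(1.0, 0.2, size=(20, 2))      # sampled right-hand sides

def second_stage(x, xi):
    # h(x, xi) = min_y { d^T y : W y >= xi - T x, y >= 0 }
    res = linprog(d, A_ub=-W, b_ub=-(xi - T @ x), bounds=[(0, None)] * 2)
    return res.fun

def saa_objective(x):
    return c @ x + 0.5 * x @ Q @ x + np.mean([second_stage(x, xi) for xi in xis])

sol = minimize(saa_objective, np.zeros(2), method='Nelder-Mead')
print(sol.x)
```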

Block Coordinate Proximal Gradient Method for Nonconvex Optimization Problems: Convergence Analysis

We propose a block coordinate proximal gradient method for a composite minimization problem whose objective is the sum of two nonconvex functions, only one of which is assumed to be differentiable. Under per-block Lipschitz-like conditions based on the Bregman distance, but without global Lipschitz continuity of the gradient of the differentiable function, we prove … Read more
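A minimal sketch of the block-coordinate pattern, using Euclidean proximal terms in place of general Bregman distances and convex block terms for simplicity (the paper's setting is nonconvex); the toy objective and step sizes are assumptions.

```python
import numpy as np

# Block coordinate proximal gradient on a toy instance:
#   min_{x,y}  f(x, y) + g(x),  f(x, y) = 0.5*||x - y||^2 + 0.25*||y||^4,
#   g(x) = lam*||x||_1.  Each sweep takes a prox-gradient step per block,
# using the latest value of the other block (Gauss-Seidel style).
lam, step = 0.1, 0.2
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y = np.ones(3), np.full(3, 0.5)
for _ in range(200):
    # x-block: gradient step on f in x, then prox of the nonsmooth term g
    x = soft(x - step * (x - y), step * lam)
    # y-block: plain gradient step (its block term is smooth here)
    y = y - step * ((y - x) + (y @ y) * y)
print(x, y)
```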

Convergence Analysis of Sample Average Approximation of Two-stage Stochastic Generalized Equations

A solution of a two-stage stochastic generalized equation is a pair: a first-stage solution, which is independent of the realization of the random data, and a second-stage solution, which is a function of the random variables. This paper studies convergence of the sample average approximation of two-stage stochastic nonlinear generalized equations. In particular, an exponential rate … Read more
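A one-dimensional toy SAA computation, illustrating only how the expectation is replaced by a sample mean and the resulting deterministic inclusion solved; the instance is invented and says nothing about the exponential rate itself.

```python
import numpy as np

# SAA for the generalized equation 0 in E[x - xi] + N_C(x) with C = [0, 1]:
# the true solution is the projection of E[xi] onto C.  The sample mean
# replaces the expectation, and the resulting inclusion is solved by
# projecting the mean onto C.
rng = np.random.default_rng(1)
true_mean = 0.7
for N in (10, 100, 1000, 10000):
    xi = rng.normal(true_mean, 0.5, size=N)
    x_saa = np.clip(xi.mean(), 0.0, 1.0)   # solve 0 in (x - mean) + N_[0,1](x)
    print(N, x_saa)
```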

The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates

We propose two numerical algorithms for minimizing the sum of a smooth function and the composition of a nonsmooth function with a linear operator in the fully nonconvex setting. The iterative schemes are formulated in the spirit of the proximal and the proximal linearized alternating direction method of multipliers, respectively. The proximal terms are introduced through … Read more
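The sketch below shows the general shape of a proximal linearized ADMM iteration on a convex toy instance (the paper treats the fully nonconvex case); the parameter choices are guesses.

```python
import numpy as np

# Proximal linearized ADMM sketch for min_x h(x) + g(Ax), here with
# h(x) = 0.5*||x - b||^2 and g = lam*||.||_1.  rho and tau are illustrative;
# tau should dominate rho*||A||^2 plus the Lipschitz constant of grad h.
rng = np.random.default_rng(2)
A, b, lam, rho = rng.normal(size=(4, 3)), rng.normal(size=3), 0.1, 1.0
tau = rho * np.linalg.norm(A, 2) ** 2 + 1.0 + 0.1

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x, z, u = np.zeros(3), np.zeros(4), np.zeros(4)
for _ in range(300):
    grad = (x - b) + A.T @ (u + rho * (A @ x - z))
    x = x - grad / tau                      # linearized proximal x-update
    z = soft(A @ x + u / rho, lam / rho)    # exact proximal z-update
    u = u + rho * (A @ x - z)               # multiplier update
print(x)
```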

A projection algorithm based on KKT conditions for convex quadratic semidefinite programming with nonnegative constraints

The dual of the convex quadratic semidefinite programming (CQSDP) problem with nonnegative constraints is a 4-block separable convex optimization problem. It is known that the directly extended 4-block alternating direction method of multipliers (ADMM4d) is very efficient for solving the dual, but its convergence is not guaranteed. In this paper, we reformulate the dual as a … Read more
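As background, the elementary building block of projection-type methods in this setting is the Euclidean projection of a symmetric matrix onto the positive semidefinite cone; the sketch below shows only this subroutine, not the paper's KKT-based algorithm.

```python
import numpy as np

# Euclidean projection onto the PSD cone: symmetrize, eigendecompose, and
# zero out the negative eigenvalues.  (The nonnegativity constraints would
# additionally require an entrywise projection, not shown.)
def project_psd(S):
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T    # V diag(max(w,0)) V^T

S = np.array([[1.0, 2.0], [2.0, -3.0]])
print(project_psd(S))
```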

Convergent Prediction-Correction-based ADMM for multi-block separable convex programming

The direct extension of the classic alternating direction method of multipliers (ADMMe) to the multi-block separable convex optimization problem is not necessarily convergent, even though it often performs very well in practice. To preserve the numerical advantages of ADMMe while guaranteeing convergence, many modified ADMM schemes have been proposed by correcting the output of ADMMe or … Read more
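A schematic of the prediction-correction pattern on an invented 3-block toy problem: one Gauss-Seidel ADMM sweep serves as the predictor, and a damped combination with the previous iterate serves as the corrector. Concrete correction rules in the literature differ in detail; this shows only the general shape.

```python
import numpy as np

# Prediction-correction ADMM sketch for the toy problem
#   min 0.5*(||x1||^2 + ||x2||^2 + ||x3||^2)  s.t.  x1 + x2 + x3 = b.
b, rho, alpha = np.array([3.0, 0.0]), 1.0, 0.8
x1 = x2 = x3 = u = np.zeros(2)
for _ in range(200):
    old = np.concatenate([x1, x2, x3, u])
    # prediction: one Gauss-Seidel sweep of the augmented Lagrangian
    x1p = rho * (b - x2 - x3 - u / rho) / (1 + rho)
    x2p = rho * (b - x1p - x3 - u / rho) / (1 + rho)
    x3p = rho * (b - x1p - x2p - u / rho) / (1 + rho)
    up = u + rho * (x1p + x2p + x3p - b)
    # correction: damped combination with the previous iterate
    new = old + alpha * (np.concatenate([x1p, x2p, x3p, up]) - old)
    x1, x2, x3, u = new[0:2], new[2:4], new[4:6], new[6:8]
print(x1 + x2 + x3, b)   # constraint residual should vanish
```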

On the Optimal Proximal Parameter of an ADMM-like Splitting Method for Separable Convex Programming

An ADMM-based splitting method is proposed in [11] for solving convex minimization problems with linear constraints and multi-block separable objective functions, but a relatively large proximal parameter is required to theoretically ensure convergence. In this paper, we further study this method and find its optimal (smallest) proximal parameter. For succinctness, we focus on the … Read more
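Schematically, the proximal parameter $\tau$ enters a per-block subproblem of the following generic form (a standard template, not a reproduction of the scheme in [11]):

```latex
% Generic proximally regularized block subproblem; \mathcal{L}_\beta is the
% augmented Lagrangian and \tau the proximal parameter under study.
x_i^{k+1} \in \arg\min_{x_i}\;
  \mathcal{L}_\beta\bigl(x_1^{k+1},\dots,x_{i-1}^{k+1},\,x_i,\,
  x_{i+1}^{k},\dots,x_m^{k};\,\lambda^k\bigr)
  + \frac{\tau}{2}\,\bigl\|x_i - x_i^{k}\bigr\|^2 .
```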

Non-smooth Non-convex Bregman Minimization: Unification and New Algorithms

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search … Read more
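A compressed sketch of the model-function idea with a Euclidean Bregman distance: linearize the smooth part, keep the nonsmooth term in the convex model, solve the proximal subproblem in closed form, and backtrack along the resulting descent direction. The instance and constants are invented.

```python
import numpy as np

# Model-function scheme sketch for F = f + lam*||.||_1 with f smooth and
# nonconvex.  The convex model is f's linearization plus the l1 term; its
# Euclidean proximal point has a closed form (soft thresholding), and the
# step along the descent direction is chosen by Armijo-like backtracking.
lam, t = 0.1, 0.5
f      = lambda x: 0.25 * np.sum((x**2 - 1.0)**2)   # smooth, nonconvex
grad_f = lambda x: x**3 - x
F      = lambda x: f(x) + lam * np.abs(x).sum()
soft   = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

x = np.array([2.0, -0.3, 0.8])
for _ in range(50):
    xp = soft(x - t * grad_f(x), t * lam)   # proximal point of the model
    d = xp - x                              # descent direction for F
    step = 1.0
    while F(x + step * d) > F(x) - 1e-4 * step * (d @ d) / t:
        step *= 0.5                         # Armijo-like backtracking
    x = x + step * d
print(x)
```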

A Novel Approach for Solving Convex Problems with Cardinality Constraints

In this paper we consider the problem of minimizing a convex differentiable function subject to sparsity constraints. Such constraints are non-convex and the resulting optimization problem is known to be hard to solve. We propose a novel generalization of this problem and demonstrate that it is equivalent to the original sparsity-constrained problem if a certain … Read more
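For contrast, the classical baseline for such problems is projected gradient with hard thresholding (iterative hard thresholding, IHT); the sketch below shows that baseline on a made-up least-squares instance, not the generalization proposed in the paper.

```python
import numpy as np

# IHT baseline for min f(x) s.t. ||x||_0 <= s, with f a least-squares loss:
# gradient step, then projection onto the sparsity constraint by keeping
# the s largest-magnitude entries.
rng = np.random.default_rng(3)
A, s = rng.normal(size=(20, 10)), 3
x_true = np.zeros(10); x_true[:3] = (2.0, -1.5, 1.0)
b = A @ x_true

def hard_threshold(v, s):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]        # indices of s largest entries
    out[keep] = v[keep]
    return out

x, step = np.zeros(10), 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    x = hard_threshold(x - step * A.T @ (A @ x - b), s)
print(np.nonzero(x)[0], x[np.nonzero(x)])
```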