On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators

A recent analysis by Nemirovski shows that the extragradient method achieves an $O(1/t)$ convergence rate for variational inequalities with Lipschitz continuous monotone operators. For the same class of problems, over the last decades we have developed a class of Fejér monotone projection and contraction methods. Until now, only convergence results have been available for these projection and contraction methods, though … Read more
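
For context, a minimal sketch of the classical extragradient iteration that Nemirovski's analysis covers (the function names and the toy skew-symmetric operator below are ours, not the paper's):

import numpy as np

def extragradient(F, proj_C, x0, tau, iters=1000):
    """Korpelevich's extragradient method for VI(F, C): find x* in C
    with <F(x*), x - x*> >= 0 for all x in C.  Requires F monotone
    and L-Lipschitz and a step size tau < 1/L; the O(1/t) rate is
    stated for suitably averaged iterates."""
    x = x0
    for _ in range(iters):
        y = proj_C(x - tau * F(x))   # predictor: tentative step
        x = proj_C(x - tau * F(y))   # corrector: step using re-evaluated F
    return x

# Toy instance: F(x) = A x with A skew-symmetric (monotone, L = 1),
# C the unit box; the unique solution is x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
proj_box = lambda x: np.clip(x, -1.0, 1.0)
print(extragradient(F, proj_box, np.array([0.9, -0.5]), tau=0.5))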

Approximation of rank function and its application to the nearest low-rank correlation matrix

The rank function $\mathrm{rank}(\cdot)$ is neither continuous nor convex, which makes rank minimization problems difficult to solve. In this paper, we provide a unified framework for constructing approximations of $\mathrm{rank}(\cdot)$ and study their favorable properties. In particular, using two families of approximation functions, we propose a convex relaxation method for the … Read more
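
As a generic illustration of the idea (the paper's specific families of approximation functions are not reproduced here), one common smooth surrogate for the rank replaces each singular value $s_i$ by $s_i/(s_i+\varepsilon)$:

import numpy as np

def approx_rank(X, eps=1e-2):
    """Smooth surrogate for rank(X): sum_i s_i / (s_i + eps) over the
    singular values s_i of X.  As eps -> 0 the value tends to the
    exact rank; larger eps trades accuracy for smoothness."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s / (s + eps)))

X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])    # a rank-1 matrix
print(approx_rank(X, eps=1e-4))              # close to 1.0
print(np.linalg.matrix_rank(X))              # exactly 1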

Fast First-Order Methods for Stable Principal Component Pursuit

The stable principal component pursuit (SPCP) problem is a non-smooth convex optimization problem whose solution has been shown, both in theory and in practice, to recover the low-rank and sparse components of a matrix whose elements have been corrupted by Gaussian noise. In this paper, we first show how … Read more
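
For reference, the SPCP problem is the convex program

\[
\min_{L,S}\ \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad \|M - L - S\|_F \le \delta,
\]

where $M$ is the observed matrix, the nuclear norm $\|\cdot\|_*$ promotes a low-rank $L$, the elementwise $\ell_1$ norm promotes a sparse $S$, and $\delta$ is calibrated to the Gaussian noise level.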

The mesh adaptive direct search algorithm with treed Gaussian process surrogates

This work introduces the use of the treed Gaussian process (TGP) as a surrogate model within the mesh adaptive direct search (MADS) framework for constrained blackbox optimization. It extends the surrogate management framework (SMF) to nonsmooth optimization under general constraints. MADS uses TGP in two ways: one, as a surrogate for blackbox evaluations; and two, … Read more
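
A minimal sketch of the surrogate idea (ours, with an ordinary Gaussian process from scikit-learn standing in for the treed GP): a cheap surrogate fitted to past blackbox evaluations orders candidate poll points, so the expensive blackbox is queried at promising points first.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def rank_poll_points(history_X, history_f, candidates):
    """Order MADS poll candidates by a surrogate's prediction.
    A plain GP stands in here for the treed GP (TGP) of the paper."""
    gp = GaussianProcessRegressor().fit(np.asarray(history_X),
                                        np.asarray(history_f))
    pred = gp.predict(np.asarray(candidates))
    order = np.argsort(pred)                  # most promising first
    return [candidates[i] for i in order]

# Hypothetical usage on a 1-D blackbox history:
X_hist = [[0.0], [0.5], [1.0]]
f_hist = [1.0, 0.2, 0.8]
polls = [[0.4], [0.6], [0.9]]
print(rank_poll_points(X_hist, f_hist, polls))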

Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions … Read more
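
A minimal sketch of such a method on a concrete composite instance (blocks of size one, smooth part $\tfrac12\|Ax-b\|^2$, separable nonsmooth part $\lambda\|x\|_1$; the code and its names are ours):

import numpy as np

def rbcd_lasso(A, b, lam, iters=5000, seed=0):
    """Randomized (block-)coordinate descent for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, with blocks of size one."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = np.sum(A * A, axis=0)        # coordinate-wise Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                    # running residual Ax - b
    for _ in range(iters):
        i = rng.integers(n)          # pick a block uniformly at random
        g = A[:, i] @ r              # partial gradient
        z = x[i] - g / L[i]
        new = np.sign(z) * max(abs(z) - lam / L[i], 0.0)  # l1 prox step
        r += A[:, i] * (new - x[i])  # incremental residual update
        x[i] = new
    return x

A = np.random.default_rng(1).standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.0])
print(np.round(rbcd_lasso(A, b, lam=0.1), 3))  # approximately [1, 0, -2, 0, 0]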

How to generate weakly infeasible semidefinite programs via Lasserre’s relaxations for polynomial optimization

Examples of weakly infeasible semidefinite programs are useful for testing whether semidefinite solvers can detect infeasibility. However, finding nontrivial examples of this kind is notoriously difficult. This note shows how to use Lasserre’s semidefinite programming relaxations for polynomial optimization to generate examples of weakly infeasible semidefinite programs. Such examples could be used to test … Read more
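
For context, the textbook $2\times 2$ example of weak infeasibility (not one of the note's Lasserre-generated instances) asks for $X \succeq 0$ with $X_{11} = 0$ and $X_{12} = 1$: no such $X$ exists, since $X_{11} = 0$ forces the entire first row of a positive semidefinite matrix to vanish, yet

\[
X_\varepsilon = \begin{pmatrix} \varepsilon & 1 \\ 1 & 1/\varepsilon \end{pmatrix} \succeq 0
\]

satisfies $X_{12} = 1$ and violates $X_{11} = 0$ only by $\varepsilon$, so the distance to feasibility is zero and the infeasibility is hard for solvers to detect.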

Implementing the simplex method as a cutting-plane method

We show that the simplex method can be interpreted as a cutting-plane method, provided that a special pricing rule is used. This approach is motivated by the recent success of cutting-plane methods in the solution of special stochastic programming problems. We compare the classic Dantzig pricing rule with the rule that derives from the … Read more
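
For reference, a sketch of the classic Dantzig rule used as the baseline for comparison (our code; `basis` and `nonbasis` are index lists into the columns of the standard-form constraint matrix $A$, and the special rule that yields the cutting-plane interpretation is in the paper itself):

import numpy as np

def dantzig_pricing(c, A, basis, nonbasis):
    """Dantzig's rule: among the nonbasic columns, return the index
    with the most negative reduced cost d_j = c_j - c_B^T B^{-1} A_j,
    or None when all reduced costs are nonnegative (optimality)."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])         # simplex multipliers
    reduced = c[nonbasis] - A[:, nonbasis].T @ y
    j = int(np.argmin(reduced))
    return None if reduced[j] >= -1e-10 else nonbasis[j]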

On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets

In this paper we prove that any nonconvex quadratic problem over some set $K\subset \mathbb{R}^n$ with additional linear and binary constraints can be rewritten as a linear problem over the cone dual to the cone of $K$-semidefinite matrices. We show that when $K$ is defined by one quadratic constraint or by one concave quadratic constraint and … Read more
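
For context, the best-known special case of such a representation is Burer's completely positive reformulation for $K = \mathbb{R}^n_+$ (a sketch in our notation, with $B$ the index set of binary variables and under Burer's mild key assumption on them): the problem $\min\{x^T Q x + 2c^T x : a_i^T x = b_i \ (i=1,\dots,m),\ x \ge 0,\ x_j \in \{0,1\}\ (j \in B)\}$ equals

\[
\min\Big\{ \langle Q, X\rangle + 2c^T x \;:\; a_i^T x = b_i,\ a_i^T X a_i = b_i^2\ (i=1,\dots,m),\ x_j = X_{jj}\ (j\in B),\ \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \in \mathcal{C}^*_{n+1} \Big\},
\]

where $\mathcal{C}^*_{n+1} = \operatorname{conv}\{zz^T : z \in \mathbb{R}^{n+1}_+\}$ is the completely positive cone of order $n+1$; the paper replaces $\mathbb{R}^n_+$ by a general $K$ and $\mathcal{C}^*$ by the dual of the cone of $K$-semidefinite matrices.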

Robust solutions of optimization problems affected by uncertain probabilities

In this paper we focus on robust linear optimization problems with uncertainty regions defined by $\phi$-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on $\phi$-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization … Read more
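
For reference, the $\phi$-divergence between probability vectors $p$ and $q$ is

\[
I_\phi(p, q) = \sum_{i=1}^m q_i\, \phi\!\left(\frac{p_i}{q_i}\right),
\]

with $\phi$ convex and $\phi(1) = 0$; taking $\phi(t) = (t-1)^2$, $\phi(t) = (\sqrt{t}-1)^2$, or $\phi(t) = t\log t$ recovers the chi-squared, Hellinger, and Kullback-Leibler cases, and the uncertainty region is the ball $\{p : I_\phi(p, \hat q) \le \rho\}$ around the estimate $\hat q$.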

Accelerated Linearized Bregman Method

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB … Read more
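
A minimal sketch of the (unaccelerated) LB iteration for basis pursuit, $\min \|x\|_1$ s.t. $Ax = b$ (our code and parameter choices; the accelerated variant adds a Nesterov-type extrapolation on top of the dual gradient viewpoint):

import numpy as np

def linearized_bregman(A, b, mu, delta, iters=5000):
    """Linearized Bregman iteration: alternates a dual gradient step
    with soft-thresholding.  It solves the regularized problem
    min mu*||x||_1 + 1/(2*delta)*||x||_2^2  s.t.  Ax = b,
    which matches basis pursuit when mu*delta is large enough.
    Convergence requires delta * lambda_max(A A^T) < 2."""
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (b - A @ x)   # gradient step on the dual
        x = delta * shrink(v, mu)   # shrinkage / proximal step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 30))
A /= np.linalg.norm(A, 2)           # normalize so delta = 1 is admissible
x_true = np.zeros(30); x_true[[3, 17]] = [2.0, -1.5]
b = A @ x_true
# Rule of thumb: mu*delta about 10x the largest signal entry.
x_hat = linearized_bregman(A, b, mu=20.0, delta=1.0)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])   # typically recovers {3, 17}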