Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\tfrac{n}{\epsilon} \log \tfrac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions … Read more
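
The composite model admits a short sketch; the following is a minimal illustration on a LASSO-type instance (the objective, the block partition, and the name rbcd_lasso are illustrative choices, not taken from the paper):

    import numpy as np

    def rbcd_lasso(A, b, lam, n_blocks, iters=1000, seed=0):
        """Randomized proximal block-coordinate descent for the composite
        problem min 0.5*||Ax - b||^2 + lam*||x||_1 (an illustrative instance,
        not the paper's exact method)."""
        rng = np.random.default_rng(seed)
        blocks = np.array_split(np.arange(A.shape[1]), n_blocks)
        # Block Lipschitz constants of the smooth part: L_i = ||A_i||_2^2.
        L = [max(np.linalg.norm(A[:, blk], 2) ** 2, 1e-12) for blk in blocks]
        x = np.zeros(A.shape[1])
        r = A @ x - b                              # residual, kept up to date
        for _ in range(iters):
            i = rng.integers(n_blocks)             # sample a block uniformly
            blk = blocks[i]
            z = x[blk] - (A[:, blk].T @ r) / L[i]  # block gradient step
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L[i], 0.0)  # prox
            r += A[:, blk] @ (x_new - x[blk])
            x[blk] = x_new
        return x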

How to generate weakly infeasible semidefinite programs via Lasserre’s relaxations for polynomial optimization

Examples of weakly infeasible semidefinite programs are useful to test whether semidefinite solvers can detect infeasibility. However, finding nontrivial examples of this kind is notoriously difficult. This note shows how to use Lasserre’s semidefinite programming relaxations for polynomial optimization to generate examples of weakly infeasible semidefinite programs. Such examples could be used to test … Read more
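
For orientation, the standard small illustration of weak infeasibility (a textbook example, not one produced by the Lasserre-based construction of the note) is the system

$$ \begin{pmatrix} x & 1 \\ 1 & 0 \end{pmatrix} \succeq 0. $$

It has no solution, since a positive semidefinite matrix with a zero diagonal entry must vanish on the corresponding row and column; yet for every $\epsilon > 0$ the matrix $\begin{pmatrix} x & 1 \\ 1 & \epsilon \end{pmatrix}$ is positive semidefinite once $x \ge 1/\epsilon$, so the constraints can be satisfied up to an arbitrarily small perturbation: the program is infeasible but not strongly infeasible.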

Implementing the simplex method as a cutting-plane method

We show that the simplex method can be interpreted as a cutting-plane method, provided that a special pricing rule is used. This approach is motivated by the recent success of the cutting-plane method in the solution of special stochastic programming problems. We compare the classic Dantzig pricing rule and the rule that derives from the … Read more
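
For reference, the classic Dantzig rule that the abstract compares against can be sketched as follows (basis and nonbasis are integer index arrays; the helper name is hypothetical, and the paper's cutting-plane-derived rule is different):

    import numpy as np

    def dantzig_entering(c, A, basis, nonbasis):
        """Classic Dantzig pricing: return the nonbasic index with the most
        negative reduced cost, or None if the current basis is optimal."""
        y = np.linalg.solve(A[:, basis].T, c[basis])   # simplex multipliers
        reduced = c[nonbasis] - A[:, nonbasis].T @ y   # reduced costs
        j = int(np.argmin(reduced))
        return int(nonbasis[j]) if reduced[j] < -1e-9 else None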

On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets

In this paper we prove that any nonconvex quadratic problem over some set $K\subset \mathbb{R}^n$ with additional linear and binary constraints can be rewritten as a linear problem over the cone dual to the cone of $K$-semidefinite matrices. We show that when $K$ is defined by one quadratic constraint or by one concave quadratic constraint and … Read more
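
In the standard notation, consistent with the abstract, the cone of $K$-semidefinite matrices and its dual are

$$ \mathcal{C}_K = \{ M \in \mathcal{S}^n : x^\top M x \ge 0 \ \text{for all } x \in K \}, \qquad \mathcal{C}_K^{*} = \operatorname{cl} \operatorname{cone} \{ x x^\top : x \in K \}, $$

so for $K = \mathbb{R}^n_+$ one recovers the copositive cone and its dual, the completely positive cone.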

Robust solutions of optimization problems affected by uncertain probabilities

In this paper we focus on robust linear optimization problems with uncertainty regions defined by $\phi$-divergences (for example, chi-squared, Hellinger, Kullback–Leibler). We show how uncertainty regions based on $\phi$-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization … Read more
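
In the usual notation (the symbols $\hat q$, $\rho$, and $\Delta_m$ here are generic, not necessarily the paper's), a $\phi$-divergence and the induced uncertainty region are

$$ I_\phi(p, q) = \sum_{i=1}^{m} q_i \, \phi\!\left( \frac{p_i}{q_i} \right), \qquad \mathcal{U}_\rho = \left\{ p \in \Delta_m : I_\phi(p, \hat q) \le \rho \right\}, $$

where $\Delta_m$ is the probability simplex; the choices $\phi(t) = (t-1)^2$, $\phi(t) = (\sqrt{t}-1)^2$, and $\phi(t) = t \log t - t + 1$ give the chi-squared, Hellinger, and Kullback–Leibler divergences, respectively.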

Accelerated Linearized Bregman Method

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB … Read more
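
One common form of the basic LB iteration for basis pursuit, $\min \|u\|_1$ subject to $Au = b$, is sketched below; the accelerated variant adds Nesterov-type extrapolation on the dual sequence, and update ordering and step-size conditions vary across presentations, so treat this only as a sketch:

    import numpy as np

    def linearized_bregman(A, b, mu, delta, iters=2000):
        """A common form of the linearized Bregman iteration for basis
        pursuit. The dual variable v takes gradient steps; u is recovered
        by shrinkage (soft-thresholding).
        NOTE: delta must be small enough relative to ||A||_2^2; see the
        paper for precise conditions."""
        v = np.zeros(A.shape[1])
        u = np.zeros(A.shape[1])
        for _ in range(iters):
            v += A.T @ (b - A @ u)  # gradient step on the dual
            u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)
        return u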

A lower bound on the optimal self-concordance parameter of convex cones

Let $K \subset \mathbb R^n$ be a regular convex cone, let $e_1,\dots,e_n \in \partial K$ be linearly independent points on the boundary of a compact affine section of the cone, and let $x^* \in K^o$ be a point in the relative interior of this section. For $k = 1,\dots,n$, let $l_k$ be the line through … Read more
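
Recall the standard definition underlying the result: a convex function $F : K^o \to \mathbb{R}$ is a $\nu$-self-concordant barrier for $K$ if

$$ |D^3 F(x)[h,h,h]| \le 2 \left( D^2 F(x)[h,h] \right)^{3/2}, \qquad \left( D F(x)[h] \right)^2 \le \nu \, D^2 F(x)[h,h] $$

for all $x \in K^o$ and $h \in \mathbb{R}^n$; the optimal self-concordance parameter of $K$ is the smallest $\nu$ admitted by any such barrier.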

On Penalty and Gap Function Methods for Bilevel Equilibrium Problems

We consider bilevel pseudomonotone equilibrium problems. We use a penalty function to convert the bilevel problem into one-level equilibrium problems. We generalize the concept of $\nabla$-monotonicity to pseudo $\nabla$-monotonicity and prove that, under the pseudo $\nabla$-monotonicity property, any stationary point of a regularized gap function is a solution of the penalized equilibrium problem. As an application, we discuss … Read more
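
One standard form of the regularized gap function for the equilibrium problem "find $x^* \in C$ such that $f(x^*, y) \ge 0$ for all $y \in C$" (the paper's regularization may differ in details) is

$$ g_\alpha(x) = \max_{y \in C} \left\{ -f(x, y) - \frac{\alpha}{2} \| y - x \|^2 \right\}, \qquad \alpha > 0; $$

under the usual assumptions ($f(x,x) = 0$ and $f(x,\cdot)$ convex), $g_\alpha$ is nonnegative on $C$ and vanishes exactly at the solutions.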

Sufficient Conditions for Low-rank Matrix Recovery, Translated from Sparse Signal Recovery

Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints; it arises in many fields such as signal and image processing, statistics, computer vision, system identification, and control. This class of optimization problems is NP-hard, and a popular approach replaces the rank function with the nuclear norm of the … Read more
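
The translation from sparse signals to low-rank matrices replaces the $\ell_1$ norm by the nuclear norm and soft-thresholding by singular value thresholding; a minimal sketch of the latter (the function name svt is illustrative):

    import numpy as np

    def svt(Z, tau):
        """Singular value thresholding: the proximal operator of
        tau*||.||_* (nuclear norm), the matrix counterpart of
        soft-thresholding for the l1 norm."""
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt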

An Optimal Algorithm for Constrained Differentiable Convex Optimization

We describe three algorithms for solving differentiable convex optimization problems constrained to simple sets in $\mathbb{R}^n$, i.e., sets onto which it is easy to project an arbitrary point. The first two algorithms are optimal in the sense that they achieve an absolute precision of $\varepsilon$ in relation to the optimal value … Read more
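
For context, the classic scheme that attains the optimal rate on simple sets combines a gradient step, a projection, and Nesterov extrapolation; a generic sketch under these assumptions (not the paper's three algorithms; grad, proj, and the smoothness constant L are supplied by the user):

    import numpy as np

    def accel_projected_gradient(grad, proj, x0, L, iters=500):
        """Nesterov-type accelerated projected gradient for minimizing an
        L-smooth convex f over a simple set: one gradient evaluation and
        one projection per iteration, with the optimal O(1/k^2) rate."""
        x = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(iters):
            x_new = proj(y - grad(y) / L)                  # gradient + projection
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extrapolation
            x, t = x_new, t_new
        return x

For a simple set such as the nonnegative orthant, one can pass proj=lambda z: np.maximum(z, 0.0), which makes the projection a single elementwise operation.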