A Perturbed Sums of Squares Theorem for Polynomial Optimization and its Applications

We consider a property of positive polynomials on a compact set with a small perturbation. When applied to a Polynomial Optimization Problem (POP), the property implies that the optimal value of the corresponding SemiDefinite Programming (SDP) relaxation with sufficiently large relaxation order is bounded from below by $(f^\ast - \epsilon)$ and from above by $f^\ast …
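The abstract's upper bound is cut off above, so only the lower-bound half is displayed in the following schematic; $\mathrm{val}(\mathrm{SDP}_r)$ and the relaxation order $r$ are generic notation assumed here, not the paper's:

\[
f^\ast - \epsilon \;\le\; \mathrm{val}\big(\mathrm{SDP}_r\big) \qquad \text{for all sufficiently large relaxation orders } r.
\]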

Gradient methods for convex minimization: better rates under weaker conditions

The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly assumed ones. We relax the common gradient Lipschitz-continuity condition and strong convexity condition to ones that hold only over …
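For reference, the classical rates being matched here are the textbook bounds for gradient descent with step size $1/L$ on an $L$-smooth convex $f$ (and, in the second bound, additionally $\mu$-strongly convex $f$); these are standard facts, not statements from the paper:

\[
f(x_k) - f^\ast \;\le\; \frac{L\,\|x_0 - x^\ast\|^2}{2k},
\qquad
\|x_k - x^\ast\|^2 \;\le\; \Big(1 - \frac{\mu}{L}\Big)^{k}\,\|x_0 - x^\ast\|^2 .
\]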

Smoothness Properties of a Regularized Gap Function for Quasi-Variational Inequalities

This article studies continuity and differentiability properties of a reformulation of a finite-dimensional quasi-variational inequality (QVI) problem using a regularized gap function approach. For a special class of QVIs, this gap function is continuously differentiable everywhere; in general, however, it has points of nondifferentiability. We therefore take a closer look at these nondifferentiability points and show, …
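As a point of reference, the standard regularized gap function for a QVI with operator $F$ and moving feasible set $S(x)$, in the form introduced by Fukushima; whether the paper uses exactly this variant is an assumption here:

\[
g_\alpha(x) \;=\; \sup_{y \in S(x)} \Big\{ \langle F(x),\, x - y \rangle \;-\; \frac{\alpha}{2}\,\|x - y\|^2 \Big\}, \qquad \alpha > 0 .
\]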

On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems

We present two modified versions of the primal-dual splitting algorithm relying on forward-backward splitting proposed in [21] for solving monotone inclusion problems. Under strong monotonicity assumptions on some of the operators involved, we obtain, for the sequences of iterates approaching the solution, convergence rates of $\mathcal{O}(\frac{1}{n})$ and $\mathcal{O}(\omega^n)$, for $\omega \in …
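Spelled out for the iterates $(x_n)$ and a solution $x^\ast$, the two rate statements read as follows; the ellipsis above truncates the range of $\omega$, which is assumed here to lie in $(0,1)$:

\[
\|x_n - x^\ast\| = \mathcal{O}\!\Big(\frac{1}{n}\Big)
\qquad \text{and} \qquad
\|x_n - x^\ast\| = \mathcal{O}(\omega^n), \quad \omega \in (0,1).
\]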

An Augmented Lagrangian Method for Conic Convex Programming

We propose a new first-order augmented Lagrangian algorithm, ALCC, for solving convex conic programs of the form $\min\{\rho(x)+\gamma(x) : Ax-b \in K,\ x \in \chi\}$, where $\rho$ and $\gamma$ are closed convex functions, $\gamma$ has a Lipschitz continuous gradient, $A$ is an $m \times n$ real matrix, $K$ is a closed convex cone, and $\chi$ is a “simple” …
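For concreteness, one common augmented Lagrangian for the conic constraint $Ax - b \in K$ is the following penalty form; this is a standard construction assumed here for orientation, and the exact ALCC construction may differ:

\[
\mathcal{L}_\mu(x;\lambda) \;=\; \rho(x) + \gamma(x) + \frac{\mu}{2}\,\mathrm{dist}^2\!\Big(Ax - b + \frac{\lambda}{\mu},\, K\Big) - \frac{\|\lambda\|^2}{2\mu}, \qquad x \in \chi,
\]

where $\mathrm{dist}(\cdot, K)$ denotes the Euclidean distance to $K$ and $\mu > 0$ is a penalty parameter.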

A generalization of Löwner-John’s ellipsoid theorem

We address the following generalization $P$ of the Löwner-John ellipsoid problem. Given a (not necessarily convex) compact set $K \subset \mathbb{R}^n$ and an even integer $d$, find a homogeneous polynomial $g$ of degree $d$ such that $K \subset G := \{x : g(x) \leq 1\}$ and $G$ has minimum volume among all such sets. We show that $P$ is a convex optimization problem …
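In optimization form, $P$ reads

\[
P:\quad \min_{g}\ \Big\{ \mathrm{vol}\big(\{x : g(x) \le 1\}\big) \;:\; g \text{ homogeneous of degree } d,\ K \subset \{x : g(x) \le 1\} \Big\}.
\]

A standard identity, stated here for context because it underlies the convexity claim: for a homogeneous $g$ of degree $d$ with bounded sublevel set,

\[
\mathrm{vol}\big(\{x : g(x) \le 1\}\big) \;=\; \frac{1}{\Gamma\!\big(1 + \tfrac{n}{d}\big)} \int_{\mathbb{R}^n} e^{-g(x)}\, dx ,
\]

so the volume is a convex functional of $g$.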

Strong duality in conic linear programming: facial reduction and extended duals

The facial reduction algorithm of Borwein and Wolkowicz and the extended dual of Ramana provide a strong dual for the conic linear program (P) $\sup \{\, \langle c, x \rangle : Ax \leq_K b \,\}$ in the absence of any constraint qualification. The facial reduction algorithm solves a sequence of auxiliary optimization problems to obtain such a dual. Ramana’s dual …
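For context, the ordinary Lagrangian dual of (P), which can have a gap or fail to attain without a constraint qualification, is the textbook conic dual (not the extended dual constructed in the paper):

\[
(D):\quad \inf\ \{\, \langle b, y \rangle \;:\; A^{*}y = c,\ y \in K^{*} \,\},
\]

where $K^{*}$ is the dual cone of $K$.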

Abstract Newtonian Frameworks and Their Applications

We unify and extend some Newtonian iterative frameworks developed earlier in the literature, which results in a collection of convenient tools for local convergence analysis of various algorithms under various sets of assumptions, including strong metric regularity, semistability, or upper-Lipschitz stability, the latter allowing for nonisolated solutions. These abstract schemes are further applied for deriving …
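As a baseline instance of such a framework, the local convergence statement for the exact Newton iteration on a smooth equation $F(x) = 0$, where strong metric regularity reduces to invertibility of $F'(x^\ast)$; this is a standard fact recalled for orientation, not a result of the paper:

\[
x_{k+1} = x_k - F'(x_k)^{-1} F(x_k), \qquad \|x_{k+1} - x^\ast\| \le C\,\|x_k - x^\ast\|^2
\]

for all $x_0$ near $x^\ast$, provided $F'$ is Lipschitz continuous near $x^\ast$ and $F'(x^\ast)$ is invertible.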

Proximal bundle methods in depth: a unified analysis for inexact oracles

The last few years have seen the advent of a new generation of bundle methods, capable of handling inexact oracles polluted by “noise”. Proving convergence of a bundle method is never simple, and coping with inexact oracles substantially increases the technicalities. Besides, several variants exist to deal with noise, each one needing an ad hoc proof …
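For reference, the cutting-plane model and proximal step at the core of a bundle method; with an inexact oracle the pairs $(f_i, g_i)$ are noisy evaluations of $f(x_i)$ and of a subgradient (generic notation, not the paper's):

\[
\check{f}_k(x) = \max_{i \in B_k} \big\{ f_i + \langle g_i,\, x - x_i \rangle \big\},
\qquad
x_{k+1} = \arg\min_{x} \Big\{ \check{f}_k(x) + \frac{1}{2 t_k}\,\|x - \hat{x}_k\|^2 \Big\},
\]

where $\hat{x}_k$ is the current stability center and $t_k > 0$ the proximal parameter.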

Distributionally Robust Convex Optimization

Distributionally robust optimization is a paradigm for decision-making under uncertainty where the uncertain problem data is governed by a probability distribution that is itself subject to uncertainty. The distribution is then assumed to belong to an ambiguity set comprising all distributions that are compatible with the decision maker’s prior information. In this paper, we propose …
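Schematically, the distributionally robust counterpart of a stochastic program over an ambiguity set $\mathcal{P}$ takes the form below; the notation is generic and the paper's specific ambiguity sets are not reproduced here:

\[
\min_{x \in X}\ \sup_{\mathbb{P} \in \mathcal{P}}\ \mathbb{E}_{\mathbb{P}}\big[ h(x, \xi) \big],
\]

where $\xi$ is the uncertain problem data and $\mathcal{P}$ collects all distributions compatible with the decision maker's prior information.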