Optimal Primal-Dual Methods for a Class of Saddle Point Problems

We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPP). The basic idea of this algorithm is to incorporate a multi-step acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence … Read more
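For context, the saddle point problems in question are typically of the bilinear form below (a standard formulation assumed here for illustration, not quoted from the abstract):
\[
\min_{x \in X} \max_{y \in Y} \; f(x) + \langle K x, y \rangle - g(y),
\]
where $f$ and $g$ are convex, $K$ is a linear operator, and a primal-dual method alternates proximal steps in $x$ and $y$ that are coupled only through $K$.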

Faster, but Weaker, Relaxations for Quadratically Constrained Quadratic Programs

We introduce a new relaxation framework for nonconvex quadratically constrained quadratic programs (QCQPs). In contrast to existing relaxations based on semidefinite programming (SDP), our relaxations incorporate features of both SDP and second order cone programming (SOCP) and, as a result, solve more quickly than SDP. A downside is that the calculated bounds are weaker than … Read more
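For orientation, a generic nonconvex QCQP and the standard Shor SDP relaxation it is usually compared against read as follows (notation assumed here; the paper's SOCP-flavoured relaxations are not reproduced):
\[
\min_{x \in \mathbb{R}^n} \; x^\top Q_0 x + c_0^\top x
\quad \text{s.t.} \quad x^\top Q_i x + c_i^\top x \le b_i, \;\; i = 1, \dots, m,
\]
is relaxed by lifting $X = x x^\top$ and dropping the rank restriction:
\[
\min_{x, X} \; \langle Q_0, X \rangle + c_0^\top x
\quad \text{s.t.} \quad \langle Q_i, X \rangle + c_i^\top x \le b_i, \;\; X \succeq x x^\top .
\]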

Tail bounds for stochastic approximation

Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream or where it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each … Read more
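As a rough illustration of the setting (the function names and the batch and step-size schedules below are hypothetical, not taken from the paper), a stochastic-gradient loop that reduces the error variance by growing the mini-batch while shrinking the step size might look like:

import numpy as np

def sgd_growing_batch(grad_sample, x0, n_iters=200, step0=1.0):
    """Stochastic gradient descent in which the mini-batch grows,
    so the variance of the gradient estimate shrinks over the iterations."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        batch = k  # hypothetical schedule: batch size grows linearly
        g = np.mean([grad_sample(x) for _ in range(batch)], axis=0)
        x = x - (step0 / np.sqrt(k)) * g  # diminishing step size
    return x

# Example: minimize E[(x - a)^2] using noisy gradients 2*(x - a) + noise.
rng = np.random.default_rng(0)
a = 3.0
grad = lambda x: 2.0 * (x - a) + rng.normal(scale=1.0)
print(sgd_growing_batch(grad, x0=0.0))  # approaches 3.0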

On the use of semi-closed sets and functions in convex analysis

The main aim of this short note is to show that the subdifferentiability and duality results established by Laghdir (2005), Laghdir and Benabbou (2007), and Alimohammady et al. (2011), stated in Fréchet spaces, are consequences of the corresponding known results using Moreau–Rockafellar type conditions. … Read more

A splitting minimization method on geodesic spaces

In this paper we present the alternating minimization method on CAT(0) spaces for solving unconstrained convex optimization problems. Under the assumption that the function being minimized is convex, we prove that the sequence generated by our algorithm converges to a minimizer. The results presented in this paper are the first ones of this type … Read more
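Since $\mathbb{R}^n$ with the Euclidean metric is itself a CAT(0) space, the flat special case of such a splitting scheme is ordinary alternating minimization; the sketch below illustrates only that special case and is not the authors' geodesic algorithm.

def alternating_minimization(x0, y0, argmin_x, argmin_y, n_iters=50):
    """Minimize f(x, y) by alternately minimizing over each block."""
    x, y = x0, y0
    for _ in range(n_iters):
        x = argmin_x(y)  # exact minimization over x with y fixed
        y = argmin_y(x)  # exact minimization over y with x fixed
    return x, y

# Example: f(x, y) = (x - 1)^2 + (y - 2)^2 + (x - y)^2
argmin_x = lambda y: (1.0 + y) / 2.0  # solves df/dx = 0
argmin_y = lambda x: (2.0 + x) / 2.0  # solves df/dy = 0
print(alternating_minimization(0.0, 0.0, argmin_x, argmin_y))  # -> (4/3, 5/3)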

Projection: A Unified Approach to Semi-Infinite Linear Programs and Duality in Convex Programming

Fourier-Motzkin elimination is a projection algorithm for solving finite linear programs. We extend Fourier-Motzkin elimination to semi-infinite linear programs which are linear programs with finitely many variables and infinitely many constraints. Applying projection leads to new characterizations of important properties for primal-dual pairs of semi-infinite programs such as zero duality gap, feasibility, boundedness, and solvability. … Read more
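In the finite case, a single elimination step can be sketched as follows: split the inequalities by the sign of the coefficient of the variable being eliminated and pair every lower bound with every upper bound. This is a textbook illustration (assuming the system is given as $Ax \le b$), not code from the paper.

import numpy as np

def fm_eliminate(A, b, j):
    """One Fourier-Motzkin step: project {x : A x <= b} onto the
    coordinates other than x_j, returning the reduced system (A', b')."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    pos = [i for i in range(len(b)) if A[i, j] > 0]    # upper bounds on x_j
    neg = [i for i in range(len(b)) if A[i, j] < 0]    # lower bounds on x_j
    zero = [i for i in range(len(b)) if A[i, j] == 0]  # x_j absent
    rows, rhs = [], []
    for i in zero:
        rows.append(A[i])
        rhs.append(b[i])
    for p in pos:
        for n in neg:
            # combine an upper and a lower bound so the x_j terms cancel
            rows.append(A[p] / A[p, j] - A[n] / A[n, j])
            rhs.append(b[p] / A[p, j] - b[n] / A[n, j])
    Anew = np.array(rows) if rows else np.zeros((0, A.shape[1]))
    return np.delete(Anew, j, axis=1), np.array(rhs)

# Project {(x, y) : x + y <= 4, -x + y <= 2, -y <= 0} onto y (eliminate x):
A = [[1, 1], [-1, 1], [0, -1]]
b = [4, 2, 0]
print(fm_eliminate(A, b, 0))  # reduced system: -y <= 0 and 2y <= 6, i.e. y <= 3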

A Perturbed Sums of Squares Theorem for Polynomial Optimization and its Applications

We consider a property of positive polynomials on a compact set with a small perturbation. When applied to a Polynomial Optimization Problem (POP), the property implies that the optimal value of the corresponding SemiDefinite Programming (SDP) relaxation with sufficiently large relaxation order is bounded from below by $(f^\ast - \epsilon)$ and from above by $f^\ast … Read more
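For orientation, the standard polynomial optimization setup referred to above is (our notation, not the paper's)
\[
f^\ast = \min_{x \in K} f(x), \qquad K = \{x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0\},
\]
with $f, g_1, \dots, g_m$ polynomials; the order-$r$ SDP relaxation produces a value $\rho_r \le f^\ast$ in the unperturbed case, and the perturbation property above guarantees that the relaxed value is at most $\epsilon$ below $f^\ast$ once $r$ is large enough.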

Gradient methods for convex minimization: better rates under weaker conditions

The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly assumed ones. We relax the common gradient Lipschitz-continuity condition and strong convexity condition to ones that hold only over … Read more
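For reference, the classical statements being relaxed (these are the standard conditions and rates, not the paper's weaker conditions) are: gradient Lipschitz continuity,
\[
\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \text{for all } x, y,
\]
which yields $f(x_k) - f^\ast = O(1/k)$ for gradient descent with step size $1/L$, and the addition of $\mu$-strong convexity, which improves this to the linear rate $O\big((1 - \mu/L)^k\big)$.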

On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems

We present two modified versions of the primal-dual splitting algorithm relying on forward-backward splitting proposed in [21] for solving monotone inclusion problems. Under strong monotonicity assumptions on some of the operators involved, we obtain, for the sequences of iterates that approach the solution, orders of convergence of $\mathcal{O}(\frac{1}{n})$ and $\mathcal{O}(\omega^n)$, for $\omega \in … Read more

A generalization of Löwner-John's ellipsoid theorem

We address the following generalization $P$ of the Löwner-John ellipsoid problem. Given a (not necessarily convex) compact set $K\subset R^n$ and an even integer $d$, find a homogeneous polynomial $g$ of degree $d$ such that $K\subset G:=\{x:g(x)\leq1\}$ and $G$ has minimum volume among all such sets. We show that $P$ is a convex optimization problem … Read more
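In symbols (our paraphrase of the abstract), problem $P$ reads
\[
\min_{g} \; \operatorname{vol}\{x : g(x) \le 1\}
\quad \text{s.t.} \quad g \text{ homogeneous of degree } d, \;\; K \subset \{x : g(x) \le 1\};
\]
for $d = 2$ and $g(x) = x^\top A x$ with $A \succ 0$, the sets $G$ are ellipsoids centered at the origin, so this recovers the homogeneous version of the classical Löwner-John minimum-volume ellipsoid problem.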