Orthogonal invariance and identifiability

Orthogonally invariant functions of symmetric matrices often inherit properties from their diagonal restrictions: von Neumann’s theorem on matrix norms is an early example. We discuss the example of “identifiability”, a common property of nonsmooth functions associated with the existence of a smooth manifold of approximate critical points. Identifiability (or its synonym, “partial smoothness”) is the … Read more

A splitting minimization method on geodesic spaces

We present in this paper the alternating minimization method on CAT(0) spaces for solving unconstrained convex optimization problems. Under the assumption that the function being minimized is convex, we prove that the sequence generated by our algorithm converges to a minimizer. The results presented in this paper are the first ones of this type … Read more
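The paper works on CAT(0) geodesic spaces; as a hedged Euclidean sketch of the underlying splitting idea, alternating (block) minimization fixes one block and minimizes exactly over the other. The two-block convex function below is purely illustrative, not taken from the paper; both block subproblems have closed-form solutions.

```python
def alternating_minimization(n_iter=100):
    """Euclidean sketch of alternating (block) minimization for the
    convex function f(x, y) = (x-1)**2 + (y-2)**2 + (x-y)**2.
    Each block subproblem is solved exactly in closed form."""
    x = y = 0.0
    for _ in range(n_iter):
        x = (1.0 + y) / 2.0   # argmin over x with y held fixed
        y = (2.0 + x) / 2.0   # argmin over y with x held fixed
    return x, y
```

Setting both partial derivatives to zero shows the true minimizer is (4/3, 5/3), and the block updates contract toward it at a linear rate.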

An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization

We consider optimization problems with an objective function that is the sum of two convex terms: one is smooth and given by a black-box oracle, and the other is general but with a simple, known structure. We first present an accelerated proximal gradient (APG) method for problems where the smooth part of the objective function … Read more
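The composite model described here fits the standard accelerated proximal gradient (APG/FISTA) template. A minimal sketch for one concrete instance, the lasso objective 0.5*||Ax-b||^2 + lam*||x||_1, where the smooth gradient plays the role of the black-box oracle and soft-thresholding is the prox of the simple part; this illustrates the generic template, not the paper's adaptive or homotopy variant.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # black-box gradient of the smooth part
        x_new = soft_threshold(y - grad / L, lam / L)   # prox step on the simple part
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The extrapolation step is what lifts the plain proximal gradient rate O(1/k) to the accelerated O(1/k^2).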

Projection: A Unified Approach to Semi-Infinite Linear Programs and Duality in Convex Programming

Fourier-Motzkin elimination is a projection algorithm for solving finite linear programs. We extend Fourier-Motzkin elimination to semi-infinite linear programs, which are linear programs with finitely many variables and infinitely many constraints. Applying projection leads to new characterizations of important properties for primal-dual pairs of semi-infinite programs, such as zero duality gap, feasibility, boundedness, and solvability. … Read more
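In the finite case, one Fourier-Motzkin step projects out a variable by pairing every upper bound on it with every lower bound. A minimal sketch for finite systems of inequalities a·x <= c (the finite algorithm only; the paper's semi-infinite extension is not modeled here):

```python
def fm_eliminate(rows, j):
    """One Fourier-Motzkin step: project the polyhedron {x : a.x <= c}
    onto the coordinates other than x_j.
    rows is a list of (a, c) pairs with a a coefficient list."""
    pos, neg, zero = [], [], []
    for a, c in rows:
        (pos if a[j] > 0 else neg if a[j] < 0 else zero).append((a, c))
    out = list(zero)                    # rows not involving x_j survive unchanged
    # pair every upper bound on x_j (pos) with every lower bound (neg)
    for ap, cp in pos:
        for an, cn in neg:
            a = [ap[k] * (-an[j]) + an[k] * ap[j] for k in range(len(ap))]
            out.append((a, cp * (-an[j]) + cn * ap[j]))
    return out
```

For example, eliminating x from {x + y <= 4, -x <= 0, -y <= 0} yields {y <= 4, -y <= 0}, i.e. the projection 0 <= y <= 4.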

A Perturbed Sums of Squares Theorem for Polynomial Optimization and its Applications

We consider a property of positive polynomials on a compact set with a small perturbation. When applied to a Polynomial Optimization Problem (POP), the property implies that the optimal value of the corresponding SemiDefinite Programming (SDP) relaxation with sufficiently large relaxation order is bounded from below by $(f^\ast - \epsilon)$ and from above by $f^\ast … Read more

Gradient methods for convex minimization: better rates under weaker conditions

The convergence behavior of gradient methods for minimizing convex differentiable functions is one of the core questions in convex optimization. This paper shows that their well-known complexities can be achieved under conditions weaker than the commonly assumed ones. We relax the common gradient Lipschitz-continuity condition and strong convexity condition to ones that hold only over … Read more
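As a baseline for the conditions being relaxed, the classical setting is plain gradient descent with step size 1/L under a global Lipschitz gradient assumption. A sketch of that baseline only (the paper's weaker, restricted conditions are not modeled here):

```python
import numpy as np

def gradient_descent(grad, x0, L, n_iter=300):
    """Plain gradient descent with the classical step size 1/L,
    where L is a Lipschitz constant of the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - grad(x) / L        # fixed-step gradient step
    return x
```

On a strongly convex quadratic f(x) = 0.5*x'Qx - b'x this iteration converges linearly to the unique minimizer Q^{-1}b.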

Smoothness Properties of a Regularized Gap Function for Quasi-Variational Inequalities

This article studies continuity and differentiability properties for a reformulation of a finite-dimensional quasi-variational inequality (QVI) problem using a regularized gap function approach. For a special class of QVIs, this gap function is continuously differentiable everywhere; in general, however, it has nondifferentiability points. We therefore take a closer look at these nondifferentiability points and show, … Read more

On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems

We present two modified versions of the primal-dual splitting algorithm relying on forward-backward splitting proposed in [21] for solving monotone inclusion problems. Under strong monotonicity assumptions on some of the operators involved, we obtain, for the sequences of iterates approaching the solution, orders of convergence of ${\cal {O}}(\frac{1}{n})$ and ${\cal {O}}(\omega^n)$, for $\omega \in … Read more

An Augmented Lagrangian Method for Conic Convex Programming

We propose a new first-order augmented Lagrangian algorithm ALCC for solving convex conic programs of the form min{rho(x)+gamma(x): Ax-b in K, x in chi}, where rho and gamma are closed convex functions, gamma has a Lipschitz continuous gradient, A is an mxn real matrix, K is a closed convex cone, and chi is a “simple” … Read more
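A hedged illustration of the generic augmented Lagrangian template that such algorithms build on, specialized to the equality cone K = {0} and a quadratic objective so the inner subproblem is solvable exactly (this sketches the template, not the ALCC algorithm itself):

```python
import numpy as np

def augmented_lagrangian(A, b, rho=10.0, n_iter=50):
    """Augmented Lagrangian sketch for
    min 0.5*||x||^2  s.t.  Ax = b   (equality cone K = {0}).
    The inner subproblem is quadratic and solved exactly."""
    m, n = A.shape
    mu = np.zeros(m)                       # Lagrange multiplier estimates
    H = np.eye(n) + rho * A.T @ A          # Hessian of the augmented Lagrangian in x
    for _ in range(n_iter):
        # inner minimization: argmin_x 0.5||x||^2 + mu'(Ax-b) + (rho/2)||Ax-b||^2
        x = np.linalg.solve(H, rho * A.T @ b - A.T @ mu)
        mu = mu + rho * (A @ x - b)        # dual (multiplier) update
    return x
```

With an exact inner solve, the multiplier iteration contracts linearly, and the iterates approach the minimum-norm solution of Ax = b.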

A generalization of Lowner-John's ellipsoid theorem

We address the following generalization $P$ of Lowner-John's ellipsoid problem. Given a (not necessarily convex) compact set $K\subset R^n$ and an even integer $d$, find a homogeneous polynomial $g$ of degree $d$ such that $K\subset G:=\{x:g(x)\leq1\}$ and $G$ has minimum volume among all such sets. We show that $P$ is a convex optimization problem … Read more