A feasible active set method for strictly convex problems with simple bounds

A primal-dual active set method for quadratic problems with bound constraints is presented that extends the infeasible active set approach of [K. Kunisch and F. Rendl. An infeasible active set method for convex problems with simple bounds. SIAM Journal on Optimization, 14(1):35-52, 2003]. Based on a guess of the active set, a primal-dual pair (x,α) …
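
As a rough illustration of the kind of step involved (the exact update rule of the paper may differ, and the notation here is ours), consider the bound-constrained quadratic program $\min_x \tfrac12 x^TQx + q^Tx$ subject to $x \le b$ with $Q$ positive definite. Given a guess $A$ of the active set, with inactive set $I$, one primal-dual step fixes $x_A = b_A$ and $\alpha_I = 0$ and solves the reduced KKT system $$ Q_{II}\, x_I = -\,q_I - Q_{IA}\, b_A, \qquad \alpha_A = -\big(Qx + q\big)_A, $$ after which the next active-set guess is formed from the primal violations $x_i > b_i$ and the signs of the multipliers $\alpha_i$.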

About [q]-regularity properties of collections of sets

We examine three primal-space local Hölder-type regularity properties of finite collections of sets, namely [q]-semiregularity, [q]-subregularity, and uniform [q]-regularity, as well as their quantitative characterizations. Equivalent metric characterizations of these three regularity properties, as well as a sufficient condition for [q]-subregularity in terms of Fréchet normals, are established. The relationships between [q]-regularity …
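
For orientation only (the paper's precise definitions may differ in detail), Hölder-type subregularity of a collection $\{\Omega_1,\dots,\Omega_m\}$ at a common point $\bar x$ is typically captured by a local error bound with exponent $q \in (0,1]$ of the form $$ d\Big(x, \bigcap_{i=1}^m \Omega_i\Big) \le c \Big(\max_{1\le i\le m} d(x,\Omega_i)\Big)^{q} \quad \text{for all } x \text{ near } \bar x, $$ with $q=1$ recovering the usual (linear) metric inequality; a sufficient condition in terms of Fréchet normals is the dual-space tool for verifying such an estimate.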

Bundle methods in the XXIst century: A bird’s-eye view

Bundle methods are often the algorithms of choice for nonsmooth convex optimization, especially when accuracy in the solution and reliability are of concern. We review several algorithms based on the bundle methodology that have been developed recently and that, unlike their forerunner variants, have the ability to provide exact solutions even if most of the …
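
The common scaffolding behind these algorithms can be sketched as follows (notation is illustrative, not taken from the paper). Given trial points $x^j$ with subgradients $g^j \in \partial f(x^j)$, a proximal bundle method minimizes a cutting-plane model stabilized around the current serious iterate $\hat x^k$, $$ \check f_k(x) = \max_{j \in J_k}\big\{ f(x^j) + \langle g^j, x - x^j\rangle \big\}, \qquad x^{k+1} \in \arg\min_x \; \check f_k(x) + \tfrac{1}{2t_k}\,\Vert x - \hat x^k\Vert^2, $$ and accepts $x^{k+1}$ as the new serious iterate only when the actual decrease in $f$ reaches a prescribed fraction of the decrease predicted by the model.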

Information Relaxations, Duality, and Convex Dynamic Programs

We consider the information relaxation approach for calculating performance bounds for stochastic dynamic programs (DPs), following Brown, Smith, and Sun (2010). This approach generates performance bounds by solving problems with relaxed nonanticipativity constraints and a penalty that punishes violations of these constraints. In this paper, we study DPs that have a convex structure and …
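
Loosely stated, for a finite-horizon maximization DP with exogenous uncertainty $\xi$ (our notation, not the paper's), the approach rests on a weak-duality bound: for any penalty $\pi$ with $\mathbb{E}[\pi(a,\xi)] \le 0$ for every nonanticipative policy $a$, $$ V^*(x_0) \;\le\; \mathbb{E}\Big[ \max_{a_1,\dots,a_T} \sum_{t=1}^{T} r_t(x_t,a_t) - \pi(a,\xi) \Big], $$ where the inner maximization is solved with full knowledge of $\xi$; the convex structure studied here bears on how tractable that inner problem and the construction of good penalties are.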

Conic separation of finite sets: The homogeneous case

This work addresses the issue of separating two finite sets in $\mathbb{R}^n$ by means of a suitable revolution cone $$ \Gamma (z,y,s)= \{x \in \mathbb{R}^n : s\,\Vert x-z\Vert - y^T(x-z)=0\}.$$ The specific challenge at hand is to determine the aperture coefficient $s$, the axis $y$, and the apex $z$ of the cone. These parameters …
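
To make the geometry of this formula explicit: if one normalizes $\Vert y\Vert = 1$ and takes $s \in [0,1)$ (an assumption made here only for illustration), then for $x \neq z$ the defining equation reads $\cos\theta = s$, where $\theta$ is the angle between $x - z$ and the axis $y$, so that $$ \Gamma(z,y,s) = \{z\} \cup \{\, x \neq z : \angle(x-z,\,y) = \arccos s \,\}, $$ i.e. the boundary of a revolution cone with apex $z$, axis $y$, and half-aperture angle $\arccos s$; the smaller $s$, the wider the cone.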

Conic separation of finite sets: The non-homogeneous case

We address the issue of separating two finite sets in $\mathbb{R}^n$ by means of a suitable revolution cone $$ \Gamma (z,y,s)= \{x \in \mathbb{R}^n :\, s\,\Vert x-z\Vert - y^T(x-z)=0\}.$$ One has to select the aperture coefficient $s$, the axis $y$, and the apex $z$ in such a way as to meet certain optimal separation …
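
One plausible way to formalize the separation requirement (not necessarily the exact criterion optimized in the paper) is to ask that one finite set $\{a_1,\dots,a_p\}$ lie inside the cone and the other $\{b_1,\dots,b_r\}$ outside it, $$ s\,\Vert a_i - z\Vert - y^T(a_i - z) \le 0 \quad (i=1,\dots,p), \qquad s\,\Vert b_j - z\Vert - y^T(b_j - z) \ge 0 \quad (j=1,\dots,r), $$ with the optimal cone then chosen, for instance, to maximize a margin or to minimize some measure of misclassification when exact separation is impossible.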

Gauge optimization, duality, and applications

Gauge functions significantly generalize the notion of a norm, and gauge optimization, as defined by Freund (1987), seeks the element of a convex set that is minimal with respect to a gauge function. This conceptually simple problem can be used to model a remarkable array of useful problems, including a special case of conic optimization, …
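
For reference, and following the usual definition rather than anything specific to this paper: a gauge is a function $\kappa:\mathbb{R}^n \to \mathbb{R}\cup\{+\infty\}$ that is convex, nonnegative, positively homogeneous, and vanishes at the origin (every norm is a gauge, but a gauge need not be symmetric or finite everywhere). The gauge optimization problem then reads $$ \min_x\; \kappa(x) \quad \text{subject to} \quad x \in \mathcal{C}, $$ with $\mathcal{C}$ a closed convex set, and much of the duality theory is driven by the polar gauge $\kappa^{\circ}(y) = \sup\{\, y^Tx : \kappa(x) \le 1 \,\}$.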

Completely Positive Reformulations for Polynomial Optimization

Polynomial optimization encompasses a very rich class of problems in which both the objective and the constraints can be written in terms of polynomials in the decision variables. There is a well-established body of research on quadratic polynomial optimization problems based on reformulations of the original problem as a conic program over the cone of …
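
A small, standard instance of this kind of reformulation (recalled here only to illustrate the idea; the paper treats a much more general polynomial setting): minimizing a quadratic form over the simplex is equivalent to a linear problem over the completely positive cone $\mathcal{C}^*_n = \operatorname{conv}\{\, zz^T : z \in \mathbb{R}^n_+ \,\}$, $$ \min\{\, x^TQx : e^Tx = 1,\ x \ge 0 \,\} \;=\; \min\{\, \langle Q, X\rangle : \langle ee^T, X\rangle = 1,\ X \in \mathcal{C}^*_n \,\}, $$ so all of the difficulty of the nonconvex problem is absorbed into the cone membership constraint.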

Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming

In this paper, we generalize Nesterov's well-known accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that, by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order …
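
One common way to write the AG iteration uses two stepsize sequences $\{\lambda_k\}$ and $\{\beta_k\}$ whose coupling is exactly what the nonconvex and stochastic analysis turns on (the notation below is illustrative, not necessarily the paper's): $$ x_k^{md} = (1-\alpha_k)\, x_{k-1}^{ag} + \alpha_k\, x_{k-1}, \qquad x_k = x_{k-1} - \lambda_k \nabla f(x_k^{md}), \qquad x_k^{ag} = x_k^{md} - \beta_k \nabla f(x_k^{md}), $$ which collapses to plain gradient descent when $\alpha_k = 1$ and $\lambda_k = \beta_k$.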

Conic Geometric Programming

We introduce and study conic geometric programs (CGPs), which are convex optimization problems that unify geometric programs (GPs) and conic optimization problems such as linear programs (LPs) and semidefinite programs (SDPs). A CGP consists of a linear objective function that is to be minimized subject to affine constraints, convex conic constraints, and upper bound constraints …
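
Concretely, a problem of this shape might look like the following template (an illustration in our own notation, not the paper's formal definition; $K$ denotes a convex cone and $c, A, b, a_i, d_i$ are problem data): $$ \min_x\; c^Tx \quad \text{s.t.} \quad Ax = b, \qquad x \in K, \qquad \sum_i \exp\!\big(a_i^Tx + d_i\big) \le 1, $$ so that, roughly, dropping the sum-of-exponentials constraint leaves a conic program, while dropping the conic constraint leaves the convex form of a geometric program.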