Convex Approximations of Chance Constrained Programs

We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given (close to one) probability, a system of randomly perturbed convex constraints. Our goal is to build a computationally tractable approximation of this (typically intractable) problem, i.e., an explicitly given convex optimization program with the feasible …

An incremental method for solving convex finite minmax problems

We introduce a new approach to minimizing a function defined as the pointwise maximum of finitely many convex real functions (referred to in the sequel as the “component functions”), with the aim of working on the basis of “incomplete knowledge” of the objective function. Indeed, a descent algorithm is proposed which does not necessarily require at …
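As background for the finite minmax setting, the sketch below runs a plain subgradient method on F(x) = max_i f_i(x) with affine toy components (all data here is hypothetical, not from the paper). Note that each iteration evaluates every component to find the active one; that full sweep is exactly the per-iteration cost an incremental method aims to reduce.

```python
import numpy as np

# Toy finite minmax problem: F(x) = max_i f_i(x) with affine components
# f_i(x) = a_i . x (hypothetical data); here F(x) = max(|x1|, |x2|),
# minimized at the origin with optimal value 0.
a = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

def F(x):
    return np.max(a @ x)

x = np.array([3.0, -2.0])
for k in range(1, 2001):
    i = np.argmax(a @ x)        # active component (touches ALL components;
                                # an incremental scheme would use fewer)
    g = a[i]                    # a subgradient of F at x
    x = x - (1.0 / k) * g       # diminishing step size
print(F(x))                     # a value near the optimal value 0
```

With the diminishing 1/k step size the iterate drifts to the origin and then oscillates with vanishing amplitude, the standard behavior of subgradient methods on nonsmooth convex objectives.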

Pattern Search Method for Discrete L_1-Approximation

We propose a pattern search method to solve a classical nonsmooth optimization problem. In a deep analogy with pattern search methods for linear constrained optimization, the set of search directions at each iteration is defined in such a way that it conforms to the local geometry of the set of points of nondifferentiability near the …
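To illustrate the kind of nonsmooth objective involved, here is a generic compass (coordinate-direction) pattern search on a toy discrete L_1 approximation problem. The data and the fixed ±e_j direction set are illustrative assumptions, not the paper's method; plain compass directions can stall at kinks of the L_1 objective, which is precisely what direction sets adapted to the local geometry of nondifferentiability address.

```python
import numpy as np

# Discrete L1 approximation: minimize f(x) = ||A x - b||_1 (toy data).
# Plain compass/pattern search: poll the coordinate directions +-e_j on a
# mesh and halve the mesh when no direction improves.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))
b = A @ np.array([1.0, -2.0, 0.5])          # noiseless target: optimum is 0

def f(x):
    return np.abs(A @ x - b).sum()

x, step = np.zeros(3), 1.0
directions = np.vstack([np.eye(3), -np.eye(3)])
while step > 1e-8:
    for d in directions:                     # poll the 6 compass directions
        if f(x + step * d) < f(x):
            x = x + step * d                 # accept the improving move
            break
    else:
        step /= 2                            # no improvement: refine the mesh
```

Each accepted move strictly decreases the piecewise-linear objective, so the search terminates once the mesh shrinks below the tolerance; whether it terminates near the true minimizer depends on the direction set, hence the paper's construction.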

Scenario Approximations of Chance Constraints

We consider the optimization problem of minimizing a linear function subject to chance constraints. In the multidimensional case this is, generically, a problem of minimizing under nonconvex and difficult-to-compute constraints, and as such it is computationally intractable. We investigate the potential of a conceptually simple scenario approximation of the chance constraints. The …
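The scenario idea can be shown on a one-dimensional toy problem (this setup is purely illustrative, not taken from the paper): replace the chance constraint by its deterministic counterpart on N sampled scenarios, and check empirically how often the resulting solution violates the original random constraint.

```python
import numpy as np

# Toy chance-constrained problem: maximize x subject to
# P(a * x <= 1) >= 1 - eps, where a ~ Uniform(0.5, 2) (hypothetical data).
# Scenario approximation: draw N samples a_1..a_N and enforce a_i * x <= 1
# for every scenario, which here reduces to x = 1 / max_i a_i.
rng = np.random.default_rng(0)
N = 1000                                 # scenario count drives reliability
a = rng.uniform(0.5, 2.0, size=N)
x = 1.0 / a.max()                        # scenario solution

# Empirical check of the chance constraint on fresh samples:
a_test = rng.uniform(0.5, 2.0, size=100_000)
violation = (a_test * x > 1.0).mean()
print(x, violation)
```

The violation probability of the scenario solution shrinks as N grows (roughly like 1/N in this one-dimensional example), which is the trade-off between sample size and reliability that scenario analyses quantify.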

Convergent relaxations of polynomial matrix inequalities and static output feedback

Using a moment interpretation of recent results on sum-of-squares decompositions of non-negative polynomial matrices, we propose a hierarchy of convex linear matrix inequality (LMI) relaxations to solve non-convex polynomial matrix inequality (PMI) optimization problems, including bilinear matrix inequality (BMI) problems. This hierarchy of LMI relaxations generates a monotone sequence of lower bounds that converges to …

On complexity of stochastic programming problems

The main focus of this paper is a discussion of the complexity of stochastic programming problems. We argue that two-stage (linear) stochastic programming problems with recourse can be solved to reasonable accuracy by using Monte Carlo sampling techniques, while multi-stage stochastic programs are, in general, intractable. We also discuss the complexity of chance constrained problems and multi-stage …

Invariance and efficiency of convex representations

We consider two notions for the representation of convex cones: $G$-representation and lifted-$G$-representation. The former represents a convex cone as a slice of another; the latter additionally allows the use of auxiliary variables in the representation. We first study the basic properties of these representations. We show that some basic properties of convex cones …

Hyperbolic Programs, and Their Derivative Relaxations

We study the algebraic and facial structures of hyperbolic programs, and examine natural relaxations of hyperbolic programs, the relaxations themselves being hyperbolic programs. Citation: TR 1406, School of Operations Research, Cornell University, Ithaca, NY 14853, U.S., 3/04

On Conically Ordered Convex Programs

In this paper we study a special class of convex optimization problems called “conically ordered convex programs” (COCP), where the feasible region is given as the level set of a vector-valued nonlinear mapping, expressed as a nonnegative combination of convex functions. The nonnegativity of the vectors is defined using a prescribed conic ordering. The …

On an Extension of Condition Number Theory to Non-Conic Convex Optimization

The purpose of this paper is to extend, as much as possible, the modern theory of condition numbers for conic convex optimization: z_* := min_x {c'x | Ax - b \in C_Y, x \in C_X}, to the more general non-conic format: (GP_d): z_* := min_x {c'x | Ax - b \in C_Y, x \in P}, where P is …