Risk Analysis 101 — Robust-Optimization: the elephant in the robust-satisficing room

In 2001, info-gap decision theory re-invented the then 40-year-old model of local robustness known universally as the radius of stability (circa 1960). Since then, this model of local robustness has been promoted by info-gap scholars as a reliable tool for the management of severe uncertainty that is characterized by a vast (e.g. unbounded) uncertainty … Read more

Graver basis and proximity techniques for block-structured separable convex integer minimization problems

We consider N-fold 4-block decomposable integer programs, which simultaneously generalize N-fold integer programs and two-stage stochastic integer programs with N scenarios. In previous work [R. Hemmecke, M. Koeppe, R. Weismantel, A polynomial-time algorithm for optimizing over N-fold 4-block decomposable integer programs, Proc. IPCO 2010, Lecture Notes in Computer Science, vol. 6080, Springer, 2010, pp. 219–229], … Read more
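For orientation, the constraint matrix of an N-fold 4-block decomposable integer program is usually depicted in the following block form (a sketch of the standard presentation; the block names $A,B,C,D$ are the conventional notation, not quoted from this abstract):

\[
\begin{pmatrix}
C & D & D & \cdots & D\\
B & A & 0 & \cdots & 0\\
B & 0 & A & \cdots & 0\\
\vdots & & & \ddots & \\
B & 0 & 0 & \cdots & A
\end{pmatrix}
\]

Dropping the blocks $B$ and $C$ recovers N-fold integer programs, while dropping $C$ and $D$ recovers two-stage stochastic integer programs with $N$ scenarios, which is the sense in which the class generalizes both.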

Deriving robust counterparts of nonlinear uncertain inequalities

In this paper we provide a systematic way to construct the robust counterpart of a nonlinear uncertain inequality that is concave in the uncertain parameters. We use convex analysis (support functions, conjugate functions, Fenchel duality) and conic duality in order to convert the robust counterpart into an explicit and computationally tractable set of constraints. It … Read more
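As a sketch of the kind of reformulation involved (notation is assumed here, not quoted from the paper): for a constraint $f(a,x)\le 0$ required to hold for all $a$ in an uncertainty set $U$, with $f$ concave in $a$, support functions and Fenchel duality turn the semi-infinite constraint into a single finite one,

\[
\sup_{a\in U} f(a,x)\le 0 \iff \exists\, v:\ \delta^{*}(v\mid U) - f_{*}(v,x)\le 0,
\]

where $\delta^{*}(\cdot\mid U)$ is the support function of $U$ and $f_{*}(\cdot,x)$ is the partial concave conjugate of $f$ in its uncertain argument. Tractability then hinges on having computable expressions for these two functions.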

STRONGLY REGULAR NONSMOOTH GENERALIZED EQUATIONS (REVISED)

This note establishes an implicit function theorem for generalized equations, unifying Robinson’s theorem for strongly regular generalized equations and Clarke’s implicit function theorem for equations with Lipschitz-continuous mappings.

A GLOBALLY CONVERGENT STABILIZED SQP METHOD

Sequential quadratic programming (SQP) methods are a popular class of methods for nonlinearly constrained optimization. They are particularly effective for solving a sequence of related problems, such as those arising in mixed-integer nonlinear programming and the optimization of functions subject to differential equation constraints. Recently, there has been considerable interest in the formulation of \emph{stabilized} … Read more
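For background (standard material, not specific to this paper), a basic SQP iteration solves a quadratic model of the Lagrangian subject to linearized constraints,

\[
\min_{d}\ \nabla f(x_k)^{T} d + \tfrac12\, d^{T} H_k\, d
\quad\text{s.t.}\quad c(x_k) + \nabla c(x_k)^{T} d = 0,
\]

with $H_k$ an approximation of the Hessian of the Lagrangian. Stabilized variants augment this subproblem to control the multiplier estimates near degenerate solutions.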

An algorithmic characterization of P-matricity

It is shown that a matrix $M$ is a P-matrix if and only if, for every vector $q$, the Newton-min algorithm does not cycle between two points when it is used to solve the linear complementarity problem $0\leq x\perp (Mx+q)\geq0$.

Citation: Inria research report RR-8004
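The iteration in question can be sketched as follows. This is a minimal illustrative implementation of the Newton-min (semismooth Newton) step for the LCP written as $\min(x, Mx+q)=0$, assuming the standard index-splitting description; it is not code from the paper.

```python
import numpy as np

def newton_min_lcp(M, q, x0, max_iter=50, tol=1e-10):
    """Sketch of the Newton-min iteration for the LCP
    0 <= x  perp  (M x + q) >= 0, i.e. min(x, M x + q) = 0."""
    n = len(q)
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        w = M @ x + q
        if np.linalg.norm(np.minimum(x, w)) < tol:
            break
        # Where x_i <= w_i the active piece of min(x, w) is x_i, so the
        # Newton system forces x_i = 0; elsewhere it forces (Mx+q)_i = 0.
        A = x <= w      # components fixed to zero
        I = ~A          # components where (Mx+q)_i = 0 is enforced
        x_new = np.zeros(n)
        if I.any():
            x_new[I] = np.linalg.solve(M[np.ix_(I, I)], -q[I])
        x = x_new
    return x
```

The paper's characterization concerns exactly this map: $M$ is a P-matrix precisely when the iteration above never cycles between two points, regardless of $q$.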

On the cp-rank and minimal cp factorizations of a completely positive matrix

We show that the maximal cp-rank of $n\times n$ completely positive matrices is attained at a positive-definite matrix on the boundary of the cone of $n\times n$ completely positive matrices, thus answering a long-standing question. We also show that the maximal cp-rank of $5\times 5$ matrices equals six, which proves the famous Drew-Johnson-Loewy conjecture … Read more
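Recall the definitions involved (standard background, not quoted from the abstract): a matrix $A$ is completely positive if it admits a factorization

\[
A = BB^{T},\qquad B \ge 0 \text{ entrywise},
\]

and the cp-rank of $A$ is the smallest number of columns over all such factors $B$. The Drew-Johnson-Loewy conjecture asserts that the maximal cp-rank of an $n\times n$ completely positive matrix is $\lfloor n^2/4\rfloor$ for $n\ge 4$; for $n=5$ this value is $6$, matching the result stated above.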

A primal-dual splitting algorithm for finding zeros of sums of maximally monotone operators

We consider the primal problem of finding the zeros of the sum of a maximally monotone operator with the composition of another maximally monotone operator with a linear continuous operator and a corresponding dual problem formulated by means of the inverse operators. A primal-dual splitting algorithm which simultaneously solves the two problems in finite-dimensional spaces … Read more
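In symbols (notation assumed here, following the standard Attouch–Théra-type setup rather than quoting the paper): with maximally monotone operators $A$ and $B$ and a linear continuous operator $L$, the primal problem is to find $x$ with

\[
0 \in Ax + L^{*}B(Lx),
\]

and the dual problem, formulated via the inverse operators, is to find $v$ with

\[
0 \in B^{-1}v - L A^{-1}(-L^{*}v).
\]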

The Nonnegative $l_0$ Norm Minimization under Generalized $Z$-matrix Measurement

In this paper, we consider the $l_0$ norm minimization problem with linear equation and nonnegativity constraints. By introducing the concept of generalized $Z$-matrix for a rectangular matrix, we show that this $l_0$ norm minimization with such a kind of measurement matrices and nonnegative observations can be exactly solved via the corresponding $l_p$ ($0<p\le 1$) … Read more
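Written out, the problem in question is the standard sparse-recovery formulation

\[
\min_{x}\ \|x\|_{0}\quad \text{s.t.}\quad Ax = b,\ x \ge 0,
\]

where $\|x\|_{0}$ counts the nonzero entries of $x$. The abstract's claim is that for generalized $Z$-matrix measurements $A$ and nonnegative observations $b$, this combinatorial problem is solved exactly by the corresponding $l_p$ minimization.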

A Family of Second-Order Methods for Convex L1-Regularized Optimization

This paper is concerned with the minimization of an objective that is the sum of a convex function $f$ and an $\ell_1$ regularization term. Our interest is in methods that incorporate second-order information about the function $f$ to accelerate convergence. We describe a semi-smooth Newton framework that can be used to generate a variety of … Read more
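As a concrete illustration of what a semi-smooth Newton framework for this problem class can look like, here is a minimal sketch for a quadratic $f(x)=\tfrac12 x^{T}Qx + c^{T}x$. The setup (applying Newton's method to the fixed-point residual of the proximal-gradient map, whose prox is soft-thresholding) is one standard construction and is an illustrative assumption, not the paper's specific method.

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding: the prox of lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def semismooth_newton_l1(Q, c, lam, x0, max_iter=50, tol=1e-10):
    """Sketch of a semismooth Newton method for
    min_x  0.5 x^T Q x + c^T x + lam * ||x||_1,
    applied to the residual F(x) = x - soft(x - grad f(x), lam)."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(max_iter):
        g = x - (Q @ x + c)                  # x - grad f(x)
        F = x - soft(g, lam)
        if np.linalg.norm(F) < tol:
            break
        # Element of the generalized Jacobian of soft(., lam) at g:
        # derivative is 1 where |g_i| > lam, 0 otherwise.
        d = (np.abs(g) > lam).astype(float)
        J = np.eye(n) - d[:, None] * (np.eye(n) - Q)
        x = x + np.linalg.solve(J, -F)
    return x
```

Because $F$ is piecewise affine, each Newton step solves a linear system determined by the current active set; near a solution the iteration typically identifies the correct sparsity pattern and terminates finitely.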