BOUNDS AND APPROXIMATIONS FOR MULTISTAGE STOCHASTIC PROGRAMS

Consider (typically large) multistage stochastic programs, which are defined on scenario trees as the basic data structure. It is well known that the computational complexity of the solution depends on the size of the tree, which typically grows exponentially with its height, i.e., the number of decision stages. For this reason, approximations which …

The Jordan Algebraic Structure of the Circular Cone

In this paper, we study and analyze the algebraic structure of the circular cone. We establish a new efficient spectral decomposition, set up the Jordan algebra associated with the circular cone, and prove that this algebra forms a Euclidean Jordan algebra with a certain inner product. We then show that the cone of squares of …

Bound-constrained polynomial optimization using only elementary calculations

We provide a monotone nonincreasing sequence of upper bounds $f^H_k$ ($k\ge 1$) converging to the global minimum of a polynomial $f$ on simple sets like the unit hypercube. The novelty with respect to the converging sequence of upper bounds in [J.B. Lasserre, A new look at nonnegativity on closed sets and polynomial optimization, SIAM …
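To make the notion of a monotone nonincreasing sequence of upper bounds on a hypercube minimum concrete, here is a deliberately elementary illustration (this is NOT the paper's $f^H_k$ construction; the function `grid_upper_bounds` and its grid-refinement scheme are a hypothetical stand-in): every evaluation of $f$ on the cube upper-bounds its minimum, so keeping the running best value over successively finer grids yields a nonincreasing sequence that converges to the global minimum for continuous $f$.

```python
import numpy as np
from itertools import product

def grid_upper_bounds(f, dim, levels):
    """Illustration only (not the paper's f^H_k bounds): evaluate f on
    successively finer grids of [0,1]^dim and keep the running minimum,
    which gives a monotone nonincreasing sequence of upper bounds on
    the global minimum of f over the hypercube."""
    best, bounds = np.inf, []
    for k in range(1, levels + 1):
        pts = np.linspace(0.0, 1.0, k + 1)      # grid with k+1 points per axis
        for p in product(pts, repeat=dim):
            best = min(best, f(np.array(p)))    # any value of f bounds min f
        bounds.append(best)
    return bounds
```

For $f(x) = \sum_i (x_i - 1/2)^2$ the sequence reaches the true minimum 0 as soon as the grid contains the point $(1/2, \dots, 1/2)$.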

On the von Neumann and Frank-Wolfe Algorithms with Away Steps

The von Neumann algorithm is a simple coordinate-descent algorithm to determine whether the origin belongs to a polytope generated by a finite set of points. When the origin is in the interior of the polytope, the algorithm generates a sequence of points in the polytope that converges linearly to zero. The algorithm’s rate of convergence …
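A minimal sketch of the classical von Neumann iteration may help fix ideas (the away-step variant analyzed in the paper is not shown; the function name and stopping rules here are our own): at each step, pick the generating point most opposed to the current iterate and move toward it by exact line search on $\|x\|^2$, which is a Frank-Wolfe step on the convex hull.

```python
import numpy as np

def von_neumann(points, iters=1000, tol=1e-8):
    """Sketch of the classical von Neumann algorithm: shrink ||x|| over
    the convex hull of the rows of `points`. A final iterate with tiny
    norm certifies the origin is (numerically) in the hull; if no vertex
    improves the objective, the origin lies outside."""
    x = points[0].astype(float).copy()
    for _ in range(iters):
        if np.linalg.norm(x) < tol:
            break
        j = int(np.argmin(points @ x))   # vertex most opposed to x
        a = points[j]
        if x @ a > 0:                    # separating direction found: outside
            break
        d = a - x
        lam = np.clip(-(x @ d) / (d @ d), 0.0, 1.0)  # exact line search on ||x||^2
        x = x + lam * d
    return x
```

On the square with vertices $(\pm 1, \pm 1)$ the origin is interior and the iterate collapses to zero; for a point set entirely in the half-space $x_1 \ge 1$ the iteration stalls at positive norm.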

A Flexible Coordinate Descent Method for Big Data Applications

We present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that the algorithm performance is more robust when applied to highly nonseparable or ill-conditioned problems. We call the method Flexible Coordinate Descent (FCD). At each iteration of …
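The basic mechanism of randomized block coordinate descent with block curvature can be sketched on a smooth quadratic (this is a hypothetical toy, not the authors' FCD method, which handles general composite objectives and approximate curvature): at each step a random block is drawn, and an exact Newton step is taken on that block using the block of the Hessian.

```python
import numpy as np

def block_cd(A, b, blocks, iters=300, seed=0):
    """Toy randomized block coordinate descent with partial second-order
    information for f(x) = 0.5 x'Ax - b'x (A symmetric positive definite):
    pick a random block S, then solve the Newton subproblem on S exactly."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(iters):
        S = blocks[rng.integers(len(blocks))]   # random block of coordinates
        g = A[S] @ x - b[S]                     # block gradient (Ax - b)_S
        H = A[np.ix_(S, S)]                     # block curvature A_SS
        x[S] -= np.linalg.solve(H, g)           # exact Newton step on the block
    return x
```

Using the full block Hessian makes each block update an exact minimization over that block, which is what gives robustness on nonseparable problems compared to diagonal (first-order) scaling.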

On the unimodality of METRIC Approximation subject to normally distributed demands

METRIC Approximation is a popular model for supply chain management. We prove that it has a unimodal objective function when the demands of the n retailers are normally distributed. This allows us to solve it via a convergent sequence. The resulting optimization method yields a closed-form equation with computational complexity O(n). Its solutions are …

Stability of p-order metric regularity

This paper shows that $p$-order metric regularity is preserved under perturbation of Hölder continuous mapping of order $1/p$, which answers affirmatively a problem posed recently by Dontchev.

Citation: Technical report, Department of Mathematics, Chinese University of Hong Kong, 07/2015

Randomized Derivative-Free Optimization of Noisy Convex Functions

We propose STARS, a randomized derivative-free algorithm for unconstrained optimization when the function evaluations are contaminated with random noise. STARS takes dynamic, noise-adjusted smoothing step-sizes that minimize the least-squares error between the true directional derivative of a noisy function and its finite difference approximation. We provide a convergence rate analysis of STARS for solving convex …
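The Gaussian-smoothing machinery underlying such methods can be sketched as follows (a hedged toy, not STARS itself: the noise-adjusted smoothing step-sizes that define STARS are replaced here by fixed parameters `mu` and `step`): estimate a directional derivative by a forward difference along a random Gaussian direction, then step against the resulting smoothed-gradient estimate.

```python
import numpy as np

def gaussian_smoothing_descent(f, x0, mu=0.01, step=0.02, iters=2000, seed=0):
    """Toy randomized derivative-free descent (fixed mu/step, unlike
    STARS): a forward difference along a Gaussian direction u gives an
    unbiased estimate of the gradient of the Gaussian-smoothed f."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        u = rng.standard_normal(x.size)        # random Gaussian direction
        fd = (f(x + mu * u) - f(x)) / mu       # finite-difference slope along u
        x -= step * fd * u                     # move against the estimated gradient
    return x
```

With noisy evaluations, the choice of the smoothing parameter `mu` trades off bias (large `mu`) against noise amplification in the divided difference (small `mu`); STARS's contribution is choosing these step-sizes dynamically from the noise level.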

When are static and adjustable robust optimization with constraint-wise uncertainty equivalent?

Adjustable Robust Optimization (ARO) yields, in general, better worst-case solutions than static Robust Optimization (RO). However, ARO is computationally more difficult than RO. In this paper, we derive conditions under which the worst-case objective values of ARO and RO problems are equal. We prove that if the uncertainty is constraint-wise and the adjustable variables lie …

Asynchronous Block-Iterative Primal-Dual Decomposition Methods for Monotone Inclusions

We propose new primal-dual decomposition algorithms for solving systems of inclusions involving sums of linearly composed maximally monotone operators. The principal innovation in these algorithms is that they are block-iterative in the sense that, at each iteration, only a subset of the monotone operators needs to be processed, as opposed to all operators as in …