Do You Trust Derivatives or Differences?

We analyze the relationship between the noise level of a function and the accuracy and reliability of derivatives and difference estimates. We derive and empirically validate measures of quality for both derivatives and difference estimates. Using these measures, we quantify the accuracy of derivatives and differences in terms of the noise level of the function. … Read more
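As a minimal numerical illustration of the trade-off this kind of analysis quantifies (a generic sketch, not the paper's quality measures), the accuracy of a forward-difference estimate degrades both when the step size is too large (truncation error) and when it is too small relative to the noise level of the function:

```python
# Generic illustration: forward-difference accuracy versus step size for a
# noisy function with (assumed) additive noise level eps_f.
import numpy as np

rng = np.random.default_rng(0)
eps_f = 1e-6  # assumed noise level of the function evaluation

def f_noisy(x):
    # smooth test function plus additive uniform noise of level eps_f
    return np.sin(x) + eps_f * rng.uniform(-1.0, 1.0)

x0 = 1.0
true_derivative = np.cos(x0)

for h in [1e-1, 1e-2, 1e-3, 1e-6, 1e-9]:
    fd = (f_noisy(x0 + h) - f_noisy(x0)) / h  # forward-difference estimate
    print(f"h = {h:.0e}   error = {abs(fd - true_derivative):.2e}")

# Truncation error grows with h, while the noise contribution grows like
# eps_f / h, so accuracy is best near h ~ sqrt(eps_f) for forward differences.
```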

Error Forgetting of Bregman Iteration

This short article analyzes an interesting property of the Bregman iterative procedure, which is equivalent to the augmented Lagrangian method after a change of variable, for minimizing a convex piecewise-linear function J(x) subject to linear constraints Ax=b. The procedure obtains its solution by solving a sequence of unconstrained subproblems that minimize J(x) + 1/2||Ax - b^k||^2, where b^k … Read more
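A minimal sketch of the iteration described above, assuming J(x) = ||x||_1 and solving each subproblem approximately with proximal-gradient (ISTA) steps; the inner solver and the test instance are illustrative choices, not the article's:

```python
# Bregman iteration for  min ||x||_1  s.t.  Ax = b:
#   x^{k+1} = argmin_x ||x||_1 + 1/2 ||A x - b^k||^2,   b^{k+1} = b^k + (b - A x^{k+1}).
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def solve_subproblem(A, bk, x0, n_inner=500):
    # ISTA for  min ||x||_1 + 1/2 ||A x - bk||^2
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1 / ||A||^2
    x = x0.copy()
    for _ in range(n_inner):
        x = soft_threshold(x - t * A.T @ (A @ x - bk), t)
    return x

def bregman(A, b, n_outer=20):
    m, n = A.shape
    x, bk = np.zeros(n), b.copy()
    for _ in range(n_outer):
        x = solve_subproblem(A, bk, x)
        bk = bk + (b - A @ x)             # "add back the residual"
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = bregman(A, b)
print("residual ||Ax - b|| =", np.linalg.norm(A @ x_hat - b))
```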

On Traveling Salesman Games with Asymmetric Costs

We consider cooperative traveling salesman games with non-negative asymmetric costs satisfying the triangle inequality. We construct a stable cost allocation with budget balance guarantee equal to the Held-Karp integrality gap for the asymmetric traveling salesman problem, using the parsimonious property and a previously unknown connection to linear production games. We also show that our techniques … Read more
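For reference, in standard approximate-core terminology (our notation, not necessarily the paper's): a cost allocation $y$ for the grand coalition $N$ is stable if no coalition pays more than its stand-alone cost, and has budget balance guarantee $\alpha \in (0,1]$ if it recovers at least an $\alpha$ fraction of the total cost,

$$\sum_{v \in S} y_v \le c(S) \;\; \text{for all } S \subseteq N, \qquad \sum_{v \in N} y_v \ge \alpha\, c(N).$$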

Global Optimization of Nonlinear Network Design

A novel approach is proposed for obtaining globally optimal solutions to the design of networks with nonlinear resistances and potential-driven flows. The approach is applicable to networks where the potential loss on an edge is governed by a convex and strictly monotonically increasing function of the flow rate. We introduce a relaxation of … Read more
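Schematically, and in generic notation rather than necessarily the paper's model, such a design problem couples flow conservation with a potential-loss relation on each edge,

$$\sum_{e \in \delta^+(v)} q_e - \sum_{e \in \delta^-(v)} q_e = s_v \;\; \forall v, \qquad \pi_u - \pi_v = \phi_e(q_e) \;\; \forall e=(u,v),$$

where each $\phi_e$ is convex and strictly increasing in the flow rate $q_e$ and the design cost is minimized over the choice of edges or edge sizes. Note that even though $\phi_e$ is convex, imposing it as an equality constraint makes the feasible set nonconvex, which is why a relaxation is needed for global optimization.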

A barrier-based smoothing proximal point algorithm for NCPs over closed convex cones

We present a new barrier-based method for constructing smoothing approximations of the Euclidean projector onto closed convex cones. These smoothing approximations are used in a smoothing proximal point algorithm to solve monotone nonlinear complementarity problems (NCPs) over closed convex cones via the normal map equation. The smoothing approximations allow for the solution of the smoothed … Read more
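To convey the flavor of barrier-based smoothing of a projector, here is a minimal sketch for the simplest cone, the nonnegative orthant with the log barrier; the paper's construction for general closed convex cones is more involved:

```python
# The Euclidean projector max(z, 0) solves  min_x 1/2||x - z||^2  s.t.  x >= 0.
# Adding the log barrier -mu * sum(log x_i) yields the smooth approximation
#     p_mu(z)_i = (z_i + sqrt(z_i^2 + 4*mu)) / 2,
# which is smooth for mu > 0 and converges to the projector as mu -> 0.
import numpy as np

def smoothed_projector(z, mu):
    return 0.5 * (z + np.sqrt(z ** 2 + 4.0 * mu))

z = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
for mu in [1.0, 1e-2, 1e-4, 1e-8]:
    print(f"mu = {mu:.0e}:", smoothed_projector(z, mu))
print("exact projection:", np.maximum(z, 0.0))
```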

An Alternating Direction Method for Chance-Constrained Optimization Problems with Discrete Distributions

We consider a chance-constrained optimization problem (CCOP), where the random variables follow finite discrete distributions. The problem is in general nonconvex and can be reformulated as a mixed-integer program. By exploiting the special structure of the probabilistic constraint, we propose an alternating direction method for finding suboptimal solutions of CCOP. At each iteration, this method … Read more
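For concreteness, one common mixed-integer reformulation with a finite scenario set $\{\xi^1,\dots,\xi^N\}$ (probabilities $p_i$, risk level $\varepsilon$, and sufficiently large big-$M$ constants) reads

$$\min_x\; f(x) \;\; \text{s.t.} \;\; g(x,\xi^i) \le M_i z_i \;\;(i=1,\dots,N), \qquad \sum_{i=1}^N p_i z_i \le \varepsilon, \qquad z_i \in \{0,1\},$$

where $z_i = 1$ marks a scenario whose constraint may be violated; this is the generic big-$M$ form, not necessarily the exact formulation the method works with.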

Semi-continuous network flow problems

We consider semi-continuous network flow problems, that is, a class of network flow problems where some of the variables are restricted to be semi-continuous. We introduce the semi-continuous inflow set with variable upper bounds as a relaxation of general semi-continuous network flow problems. Two particular cases of this set are considered, for which we present … Read more
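Recall that a semi-continuous variable must either vanish or lie in a prescribed interval; with a binary indicator $y_e$ this is the usual mixed-integer description

$$x_e \in \{0\} \cup [\ell_e, u_e] \quad\Longleftrightarrow\quad \ell_e\, y_e \le x_e \le u_e\, y_e,\;\; y_e \in \{0,1\}.$$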

Containment problems for polytopes and spectrahedra

We study the computational question of whether a given polytope or spectrahedron $S_A$ (given by the positive semidefiniteness region of a linear matrix pencil $A(x)$) is contained in another one, $S_B$. First we classify the computational complexity, extending results on the polytope/polytope case by Gritzmann and Klee to the polytope/spectrahedron case. For various restricted containment problems, NP-hardness … Read more
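As a point of contrast with the hardness results, the simplest case is tractable: containment of an H-polytope $S_A=\{x : Ax \le a\}$ in an H-polytope $S_B=\{x : Bx \le c\}$ reduces to one linear program per inequality of $S_B$. A minimal sketch, assuming $S_A$ is nonempty and bounded (this is a generic illustration, not the paper's algorithm):

```python
# S_A is contained in S_B iff  max{ B_j x : A x <= a } <= c_j  for every row j of B.
import numpy as np
from scipy.optimize import linprog

def h_polytope_contained(A, a, B, c, tol=1e-9):
    for Bj, cj in zip(B, c):
        # maximize Bj @ x over S_A, i.e. minimize -Bj @ x
        res = linprog(-Bj, A_ub=A, b_ub=a, bounds=(None, None), method="highs")
        if res.status != 0 or -res.fun > cj + tol:
            return False   # LP failed (e.g. unbounded) or some point of S_A leaves S_B
    return True

# unit square [0,1]^2 inside the box [-1,2]^2
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
a = np.array([1, 0, 1, 0], dtype=float)
B = A.copy()
c = np.array([2, 1, 2, 1], dtype=float)
print(h_polytope_contained(A, a, B, c))   # True
```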

Using Inexact Gradients in a Multilevel Optimization Algorithm

Many optimization algorithms require gradients of the model functions, but computing accurate gradients can be computationally expensive. We study the implications of using inexact gradients in the context of the multilevel optimization algorithm MGOpt. MGOpt recursively uses (typically cheaper) coarse models to obtain search directions for finer-level models. However, MGOpt requires the gradient on the … Read more
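Roughly, MGOpt-type schemes build a coarse subproblem whose gradient is forced to match the restricted fine-level gradient at the current point, so an error in the fine-level gradient $g_h \approx \nabla f_h(x_h)$ enters the coarse correction directly. In generic notation (a sketch, not the paper's exact statement), with restriction $R$ and prolongation $P$,

$$\min_{x_H}\; f_H(x_H) - v_H^{\top} x_H, \qquad v_H = \nabla f_H(R\, x_h) - R\, g_h,$$

after which the prolongated step $P\,(x_H^{\ast} - R\, x_h)$ serves as a search direction on the fine level.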

On the convergence of decomposition methods for multi-stage stochastic convex programs

We prove the almost-sure convergence of a class of sampling-based nested decomposition algorithms for multistage stochastic convex programs in which the stage costs are general convex functions of the decisions, and uncertainty is modelled by a scenario tree. As special cases, our results imply the almost-sure convergence of SDDP, CUPPS and DOASA when applied to … Read more
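Schematically, these algorithms build outer (lower) approximations of the convex cost-to-go functions on the scenario tree by cutting planes; in generic notation, not tied to any one of the listed methods,

$$Q_t(x_{t-1}) = \mathbb{E}_{\xi_t}\Big[\min_{x_t \in X_t(x_{t-1},\,\xi_t)} c_t(x_t,\xi_t) + Q_{t+1}(x_t)\Big] \;\ge\; \max_{k}\,\big\{\alpha_k + \beta_k^{\top} x_{t-1}\big\},$$

where the cuts $(\alpha_k,\beta_k)$ come from subgradients of sampled subproblems; convexity of the stage costs is what keeps these cuts valid lower bounds.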