Mitigating Interdiction Risk with Fortification

We study a network fortification problem on a directed network that channels single-commodity resources to fulfill random demands arising at a subset of the nodes. For a given realization of demands, a malicious interdictor disrupts the network so as to maximize the total demand shortfall, subject to the interdictor’s constraints. To mitigate … Read more
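
The abstract points to a nested defender-attacker-defender structure. As a schematic sketch only (our notation, not the paper's model), the fortification problem can be written as

$\min_{w \in W} \; \mathbb{E}_{\xi}\Big[\, \max_{z \in Z(w)} \; \min_{(y,s) \in F(w,z,\xi)} \; \textstyle\sum_{i} s_i \,\Big],$

where $w$ selects fortified components, $z$ is an interdiction plan feasible after fortification $w$, $(y,s)$ are flows and demand shortfalls in the surviving network under demand realization $\xi$, and the expectation reflects the random demands mentioned in the abstract.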

Variable smoothing for convex optimization problems using stochastic gradients

We aim to solve a structured convex optimization problem in which a nonsmooth function is composed with a linear operator. When opting for full splitting schemes, primal-dual type methods are usually employed, as they are effective and well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function which is composed with … Read more
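
For the composition of a nonsmooth $g$ with a linear operator $A$, a standard smoothing device (stated here only as background, not as the authors' exact scheme) is the Moreau envelope with a parameter $\mu > 0$ that is later driven to zero:

$g_{\mu}(y) = \min_{z}\big\{ g(z) + \tfrac{1}{2\mu}\|z-y\|^{2} \big\}, \qquad \nabla g_{\mu}(y) = \tfrac{1}{\mu}\big(y - \mathrm{prox}_{\mu g}(y)\big).$

Then $x \mapsto g_{\mu}(Ax)$ is smooth with $\|A\|^{2}/\mu$-Lipschitz gradient $A^{*}\nabla g_{\mu}(Ax)$, which can be paired with stochastic gradients of the remaining smooth term.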

On the Stochastic Vehicle Routing Problem with time windows, correlated travel times, and time dependency

Most state-of-the-art algorithms for the Vehicle Routing Problem, such as Branch-and-Price algorithms or metaheuristics, rely on a fast feasibility test for a given route. We devise the first approach to approximately check feasibility in the Stochastic Vehicle Routing Problem with time windows, where travel times are correlated and depend on the time of … Read more
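
To make concrete what such a feasibility test has to estimate, the sketch below checks one fixed route by plain Monte Carlo under the illustrative assumption that the leg travel times follow a correlated multivariate normal distribution; the function and its arguments are ours, not the paper's, and time dependency is ignored for brevity.

import numpy as np

def route_feasibility(mean, cov, windows, service=0.0, n_samples=10_000, seed=None):
    """Estimate the probability that one route meets all of its time windows.

    mean, cov -- mean vector and covariance matrix of the leg travel times
    windows   -- list of (earliest, latest) arrival times, one per stop
    service   -- fixed service time spent at each stop
    """
    rng = np.random.default_rng(seed)
    legs = rng.multivariate_normal(mean, cov, size=n_samples)  # correlated samples
    arrival = np.zeros(n_samples)
    feasible = np.ones(n_samples, dtype=bool)
    for i, (earliest, latest) in enumerate(windows):
        arrival = arrival + legs[:, i]                     # arrive at stop i
        feasible &= arrival <= latest                      # arriving too late kills the route
        arrival = np.maximum(arrival, earliest) + service  # wait for the window, then serve
    return feasible.mean()

A route-based solver calls such a check very frequently, which is why a fast approximation rather than plain sampling is required.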

A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems

In this paper, we describe and establish the iteration complexity of two accelerated composite gradient (ACG) variants for solving a smooth nonconvex composite optimization problem whose objective function is the sum of a nonconvex differentiable function f with a Lipschitz continuous gradient and a simple nonsmooth closed convex function h. When f is convex, the first ACG … Read more
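
As a fixed point of reference (the paper's two variants add modifications for the nonconvex case that are not reproduced here), a FISTA-type step for $\min_x f(x) + h(x)$ with stepsize $\lambda \le 1/L$ reads

$x_{k} = \mathrm{prox}_{\lambda h}\big(y_{k} - \lambda \nabla f(y_{k})\big), \qquad t_{k+1} = \tfrac{1+\sqrt{1+4t_{k}^{2}}}{2}, \qquad y_{k+1} = x_{k} + \tfrac{t_{k}-1}{t_{k+1}}\,(x_{k}-x_{k-1}),$

where $L$ is the Lipschitz constant of $\nabla f$ and $\mathrm{prox}_{\lambda h}$ is the proximal map of the simple convex function $h$.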

General Convergence Rates Follow From Specialized Rates Assuming Growth Bounds

Often in the analysis of first-order methods, assuming the existence of a quadratic growth bound (a generalization of strong convexity) facilitates a much stronger convergence analysis. Hence the analysis is done twice: once for the general case and once for the growth-bounded case. We give a meta-theorem for deriving general convergence rates from those assuming … Read more
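
For concreteness, a quadratic growth bound for $f$ with optimal value $f^{*}$ and solution set $X^{*}$ is the condition

$f(x) - f^{*} \;\ge\; \tfrac{\mu}{2}\,\mathrm{dist}(x, X^{*})^{2} \quad \text{for all } x,$

which is implied by $\mu$-strong convexity but also holds for many problems that are not strongly convex.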

Equivalences among the chi measure, Hoffman constant, and Renegar’s distance to ill-posedness

We show the equivalence among the following three condition measures of a full column rank matrix $A$: the chi measure, the signed Hoffman constant, and the signed distance to ill-posedness. The latter two measures are constructed via suitable collections of matrices obtained by flipping the signs of some rows of $A$. Our results provide a … Read more
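
For context, the classical (unsigned) Hoffman constant $H(A)$ is the smallest constant such that, for every right-hand side $b$ making the system $Ax \le b$ feasible, the error bound

$\mathrm{dist}\big(x, \{y : Ay \le b\}\big) \;\le\; H(A)\,\|(Ax - b)_{+}\| \quad \text{for all } x$

holds; the signed Hoffman constant and signed distance to ill-posedness in the abstract are then built from families of matrices obtained by flipping the signs of some rows of $A$.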

Solving Chance-Constrained Problems via a Smooth Sample-Based Nonlinear Approximation

We introduce a new method for solving nonlinear continuous optimization problems with chance constraints. Our method is based on a reformulation of the probabilistic constraint as a quantile function. The quantile function is approximated via a differentiable sample average approximation. We provide theoretical statistical guarantees of the approximation, and illustrate empirically that the reformulation can … Read more
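
One common way to build such a differentiable sample-based surrogate, shown purely as an illustration (the sigmoid choice and the names below are ours, not the authors' construction), is to smooth the indicator inside the sample average:

import numpy as np
from scipy.special import expit

def smooth_chance_constraint(x, constraint, samples, alpha=0.05, tau=20.0):
    """Differentiable surrogate for P(constraint(x, xi) <= 0) >= 1 - alpha.

    constraint(x, xi) -- scalar constraint value; <= 0 means satisfied
    samples           -- iterable of scenarios xi^1, ..., xi^N
    tau               -- sharpness; expit(-tau*c) -> 1[c <= 0] as tau grows
    Returns a value that a nonlinear solver should keep >= 0.
    """
    c = np.array([constraint(x, xi) for xi in samples])
    satisfied = expit(-tau * c)            # smooth stand-in for the indicator
    return satisfied.mean() - (1.0 - alpha)

For example, with constraint = lambda x, xi: xi @ x - 10.0 and Gaussian samples for xi, the surrogate can be handed to any gradient-based NLP solver, whereas the exact sample-average probability is piecewise constant in x and has no useful gradient.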

A Solution Approach to Distributionally Robust Chance-Constrained Assignment Problems

We study the assignment problem with chance constraints (CAP) and its distributionally robust counterpart (DR-CAP). We present a technique for estimating the big-M coefficients in such formulations that takes advantage of the ambiguity set. We consider a 0-1 bilinear knapsack set to develop valid inequalities for CAP and DR-CAP. This is generalized to the joint chance constraint … Read more
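
For orientation, a joint chance constraint $\mathbb{P}\{T(\xi)x \le r(\xi)\} \ge 1-\epsilon$ over scenarios $\xi^{1},\dots,\xi^{K}$ with probabilities $p_{k}$ is commonly linearized with binary indicators $z_{k}$ and big-M coefficients as

$T(\xi^{k})x \;\le\; r(\xi^{k}) + M_{k}(1-z_{k}) \quad \forall k, \qquad \sum_{k} p_{k} z_{k} \;\ge\; 1-\epsilon, \qquad z_{k} \in \{0,1\},$

and the strength of this formulation depends directly on how small the coefficients $M_{k}$ can be made, which is where the ambiguity-set-based estimate mentioned in the abstract comes in.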

Oracle-Based Algorithms for Binary Two-Stage Robust Optimization

In this work we study binary two-stage robust optimization problems with objective uncertainty. The concept of two-stage robustness is tailored to problems under uncertainty that have two different kinds of decision variables: first-stage decisions, which have to be made here and now, and second-stage decisions, which can be determined each time after an uncertain scenario has occurred. We … Read more
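
Schematically, the problem class reads

$\min_{x \in X} \; \Big( c^{\top}x + \max_{\xi \in U} \; \min_{y \in Y(x)} d(\xi)^{\top} y \Big),$

with binary first-stage decisions $x$, binary second-stage decisions $y$ whose feasible set depends on $x$, and an uncertainty set $U$ affecting only the second-stage objective $d(\xi)$; this is only a sketch of the setting described in the abstract, not the authors' exact notation.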

Indefinite linearized augmented Lagrangian method for convex programming with linear inequality constraints

The augmented Lagrangian method (ALM) is a benchmark for tackling convex optimization problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. However, much less attention has been paid to ALM for efficiently solving the linearly inequality-constrained convex minimization model. In this paper, … Read more
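
For reference, with constraints $Ax \le b$ the classical ALM iteration can be written as

$x^{k+1} \in \arg\min_{x} \Big\{ f(x) + \tfrac{\beta}{2}\big\|\big(Ax - b + \lambda^{k}/\beta\big)_{+}\big\|^{2} \Big\}, \qquad \lambda^{k+1} = \big(\lambda^{k} + \beta(Ax^{k+1} - b)\big)_{+},$

where $(\cdot)_{+}$ denotes the componentwise positive part; this display is only the classical baseline whose $x$-subproblem the paper's indefinite linearized variant replaces with an easier proximal step.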