Convergence Analysis of Sample Average Approximation of Two-stage Stochastic Generalized Equations

A solution of a two-stage stochastic generalized equation is a pair: a first-stage solution, which is independent of the realization of the random data, and a second-stage solution, which is a function of the random variables. This paper studies convergence of the sample average approximation of two-stage stochastic nonlinear generalized equations. In particular, an exponential rate …
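As a hedged illustration of the sampling idea only (a one-stage scalar equation, not the paper's two-stage setting), the sketch below solves $\mathbb{E}[F(x,\xi)] = 0$ by replacing the expectation with a sample average; the choice $F(x,\xi) = e^x - \xi$ and the exponential distribution are our own assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal one-stage illustration of sample average approximation (SAA).
# The true equation is E[F(x, xi)] = 0 with F(x, xi) = exp(x) - xi,
# so the exact solution is x* = log(E[xi]).
rng = np.random.default_rng(0)

def saa_solution(n_samples):
    xi = rng.exponential(scale=2.0, size=n_samples)   # E[xi] = 2
    # Solve the sample-average equation (1/N) * sum_i F(x, xi_i) = 0.
    return brentq(lambda x: np.exp(x) - xi.mean(), -10.0, 10.0)

for n in (10, 100, 10_000):
    print(n, saa_solution(n))   # approaches log(2) ~ 0.6931 as n grows
```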

Convergence rates of proximal gradient methods via the convex conjugate

We give a novel proof of the $O(1/k)$ and $O(1/k^2)$ convergence rates of the proximal gradient and accelerated proximal gradient methods for composite convex minimization. The crux of the new proof is an upper bound constructed via the convex conjugate of the objective function.

Citation: Technical Report, Carnegie Mellon University, January 2018.
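For readers who want the two methods side by side, here is a minimal sketch of the proximal gradient iteration and its FISTA-style accelerated variant on a lasso instance; the conjugate-based rate proof itself is not reproduced, and the problem data are synthetic.

```python
import numpy as np

# Proximal gradient (O(1/k)) vs. accelerated variant (O(1/k^2)) on
#   min 0.5*||Ax - b||^2 + lam*||x||_1.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, b, lam, iters=500, accelerated=False):
    x = np.zeros(A.shape[1]); y = x.copy(); t_prev = 1.0
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = ||A||_2^2
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - step * grad, step * lam)
        if accelerated:                           # FISTA momentum step
            t = (1 + np.sqrt(1 + 4 * t_prev ** 2)) / 2
            y = x_new + ((t_prev - 1) / t) * (x_new - x)
            t_prev = t
        else:
            y = x_new
        x = x_new
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
print(np.count_nonzero(prox_grad(A, b, lam=0.5, accelerated=True)))
```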

Large neighbourhood Benders’ search

A general enhancement of the Benders’ decomposition algorithm can be achieved through the improved use of large neighbourhood search heuristics within mixed-integer programming solvers. While mixed-integer programming solvers are endowed with an array of large neighbourhood search heuristics, their use is typically limited to finding solutions to the Benders’ decomposition master problem, which may be …
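To fix notation, a bare-bones Benders loop on a scalar toy problem is sketched below; the master/subproblem split is the standard one, and the paper's actual contribution (steering the search with the solver's large neighbourhood heuristics) is not represented.

```python
from scipy.optimize import linprog

# Toy two-stage LP:  min c*x + Q(x),  Q(x) = min { d*y : y >= h - T*x, y >= 0 },
# with scalars c, d, h, T > 0, so Q(x) = d*max(0, h - T*x) and the
# subproblem dual multiplier is u = d if h - T*x > 0, else 0.
c, d, h, T = 1.0, 4.0, 10.0, 1.0
cuts = []                 # dual multipliers u, each defining theta >= u*(h - T*x)

for it in range(20):
    # Master over (x, theta): min c*x + theta subject to accumulated cuts,
    # rewritten for linprog as  -u*T*x - theta <= -u*h.
    A_ub = [[-u * T, -1.0] for u in cuts] or None
    b_ub = [-u * h for u in cuts] or None
    res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 20), (0, None)], method="highs")
    x_hat, theta_hat = res.x
    u = d if h - T * x_hat > 1e-9 else 0.0       # subproblem dual at x_hat
    if theta_hat >= u * (h - T * x_hat) - 1e-9:  # cut not violated: optimal
        print(f"optimal after {it + 1} iterations: x = {x_hat:.2f}")
        break
    cuts.append(u)
```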

Bounding and Counting Linear Regions of Deep Neural Networks

We investigate the complexity of deep neural networks (DNNs) that represent piecewise linear (PWL) functions. In particular, we study the number of linear regions, i.e., pieces, that a PWL function represented by a DNN can attain, both theoretically and empirically. We present (i) tighter upper and lower bounds for the maximum number of linear regions …
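An empirical lower bound of the kind such studies use can be computed by counting distinct ReLU activation patterns along a line segment; the architecture and sampling density below are arbitrary choices of ours, not the paper's experimental setup.

```python
import numpy as np

# Lower-bound the number of linear regions a random two-layer ReLU network
# induces along a segment by counting distinct activation patterns.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)

def activation_pattern(x):
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple(h1 > 0) + tuple(h2 > 0)

# Walk a dense grid of points on the segment from p to q.
p, q = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
ts = np.linspace(0.0, 1.0, 20_000)
patterns = {activation_pattern(p + t * (q - p)) for t in ts}
print(len(patterns), "distinct linear regions met along the segment")
```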

Bicriteria Approximation of Chance Constrained Covering Problems

A chance constrained optimization problem involves constraints with random data that can be violated with probability bounded above by a prespecified small risk parameter. Such constraints are used to model reliability requirements in a variety of application areas such as finance, energy, service, and manufacturing. Except under very special conditions, chance constrained problems are extremely …
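The quantity a chance constraint controls can be estimated directly by Monte Carlo; the sketch below evaluates the violation probability of a fixed candidate solution for a random covering constraint $a(\xi)^\top x \ge b$, where the data and distribution are illustrative assumptions rather than the paper's.

```python
import numpy as np

# Monte Carlo estimate of P( a(xi)^T x < b ) for a fixed candidate x.
rng = np.random.default_rng(0)

def violation_probability(x, b=1.0, n_samples=100_000):
    a = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, x.size))
    return np.mean(a @ x < b)

x = np.array([0.4, 0.4, 0.4])
print(f"estimated P(violation) = {violation_probability(x):.4f}")
# A bicriteria view trades this risk level against the cost c^T x:
# scaling x up lowers the violation probability but raises the cost.
```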

New Constraint Qualifications with Second-Order Properties in Nonlinear Optimization

In this paper we present and discuss new constraint qualifications that ensure the validity of well-known second-order properties in nonlinear optimization. Here, we discuss conditions related to the so-called basic second-order condition, in which a new notion of polar pairing is introduced to replace the polar operation that is useful in the first-order case. We …
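For orientation, a textbook form of the second-order necessary condition that such qualifications are meant to validate is the following (our notation; the paper's basic second-order condition may be stated differently):

```latex
% Local minimizer x* of  min f(x)  s.t.  g(x) <= 0, h(x) = 0,
% with Lagrangian L(x,\lambda,\mu) = f(x) + \lambda^\top g(x) + \mu^\top h(x)
% and multiplier set \Lambda(x^*):
\forall\, d \in C(x^*): \qquad
  \max_{(\lambda,\mu) \in \Lambda(x^*)}
  d^{\top} \nabla^2_{xx} L(x^*,\lambda,\mu)\, d \;\ge\; 0,
% where C(x^*) is the critical cone of directions d with
%   \nabla g_i(x^*)^\top d \le 0 for active i,
%   \nabla h_j(x^*)^\top d = 0 for all j,  and
%   \nabla f(x^*)^\top d \le 0.
```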

Facets from Gadgets

We present a new tool for generating cutting planes for NP-hard combinatorial optimisation problems. It is based on the concept of gadgets (small subproblems that are “glued” together to form hard problems), which we borrow from the literature on computational complexity. Using gadgets, we are able to derive huge (exponentially large) new families …

The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates

We propose two numerical algorithms for minimizing the sum of a smooth function and the composition of a nonsmooth function with a linear operator in the fully nonconvex setting. The iterative schemes are formulated in the spirit of the proximal and the proximal linearized alternating direction method of multipliers, respectively. The proximal terms are introduced through …
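As a hedged sketch of the ADMM template the paper builds on (shown here on a convex instance for readability, not the paper's nonconvex setting), the code below minimizes a smooth quadratic plus an $\ell_1$ term composed with a difference operator, using the splitting $z = Dx$:

```python
import numpy as np

# ADMM for  min_x 0.5*||x - b||^2 + lam*||Dx||_1  with D the
# forward-difference operator (nonsmooth term composed with a linear map).
def admm_tv(b, lam=1.0, rho=1.0, iters=300):
    n = b.size
    D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference operator
    x, z, u = b.copy(), D @ b, np.zeros(n - 1)
    M = np.eye(n) + rho * D.T @ D              # x-update normal equations matrix
    for _ in range(iters):
        x = np.linalg.solve(M, b + rho * D.T @ (z - u))
        Dx = D @ x
        v = Dx + u                             # z-update: prox of (lam/rho)*||.||_1
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + Dx - z                         # scaled dual update
    return x

rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
print(admm_tv(b)[:5])                          # denoised step signal (first entries)
```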

Solving joint chance constrained problems using regularization and Benders’ decomposition

We consider stochastic programs with joint chance constraints and a discrete random distribution. We reformulate the problem by adding auxiliary variables. Since the resulting problem has a non-regular feasible set, we regularize it by enlarging the feasible set. We solve the regularized problem by iteratively solving a master problem while adding Benders’ cuts in a slave …
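In the discrete-scenario case, the auxiliary-variable reformulation referred to above typically takes the following big-M form (a textbook version; the paper's exact reformulation may differ):

```latex
% Joint chance constraint  P( g(x,\xi) \le 0 ) \ge 1 - \varepsilon
% over scenarios \xi^1, \dots, \xi^S with probabilities p_1, \dots, p_S:
g(x, \xi^s) \le M z_s, \qquad s = 1, \dots, S,
\qquad \sum_{s=1}^{S} p_s z_s \le \varepsilon,
\qquad z_s \in \{0, 1\},
% where z_s = 1 marks scenario s as allowed to violate the constraint
% and M > 0 is a sufficiently large constant.
```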

Wasserstein Distributionally Robust Optimization and Variation Regularization

Wasserstein distributionally robust optimization (DRO) has recently achieved empirical success for various applications in operations research and machine learning, owing partly to its regularization effect. Although the connection between Wasserstein DRO and regularization has been established in several settings, existing results often require restrictive assumptions, such as smoothness or convexity, that are not satisfied by …
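One representative identity behind the regularization connection, valid for a type-1 Wasserstein ball with unrestricted support and a Lipschitz loss (stated here in our notation, independently of the weaker assumptions the paper pursues), is:

```latex
\sup_{Q \,:\, W_1(Q, \widehat{P}_N) \le \rho} \mathbb{E}_{Q}\big[\ell(\xi)\big]
\;=\; \mathbb{E}_{\widehat{P}_N}\big[\ell(\xi)\big] \;+\; \rho \,\mathrm{Lip}(\ell),
% where \widehat{P}_N is the empirical distribution, \rho the ball radius,
% and \mathrm{Lip}(\ell) the Lipschitz constant of the loss.
```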