Successive Quadratic Upper-Bounding for Discrete Mean-Risk Minimization and Network Interdiction

Advances in conic optimization have led to its increased use for modeling data uncertainty. In particular, conic mean-risk optimization has gained prominence in probabilistic and robust optimization. Whereas the corresponding conic models can be solved efficiently over convex sets, their discrete counterparts are intractable. In this paper, we give a highly effective successive quadratic upper-bounding procedure …
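
Since the abstract is truncated, the following is only a minimal sketch of one way a successive quadratic upper-bounding scheme can work for a mean-risk objective of the form $c^\top x + \Omega\sqrt{x^\top\Sigma x}$ over binary $x$: because the square root is concave, its tangent at the current iterate overestimates it, turning the objective into a convex quadratic upper bound that is re-minimized. All names and the brute-force subproblem solver are illustrative assumptions, not the authors' procedure.

```python
# Illustrative sketch (not the authors' code) of successive quadratic
# upper-bounding for min_{x in {0,1}^n} c'x + Omega * sqrt(x' Sigma x).
# Since sqrt is concave, its tangent at t0 > 0 overestimates it:
#   sqrt(t) <= sqrt(t0) + (t - t0) / (2 * sqrt(t0)),
# so fixing t0 = x_k' Sigma x_k yields a quadratic upper bound,
# which is re-minimized until the iterates stabilize.
import itertools
import numpy as np

def mean_risk(x, c, Sigma, Omega):
    return c @ x + Omega * np.sqrt(x @ Sigma @ x)

def successive_quadratic_upper_bounding(c, Sigma, Omega, x0, max_iter=20):
    n, x = len(c), x0
    # Brute force over {0,1}^n stands in for a MIQP solver (small n only).
    candidates = [np.array(v, dtype=float)
                  for v in itertools.product((0, 1), repeat=n)]
    for _ in range(max_iter):
        t0 = max(x @ Sigma @ x, 1e-12)  # linearization point for sqrt
        def qub(y):  # quadratic upper bound on the mean-risk objective
            return c @ y + Omega * (np.sqrt(t0)
                                    + (y @ Sigma @ y - t0) / (2 * np.sqrt(t0)))
        x_new = min(candidates, key=qub)
        if np.array_equal(x_new, x):  # bound is tight at x, so this is a fixed point
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T                      # positive semidefinite risk matrix
c = rng.standard_normal(4)
x = successive_quadratic_upper_bounding(c, Sigma, Omega=1.5, x0=np.ones(4))
print(x, mean_risk(x, c, Sigma, Omega=1.5))
```

Because the bound agrees with the true objective at the linearization point, each re-minimization cannot increase the mean-risk value, so the iterates improve monotonically.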

Submodularity in conic quadratic mixed 0-1 optimization

We describe strong convex valid inequalities for conic quadratic mixed 0-1 optimization. Through appropriate conic quadratic mixed 0-1 relaxations, these inequalities can be used to solve numerous practical nonlinear discrete optimization problems, from value-at-risk minimization to queueing system design, and from robust interdiction to assortment optimization. The inequalities exploit the submodularity of the binary restrictions and …
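
To illustrate the submodularity angle in the diagonal case: for the epigraph $z \ge \sqrt{x^\top \mathrm{Diag}(\sigma)\, x}$ with binary $x$, the set function $f(S)=\sqrt{\sum_{i\in S}\sigma_i}$ is submodular, and the classical extended polymatroid inequality obtained from a greedy ordering is valid. The sketch below separates such a cut at a fractional point; the names are illustrative, not the paper's code.

```python
# Sketch of the classical extended polymatroid inequality for a submodular
# set function f, here f(S) = sqrt(sum of sigma_i over S). Given a
# fractional point x*, sorting coordinates in decreasing order of x* and
# taking marginal differences of f yields a valid cut z >= sum_i rho_i x_i,
# tight at the chain of sets induced by the ordering.
import numpy as np

def f(subset_mask, sigma):
    return np.sqrt(sigma[subset_mask].sum())

def extended_polymatroid_cut(x_frac, sigma):
    order = np.argsort(-x_frac)     # greedy order from the Lovasz extension
    rho = np.zeros_like(sigma)
    mask = np.zeros(len(sigma), dtype=bool)
    prev = 0.0
    for i in order:
        mask[i] = True
        val = f(mask, sigma)
        rho[i] = val - prev          # marginal value; submodularity => valid
        prev = val
    return rho                       # cut: z >= rho @ x

sigma = np.array([4.0, 1.0, 9.0])
x_frac = np.array([0.5, 0.9, 0.2])
rho = extended_polymatroid_cut(x_frac, sigma)
print(rho, "cut value at x*:", rho @ x_frac)
```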

Consistency Bounds and Support Recovery of D-stationary Solutions of Sparse Sample Average Approximations

This paper studies properties of the d(irectional)-stationary solutions of sparse sample average approximation (SAA) problems involving difference-of-convex (dc) sparsity functions in a deterministic setting. These properties are investigated with respect to a vector that satisfies a verifiable assumption relating the empirical SAA problem to the expectation minimization problem defined by an underlying data distribution. …
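
For orientation, one standard dc sparsity function consistent with this setting is the capped-$\ell_1$ penalty (an illustrative example; the paper may use a different family):

```latex
% The capped-\(\ell_1\) penalty as a difference of convex functions
% (illustrative; not necessarily the family used in the paper).
% For a threshold \(\theta > 0\),
\[
  \rho_\theta(t) \;=\; \min\{|t|,\theta\}
  \;=\; \underbrace{|t|}_{\text{convex}}
        \;-\; \underbrace{\max\{|t|-\theta,\,0\}}_{\text{convex}},
\]
% so the sparsity term is a difference of two convex functions, and
% d-stationarity can be checked directionally at a candidate solution.
```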

On semi-infinite systems of convex polynomial inequalities and polynomial optimization problems

We consider the semi-infinite system of polynomial inequalities of the form \[ \mathbf{K}:=\{x\in\mathbb{R}^m\mid p(x,y)\ge 0,\ \ \forall y\in S\subseteq\mathbb{R}^n\}, \] where $p(X,Y)$ is a real polynomial in the variables $X$ and the parameters $Y$, the index set $S$ is a basic semialgebraic set in $\mathbb{R}^n$, and $-p(X,y)$ is convex in $X$ for every $y\in S$. We …
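
A small concrete instance, not taken from the paper, may help fix ideas:

```latex
% Illustrative instance: m = 2, n = 1, S = [-1/2, 1/2], and
%   p(x, y) = 1 - (x_1 - y)^2 - x_2^2,
% so -p(x, y) is convex in x for every y, and
\[
  \mathbf{K}
  = \bigl\{x\in\mathbb{R}^2 \bigm| (x_1-y)^2 + x_2^2 \le 1,\ \forall y\in[-\tfrac12,\tfrac12]\bigr\}
  = \bigl\{x\in\mathbb{R}^2 \bigm| (x_1\pm\tfrac12)^2 + x_2^2 \le 1\bigr\},
\]
% the lens-shaped intersection of the two extreme disks, since the
% supremum of (x_1 - y)^2 over y in [-1/2, 1/2] is attained at an endpoint.
```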

Escaping local minima with derivative-free methods: a numerical investigation

We apply a state-of-the-art local derivative-free solver, Py-BOBYQA, to global optimization problems, and propose an algorithmic improvement that is beneficial in this context. Our numerical findings are illustrated on a commonly used test set of global optimization problems and associated noisy variants, and on hyperparameter tuning for a machine learning test set. As Py-BOBYQA is a …
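
A minimal usage sketch on a classic multimodal benchmark; the `seek_global_minimum` option and the result fields follow Py-BOBYQA's documented interface, but verify against your installed version:

```python
# Minimal usage sketch of Py-BOBYQA on a multimodal test function.
# seek_global_minimum enables the solver's multiple-restart heuristic
# for escaping local minima.
import numpy as np
import pybobyqa

def ackley(x):  # classic benchmark with many local minima, global min at 0
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

x0 = np.array([2.5, -1.5])
bounds = (np.full(2, -5.0), np.full(2, 5.0))
soln = pybobyqa.solve(ackley, x0, bounds=bounds, maxfun=500,
                      seek_global_minimum=True)
print(soln.x, soln.f)
```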

The convex hull of a quadratic constraint over a polytope

A quadratically constrained quadratic program (QCQP) is an optimization problem in which the objective function is quadratic and the feasible region is defined by quadratic constraints. Solving non-convex QCQPs to global optimality is a well-known NP-hard problem, and a traditional approach is to use convex relaxations within branch-and-bound algorithms. This paper makes a …
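
For reference, the problem class in question can be written as

```latex
% The general (possibly non-convex) QCQP:
\[
  \min_{x\in\mathbb{R}^n}\ x^{\mathsf T} Q_0\, x + c_0^{\mathsf T} x
  \quad\text{s.t.}\quad
  x^{\mathsf T} Q_i\, x + c_i^{\mathsf T} x \le b_i,\ \ i=1,\dots,m,
\]
% where non-convexity arises whenever some Q_i (or Q_0) fails to be
% positive semidefinite.
```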

Reinforcement Learning via Parametric Cost Function Approximation for Multistage Stochastic Programming

The most common approaches for solving stochastic resource allocation problems in the research literature are to use either value functions (“dynamic programming”) or scenario trees (“stochastic programming”) to approximate the impact of a decision now on the future. By contrast, common industry practice is to use a deterministic approximation of the future, which is easier …
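
A toy sketch of the parametric cost function approximation idea: a deterministic model is augmented with a tunable parameter, which is then tuned by direct policy search on simulated performance. The newsvendor dynamics and all names below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of a parametric cost function approximation (CFA):
# a deterministic lookahead ("order = forecast") is robustified with a
# tunable safety factor theta, tuned by policy search on simulated cost.
import numpy as np

rng = np.random.default_rng(1)

def cfa_policy(forecast, theta):
    # Deterministic approximation, parameterized to hedge against uncertainty.
    return theta * forecast

def simulate_cost(theta, n_days=10_000, forecast=100.0,
                  overage=1.0, underage=4.0):
    demand = rng.gamma(shape=4.0, scale=forecast / 4.0, size=n_days)
    order = cfa_policy(forecast, theta)
    cost = (overage * np.maximum(order - demand, 0.0)
            + underage * np.maximum(demand - order, 0.0))
    return cost.mean()

# Grid search keeps the sketch simple; stochastic gradient or a
# derivative-free method would be used at scale.
thetas = np.linspace(0.8, 1.6, 17)
best = min(thetas, key=simulate_cost)
print("tuned safety factor:", best)
```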

A Single Time-Scale Stochastic Approximation Method for Nested Stochastic Optimization

We study constrained nested stochastic optimization problems in which the objective function is a composition of two smooth functions whose exact values and derivatives are not available. We propose a single time-scale stochastic approximation algorithm, which we call the Nested Averaged Stochastic Approximation (NASA), to find an approximate stationary point of the problem. The algorithm …
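
The following simplified sketch conveys the single time-scale idea (the actual NASA algorithm maintains additional averaging sequences; the oracles below are toy assumptions):

```python
# Illustrative single time-scale sketch for nested problems min_x f(g(x)):
# a running average z_k tracks the inner value g(x_k) using a step size of
# the same order as the x-step, rather than the slower inner time scale of
# two-time-scale schemes. Toy oracles: g(x) = A x + noise, f(z) = 0.5||z||^2.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))

def sample_g(x):       # noisy inner value and its Jacobian
    return A @ x + 0.01 * rng.standard_normal(3), A

def sample_grad_f(z):  # noisy outer gradient
    return z + 0.01 * rng.standard_normal(3)

x = rng.standard_normal(5)
z = sample_g(x)[0]
for k in range(1, 2001):
    beta = 2.0 / (k + 10)                 # one step-size sequence for both updates
    g_val, g_jac = sample_g(x)
    grad = g_jac.T @ sample_grad_f(z)     # estimate of the nested gradient
    x = x - beta * grad
    z = (1 - beta) * z + beta * g_val     # inner average on the same time scale
print("||A x|| at the last iterate:", np.linalg.norm(A @ x))
```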

On High-order Model Regularization for Multiobjective Optimization

A p-order regularization method for finding weak stationary points of multiobjective optimization problems with constraints is introduced. Under Hölder conditions on the derivatives of the objective functions, complexity results are obtained that generalize properties recently proved for scalar optimization.
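
For orientation, the scalar high-order literature regularizes the $p$-th order Taylor model as shown below; the multiobjective use sketched here is our reading, not a statement of the paper's exact method.

```latex
% The standard p-th order regularized Taylor model from the scalar
% high-order literature: for each objective f_j at the point x,
\[
  m_j(x,s) \;=\; f_j(x) \;+\; \sum_{i=1}^{p} \frac{1}{i!}\, D^i f_j(x)[s]^i
  \;+\; \frac{\sigma}{p+1}\,\|s\|^{p+1},
\]
% and a trial step s is obtained by (approximately) minimizing the worst
% model change \max_j \{ m_j(x,s) - f_j(x) \} over s (our sketch).
```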

A unified framework for Bregman proximal methods: subgradient, gradient, and accelerated gradient schemes

We provide a unified framework for analyzing the convergence of Bregman proximal first-order algorithms for convex minimization. Our framework hinges on properties of the convex conjugate and gives novel proofs of the convergence rates of the Bregman proximal subgradient, Bregman proximal gradient, and a new accelerated Bregman proximal gradient algorithm under fairly general and mild …
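
One classical instance of this framework is the entropic Bregman (mirror descent) step on the probability simplex; the sketch below uses an illustrative objective and is not the paper's algorithm.

```python
# Minimal sketch of the Bregman proximal gradient step with the entropy
# kernel h(x) = sum_i x_i log x_i on the probability simplex. The step
#   x_{k+1} = argmin_{x in simplex} <grad f(x_k), x> + D_h(x, x_k)/step
# has the closed form of a normalized multiplicative update.
import numpy as np

def bregman_prox_grad(grad_f, x0, step, n_iter=200):
    x = x0.copy()
    for _ in range(n_iter):
        x = x * np.exp(-step * grad_f(x))  # entropic Bregman step
        x /= x.sum()                        # renormalize onto the simplex
    return x

# Illustrative objective f(x) = 0.5 ||x - b||^2 with b on the simplex,
# so the minimizer over the simplex is b itself.
b = np.array([0.7, 0.2, 0.1])
x_star = bregman_prox_grad(lambda x: x - b, x0=np.full(3, 1/3), step=0.5)
print(x_star)
```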