Convergence Rate of Gradient Based Adaptive Restart for Accelerated Gradient Schemes

The accelerated gradient algorithm is known to have non-monotonic, periodic convergence behavior in the high momentum regime. If important function parameters like the condition number are known, the momentum can be adjusted to obtain linear convergence. Unfortunately, these parameters are usually not accessible, so heuristics are instead used to decide when to restart. One of … Read more
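
A minimal sketch of a gradient-based restart heuristic in the spirit of O'Donoghue and Candès, applied to Nesterov's accelerated gradient method: the momentum is reset whenever the gradient at the extrapolated point forms an acute angle with the step just taken. The step size, the quadratic test problem, and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def agd_gradient_restart(grad, x0, step, iters=500):
    """Nesterov-type accelerated gradient descent with gradient-based adaptive restart."""
    x_prev = x0.copy()
    y = x0.copy()
    theta = 1.0
    for _ in range(iters):
        g = grad(y)
        x = y - step * g                              # gradient step from the extrapolated point
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))
        beta = (theta - 1.0) / theta_next             # momentum coefficient
        if g @ (x - x_prev) > 0:                      # restart test: the step points uphill
            theta_next, beta = 1.0, 0.0               # reset momentum
        y = x + beta * (x - x_prev)                   # extrapolation (momentum) step
        x_prev, theta = x, theta_next
    return x_prev

# Illustrative use on an ill-conditioned quadratic (condition number 100).
Q = np.diag([1.0, 100.0])
x_min = agd_gradient_restart(lambda x: Q @ x, np.array([10.0, 10.0]), step=1.0 / 100.0)
```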

A Primal-Dual Lifting Scheme for Two-Stage Robust Optimization

Two-stage robust optimization problems, in which decisions are taken both in anticipation of and in response to the observation of an unknown parameter vector from within an uncertainty set, are notoriously challenging. In this paper, we develop convergent hierarchies of primal (conservative) and dual (progressive) bounds for these problems that trade off the competing goals … Read more
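
For orientation, a generic two-stage robust optimization problem with here-and-now decisions x, an adversarial parameter vector u from the uncertainty set U, and wait-and-see recourse y can be written as below; the linear objective and constraint structure is an illustrative assumption, not a restatement of the paper's exact model.

\[
\min_{x \in X} \; \Big( c^\top x \;+\; \max_{u \in U} \; \min_{y \in Y(x,u)} d^\top y \Big),
\qquad
Y(x,u) = \{\, y \ge 0 : \; T(u)\, x + W y \ge h(u) \,\}.
\]

Conservative (primal) bounds are typically obtained by restricting the recourse policy, and progressive (dual) bounds by relaxing the adversary, for instance to a finite subset of U.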

A New Exact Algorithm to Optimize a Linear Function Over the Set of Efficient Solutions for Bi-objective Mixed Integer Linear Programs

We present the first (criterion space search) algorithm for optimizing a linear function over the set of efficient solutions of bi-objective mixed integer linear programs. The proposed algorithm builds on the Triangle Splitting Method (Boland et al. 2015b), which can find a full representation of the nondominated frontier of any bi-objective mixed integer … Read more
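
Schematically, and with notation assumed here rather than taken from the paper, the problem class is to optimize a linear function over the efficient set of a bi-objective MILP:

\[
\min_{x \in X_E} \; d^\top x,
\qquad
X_E = \{\, x \in X : \text{there is no } x' \in X \text{ with } c_i^\top x' \le c_i^\top x \text{ for } i = 1,2 \text{ and strict inequality for some } i \,\},
\]

where X is the feasible set of the bi-objective mixed integer linear program with objectives c_1^\top x and c_2^\top x.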

Underestimate Sequences via Quadratic Averaging

In this work we introduce the concept of an Underestimate Sequence (UES), which is a natural extension of Nesterov’s estimate sequence. Our definition of a UES utilizes three sequences, one of which is a lower bound (or under-estimator) of the objective function. The question of how to construct an appropriate sequence of lower bounds is … Read more
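
For reference, in Nesterov's standard construction an estimate sequence is a pair of sequences \(\{\phi_k(\cdot)\}\) and \(\{\lambda_k\}\) with \(\lambda_k \to 0\) such that, for every x,

\[
\phi_k(x) \;\le\; (1 - \lambda_k)\, f(x) + \lambda_k\, \phi_0(x),
\]

and if the iterates satisfy \(f(x_k) \le \min_x \phi_k(x)\), then \(f(x_k) - f^\star \le \lambda_k \big(\phi_0(x^\star) - f^\star\big)\), which drives the accelerated rate. A UES, as described in the abstract, additionally carries a sequence of lower bounds on f; the notation here follows the standard construction and is not taken from the paper.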

Index Policies and Performance Bounds for Dynamic Selection Problems

We consider dynamic selection problems, where a decision maker repeatedly selects a set of items from a larger collection of available items. A classic example is the dynamic assortment problem with demand learning, where a retailer chooses items to offer for sale subject to a display space constraint. The retailer may adjust the assortment over … Read more
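
A minimal sketch of the generic index-policy idea for such selection problems: score each item with a scalar index computed from its current information state, then greedily fill the display capacity with the highest-index items. The Beta-Bernoulli demand model, the optimism bonus, and all names below are illustrative assumptions, not the policy or bounds developed in the paper.

```python
import numpy as np

def select_assortment(alpha, beta, capacity, margin):
    """Greedy index policy: expected margin plus an optimism bonus that
    rewards items whose demand estimate is still uncertain."""
    mean = alpha / (alpha + beta)                              # posterior mean purchase probability
    bonus = np.sqrt(mean * (1.0 - mean) / (alpha + beta + 1))  # posterior standard deviation
    index = margin * (mean + bonus)
    return np.argsort(index)[::-1][:capacity]                  # top-`capacity` items by index

def update_posterior(alpha, beta, offered, purchased):
    """Conjugate Beta update after observing purchase outcomes for offered items."""
    alpha[offered] += purchased
    beta[offered] += 1 - purchased
    return alpha, beta

# Illustrative simulation with independent Bernoulli demand per item (assumed data).
rng = np.random.default_rng(0)
n_items, capacity = 20, 5
true_p = rng.uniform(0.05, 0.5, n_items)
margin = rng.uniform(1.0, 3.0, n_items)
alpha, beta = np.ones(n_items), np.ones(n_items)
for t in range(100):
    offered = select_assortment(alpha, beta, capacity, margin)
    purchased = rng.binomial(1, true_p[offered])
    alpha, beta = update_posterior(alpha, beta, offered, purchased)
```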

Response to “Counterexample to global convergence of DSOS and SDSOS hierarchies”

In a recent note [8], the author provides a counterexample to the global convergence of what his work refers to as “the DSOS and SDSOS hierarchies” for polynomial optimization problems (POPs) and asserts that this refutes claims in our extended abstract [4] and slides in [3]. The goal of this paper is to clarify that … Read more

Derivative-Free Robust Optimization by Outer Approximations

We develop an algorithm for minimax problems that arise in robust optimization when objective function derivatives are unavailable. The algorithm extends inexact outer-approximation methods to sample an uncertainty set of potentially infinite cardinality. Clarke stationarity of the algorithm's output is established, along with desirable features of the model-based trust-region subproblems encountered. We … Read more
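
Schematically, with notation assumed here, the target problem and the outer-approximation iteration can be written as

\[
\min_{x} \; \max_{u \in \mathcal{U}} f(x, u)
\quad \leadsto \quad
x_k \in \arg\min_{x} \max_{u \in \mathcal{U}_k} f(x, u),
\qquad
\mathcal{U}_{k+1} = \mathcal{U}_k \cup \Big\{ \arg\max_{u \in \mathcal{U}} f(x_k, u) \Big\},
\]

where each \(\mathcal{U}_k \subset \mathcal{U}\) is a finite sample of the uncertainty set; per the abstract, these steps are carried out inexactly and without derivatives of f.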

Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

It is well-known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure and that therein lies the computational bottleneck for SOS programs, as the SDPs generated by this procedure are large and costly to solve when the polynomials involved in the SOS programs have a … Read more
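
The reduction referred to is the Gram-matrix representation: a polynomial p of degree 2d in n variables is a sum of squares if and only if there is a positive semidefinite matrix Q with

\[
p(x) = z(x)^\top Q\, z(x), \qquad Q \succeq 0,
\]

where z(x) is the vector of all monomials of degree at most d. Matching coefficients of p yields linear constraints on Q, so the SOS condition becomes an SDP whose matrix variable has side \(\binom{n+d}{d}\), which is the size blow-up the abstract alludes to.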

On the Construction of Converging Hierarchies for Polynomial Optimization Based on Certificates of Global Positivity

In recent years, techniques based on convex optimization and real algebra that produce converging hierarchies of lower bounds for polynomial minimization problems have gained much popularity. At their heart, these hierarchies rely crucially on Positivstellensätze from the late 20th century (e.g., due to Stengle, Putinar, or Schmüdgen) that certify positivity of a polynomial on an … Read more
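
As one representative certificate of the kind referred to here (stated for orientation): Putinar's Positivstellensatz says that if p is strictly positive on \(K = \{x : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}\) and the associated quadratic module is Archimedean, then

\[
p \;=\; \sigma_0 + \sum_{i=1}^{m} \sigma_i\, g_i
\]

for some sum-of-squares polynomials \(\sigma_0, \dots, \sigma_m\). Bounding the degrees of the \(\sigma_i\) and searching for them by semidefinite programming is the standard route from such a certificate to a converging hierarchy of lower bounds.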

From Estimation to Optimization via Shrinkage

We study a class of quadratic stochastic programs where the distribution of random variables has unknown parameters. A traditional approach is to estimate the parameters using a maximum likelihood estimator (MLE) and to use this as input in the optimization problem. For the unconstrained case, we show that an estimator that “shrinks” the MLE towards … Read more
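
A minimal sketch of the estimate-then-optimize pipeline the abstract contrasts: plug a shrunk mean estimate, rather than the raw MLE, into an unconstrained quadratic program. The shrinkage target, the shrinkage weight, and the quadratic model below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def solve_quadratic(Q, theta):
    """Minimizer of 0.5 * x^T Q x - theta^T x for positive definite Q."""
    return np.linalg.solve(Q, theta)

def shrink(theta_mle, target, weight):
    """Convex combination pulling the MLE towards a fixed target (assumed form)."""
    return (1.0 - weight) * theta_mle + weight * target

# Illustrative data: the unknown mean parameter enters the linear term of the objective.
rng = np.random.default_rng(1)
true_theta = np.array([1.0, -2.0, 0.5])
samples = rng.normal(true_theta, 1.0, size=(20, 3))
theta_mle = samples.mean(axis=0)          # MLE of the mean under Gaussian noise
Q = np.eye(3)

x_plug_in = solve_quadratic(Q, theta_mle)                            # traditional plug-in solution
x_shrunk = solve_quadratic(Q, shrink(theta_mle, np.zeros(3), 0.2))   # shrinkage plug-in solution
```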