Smart “Predict, then Optimize”

Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is to predict, then optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in a downstream optimization … Read more
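To make the two-stage paradigm concrete, here is a minimal Python sketch of predict-then-optimize on a toy selection problem. The data, the least-squares model, and the top-k decision rule are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Synthetic setup (all names hypothetical): item features, an unknown true
# cost model w_true, and noisy historical observations for training.
rng = np.random.default_rng(0)
n_train, n_items, n_feat, k = 200, 10, 5, 3
X_train = rng.normal(size=(n_train, n_feat))
w_true = rng.normal(size=n_feat)
y_train = X_train @ w_true + 0.1 * rng.normal(size=n_train)

# Stage 1 (predict): fit by least squares, i.e. minimize prediction error only.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Stage 2 (optimize): plug predicted costs into a downstream decision,
# here a toy problem of selecting the k items with lowest predicted cost.
X_items = rng.normal(size=(n_items, n_feat))
chosen = np.argsort(X_items @ w_hat)[:k]

# Decision quality is judged by the *true* costs, which stage 1 never
# optimized for -- exactly the mismatch the abstract points at.
c_true = X_items @ w_true
print("decision cost:", c_true[chosen].sum(), "vs. oracle:", np.sort(c_true)[:k].sum())
```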

A Random Block-Coordinate Douglas-Rachford Splitting Method with Low Computational Complexity for Binary Logistic Regression

In this paper, we propose a new optimization algorithm for sparse logistic regression based on a stochastic version of the Douglas-Rachford splitting method. Our algorithm sweeps the training set by randomly selecting a mini-batch of data at each iteration, and it allows us to update the variables in a block-coordinate manner. Our approach … Read more
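For reference, the classical full-batch Douglas-Rachford iteration for a composite problem $\min_x f(x)+g(x)$ alternates the proximal operators of the two terms. The sketch below applies it to a sparse least-squares surrogate (where both proxes are closed-form) rather than the logistic loss, and omits the paper's random block-coordinate and mini-batch machinery; it illustrates the base splitting only:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
m, n, lam, gamma = 100, 50, 0.1, 1.0
A = rng.normal(size=(m, n))
x_star = np.zeros(n)
x_star[:5] = rng.normal(size=5)          # sparse ground truth
b = A @ x_star + 0.01 * rng.normal(size=m)

# prox of f(x) = 0.5*||Ax-b||^2 in closed form: (I + gamma*A^T A)^{-1}(y + gamma*A^T b)
M = np.linalg.inv(np.eye(n) + gamma * A.T @ A)
def prox_f(y):
    return M @ (y + gamma * A.T @ b)

y = np.zeros(n)
for _ in range(200):
    x = prox_f(y)                                # prox step on the smooth term
    z = soft_threshold(2 * x - y, gamma * lam)   # prox step on g = lam*||.||_1
    y = y + z - x                                # Douglas-Rachford update
print("nonzeros recovered:", int(np.count_nonzero(np.abs(x) > 1e-3)))
```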

GEP-MSCRA for computing the group zero-norm regularized least squares estimator

This paper is concerned with the group zero-norm regularized least squares estimator which, in terms of the variational characterization of the zero-norm, can be obtained from a mathematical program with equilibrium constraints (MPEC). By developing the global exact penalty for the MPEC, this estimator is shown to arise from an exact penalization problem that not only … Read more
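For orientation, a common statement of this estimator (the exact notation may differ from the paper) is

\[
\min_{x\in\mathbb{R}^n}\ \tfrac12\|Ax-b\|^2 \;+\; \lambda \sum_{l=1}^{p} \mathbb{I}\big(\|x_{J_l}\|\neq 0\big),
\]

where $J_1,\dots,J_p$ partition the coordinates into groups and $\mathbb{I}(\cdot)$ is the 0/1 indicator, so the regularizer counts the number of nonzero groups; the MPEC reformulation replaces this discontinuous count via the variational characterization of the zero-norm.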

A single potential governing convergence of conjugate gradient, accelerated gradient and geometric descent

Nesterov’s accelerated gradient (AG) method for minimizing a smooth strongly convex function $f$ is known to reduce $f({\bf x}_k)-f({\bf x}^*)$ by a factor of $\epsilon\in(0,1)$ after $k=O(\sqrt{L/\ell}\log(1/\epsilon))$ iterations, where $\ell,L$ are the two parameters of smooth strong convexity. Furthermore, it is known that this is the best possible complexity in the function-gradient oracle model of … Read more
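For readers who want the recursion in front of them, here is a minimal sketch of the standard constant-momentum form of Nesterov's AG for an $\ell$-strongly convex, $L$-smooth function, run on a toy quadratic; the potential-function analysis the paper develops is about why such schemes converge, not part of this snippet:

```python
import numpy as np

# Toy quadratic f(x) = 0.5 x^T D x with D = diag(ell, ..., L), so f is
# ell-strongly convex and L-smooth; the minimizer is x* = 0.
ell, L, n = 1.0, 100.0, 50
d = np.linspace(ell, L, n)
grad = lambda x: d * x
f = lambda x: 0.5 * np.sum(d * x * x)

kappa = L / ell
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # constant momentum weight
x = y = np.ones(n)
for _ in range(300):
    x_new = y - grad(y) / L          # gradient step at the extrapolated point
    y = x_new + beta * (x_new - x)   # momentum / extrapolation step
    x = x_new
print(f"f(x_k) - f(x*) = {f(x):.3e}")   # shrinks roughly like (1 - 1/sqrt(kappa))^k
```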

Sum of squares certificates for stability of planar, homogeneous, and switched systems

We show that the existence of a global polynomial Lyapunov function for a homogeneous polynomial vector field or a planar polynomial vector field (under a mild condition) implies the existence of a polynomial Lyapunov function that is a sum of squares (sos) and whose negative derivative is also a sum of squares. This result … Read more
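Concretely, for $\dot{x} = f(x)$ the certificate in question asks for a polynomial $V$ such that (up to the positive-definiteness details treated in the paper)

\[
V(x)\ \text{is a sum of squares}, \qquad -\nabla V(x)^{\top} f(x)\ \text{is a sum of squares},
\]

which certifies $\dot V = \nabla V^{\top} f \le 0$ along all trajectories and, unlike the mere existence of a polynomial Lyapunov function, can be searched for by semidefinite programming.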

A Stochastic Trust Region Algorithm Based on Careful Step Normalization

An algorithm is proposed for solving stochastic and finite sum minimization problems. Based on a trust region methodology, the algorithm employs normalized steps, at least as long as the norms of the stochastic gradient estimates are within a specified interval. The complete algorithm—which dynamically chooses whether or not to employ normalized steps—is proved to have … Read more
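The Python sketch below conveys only the flavor of step normalization; the actual dynamic choice rule and its analysis are the paper's contribution, and the interval, step size, and fallback here are illustrative assumptions:

```python
import numpy as np

def tr_normalized_sgd(grad_est, x0, alpha=0.1, lo=1e-3, hi=1e3, iters=1000, seed=0):
    """Sketch: take a step of trust-region radius alpha along the stochastic
    gradient direction, normalizing whenever the gradient norm falls inside
    [lo, hi]; otherwise fall back to a plain scaled step. (The paper's rule
    for choosing between the two is more careful than this illustration.)"""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        g = grad_est(x, rng)
        norm = np.linalg.norm(g)
        if lo <= norm <= hi:
            x -= alpha * g / norm   # normalized step of length alpha
        else:
            x -= alpha * g          # unnormalized fallback
    return x

# Toy usage: noisy gradients of f(x) = 0.5*||x||^2.
x = tr_normalized_sgd(lambda x, rng: x + 0.1 * rng.normal(size=x.shape),
                      x0=np.ones(10))
print(np.linalg.norm(x))
```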

Production Lot Sizing with Immediately Observable Random Production Rate

To explore one impact of the information made available by adding sensors to a classical production planning setting, we consider a continuous-time, infinite-horizon lot-sizing model where a single product is manufactured on a single machine. Each time manufacturing restarts, a random production rate is realized, and production continues at this rate until the machine … Read more
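To make the value of the immediate observation tangible, here is a toy Monte Carlo comparison under an assumed EPQ-style cost structure (setup cost K, holding rate h, demand rate d, and the rate distribution are all invented for illustration and are not the paper's model): observing the realized rate lets the lot size adapt as $Q^*(r)$, versus a fixed lot sized for the mean rate.

```python
import numpy as np

rng = np.random.default_rng(3)
K, h, d = 100.0, 1.0, 5.0                       # assumed setup/holding/demand
r = rng.uniform(6.0, 20.0, size=100_000)        # assumed rate distribution, r > d

def q_epq(rate):
    # Classical EPQ lot size for a known production rate.
    return np.sqrt(2 * K * d / (h * (1 - d / rate)))

def avg_cost(Q):
    # Renewal-reward: E[cost per cycle] / E[cycle length], cycle length Q/d.
    cycle_cost = K + h * Q**2 * (1 - d / r) / (2 * d)
    return cycle_cost.mean() / (Q / d).mean()

print("rate-aware lot:", avg_cost(q_epq(r)))
print("fixed lot     :", avg_cost(np.full_like(r, q_epq(r.mean()))))
```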

Let’s Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence

Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper … Read more
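A minimal sketch contrasting two of the design axes named above, random versus greedy (Gauss-Southwell-style) block selection, with exact block updates on a toy quadratic; the block partition and the quadratic objective are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, bsize = 40, 4
M = rng.normal(size=(n, n))
Q = M @ M.T + np.eye(n)                    # positive definite Hessian
b = rng.normal(size=n)
blocks = [np.arange(i, i + bsize) for i in range(0, n, bsize)]

def bcd(rule, iters=200):
    """Minimize f(x) = 0.5 x^T Q x - b^T x by block coordinate descent."""
    x = np.zeros(n)
    for _ in range(iters):
        g = Q @ x - b                      # full gradient (used for selection only)
        if rule == "greedy":               # block with the largest gradient norm
            i = int(np.argmax([np.linalg.norm(g[B]) for B in blocks]))
        else:                              # uniformly random block
            i = int(rng.integers(len(blocks)))
        B = blocks[i]
        QBB = Q[np.ix_(B, B)]
        rhs = b[B] - (Q[B] @ x - QBB @ x[B])   # drop the block's own contribution
        x[B] = np.linalg.solve(QBB, rhs)       # exact block minimization
    return 0.5 * x @ Q @ x - b @ x

print("random:", bcd("random"), "  greedy:", bcd("greedy"))
```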

An Algorithm for Piecewise Linear Optimization of Objective Functions in Abs-normal Form

In the paper [11] we derived first order (KKT) and second order (SSC) optimality conditions for functions defined by evaluation programs involving smooth elementals and absolute values. For this class of problems we showed in [12] that the natural algorithm of successive piecewise linear optimization with a proximal term (SPLOP) achieves a linear or even … Read more
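For readers outside this line of work, a piecewise linear function in abs-normal form is typically written (notation varies between [11] and [12]) as

\[
z \;=\; c + Z\,x + L\,|z|, \qquad y \;=\; b + a^{\top} x + d^{\top} |z|,
\]

where $|z|$ is taken componentwise and $L$ is strictly lower triangular, so the switching variables $z$, and hence the objective value $y$, can be evaluated in a single forward pass; SPLOP then successively minimizes such piecewise linear local models augmented with a proximal term.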