The Fastest Known Globally Convergent First-Order Method for Minimizing Strongly Convex Functions

We design and analyze a novel gradient-based algorithm for unconstrained convex optimization. When the objective function is $m$-strongly convex and its gradient is $L$-Lipschitz continuous, the iterates and function values converge linearly to the optimum at rates $\rho$ and $\rho^2$, respectively, where $\rho = 1-\sqrt{m/L}$. These are the fastest known guaranteed linear convergence rates for … Read more
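
As a quick numerical illustration of the stated rate (a hedged sketch; the condition number and tolerance below are invented example values, not taken from the paper), the snippet compares $\rho = 1-\sqrt{m/L}$ with the classical $1-m/L$ rate of gradient descent with step size $1/L$:

```python
import math

# Example problem data (illustrative values, not from the paper):
# an m-strongly convex objective with L-Lipschitz gradient.
m, L = 1.0, 100.0                  # condition number L / m = 100
rho_fast = 1 - math.sqrt(m / L)    # rate claimed for the proposed method
rho_gd   = 1 - m / L               # classical rate of gradient descent with step 1/L

# Iterations needed so that rho**k <= tol, i.e. k >= log(tol) / log(rho).
tol = 1e-6
iters = lambda rho: math.ceil(math.log(tol) / math.log(rho))

print(f"rho (proposed)         = {rho_fast:.4f}, iterations ~ {iters(rho_fast)}")
print(f"rho (gradient descent) = {rho_gd:.4f}, iterations ~ {iters(rho_gd)}")
# Per the abstract, function values contract at rho**2, so roughly half as many
# iterations suffice for the same tolerance on f(x_k) - f(x*).
```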

On Matroid Parity and Matching Polytopes

The matroid parity (MP) problem is a powerful (and NP-hard) extension of the matching problem. Whereas matching polytopes are well understood, little is known about MP polytopes. We prove that, when the matroid is laminar, the MP polytope is affinely congruent to a perfect b-matching polytope. From this we deduce that, even when the matroid … Read more

Single-Machine Common Due Date Total Earliness/Tardiness Scheduling with Machine Unavailability

Research on non-regular performance measures is scarce at best in the deterministic machine scheduling literature with machine unavailability constraints. Moreover, almost all existing work in this area assumes either that processing of a job interrupted by an interval of machine unavailability can be resumed without any additional setup or processing, or that all prior processing is lost. In … Read more

MPC as a DVI: Implications on Sampling Rates and Accuracy

We show that the evolution of a dynamical system driven by controls obtained by the solution of an embedded optimization problem (as done in MPC) can be cast as a differential variational inequality (DVI). The DVI abstraction reveals that standard sampled-data MPC implementations (in which the control law is computed using states that are sampled … Read more
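
A minimal sketch of the sampled-data pattern the abstract refers to: the state is sampled every $\Delta$ seconds, a control is computed from that sample, and then held constant while the continuous-time plant keeps evolving. The scalar plant, the simple linear feedback standing in for the embedded optimization problem, and all constants are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative scalar plant dx/dt = a*x + b*u (assumed values, not from the paper).
a, b = 0.5, 1.0
dt_sim = 1e-3        # fine integration step for the continuous-time plant
delta = 0.1          # sampling period of the controller (zero-order hold)

def control_law(x_sample):
    # Stand-in for the embedded optimization problem solved by MPC;
    # here just a stabilizing linear feedback for illustration.
    return -2.0 * x_sample

x, u = 1.0, 0.0
t, next_sample = 0.0, 0.0
trajectory = []
while t < 2.0:
    if t >= next_sample:            # sample the state and recompute the control
        u = control_law(x)
        next_sample += delta
    x += dt_sim * (a * x + b * u)   # plant evolves continuously under the held control
    t += dt_sim
    trajectory.append((t, x, u))

print(f"state after 2s with sampling period {delta}: {trajectory[-1][1]:.4f}")
```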

Learning Enabled Optimization: Towards a Fusion of Statistical Learning and Stochastic Optimization

Several emerging applications, such as “Analytics of Things” and “Integrative Analytics”, call for a fusion of statistical learning (SL) and stochastic optimization (SO). The Learning Enabled Optimization paradigm fuses concepts from these disciplines in a manner that not only enriches both SL and SO, but also provides a framework that supports rapid model updates and … Read more

An Introduction to Multi-Objective Simulation Optimization

The multi-objective simulation optimization (MOSO) problem is a nonlinear multi-objective optimization problem in which multiple simultaneous and conflicting objective functions can only be observed with stochastic error. We provide an introduction to MOSO at the advanced tutorial level, aimed at researchers and practitioners who wish to begin working in this emerging area. Our focus is … Read more
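
To make the problem setting concrete, here is a small hedged sketch (the two noisy objectives and the candidate designs are invented for illustration): each objective can only be estimated by averaging stochastic simulation replications, and a design is kept if no other design dominates it on both estimated objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, n_reps=200):
    """Noisy observations of two conflicting objectives at design point x
    (a made-up test problem: minimize both f1 and f2)."""
    f1 = (x - 1.0) ** 2 + rng.normal(0.0, 0.5, n_reps)
    f2 = (x + 1.0) ** 2 + rng.normal(0.0, 0.5, n_reps)
    return f1.mean(), f2.mean()      # sample-mean estimates of E[f1], E[f2]

designs = np.linspace(-2.0, 2.0, 21)
estimates = np.array([simulate(x) for x in designs])

# Keep the designs whose estimated objective vectors are non-dominated
# (no other design is at least as good in both objectives and better in one).
def non_dominated(points):
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

pareto = non_dominated(estimates)
print("estimated Pareto designs:", designs[pareto])
```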

D-Optimal Design for Multivariate Polynomial Regression via the Christoffel Function and Semidefinite Relaxations

We present a new approach to the design of D-optimal experiments for multivariate polynomial regression on compact semi-algebraic design spaces. We apply the moment-sum-of-squares hierarchy of semidefinite programming problems to solve the optimal design problem numerically and approximately. The geometry of the design is recovered with semidefinite programming duality theory and the Christoffel polynomial. … Read more
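
For intuition, the following hedged sketch computes a D-optimal design on a discretization of the design space with a generic convex solver (cvxpy); this is the classical finite-grid formulation, not the moment-sum-of-squares hierarchy developed in the paper, and the degree, interval, and grid size are arbitrary example choices.

```python
import numpy as np
import cvxpy as cp

# Univariate polynomial regression of degree d on [-1, 1], discretized on a grid.
d = 3
grid = np.linspace(-1.0, 1.0, 101)
V = np.vander(grid, d + 1, increasing=True)   # rows are (1, t, t^2, ..., t^d)

# D-optimal design: choose weights w on the grid maximizing log det of the
# information matrix M(w) = sum_j w_j v_j v_j^T.
w = cp.Variable(len(grid), nonneg=True)
M = V.T @ cp.diag(w) @ V
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve()

support = grid[w.value > 1e-4]
print("support points of the approximate D-optimal design:", np.round(support, 3))
# For degree-d regression on an interval, the D-optimal design is supported on
# d+1 points, which the grid solution approximates.
```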

Lyapunov rank of polyhedral positive operators

If K is a closed convex cone and L is a linear operator such that L(K) is a subset of K, then L is a positive operator on K and L preserves the partial order induced by K. The set of all positive operators on K is denoted by pi(K). If J is the dual of K, … Read more
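
As a concrete special case (my example, built only on the definitions in the abstract): when K is the nonnegative orthant, a polyhedral cone, a matrix L is a positive operator on K exactly when every entry of L is nonnegative, since applying L to the standard basis vectors shows each column must stay in K. A tiny sketch:

```python
import numpy as np

def is_positive_operator_on_orthant(L):
    """Check whether the matrix L maps the nonnegative orthant K = R^n_+ into
    itself, i.e. L(K) is a subset of K. For K = R^n_+ this holds exactly when
    every entry of L is nonnegative."""
    return bool(np.all(L >= 0))

L1 = np.array([[1.0, 2.0],
               [0.0, 3.0]])
L2 = np.array([[1.0, -0.5],
               [0.0,  3.0]])

print(is_positive_operator_on_orthant(L1))  # True:  L1 belongs to pi(K)
print(is_positive_operator_on_orthant(L2))  # False: L2 maps e2 outside K
```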

General parameterized proximal point algorithm with applications in statistical learning

In the literature, there are only a few studies of the proximal point algorithm (PPA) with parameters in the proximal matrix, especially for multi-objective optimization problems. Introducing parameters into the PPA makes it more attractive and flexible. By using the unified framework of the classical PPA and constructing a parameterized proximal matrix, … Read more
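
To fix ideas, here is a hedged sketch of a proximal point iteration with a parameterized proximal matrix M (the quadratic test objective, the choice M = alpha*I, and the stopping rule are illustrative assumptions; the paper's parameterization is more general):

```python
import numpy as np

# Illustrative strongly convex quadratic f(x) = 0.5 x^T A x - b^T x (made-up data).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

# Parameterized proximal matrix; here simply M = alpha * I for illustration.
alpha = 0.5
M = alpha * np.eye(2)

# PPA step: x_{k+1} = argmin_x f(x) + 0.5 (x - x_k)^T M (x - x_k)
#                   = (A + M)^{-1} (b + M x_k)   for this quadratic f.
x = np.zeros(2)
for k in range(100):
    x_new = np.linalg.solve(A + M, b + M @ x)
    done = np.linalg.norm(x_new - x) < 1e-10
    x = x_new
    if done:
        break

print("PPA iterate:", x, " direct solve:", np.linalg.solve(A, b))
```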

Forecast-based scenario-tree generation method

Sometimes, the best available information about an uncertain future is a single forecast. On the other hand, stochastic-programming models need future data in the form of scenario trees. While a single forecast does not provide enough information to construct a scenario tree, a forecast combined with historical data does—but none of the standard scenario-generation methods … Read more
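
For a rough sense of the data setting (a hedged toy sketch, not the method proposed in the paper; the forecast, the historical errors, and the equal weights are all invented): scenarios can be formed by perturbing the single point forecast with resampled historical forecast errors, which gives a one-stage scenario fan rather than a full multi-stage tree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single point forecast for the next 3 periods (illustrative numbers).
forecast = np.array([100.0, 105.0, 110.0])

# Historical forecast errors (actual minus forecast) from past periods
# (again, invented data purely for illustration).
hist_errors = rng.normal(0.0, 5.0, size=(50, 3))

# Build equally weighted scenarios by adding resampled historical errors
# to the forecast: a simple one-stage scenario fan.
n_scenarios = 5
idx = rng.choice(len(hist_errors), size=n_scenarios, replace=False)
scenarios = forecast + hist_errors[idx]
probabilities = np.full(n_scenarios, 1.0 / n_scenarios)

for p, s in zip(probabilities, scenarios):
    print(f"prob {p:.2f}: {np.round(s, 1)}")
```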