Submodular Function Minimization and Polarity

Using polarity, we give an outer polyhedral approximation for the epigraph of set functions. For a submodular function, we prove that the corresponding polar relaxation is exact; hence, it is equivalent to the Lovász extension. The polar approach provides an alternative proof for the convex hull description of the epigraph of a submodular function. Computational … Read more
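For reference (standard background, not quoted from the truncated abstract): given a set function f : 2^V -> R with f(∅) = 0 and a point w in R^n, order the coordinates as w_{\sigma(1)} \ge \dots \ge w_{\sigma(n)} and set S_i = \{\sigma(1), \dots, \sigma(i)\} with S_0 = \emptyset. The Lovász extension is

\[
  f^{L}(w) \;=\; \sum_{i=1}^{n} w_{\sigma(i)} \bigl( f(S_i) - f(S_{i-1}) \bigr),
\]

and a classical result states that f is submodular if and only if f^{L} is convex, which underlies the equivalence stated above.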

Exact and Approximation Algorithms for Sparse PCA

Sparse Principal Component Analysis (SPCA) is designed to enhance the interpretability of traditional Principal Component Analysis (PCA) by optimally selecting a subset of features that comprise the first principal component. Given the NP-hard nature of SPCA, most current approaches resort to approximate solutions, typically achieved through tractable semidefinite programs (SDPs) or heuristic methods. To solve SPCA to … Read more
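For context, the cardinality-constrained problem usually meant by SPCA (standard notation, not quoted from the abstract) seeks the best k-sparse direction of maximal variance:

\[
  \max_{x \in \mathbb{R}^n} \; x^{\top} A x
  \quad \text{s.t.} \quad \|x\|_2 = 1, \;\; \|x\|_0 \le k,
\]

where A is the sample covariance matrix and \|x\|_0 counts the nonzero entries of x. The cardinality constraint is what makes the problem NP-hard; relaxing it leads to the tractable SDP relaxations and heuristics mentioned above.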

A decision theoretic approach for waveform design in joint radar communications applications

In this paper, we develop a decision-theoretic approach for radar waveform design to maximize joint radar-communications performance in spectral coexistence. Specifically, we develop an adaptive waveform design approach by posing the design problem as a partially observable Markov decision process (POMDP), which leads to a hard optimization problem. We extend an approximate … Read more
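A POMDP formulation of this kind maintains a belief (a probability distribution over the unobserved state) that is updated after every action and observation. The following minimal sketch shows a generic Bayes-filter belief update with hypothetical transition and observation matrices; it is not the authors' radar-communications model.

import numpy as np

# Generic POMDP belief update (Bayes filter); an illustrative sketch only,
# not the authors' radar-communications model. All numbers are hypothetical.
#   T[a][s, s2] : probability of moving from state s to s2 under action a
#   O[a][s2, o] : probability of observing o after reaching s2 under action a

def belief_update(belief, action, obs, T, O):
    predicted = belief @ T[action]             # predict step: sum_s b(s) T[a][s, s2]
    posterior = predicted * O[action][:, obs]  # correct step: weight by observation likelihood
    return posterior / posterior.sum()         # renormalize to a probability vector

# Toy example: 2 hidden states, 2 actions, 2 observations.
T = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.5, 0.5]])]
O = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.6, 0.4], [0.4, 0.6]])]
b = np.array([0.5, 0.5])
print(belief_update(b, action=0, obs=1, T=T, O=O))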

Enhancements of Extended Locational Marginal Pricing – Advancing Practical Implementation

Price formation is critical to efficient wholesale electricity markets that support reliable operation and efficient investment. The Midcontinent Independent System Operator (MISO) developed Extended Locational Marginal Pricing (ELMP) to reflect resource costs more completely and to improve price formation so as to better incentivize market participation. MISO developed ELMP based on the mathematical … Read more

Robustification of the k-Means Clustering Problem and Tailored Decomposition Methods: When More Conservative Means More Accurate

k-means clustering is a classic method of unsupervised learning that partitions a given set of measurements into k clusters. In many modern applications, however, the measurements are subject to unstructured errors, so the k-means result clusters the erroneous measurements instead of recovering the true underlying clustering structure. … Read more
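For reference, the nominal procedure being robustified is classical Lloyd-style k-means. A minimal sketch follows (illustrative only; it is neither the robustified formulation nor the tailored decomposition methods of the paper):

import numpy as np

# Classical (non-robust) k-means via Lloyd iterations: the baseline whose
# sensitivity to measurement errors motivates the robustification above.

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(iters):
        # assignment step: each measurement goes to its nearest center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Two well-separated synthetic clusters (illustrative data only).
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
labels, centers = kmeans(X, k=2)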

Riemannian conjugate gradient methods with inverse retraction

We propose a new class of Riemannian conjugate gradient (CG) methods, in which inverse retraction is used instead of vector transport for search direction construction. In existing methods, differentiated retraction is often used for vector transport to move the previous search direction to the current tangent space. However, a different perspective is adopted here, motivated … Read more
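For orientation, the standard Riemannian CG template that such methods modify is (usual notation, with R a retraction, grad f the Riemannian gradient, \beta_{k+1} a CG parameter, and \mathcal{T} a vector transport):

\[
  x_{k+1} = R_{x_k}(\alpha_k d_k), \qquad
  d_{k+1} = -\operatorname{grad} f(x_{k+1}) + \beta_{k+1}\, \mathcal{T}_{x_k \to x_{k+1}}(d_k);
\]

the proposed class replaces the transported term \mathcal{T}_{x_k \to x_{k+1}}(d_k) with a quantity constructed from an inverse retraction (roughly, using R^{-1}_{x_{k+1}}(x_k) in place of the transported direction).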

Two novel gradient methods with optimal step sizes

In this work we introduce two new Barzilai-Borwein-like step sizes for the classical gradient method applied to strictly convex quadratic optimization problems. The proposed step sizes employ second-order information to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information of the Hessian of … Read more
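For comparison, the two classical Barzilai-Borwein step sizes that the new steps resemble are (standard formulas, with s_{k-1} = x_k - x_{k-1} and y_{k-1} = g_k - g_{k-1} the iterate and gradient differences):

\[
  \alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
  \qquad
  \alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\]

both of which inject curvature information into the first-order iteration x_{k+1} = x_k - \alpha_k g_k by approximately satisfying a secant condition; the two step sizes proposed in the paper are derived from different optimization models and are not reproduced here.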

Two-Stage Sort Planning for Express Parcel Delivery

Recent years have brought significant changes in the operations of parcel transportation services, most notably due to the growing demand for e-commerce worldwide. Parcel sortation systems are used within sorting facilities in these transportation networks to enable the execution of effective consolidation plans with low per-unit handling and shipping costs. Designing and implementing effective parcel … Read more

Gradient Sampling Methods with Inexact Subproblem Solves and Gradient Aggregation

Gradient sampling (GS) has proved to be an effective methodology for the minimization of objective functions that may be nonconvex and/or nonsmooth. The most computationally expensive component of a contemporary GS method is the need to solve a convex quadratic subproblem in each iteration. In this paper, a strategy is proposed that allows the use … Read more
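In the standard gradient sampling setup, the convex quadratic subproblem referred to above projects the origin onto the convex hull of gradients sampled near the current iterate x_k (standard construction, shown for context):

\[
  \min_{\lambda \in \mathbb{R}^{m}} \;
  \Bigl\| \sum_{i=1}^{m} \lambda_i \nabla f(x_{k,i}) \Bigr\|_2^2
  \quad \text{s.t.} \quad \sum_{i=1}^{m} \lambda_i = 1, \;\; \lambda \ge 0,
\]

where x_{k,1} = x_k and x_{k,2}, \dots, x_{k,m} are randomly sampled points near x_k; the search direction is d_k = -\sum_i \lambda_i^{*} \nabla f(x_{k,i}). Solving this QP only inexactly, and aggregating gradients across iterations, is what the proposed strategy addresses.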

Linearization and Parallelization Schemes for Convex Mixed-Integer Nonlinear Optimization

We develop and test linearization and parallelization schemes for convex mixed-integer nonlinear programming. Several linearization approaches are proposed for LP/NLP-based branch-and-bound. Some of these approaches strengthen the linear approximation of the nonlinear constraints at the root node, while others do so at other branch-and-bound nodes. Two of the techniques are specifically applicable to commonly found univariate … Read more
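For context, the building block behind such linearization schemes (standard construction, not specific to this paper) is the outer-approximation, or gradient, cut: for a convex constraint g(x) \le 0 and a linearization point \bar{x},

\[
  g(\bar{x}) + \nabla g(\bar{x})^{\top} (x - \bar{x}) \;\le\; 0
\]

is valid for every feasible x by convexity of g, so adding such cuts at the root node or at other branch-and-bound nodes tightens the LP relaxations used in LP/NLP-based branch-and-bound.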