Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks

Bayesian Networks (BNs) represent conditional probability relations among a set of random variables (nodes) in the form of a directed acyclic graph (DAG), and have found diverse applications in knowledge discovery. We study the problem of learning the sparse DAG structure of a BN from continuous observational data. The central problem can be modeled as … Read more
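
For context, score-based DAG structure learning with an L0 sparsity penalty is often written in the generic form below; this is a standard template from the literature, shown only for orientation, and is not necessarily the authors' exact formulation (the weight matrix B and penalty parameter \lambda are illustrative):
\[
\min_{B \in \mathbb{R}^{p \times p}} \; \frac{1}{2n}\,\lVert X - XB \rVert_F^2 + \lambda \lVert B \rVert_0
\quad \text{s.t.} \quad \text{the graph induced by the support of } B \text{ is a DAG},
\]
where \(X \in \mathbb{R}^{n \times p}\) is the data matrix, \(B\) encodes the weighted adjacency structure among the \(p\) nodes, and \(\lVert B \rVert_0\) counts nonzero entries.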

Submodular Function Minimization and Polarity

Using polarity, we give an outer polyhedral approximation for the epigraph of set functions. For a submodular function, we prove that the corresponding polar relaxation is exact; hence, it is equivalent to the Lovász extension. The polar approach provides an alternative proof for the convex hull description of the epigraph of a submodular function. Computational … Read more
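
For reference, the Lovász extension mentioned above is the standard one: for a set function \(f : 2^V \to \mathbb{R}\) with \(f(\emptyset) = 0\) and a point \(x \in [0,1]^n\) with components ordered so that \(x_{\sigma(1)} \ge \dots \ge x_{\sigma(n)}\),
\[
f^L(x) \;=\; \sum_{k=1}^{n} x_{\sigma(k)} \Bigl[ f(\{\sigma(1),\dots,\sigma(k)\}) - f(\{\sigma(1),\dots,\sigma(k-1)\}) \Bigr].
\]
When \(f\) is submodular, \(f^L\) is convex and coincides with \(f\) on \(\{0,1\}^n\), which is the standard connection underlying the equivalence stated above.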

Deciding the Feasibility of a Booking in the European Gas Market is coNP-hard

We show that deciding the feasibility of a booking (FB) in the European entry-exit gas market is coNP-hard if a nonlinear potential-based flow model is used. The feasibility of a booking can be characterized by polynomially many load flow scenarios with maximum potential-difference, which are computed by solving nonlinear potential-based flow models. We use this … Read more
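
As a point of reference, potential-based flow models of the kind referred to above couple arc flows and node potentials through nonlinear relations such as the Weymouth-type equation; the notation below is generic and not taken from the paper:
\[
\pi_u - \pi_v \;=\; \Lambda_a \, q_a \, \lvert q_a \rvert \qquad \text{for every arc } a = (u,v),
\]
where \(q_a\) is the flow on arc \(a\), \(\pi_u\) and \(\pi_v\) are the potentials at its end nodes, and \(\Lambda_a > 0\) is a resistance coefficient. It is this nonlinear coupling that makes checking the feasibility of a booking computationally hard.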

Exact and Approximation Algorithms for Sparse PCA

Sparse Principal Component Analysis (SPCA) is designed to enhance the interpretability of traditional Principal Component Analysis (PCA) by optimally selecting a subset of features that comprise the first principal component. Given the NP-hard nature of SPCA, most current approaches resort to approximate solutions, typically achieved through tractable semidefinite programs (SDPs) or heuristic methods. To solve SPCA to … Read more
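
For readers unfamiliar with the problem, SPCA is commonly stated as the cardinality-constrained eigenvalue problem below (a standard formulation, with \(\Sigma\) the sample covariance matrix and \(k\) the desired sparsity level):
\[
\max_{x \in \mathbb{R}^{p}} \; x^{\top} \Sigma x
\quad \text{s.t.} \quad \lVert x \rVert_2 = 1, \;\; \lVert x \rVert_0 \le k .
\]
The \(\ell_0\) constraint is what makes the problem NP-hard and motivates the SDP relaxations and heuristics mentioned above.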

A decision theoretic approach for waveform design in joint radar communications applications

In this paper, we develop a decision theoretic approach for radar waveform design to maximize the joint radar communications performance in spectral coexistence. Specifically, we develop an adaptive waveform design approach by posing the design problem as a partially observable Markov decision process (POMDP), which leads to a hard optimization problem. We extend an approximate … Read more
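
For background, a POMDP maintains a belief \(b\) over the hidden state and updates it after taking action \(a\) and receiving observation \(o\) via the standard Bayes filter (generic notation, not specific to the radar waveform setting):
\[
b'(s') \;\propto\; O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s),
\]
where \(T\) is the state-transition model and \(O\) the observation model; an adaptive waveform-design policy of the kind described above maps such beliefs to actions.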

Enhancements of Extended Locational Marginal Pricing – Advancing Practical Implementation

Price formation is critical to efficient wholesale electricity markets that support reliable operation and efficient investment. The Midcontinent Independent System Operator (MISO) developed Extended Locational Marginal Pricing (ELMP) with the goal of more completely reflecting resource costs and generally improving price formation to better incentivize market participation. MISO developed ELMP based on the mathematical … Read more

Robustification of the k-Means Clustering Problem and Tailored Decomposition Methods: When More Conservative Means More Accurate

k-means clustering is a classic method of unsupervised learning that aims to partition a given set of measurements into k clusters. In many modern applications, however, this approach suffers from unstructured measurement errors: the k-means result then represents a clustering of the erroneous measurements rather than recovering the true underlying clustering structure. … Read more
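
As a reminder, the nominal k-means problem chooses cluster centers \(c_1, \dots, c_k\) for measurements \(x_1, \dots, x_n\) by solving (standard notation):
\[
\min_{c_1,\dots,c_k} \; \sum_{i=1}^{n} \min_{j \in \{1,\dots,k\}} \lVert x_i - c_j \rVert_2^2 .
\]
A robustified variant in the spirit described above would replace each erroneous measurement \(x_i\) by an uncertainty set containing the true measurement and optimize against the worst case; the precise robust counterpart and the tailored decomposition methods are developed in the full paper.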

Riemannian conjugate gradient methods with inverse retraction

We propose a new class of Riemannian conjugate gradient (CG) methods, in which inverse retraction is used instead of vector transport for search direction construction. In existing methods, differentiated retraction is often used for vector transport to move the previous search direction to the current tangent space. However, a different perspective is adopted here, motivated … Read more
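
For orientation, a generic Riemannian CG iteration with retraction \(R\) and vector transport \(\mathcal{T}\) takes the form below (standard notation); the modification described above replaces the vector-transport step with a construction based on the inverse retraction:
\[
x_{k+1} = R_{x_k}(\alpha_k \eta_k), \qquad
\eta_{k+1} = -\operatorname{grad} f(x_{k+1}) + \beta_{k+1}\, \mathcal{T}_{x_k \to x_{k+1}}(\eta_k).
\]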

Two novel gradient methods with optimal step sizes

In this work, we introduce two new Barzilai–Borwein-like step sizes for the classical gradient method applied to strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information of the Hessian of … Read more
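
For comparison, the two classical Barzilai–Borwein step sizes, with \(s_{k-1} = x_k - x_{k-1}\) and \(y_{k-1} = g_k - g_{k-1}\), are
\[
\alpha_k^{\mathrm{BB1}} \;=\; \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} \;=\; \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\]
used in the gradient iteration \(x_{k+1} = x_k - \alpha_k g_k\); the step sizes proposed in the paper are new members of this family that additionally exploit second-order information.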

Gradient Sampling Methods with Inexact Subproblem Solves and Gradient Aggregation

Gradient sampling (GS) has proved to be an effective methodology for the minimization of objective functions that may be nonconvex and/or nonsmooth. The most computationally expensive component of a contemporary GS method is the need to solve a convex quadratic subproblem in each iteration. In this paper, a strategy is proposed that allows the use … Read more
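
For context, the convex quadratic subproblem of a standard GS iteration computes the minimum-norm element of the convex hull of gradients sampled near the current iterate \(x_k\) (a generic statement of the subproblem, not the paper's inexact variant):
\[
g_k \;=\; \operatorname*{arg\,min} \Bigl\{ \lVert g \rVert_2^2 \;:\; g \in \operatorname{conv}\bigl\{ \nabla f(x_k), \nabla f(y_{k,1}), \dots, \nabla f(y_{k,m}) \bigr\} \Bigr\},
\]
with search direction \(d_k = -g_k\); solving this QP to high accuracy in every iteration is the dominant cost that an inexact-subproblem strategy aims to reduce.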