A Subgradient Projection Method with Outer Approximation for Solving Semidefinite Programming Problems

We explore the combination of subgradient projection with outer approximation to solve semidefinite programming problems. We compare several ways to construct outer approximations using the problem structure. The resulting approach enjoys the strengths of both subgradient projection and outer approximation methods. Preliminary computational results on the semidefinite programming relaxations of graph partitioning and max-cut show …
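For context, a minimal sketch of the problem class (assuming the standard max-cut relaxation with graph Laplacian L, not necessarily the authors' exact formulation):

    \max_{X \in \mathbb{S}^n} \ \tfrac{1}{4} \langle L, X \rangle \quad \text{s.t.} \quad X_{ii} = 1 \ (i = 1, \dots, n), \quad X \succeq 0.

An outer approximation replaces the constraint X \succeq 0 by finitely many valid linear cuts \langle v v^\top, X \rangle \ge 0; violated cuts can be generated from eigenvectors v of the current iterate associated with negative eigenvalues.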

Projection onto hyperbolicity cones and beyond: a dual Frank-Wolfe approach

We discuss the problem of projecting a point onto an arbitrary hyperbolicity cone from both theoretical and numerical perspectives. While hyperbolicity cones are furnished with a generalization of the notion of eigenvalues, obtaining closed-form expressions for the projection operator, as in the case of semidefinite matrices, is an elusive endeavour. To address that, we …
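For readers unfamiliar with the notion, a sketch of the standard definitions (not specific to this paper): a homogeneous polynomial p of degree d is hyperbolic with respect to a direction e with p(e) > 0 if, for every x, the univariate polynomial t \mapsto p(te - x) has only real roots \lambda_1(x) \ge \dots \ge \lambda_d(x). These roots play the role of eigenvalues, and the associated hyperbolicity cone is

    \Lambda_+(p, e) = \{ x : \lambda_d(x) \ge 0 \}.

Taking p = \det on symmetric matrices with e = I recovers the positive semidefinite cone, for which projection has a closed form via eigendecomposition.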

An exponential cone representation of the general power cone

Chandrasekaran and Shah (2017) used the exponential cone to model the second-order cone as a demonstration of its modeling capabilities. We simplify and extend this result to general power cones.
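For reference, in one common convention (coordinate orderings vary across solvers), the three-dimensional exponential cone and the general power cone are

    K_{\exp} = \mathrm{cl} \{ (x, y, z) : y > 0, \ y e^{x/y} \le z \},
    \mathcal{P}_\alpha = \{ (x, z) \in \mathbb{R}_+^m \times \mathbb{R}^n : x_1^{\alpha_1} \cdots x_m^{\alpha_m} \ge \| z \|_2 \}, \quad \alpha_i > 0, \ \textstyle\sum_i \alpha_i = 1.

This is only a sketch of the objects involved, not the representation constructed in the article.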

The Role of Level-Set Geometry on the Performance of PDHG for Conic Linear Optimization

We consider solving huge-scale instances of (convex) conic linear optimization problems, at the scale where matrix-factorization-free methods are attractive or necessary. The restarted primal-dual hybrid gradient method (rPDHG), with heuristic enhancements and a GPU implementation, has been very successful in solving huge-scale linear programming (LP) problems; however, its application to more general conic convex …
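As a sketch of the underlying method (standard PDHG applied to the saddle-point form of a conic LP \min \{ c^\top x : Ax = b, \ x \in K \}, without the paper's enhancements), one iteration with step sizes \tau, \sigma > 0 reads

    x^{k+1} = \Pi_K \big( x^k - \tau (c - A^\top y^k) \big),
    y^{k+1} = y^k + \sigma \big( b - A (2 x^{k+1} - x^k) \big),

which needs only matrix-vector products with A and A^\top plus a projection onto K, and hence no matrix factorizations.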

Robust System Identification: Finite-sample Guarantees and Connection to Regularization

We address the problem of identifying a stable linear time-invariant system from a single sample trajectory. The least squares estimate (LSE) is a commonly used algorithm for this purpose. However, the LSE may exhibit large identification errors when the number of samples is small. To mitigate this issue, we introduce the robust LSE, which integrates robust …
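For concreteness, a sketch of the standard setting (assuming dynamics x_{t+1} = A x_t + w_t; the paper's precise model may differ): given a single trajectory x_0, \dots, x_T, the least squares estimate is

    \hat{A} = \arg\min_{A} \sum_{t=0}^{T-1} \| x_{t+1} - A x_t \|_2^2 = \Big( \sum_{t=0}^{T-1} x_{t+1} x_t^\top \Big) \Big( \sum_{t=0}^{T-1} x_t x_t^\top \Big)^{-1},

whose error can be large when T is small and the empirical covariance \sum_t x_t x_t^\top is poorly conditioned.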

Tighter yet more tractable relaxations and nontrivial instance generation for sparse standard quadratic optimization

The Standard Quadratic Optimization Problem (StQP), arguably the simplest among all classes of NP-hard optimization problems, consists of extremizing a quadratic form (the simplest nonlinear polynomial) over the standard simplex (the simplest polytope/compact feasible set). As a problem class, StQPs may be nonconvex with an exponential number of inefficient local solutions. StQPs arise in a …
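Concretely, an StQP takes the form

    \min \ x^\top Q x \quad \text{s.t.} \quad x \in \Delta_n = \{ x \in \mathbb{R}^n : e^\top x = 1, \ x \ge 0 \},

with Q a symmetric n \times n matrix and e the all-ones vector; in sparse variants, a cardinality bound on the support of x is typically added.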

Exploiting cone approximations in an augmented Lagrangian method for conic optimization

We propose an algorithm for general nonlinear conic programming that does not require knowledge of the full cone, but only of a simpler, more tractable approximation of it. We prove that the algorithm satisfies a strong global convergence property in the sense that it generates a strong sequential optimality condition. In particular, a KKT point …
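A minimal sketch of a conic augmented Lagrangian (the generic shifted-penalty form, with the cone approximation left abstract; the paper's algorithm may differ in details): for \min f(x) subject to g(x) \in K, with multiplier estimate \mu and penalty \rho > 0,

    L_\rho(x, \mu) = f(x) + \frac{\rho}{2} \Big( \mathrm{dist}^2 \big( g(x) + \mu / \rho, \ K \big) - \| \mu / \rho \|^2 \Big),

and the proposal here is to evaluate the distance (equivalently, the projection) with respect to a tractable approximation of K rather than K itself.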

On the integrality gap of the Complete Metric Steiner Tree Problem via a novel formulation

In this work, we compute lower bounds on the integrality gap of the Metric Steiner Tree Problem (MSTP) on a graph for small numbers of nodes and terminals. After discussing some limitations of the most widely used formulation for the Steiner Tree Problem, namely the Bidirected Cut Formulation, we introduce a …
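For reference, a sketch of the standard definitions (not taken from the paper): fixing a root terminal r and bidirecting the edges, the Bidirected Cut Formulation minimizes \sum_a c_a x_a subject to

    \sum_{a \in \delta^-(S)} x_a \ge 1 \quad \text{for all } S \subseteq V \setminus \{ r \} \text{ with } S \cap T \ne \emptyset, \qquad x_a \in \{0, 1\},

where T is the terminal set; the integrality gap is the worst-case ratio, over instances, of the optimal integral value to the optimal value of the linear relaxation.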

Counterfactual Explanations for Linear Optimization

The concept of counterfactual explanations (CE) has emerged as an important tool for understanding the inner workings of complex AI systems. In this paper, we translate the idea of CEs to linear optimization and propose, motivate, and analyze three different types of CEs: strong, weak, and relative. While deriving strong and weak CEs …

Sensitivity Analysis in Dantzig-Wolfe Decomposition

Dantzig-Wolfe decomposition is a well-known classical method for solving huge linear optimization problems with a block-angular structure. The most computationally expensive step in the method is pricing: solving the block subproblems at a given dual vector to produce new columns. Therefore, when we want to solve a slightly perturbed problem in which the block-angular structure is preserved …
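As a sketch of the pricing step (standard Dantzig-Wolfe notation, assumed here rather than taken from the paper): with duals \pi on the coupling constraints and \sigma_k on the convexity constraint of block k, pricing solves

    \min_{x_k \in X_k} \ (c_k - A_k^\top \pi)^\top x_k - \sigma_k,

and any solution with negative optimal value yields a new column for the master problem; sensitivity analysis then asks how this machinery behaves under small perturbations of the data.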