Preconditioning for Generalized Jacobians with the ω-Condition Number

Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassical matrix condition number, the ω-condition number, in the context of optimal conditioning for low-rank updating of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the … Read more
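
As a quick illustration (a minimal NumPy sketch of our own, not the paper's code), the ω-condition number of a symmetric positive definite matrix can be computed from its spectrum and compared with the classical condition number κ = λ_max/λ_min:

```python
import numpy as np

def omega_condition(A):
    """omega-condition number of a symmetric positive definite matrix:
    the ratio of the arithmetic mean to the geometric mean of its eigenvalues."""
    eigs = np.linalg.eigvalsh(A)             # real and positive for SPD input
    arithmetic = eigs.mean()
    geometric = np.exp(np.log(eigs).mean())  # log-domain geometric mean for stability
    return arithmetic / geometric

A = np.diag([1.0, 2.0, 100.0])
print(omega_condition(A))   # ~5.87: arithmetic mean 34.33 vs geometric mean 5.85
print(np.linalg.cond(A))    # 100.0: the classical kappa = lambda_max / lambda_min
```

Unlike κ, which sees only the extreme eigenvalues, ω responds to the whole spectrum, which is part of what makes it attractive as an alternative conditioning measure.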

Eigenvalue programming beyond matrices

In this paper we analyze and solve eigenvalue programs, which consist of minimizing a function subject to constraints on the “eigenvalues” of the decision variable. Here, by making use of the FTvN systems framework introduced by Gowda, we interpret “eigenvalues” in a broad fashion, going beyond the usual eigenvalues of matrices. This … Read more
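
Schematically, and in notation of our own choosing rather than the paper's, an eigenvalue program can be written as

```latex
\min_{x \in \mathcal{E}} \; f(x)
\quad \text{subject to} \quad \lambda(x) \in C,
```

where \lambda is the “eigenvalue map” of the underlying FTvN system and C collects the constraints on the eigenvalues; taking \mathcal{E} to be the symmetric matrices and \lambda the usual ordered eigenvalue map recovers classical eigenvalue optimization.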

Cone product reformulation for global optimization

In this paper, we study nonconvex optimization problems involving sums of linear times convex (SLC) functions as well as conic constraints belonging to one of the five basic cones, that is, the linear cone, second-order cone, power cone, exponential cone, and semidefinite cone. By using the Reformulation Perspectification Technique, we can obtain a convex relaxation … Read more
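
In schematic form (our notation, inferred from the abstract rather than taken from the paper), the problem class is

```latex
\min_{x \in \mathbb{R}^n} \;\; \sum_{i=1}^{m} \ell_i(x)\, g_i(x)
\quad \text{s.t.} \quad Ax - b \in \mathcal{K},
```

where each \ell_i is affine, each g_i is convex, and \mathcal{K} is a product of the five basic cones listed above; the products \ell_i(x)\, g_i(x) are what make the problem nonconvex even though each factor on its own is well behaved.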

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on … Read more
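
For orientation, here is an illustrative PDHG iteration for the linear program min cᵀx s.t. Ax = b, x ≥ 0, written as the saddle-point problem min_{x≥0} max_y cᵀx + yᵀ(b − Ax). This is our own sketch of the standard Chambolle–Pock scheme, not the authors' code; the step sizes should satisfy τσ‖A‖² < 1.

```python
import numpy as np

def pdhg_lp(c, A, b, tau, sigma, iters=5000):
    """PDHG for  min c^T x  s.t.  A x = b, x >= 0  (illustrative sketch)."""
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # projected primal step
        y = y + sigma * (b - A @ (2 * x_new - x))         # dual step with extrapolation
        x = x_new
    return x, y

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = pdhg_lp(c, A, b, tau=0.5, sigma=0.5)
print(x)   # approaches [1, 0], the optimal vertex
```

On infeasible or unbounded instances the iterates of such a scheme do not converge, but their successive differences do, which is the displacement behavior the abstract refers to.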

A novel Pareto-optimal cut selection strategy for Benders Decomposition

Decomposition methods can be used to create efficient solution algorithms for a wide range of optimization problems. For example, Benders Decomposition solves scenario-expanded two-stage stochastic optimization problems effectively. Benders Decomposition iteratively generates Benders cuts by solving a simplified version of an optimization problem, the so-called subproblem. The choice of the generated … Read more
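
The overall loop is easy to state; the interesting design freedom is which cut to return when the subproblem admits multiple optimal duals, which is where Pareto-optimal (non-dominated) cut selection enters. Below is a generic Benders skeleton of our own, with hypothetical solver callbacks, not the paper's implementation:

```python
def benders(solve_master, solve_subproblem, tol=1e-6, max_iters=100):
    """Generic Benders loop (illustrative skeleton).

    solve_master(cuts)  -> (x, theta): first-stage decision and recourse estimate
                           from the relaxed master with the current cut pool.
    solve_subproblem(x) -> (value, cut): true recourse value at x and a Benders
                           cut, e.g. an optimality cut built from subproblem duals.
    """
    cuts, x = [], None
    for _ in range(max_iters):
        x, theta = solve_master(cuts)       # solve relaxed master problem
        value, cut = solve_subproblem(x)    # evaluate recourse, generate a cut
        if value - theta <= tol:            # master estimate is accurate: stop
            return x
        cuts.append(cut)                    # otherwise add the violated cut
    return x
```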

An Integer Programming Approach To Subspace Clustering With Missing Data

In the Subspace Clustering with Missing Data (SCMD) problem, we are given a collection of n partially observed d-dimensional vectors. The data points are assumed to be concentrated near a union of low-dimensional subspaces. The goal of SCMD is to cluster the vectors according to their subspace membership and recover the underlying basis, which can … Read more
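
To make the setting concrete, a toy SCMD instance (our own synthetic setup, not the paper's data) can be generated as points near a union of low-dimensional subspaces with entries missing at random:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, K, n_per = 20, 3, 2, 50   # ambient dim, subspace dim, #subspaces, points each

blocks, masks = [], []
for _ in range(K):
    U = np.linalg.qr(rng.standard_normal((d, r)))[0]   # orthonormal subspace basis
    pts = U @ rng.standard_normal((r, n_per))          # points lying in that subspace
    blocks.append(pts + 0.01 * rng.standard_normal((d, n_per)))  # small noise
    masks.append(rng.random((d, n_per)) < 0.5)         # observe ~50% of the entries
X, mask = np.hstack(blocks), np.hstack(masks)
X_obs = np.where(mask, X, np.nan)                      # the partially observed input
```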

Resilient Relay Logistics Network Design: A k-Shortest Path Approach

Problem definition: We study the problem of designing large-scale resilient relay logistics hub networks. We propose a model of k-Shortest Path Network Design, which aims to improve a network’s efficiency and resilience through its topological configuration, by locating relay logistics hubs to connect each origin-destination pair with k paths of minimum lengths, weighted by their … Read more
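
As a small illustration of the k-shortest-path ingredient (a toy example of ours; node names and weights are made up), NetworkX can enumerate the k minimum-weight simple paths between an origin-destination pair routed through candidate hubs:

```python
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("o", "h1", 1.0), ("o", "h2", 1.5),
                           ("h1", "h2", 0.5), ("h1", "t", 1.2),
                           ("h2", "t", 1.0)])

k = 3
for path in itertools.islice(nx.shortest_simple_paths(G, "o", "t", weight="weight"), k):
    print(path, nx.path_weight(G, path, weight="weight"))
```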

Error estimate for regularized optimal transport problems via Bregman divergence

Regularization by the Shannon entropy enables us to efficiently and approximately solve optimal transport problems on a finite set. This paper is concerned with regularized optimal transport problems via Bregman divergence. We introduce the required properties for Bregman divergences, provide a non-asymptotic error estimate for the regularized problem, and show that the error estimate becomes … Read more
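
In our notation (schematic, not necessarily the paper's), the Bregman divergence generated by a convex function φ and the corresponding regularized problem read

```latex
D_{\varphi}(x, y) = \varphi(x) - \varphi(y) - \langle \nabla \varphi(y),\, x - y \rangle,
\qquad
\min_{P \in \Pi(a, b)} \; \langle C, P \rangle + \varepsilon\, D_{\varphi}(P, P_0),
```

where \Pi(a, b) is the set of transport plans with marginals a and b; choosing φ as the negative Shannon entropy turns D_φ into a Kullback–Leibler divergence and recovers the entropic regularization mentioned above.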

A Criterion Space Search Feasibility Pump Heuristic for Solving Maximum Multiplicative Programs

We study a class of nonlinear optimization problems with diverse practical applications, particularly in cooperative game theory. These problems are referred to as Maximum Multiplicative Programs (MMPs), and can be conceived as instances of “Optimization Over the Frontier” in multiobjective optimization. To solve MMPs, we introduce a feasibility pump-based heuristic that is specifically designed to … Read more
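
In schematic form (our notation; the paper's precise formulation may differ), an MMP maximizes a product of positive objectives over a feasible set:

```latex
\max_{x \in \mathcal{X}} \;\; \prod_{i=1}^{p} y_i(x),
\qquad y_i(x) > 0 \ \text{ on } \mathcal{X},
```

which, when the y_i are interpreted as players' payoffs, is a Nash-bargaining-style objective; this is what connects MMPs to cooperative game theory and to searching over the Pareto frontier.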

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that for smooth convex optimization, by periodically utilizing longer steps, gradient descent's state-of-the-art O(1/T) convergence guarantee can be improved by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) might be possible. Here we prove such a big-O gain, establishing gradient descent's first accelerated convergence rate in this setting. Namely, we … Read more
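
To convey the flavor of long steps (with an illustrative pattern of our own; the certified stepsize schedules in this line of work are different and carefully constructed), gradient descent can cycle through mostly standard steps h/L with an occasional h > 2:

```python
import numpy as np

def gd_long_steps(grad, x0, L, pattern=(1.0, 1.0, 1.0, 3.0), epochs=100):
    """Gradient descent cycling through stepsizes h / L drawn from `pattern`.
    The occasional h > 2 is a 'long step'; this pattern is illustrative only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        for h in pattern:
            x = x - (h / L) * grad(x)
    return x

# Smooth convex test problem: f(x) = 0.5 x^T A x, with L = lambda_max(A)
A = np.diag([1.0, 10.0])
x = gd_long_steps(lambda x: A @ x, [1.0, 1.0], L=10.0)
print(x)   # approaches the minimizer at the origin
```

Individual long steps may transiently increase the objective; the guarantees in this line of work are amortized over the whole stepsize pattern rather than holding per step.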