Eigenvalue programming beyond matrices

In this paper, we analyze and solve eigenvalue programs, which consist in minimizing a function subject to constraints on the “eigenvalues” of the decision variable. Here, by making use of the Fan-Theobald-von Neumann (FTvN) systems framework introduced by Gowda, we interpret “eigenvalues” broadly, going beyond the usual eigenvalues of matrices. This …
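
In schematic form, an eigenvalue program in this broad sense reads as follows (notation mine: \(\lambda\) denotes the generalized eigenvalue map of the FTvN system and \(\Lambda\) a prescribed constraint set):

\[
\min_{x \in \mathcal{E}} \; f(x) \quad \text{subject to} \quad \lambda(x) \in \Lambda,
\]

which recovers the classical setting when \(\mathcal{E}\) is the space of symmetric matrices and \(\lambda(X)\) is the usual vector of eigenvalues of \(X\).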

Cone product reformulation for global optimization

In this paper, we study nonconvex optimization problems involving sums of linear-times-convex (SLC) functions as well as conic constraints belonging to one of the five basic cones, that is, the linear cone, second-order cone, power cone, exponential cone, or semidefinite cone. By using the Reformulation Perspectification Technique, we can obtain a convex relaxation …
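
For orientation, a sum-of-linear-times-convex function can be written as (a standard reading of the term; the notation is mine, not necessarily the paper's):

\[
f(x) \;=\; \sum_{i=1}^{k} \big(a_i^{\top} x + b_i\big)\, g_i(x),
\]

with each \(g_i\) convex and \(a_i^{\top} x + b_i \ge 0\) on the feasible set. The relevant fact behind perspectification is that the perspective \((x,t) \mapsto t\, g_i(x/t)\) of a convex \(g_i\) is jointly convex for \(t > 0\), which lets the bilinear products above be replaced by convex expressions in a lifted space.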

Range of the displacement operator of PDHG with applications to quadratic and conic programming

Primal-dual hybrid gradient (PDHG), introduced by Chambolle and Pock, is a first-order method for saddle-point problems and convex programming. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming and, in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on …
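
To fix ideas, below is a minimal sketch of PDHG applied to a standard-form LP, \(\min\, c^{\top}x\) subject to \(Ax = b\), \(x \ge 0\). This is an illustration of the iteration under discussion, not the authors' code; the stepsizes obey the usual condition \(\tau\sigma\|A\|_2^2 < 1\).

```python
import numpy as np

def pdhg_lp(c, A, b, iters=20000, tau=None, sigma=None):
    """Minimal PDHG sketch for the LP  min c^T x  s.t.  A x = b, x >= 0."""
    m, n = A.shape
    nrm = np.linalg.norm(A, 2)      # spectral norm of A
    tau = tau if tau is not None else 0.9 / nrm
    sigma = sigma if sigma is not None else 0.9 / nrm
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal: gradient step on c^T x - y^T A x, then project onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual: ascent step using the extrapolated point 2 x_new - x.
        y = y + sigma * (b - A @ (2.0 * x_new - x))
        x = x_new
    return x, y
```

Roughly speaking, on an infeasible or unbounded instance the iterates diverge, but the per-iteration displacement \(x^{k+1} - x^k\) still settles down, and characterizing the range of the associated displacement operator is what drives this kind of infeasibility detection.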

A Criterion Space Search Feasibility Pump Heuristic for Solving Maximum Multiplicative Programs

We study a class of nonlinear optimization problems with diverse practical applications, particularly in cooperative game theory. These problems, referred to as Maximum Multiplicative Programs (MMPs), can be conceived of as instances of “optimization over the frontier” in multiobjective optimization. To solve MMPs, we introduce a feasibility pump-based heuristic that is specifically designed to …
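
For context, one common way to state an MMP is (notation mine):

\[
\max_{x \in \mathcal{X}} \; \prod_{i=1}^{p} c_i^{\top} x, \qquad c_i^{\top} x \ge 0 \;\; \text{for } i = 1, \dots, p,
\]

and any maximizer with strictly positive factors is nondominated for the multiobjective problem with objectives \(c_1^{\top}x, \dots, c_p^{\top}x\), which is the sense in which MMPs optimize over the frontier.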

Accelerated Gradient Descent via Long Steps

Recently, Grimmer [1] showed that for smooth convex optimization, by periodically utilizing longer steps, gradient descent's state-of-the-art O(1/T) convergence guarantee can be improved by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) might be possible. Here we prove such a big-O gain, establishing gradient descent's first accelerated convergence rate in this setting. Namely, we …
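
The following toy sketch conveys the mechanism: ordinary gradient steps interleaved with a periodic longer step. The particular pattern below is made up for illustration; the certified patterns in this line of work are constructed and verified analytically.

```python
import numpy as np

def gd_with_long_steps(grad, x0, L, pattern=(1.0, 1.0, 1.0, 4.0), cycles=250):
    """Gradient descent with a periodically repeated stepsize pattern.

    Entries of `pattern` are multiples of the textbook stepsize 1/L for an
    L-smooth objective.  This pattern is illustrative only, not a certified one.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(cycles):
        for h in pattern:
            x = x - (h / L) * grad(x)
    return x

# Smooth convex quadratic f(x) = 0.5 * x^T Q x with smoothness constant L = 10.
Q = np.diag([1.0, 10.0])
x = gd_with_long_steps(lambda z: Q @ z, [5.0, 5.0], L=10.0)
print(np.linalg.norm(x))   # close to 0
```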

Using orthogonally structured positive bases for constructing positive k-spanning sets with cosine measure guarantees

Positive spanning sets span a given vector space by nonnegative linear combinations of their elements. Such sets have attracted significant attention in recent years, owing to their extensive use in derivative-free optimization. In this setting, the quality of a positive spanning set is assessed through its cosine measure, a geometric quantity that expresses how well …
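
As a concrete handle on the definition, the cosine measure of a set of directions \(D = \{d_1, \dots, d_k\}\) is \(\operatorname{cm}(D) = \min_{\|u\|=1} \max_i \, u^{\top} d_i / \|d_i\|\). The sketch below estimates it by random sampling over the unit sphere (my own illustration; exact computation requires more care):

```python
import numpy as np

def cosine_measure_mc(D, num_samples=200000, seed=0):
    """Monte-Carlo estimate of the cosine measure of the columns of D.

    Sampling only a finite subset of the unit sphere, the returned value is
    an upper bound on the true cosine measure.
    """
    rng = np.random.default_rng(seed)
    Dn = D / np.linalg.norm(D, axis=0)     # normalize each direction
    U = rng.standard_normal((D.shape[0], num_samples))
    U /= np.linalg.norm(U, axis=0)         # random unit vectors
    return (U.T @ Dn).max(axis=1).min()

# The maximal positive basis {+/- e_1, ..., +/- e_n} has cosine measure 1/sqrt(n).
n = 3
D = np.hstack([np.eye(n), -np.eye(n)])
print(cosine_measure_mc(D))               # about 0.577 for n = 3
```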

A generalized asymmetric forward-backward-adjoint algorithm for convex-concave saddle-point problem

The convex-concave minimax problem, also known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity. In this paper, we propose a generalized asymmetric forward-backward-adjoint (G-AFBA) algorithm that solves such problems by utilizing both proximal techniques and the interplay of the primal-dual updates. Besides enjoying …
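
The problem class in question is the structured saddle-point problem, usually written as (notation mine):

\[
\min_{x} \max_{y} \; f(x) + \langle Kx, y \rangle - g(y),
\]

with \(f\) and \(g\) proper closed convex and \(K\) a linear operator; primal-dual schemes in this family alternate proximal steps in \(x\) and \(y\), coupling them through \(K\) and \(K^{\top}\).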

Limited memory gradient methods for unconstrained optimization

The limited memory steepest descent method (Fletcher, 2012) for unconstrained optimization problems stores a few past gradients to compute multiple stepsizes at once. We review this method and propose new variants. For strictly convex quadratic objective functions, we study the numerical behavior of different techniques to compute new stepsizes. In particular, we introduce a method …
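
A short sketch of the sweep structure for the quadratic case is given below. It uses \(A\) explicitly to form the Ritz values, whereas Fletcher's method recovers them from the stored gradients alone; names and defaults are mine.

```python
import numpy as np

def lmsd_quadratic(A, b, x0, memory=3, sweeps=20):
    """Sketch of limited memory steepest descent on f(x) = 0.5 x^T A x - b^T x.

    Each sweep takes `memory` gradient steps, computes the Ritz values of A
    on the span of the collected gradients, and uses their reciprocals as
    the next sweep's stepsizes.
    """
    x = np.asarray(x0, dtype=float).copy()
    grad = lambda z: A @ z - b
    steps = [1.0 / np.linalg.eigvalsh(A).max()] * memory   # safe first sweep
    for _ in range(sweeps):
        G = []
        for alpha in steps:
            g = grad(x)
            G.append(g)
            x = x - alpha * g
        Q, _ = np.linalg.qr(np.column_stack(G))   # basis of the gradient span
        ritz = np.linalg.eigvalsh(Q.T @ A @ Q)    # Ritz values of A
        steps = list(1.0 / ritz)                  # reciprocal Ritz stepsizes
    return x

A = np.diag([1.0, 4.0, 25.0]); b = np.ones(3)
x = lmsd_quadratic(A, b, np.zeros(3))
print(np.linalg.norm(A @ x - b))                  # essentially zero
```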

Adaptive Consensus: A network pruning approach for decentralized optimization

We consider network-based decentralized optimization problems, where each node in the network possesses a local function and the objective is to collectively attain a consensus solution that minimizes the sum of all the local functions. A major challenge in decentralized optimization is the reliance on communication, which remains a considerable bottleneck in many applications. To …
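
For reference, the baseline that communication-pruning approaches start from is decentralized gradient descent, sketched below (an illustration, not the paper's algorithm; the mixing matrix and stepsize are my own choices):

```python
import numpy as np

def decentralized_gd(grads, W, x0, step=0.05, iters=500):
    """Baseline decentralized gradient descent (DGD) sketch.

    grads : list of local gradient functions, one per node
    W     : symmetric doubly stochastic mixing matrix (nonzero only on edges)
    x0    : (nodes, dim) array of initial local iterates
    """
    X = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        X = W @ X                     # communication step with neighbors
        X -= step * np.stack([g(X[i]) for i, g in enumerate(grads)])
    return X

# Path graph on 3 nodes with Metropolis weights; local f_i(x) = 0.5 (x - t_i)^2.
targets = [0.0, 1.0, 5.0]
grads = [lambda x, t=t: x - t for t in targets]
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
X = decentralized_gd(grads, W, np.zeros((3, 1)))
print(X.ravel())   # all nodes near the global minimizer 2.0 (up to DGD bias)
```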

Inexact Newton methods with matrix approximation by sampling for nonlinear least-squares and systems

We develop and analyze stochastic inexact Gauss-Newton methods for nonlinear least-squares problems and inexact Newton methods for nonlinear systems of equations. Random models are formed using suitable sampling strategies for the matrices involved in the deterministic models. The analysis of the expected number of iterations needed in the worst case to achieve a desired level …
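
A minimal sketch of the sampling idea for the Gauss-Newton case is below: each iteration draws a random subset of residual components and builds the model from the matching Jacobian rows. This is my illustration of "matrix approximation by sampling", not the paper's exact scheme, which ties sample sizes to the convergence analysis.

```python
import numpy as np

def sampled_gauss_newton(residual, jacobian, x0, sample_frac=0.3,
                         num_iters=50, reg=1e-8, seed=0):
    """Inexact Gauss-Newton sketch with random row sampling."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = residual(x).shape[0]
    batch = max(1, int(sample_frac * m))
    for _ in range(num_iters):
        idx = rng.choice(m, size=batch, replace=False)
        r, J = residual(x)[idx], jacobian(x)[idx]
        # Regularized normal equations built from the sampled rows only.
        d = np.linalg.solve(J.T @ J + reg * np.eye(x.size), -J.T @ r)
        x = x + d
    return x

# Tiny linear least-squares test: residual r(x) = A x - b with solution ones(5).
A = np.random.default_rng(1).standard_normal((100, 5)); b = A @ np.ones(5)
x = sampled_gauss_newton(lambda x: A @ x - b, lambda x: A, np.zeros(5))
print(np.linalg.norm(x - np.ones(5)))   # small
```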