Memory-efficient structured convex optimization via extreme point sampling

Memory is a key computational bottleneck when solving large-scale convex optimization problems such as semidefinite programs (SDPs). In this paper, we focus on the regime in which storing an n × n matrix decision variable is prohibitive. To solve SDPs in this regime, we develop a randomized algorithm that returns a random vector whose covariance … Read more
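Since the truncated abstract only hints at how the random vector is maintained, here is a minimal sketch, assuming a Frank-Wolfe-style iteration $X_{k+1} = (1-\gamma_k) X_k + \gamma_k v_k v_k^\top$ over rank-one extreme points; the step-size rule and the `extreme_point_oracle` routine below are placeholders for illustration, not the authors’ algorithm. The point is that a single random vector $z_k$ with $\mathbb{E}[z_k z_k^\top] = X_k$ can be updated in $O(n)$ memory.

```python
import numpy as np

def covariance_tracking_fw(extreme_point_oracle, n, num_iters, rng=None):
    """Sketch of covariance tracking for a Frank-Wolfe-style SDP iteration
    X_{k+1} = (1 - gamma_k) X_k + gamma_k v_k v_k^T, storing only a random
    vector z_k with E[z_k z_k^T] = X_k instead of the n-by-n matrix X_k.

    `extreme_point_oracle(z, k)` is a placeholder for the routine that
    returns a unit vector v_k (a rank-one extreme point) at iteration k,
    e.g. an approximate leading eigenvector of minus the gradient.
    """
    rng = np.random.default_rng(rng)
    z = extreme_point_oracle(np.zeros(n), 0)     # X_0 = v_0 v_0^T
    for k in range(1, num_iters + 1):
        gamma = 2.0 / (k + 2.0)                  # standard FW step size
        v = extreme_point_oracle(z, k)           # next rank-one direction
        xi = rng.standard_normal()               # zero mean, unit variance,
                                                 # independent of z
        # E[z_new z_new^T] = (1 - gamma) E[z z^T] + gamma * v v^T = X_{k+1}
        z = np.sqrt(1.0 - gamma) * z + np.sqrt(gamma) * xi * v
    return z  # random vector whose covariance is the final FW iterate
```

A single run returns one sample with the right covariance; averaging $z z^\top$ over independent runs would give an unbiased estimate of the final iterate itself.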

A simplified treatment of Ramana’s exact dual for semidefinite programming

In semidefinite programming the dual may fail to attain its optimal value, and there may be a duality gap, i.e., the primal and dual optimal values may differ. In a striking paper, Ramana proposed a polynomial-size extended dual that does not have these deficiencies and that yields a number of fundamental results in complexity theory. … Read more
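For orientation, the following standard textbook example (not taken from the paper) exhibits a finite positive duality gap: positive semidefiniteness forces $x_1 = 0$, so the primal optimal value is $0$, while the usual SDP dual has optimal value $-1$. Ramana’s extended dual is designed to close exactly this kind of gap.

\[
\inf_{x \in \mathbb{R}^2} \; x_1
\quad \text{s.t.} \quad
\begin{pmatrix} 0 & x_1 & 0 \\ x_1 & x_2 & 0 \\ 0 & 0 & x_1 + 1 \end{pmatrix} \succeq 0 .
\]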

Manifold Identification for Ultimately Communication-Efficient Distributed Optimization

This work proposes a progressive manifold identification approach for distributed optimization, with sound theoretical justifications, that greatly reduces both the number of communication rounds and the number of bytes communicated per round for partly smooth regularized problems such as the $\ell_1$- and group-LASSO-regularized ones. Our two-stage method first uses an inexact proximal quasi-Newton method to iteratively identify a sequence … Read more
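As a toy illustration of the manifold identification idea in the $\ell_1$ case (a minimal sketch, not the authors’ two-stage method; the matrix $A$, step size, and regularization weight are placeholders), the soft-thresholding proximal step of a proximal-gradient iteration exposes the active manifold, namely the support of the iterate, and only those coordinates would eventually need to be communicated.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_support_id(A, b, lam, step, num_iters):
    """Sketch: proximal gradient on 0.5*||Ax - b||^2 + lam*||x||_1.
    The support of each iterate is the active manifold identified by the
    l1 proximal step; once it stabilizes, a distributed method could restrict
    communication to those coordinates only.  `step` should not exceed
    1 / ||A||_2^2 for the iteration to behave as a descent scheme."""
    x = np.zeros(A.shape[1])
    supports = []
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
        supports.append(np.flatnonzero(x))   # identified manifold: supp(x_k)
    return x, supports
```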

An inexact version of the symmetric proximal ADMM for solving separable convex optimization

In this paper, we propose and analyze an inexact version of the symmetric proximal alternating direction method of multipliers (ADMM) for solving linearly constrained convex optimization problems. The method allows its first subproblem to be solved inexactly, in such a way that a relative approximation criterion is satisfied. In terms of the iteration number $k$, we … Read more
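To make the idea concrete, here is a minimal sketch of a symmetric ADMM (two dual updates per iteration, with factors $\tau$ and $\sigma$) applied to an $\ell_1$-regularized least-squares split, where the first subproblem is solved only approximately by conjugate gradients run until a simple relative residual test holds; the test, the model problem, and all parameters are illustrative stand-ins, not the authors’ method or their exact criterion.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cg_relative(M, rhs, x0, theta, maxiter=50):
    """Conjugate gradients on M x = rhs, stopped once the *relative* residual
    test ||M x - rhs|| <= theta * ||rhs|| holds (a simple stand-in for a
    relative approximation criterion)."""
    x = x0.copy()
    r = rhs - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= theta * np.linalg.norm(rhs):
            break
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x = x + alpha * p
        r = r - alpha * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_symmetric_admm(A, b, lam, rho=1.0, tau=0.9, sigma=0.9,
                           theta=0.1, num_iters=100):
    """Sketch: symmetric ADMM for min 0.5||Ax-b||^2 + lam||z||_1 s.t. x = z,
    with the x-subproblem solved inexactly by cg_relative."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)    # u: scaled multiplier
    M = A.T @ A + rho * np.eye(n)
    for _ in range(num_iters):
        x = cg_relative(M, A.T @ b + rho * (z - u), x, theta)  # inexact step
        u = u + tau * (x - z)                 # first (symmetric) dual update
        z = soft_threshold(x + u, lam / rho)  # exact proximal step
        u = u + sigma * (x - z)               # second dual update
    return x
```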

An Alternative Perspective on Copositive and Convex Relaxations of Nonconvex Quadratic Programs

We study convex relaxations of nonconvex quadratic programs. We identify a family of so-called feasibility preserving convex relaxations, which includes the well-known copositive and doubly nonnegative relaxations, with the property that the convex relaxation is feasible if and only if the nonconvex quadratic program is feasible. We observe that each convex relaxation in this family … Read more
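As background (standard material, not the specific family of feasibility preserving relaxations identified in the paper), the prototypical instance is the standard quadratic program, which admits an exact reformulation over the completely positive cone $\mathcal{CP}_n$ (the dual of the copositive cone); replacing $\mathcal{CP}_n$ by the larger doubly nonnegative cone $\{X : X \succeq 0,\ X \ge 0\}$ yields the tractable doubly nonnegative relaxation:

\[
\min_{x \ge 0,\; e^\top x = 1} x^\top Q x
\;=\;
\min\bigl\{ \langle Q, X \rangle \;:\; \langle ee^\top, X \rangle = 1,\; X \in \mathcal{CP}_n \bigr\}.
\]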

Two novel gradient methods with optimal step sizes

In this work we introduce two new Barzilai-Borwein-like step sizes for the classical gradient method applied to strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information about the Hessian of … Read more
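For reference (the truncated abstract does not display the new formulas, so only the classical ones are recalled here), with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$, the two classical Barzilai-Borwein step sizes are

\[
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^\top s_{k-1}}{s_{k-1}^\top y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^\top y_{k-1}}{y_{k-1}^\top y_{k-1}},
\]

obtained by asking a scaled identity to satisfy the secant condition in a least-squares sense: $\alpha_k^{\mathrm{BB1}}$ solves $\min_\alpha \|\alpha^{-1} s_{k-1} - y_{k-1}\|$ and $\alpha_k^{\mathrm{BB2}}$ solves $\min_\alpha \|s_{k-1} - \alpha y_{k-1}\|$.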

Gradient Sampling Methods with Inexact Subproblem Solves and Gradient Aggregation

Gradient sampling (GS) has proved to be an effective methodology for the minimization of objective functions that may be nonconvex and/or nonsmooth. The most computationally expensive component of a contemporary GS method is the need to solve a convex quadratic subproblem in each iteration. In this paper, a strategy is proposed that allows the use … Read more
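For orientation, in a standard GS method the convex quadratic subproblem in question computes the minimum-norm element of the convex hull of gradients sampled near the current iterate $x_k$ (the paper’s inexact-solve and aggregation strategy is not reproduced here):

\[
\min_{\lambda \in \Delta_m} \;\Bigl\| \sum_{i=1}^{m} \lambda_i \nabla f(x_{k,i}) \Bigr\|_2^2,
\qquad
\Delta_m = \Bigl\{ \lambda \in \mathbb{R}^m : \lambda \ge 0,\; \textstyle\sum_{i=1}^{m} \lambda_i = 1 \Bigr\},
\]

where $x_{k,1},\dots,x_{k,m}$ are the sampled points (typically including $x_k$ itself) and the negative of the optimal combination serves as the search direction.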

The Equivalence of Fourier-based and Wasserstein Metrics on Imaging Problems

We investigate properties of some extensions of a class of Fourier-based probability metrics, originally introduced to study convergence to equilibrium for the solution to the spatially homogeneous Boltzmann equation. Unlike the original ones, the new Fourier-based metrics are well-defined also for probability distributions with different centers of mass, and for discrete probability measures … Read more
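For orientation, the classical Fourier-based metric alluded to here compares characteristic functions (this is the standard definition from the kinetic-theory literature, not the extensions studied in the paper):

\[
d_s(\mu,\nu) \;=\; \sup_{\xi \neq 0} \frac{\bigl|\widehat{\mu}(\xi) - \widehat{\nu}(\xi)\bigr|}{|\xi|^{s}},
\qquad
\widehat{\mu}(\xi) = \int e^{-i\,\xi\cdot x}\, d\mu(x).
\]

For $s > 1$ the quotient blows up as $\xi \to 0$ whenever $\mu$ and $\nu$ have different means, which is why the original metrics require matching centers of mass and why extensions are needed.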

Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step

In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in the authors’ previous paper; the resulting scheme, which we denote ICGALP, allows for errors in the computation of several important quantities. In particular, this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion … Read more
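As a toy illustration of one ingredient only (an inexactly solved linear minimization oracle, not the full ICGALP scheme with augmented Lagrangian and proximal steps), the sketch below runs a generic conditional gradient method over the probability simplex and lets the LMO return any vertex within an additive error $\delta_k$ of optimal; the error schedule and objective are placeholders.

```python
import numpy as np

def inexact_fw_simplex(grad, n, num_iters, delta0=1.0, rng=None):
    """Sketch: conditional gradient (Frank-Wolfe) over the probability simplex
    with an inexact linear minimization oracle (LMO): at iteration k the LMO
    may return any vertex whose linear value is within delta_k of the minimum,
    where delta_k = delta0 / (k + 1)^2 is a summable error sequence."""
    rng = np.random.default_rng(rng)
    x = np.full(n, 1.0 / n)                       # start at the barycenter
    for k in range(num_iters):
        g = grad(x)
        delta_k = delta0 / (k + 1.0) ** 2
        candidates = np.flatnonzero(g <= g.min() + delta_k)  # inexact LMO
        i = rng.choice(candidates)
        s = np.zeros(n)
        s[i] = 1.0                                # chosen vertex e_i
        gamma = 2.0 / (k + 2.0)                   # open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# toy usage: minimize 0.5 * ||x - c||^2 over the simplex
c = np.array([0.1, 0.7, 0.2])
x_approx = inexact_fw_simplex(lambda x: x - c, n=3, num_iters=500)
```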

Decomposition Algorithms for Some Deterministic and Two-Stage Stochastic Single-Leader Multi-Follower Games

We consider a certain class of hierarchical decision problems that can be viewed as single-leader multi-follower games and can be represented by a virtual market coordinator trying to set a price system for traded goods, according to some criterion that balances supply and demand. The objective function of the market coordinator involves the decisions of many … Read more
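Schematically (a generic single-leader multi-follower template, not the paper’s specific market model), the coordinator chooses a price vector $p$ and each follower $i$ best-responds, giving a bilevel program of the form

\[
\min_{p \in P} \; \Phi\bigl(p, x_1(p), \dots, x_N(p)\bigr)
\quad \text{s.t.} \quad
x_i(p) \in \arg\min_{x_i \in X_i} f_i(x_i; p), \qquad i = 1, \dots, N,
\]

where $\Phi$ measures, for instance, the imbalance between the supply and demand induced by the followers’ decisions.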