Fast Multilevel Algorithms for Compressive Principal Component Pursuit

Recovering a low-rank matrix from highly corrupted measurements arises in compressed sensing of structured high-dimensional signals (e.g., videos and hyperspectral images). Robust principal component analysis (RPCA), solved via principal component pursuit (PCP), recovers a low-rank matrix from sparse corruptions of unknown value and support by decomposing the observation matrix into two …
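
A minimal sketch of the PCP decomposition follows, solved here with the widely used inexact augmented Lagrangian method (IALM) rather than the multilevel scheme of the paper; the weight lam = 1/sqrt(max(m, n)) and the mu, rho schedule are common defaults and should be read as assumptions.

import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, tol=1e-7, max_iter=500):
    """Split M into low-rank L and sparse S via inexact ALM iterations."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))            # standard PCP weight
    spec = np.linalg.norm(M, 2)               # spectral norm of M
    mu, rho = 1.25 / spec, 1.5                # common IALM defaults
    Y = M / max(spec, np.abs(M).max() / lam)  # dual initialization
    S = np.zeros(M.shape)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)     # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)  # sparse update
        R = M - L - S                         # primal residual
        Y = Y + mu * R                        # dual ascent step
        mu *= rho
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S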

On the Complexity of Testing Attainment of the Optimal Value in Nonlinear Optimization

We prove that unless P=NP, there exists no polynomial-time (or even pseudo-polynomial-time) algorithm that can test whether the optimal value of a nonlinear optimization problem, where the objective and constraints are given by low-degree polynomials, is attained. If the degrees of these polynomials are fixed, our results along with previously known “Frank-Wolfe type” theorems …
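
To see why attainment is a genuine question even for low-degree polynomial data, consider the illustrative instance (not taken from the paper) $$\textrm{infimum} \ x \quad \textrm{subject to} \quad xy \ge 1, \ x \ge 0,$$ whose optimal value is $0$ but is approached only as $x \to 0^{+}$ and $y \to +\infty$, so it is never attained.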

Optimality conditions for minimizers at infinity in polynomial programming

In this paper we study necessary optimality conditions for the optimization problem $$\textrm{infimum} \ f_0(x) \quad \textrm{subject to} \quad x \in S,$$ where $f_0 \colon \mathbb{R}^n \rightarrow \mathbb{R}$ is a polynomial function and $S \subset \mathbb{R}^n$ is a set defined by polynomial inequalities. Assume that the problem is bounded below and has the Mangasarian–Fromovitz property …
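
For reference, the classical Mangasarian–Fromovitz condition at a feasible point $\bar{x}$ of $S = \{x : g_1(x) \le 0, \ldots, g_m(x) \le 0\}$ (the paper works with an analogue at infinity, which is not reproduced here) requires a direction $d \in \mathbb{R}^n$ with $$\nabla g_i(\bar{x})^{\top} d < 0 \quad \textrm{for all } i \textrm{ with } g_i(\bar{x}) = 0.$$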

Combining Progressive Hedging with a Frank-Wolfe Method to Compute Lagrangian Dual Bounds in Stochastic Mixed-Integer Programming

We present a new primal-dual algorithm for computing the value of the Lagrangian dual of a stochastic mixed-integer program (SMIP) formed by relaxing its nonanticipativity constraints. The algorithm relies on the well-known progressive hedging method, but unlike previous progressive hedging approaches for SMIP, our algorithm can be shown to converge to the optimal Lagrangian dual …
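
For background, the classical progressive hedging iteration for $\textrm{minimize} \ \sum_s p_s f_s(x_s)$ subject to the nonanticipativity coupling $x_s = \bar{x}$ reads $$x_s^{k+1} \in \mathop{\mathrm{argmin}}_{x_s \in X_s} \ f_s(x_s) + \langle w_s^k, x_s \rangle + \tfrac{\rho}{2} \| x_s - \bar{x}^k \|^2, \qquad \bar{x}^{k+1} = \sum_s p_s x_s^{k+1}, \qquad w_s^{k+1} = w_s^k + \rho \, (x_s^{k+1} - \bar{x}^{k+1}),$$ with penalty parameter $\rho > 0$; the Frank-Wolfe inner steps that give the paper its convergence guarantee are not shown here.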

Polytope conditioning and linear convergence of the Frank-Wolfe algorithm

It is known that the gradient descent algorithm converges linearly when applied to a strongly convex function with Lipschitz gradient. In this case the algorithm’s rate of convergence is determined by the condition number of the function. In a similar vein, it has been shown that a variant of the Frank-Wolfe algorithm with away steps converges …
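
A minimal sketch of Frank-Wolfe with away steps over a polytope given by its vertex list follows; the quadratic objective f(x) = 0.5*||x - b||^2, which admits an exact line search, is an illustrative assumption.

import numpy as np

def fw_away_steps(V, b, iters=300):
    """Minimize f(x) = 0.5*||x - b||^2 over conv(rows of V) with away steps."""
    V = np.asarray(V, dtype=float)
    alpha = np.zeros(len(V))                       # vertex weights of the iterate
    alpha[0] = 1.0
    x = V[0].copy()
    for _ in range(iters):
        g = x - b                                  # gradient of f at x
        s = int(np.argmin(V @ g))                  # Frank-Wolfe vertex
        active = np.flatnonzero(alpha > 1e-12)
        a = active[int(np.argmax(V[active] @ g))]  # away vertex (worst active one)
        d_fw, d_aw = V[s] - x, x - V[a]
        if g @ d_fw <= g @ d_aw:                   # Frank-Wolfe direction is steeper
            d, gamma_max, fw = d_fw, 1.0, True
        else:                                      # away direction
            d, fw = d_aw, False
            gamma_max = alpha[a] / max(1.0 - alpha[a], 1e-12)
        if d @ d < 1e-16:                          # no progress possible
            break
        gamma = float(np.clip(-(g @ d) / (d @ d), 0.0, gamma_max))
        if fw:
            alpha *= 1.0 - gamma
            alpha[s] += gamma
        else:
            alpha *= 1.0 + gamma
            alpha[a] -= gamma
        x = alpha @ V                              # recombine the iterate
    return x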

An Extended Frank-Wolfe Method with “In-Face” Directions, and its Application to Low-Rank Matrix Completion

We present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region. We present computational guarantees for the method that trade off efficiency in computing near-optimal solutions with upper bounds on the dimension of minimal faces of iterates. We apply our method to the low-rank …
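
A minimal sketch of the plain Frank-Wolfe step for matrix completion over the nuclear-norm ball of radius delta follows; the in-face refinement of the paper is not reproduced, and the full SVD is used only for brevity (in practice one computes just the top singular pair).

import numpy as np

def fw_matrix_completion(M_obs, mask, delta, iters=200):
    """Minimize 0.5*||P_Omega(X - M_obs)||_F^2 s.t. ||X||_* <= delta."""
    X = np.zeros(M_obs.shape)
    for k in range(iters):
        G = mask * (X - M_obs)                 # gradient, supported on Omega
        U, s, Vt = np.linalg.svd(G)            # only the top pair is needed
        S = -delta * np.outer(U[:, 0], Vt[0])  # rank-one extreme point (LMO)
        gamma = 2.0 / (k + 2.0)                # standard open-loop step size
        X = (1.0 - gamma) * X + gamma * S
    return X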

Noisy Euclidean distance realization: robust facial reduction and the Pareto frontier

We present two algorithms for large-scale low-rank Euclidean distance matrix completion problems, based on semidefinite optimization. Our first method works by relating cliques in the graph of the known distances to faces of the positive semidefinite cone, yielding a combinatorial procedure that is provably robust and parallelizable. Our second algorithm is a first order method …
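
As background on the object being recovered, the sketch below reconstructs points from a complete, noiseless matrix of squared interpoint distances via classical multidimensional scaling; the paper's two algorithms address the much harder incomplete and noisy case.

import numpy as np

def classical_mds(D, r):
    """D[i, j] = squared distance between points i and j; returns n x r points."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering projector
    G = -0.5 * J @ D @ J                   # Gram matrix of the centered points
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:r]       # top-r eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))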

On the von Neumann and Frank-Wolfe Algorithms with Away Steps

The von Neumann algorithm is a simple coordinate-descent algorithm to determine whether the origin belongs to a polytope generated by a finite set of points. When the origin is in the interior of the polytope, the algorithm generates a sequence of points in the polytope that converges linearly to zero. The algorithm’s rate of convergence …
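
A minimal sketch of the basic iteration follows, with the candidate points supplied as the rows of A; the iteration budget and tolerance are illustrative assumptions.

import numpy as np

def von_neumann(A, iters=10000, tol=1e-9):
    """Test whether the origin lies in the convex hull of the rows of A."""
    A = np.asarray(A, dtype=float)
    x = A[0].copy()                                        # start at one of the points
    for _ in range(iters):
        if np.linalg.norm(x) <= tol:
            return "origin (approximately) in polytope", x
        j = int(np.argmin(A @ x))                          # most improving point
        if A[j] @ x > 0:                                   # x certifies separation
            return "origin not in polytope", x
        d = A[j] - x
        lam = float(np.clip(-(x @ d) / (d @ d), 0.0, 1.0)) # exact line search
        x = x + lam * d                                    # move toward A[j]
    return "undecided within the iteration budget", x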

Conditional Gradient Sliding for Convex Optimization

In this paper, we present a new conditional gradient type method for convex optimization by using a linear optimization (LO) oracle to minimize a sequence of linear functions over the feasible set. Unlike the classic conditional gradient method, the conditional gradient sliding (CGS) algorithm developed herein can skip the computation of gradients from time …
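
A minimal sketch of the CGS skeleton follows: an accelerated outer loop whose projection subproblems are solved inexactly by an inner Frank-Wolfe loop, so a single gradient serves many inner steps. Here grad_f and lmo are user-supplied gradient and linear-optimization oracles, and the parameter schedule (built from a smoothness constant L and a squared-diameter estimate D2) loosely follows the smooth convex setting up to constants, so it should be read as an assumption.

import numpy as np

def cnd_g(g, u, beta, eta, lmo, max_inner=1000):
    """Approximately minimize <g, x> + beta/2*||x - u||^2 over the feasible
    set with Frank-Wolfe steps, stopping once the FW gap is at most eta."""
    ut = u.copy()
    for _ in range(max_inner):
        grad = g + beta * (ut - u)              # gradient of the subproblem
        v = lmo(grad)                           # linear optimization oracle
        gap = grad @ (ut - v)                   # Frank-Wolfe optimality gap
        if gap <= eta:
            break
        step = min(1.0, gap / (beta * ((ut - v) @ (ut - v))))
        ut = (1.0 - step) * ut + step * v
    return ut

def cgs(grad_f, lmo, x0, L, D2, N=100):
    """Conditional gradient sliding: one gradient of f per outer iteration."""
    x = x0.copy()
    y = x0.copy()
    for k in range(1, N + 1):
        gamma = 3.0 / (k + 2)                   # momentum weight
        beta = 3.0 * L / (k + 1)                # prox parameter
        eta = L * D2 / (N * k)                  # inner accuracy target
        z = (1.0 - gamma) * y + gamma * x
        x = cnd_g(grad_f(z), x, beta, eta, lmo) # gradients skipped inside
        y = (1.0 - gamma) * y + gamma * x
    return y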

A First-Order Algorithm for the A-Optimal Experimental Design Problem: A Mathematical Programming Approach

We develop and analyse a first-order algorithm for the A-optimal experimental design problem. The problem is first presented as a special case of a parametric family of optimal design problems for which duality results and optimality conditions are given. Then, two first-order (Frank-Wolfe type) algorithms are presented, accompanied by a detailed time-complexity analysis of the …
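
A minimal sketch of a Fedorov/Wynn-style Frank-Wolfe iteration for A-optimal design over the probability simplex follows; the open-loop step size and the assumption that the candidate matrix X has full column rank (so the information matrix is invertible) are illustrative choices, not the algorithms analysed in the paper.

import numpy as np

def a_optimal_fw(X, iters=500):
    """Frank-Wolfe weights for A-optimal design: minimize trace(M(w)^{-1})
    with M(w) = sum_i w_i x_i x_i^T over the probability simplex."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    w = np.full(m, 1.0 / m)                      # uniform starting design
    for k in range(iters):
        M = X.T @ (w[:, None] * X)               # information matrix M(w)
        Minv = np.linalg.inv(M)                  # assumes X has full column rank
        scores = np.sum((X @ Minv) ** 2, axis=1) # x_i^T M^{-2} x_i per point
        j = int(np.argmax(scores))               # steepest simplex vertex
        gamma = 2.0 / (k + 3.0)                  # simple open-loop step size
        w *= 1.0 - gamma
        w[j] += gamma
    return w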