Universal Conditional Gradient Sliding for Convex Optimization

In this paper, we present a first-order projection-free method, namely, the universal conditional gradient sliding (UCGS) method, for computing ε-approximate solutions to convex differentiable optimization problems. For objective functions with Hölder continuous gradients, we show that UCGS terminates with an ε-solution after at most O((1/ε)^(2/(1+3v))) gradient evaluations and O((1/ε)^(4/(1+3v))) linear objective optimizations, where … Read more
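The UCGS scheme itself is truncated above; as background, here is a minimal sketch of the conditional gradient (Frank-Wolfe) step that projection-free methods of this family build on, replacing projections with calls to a linear minimization oracle (LMO). The simplex feasible set, quadratic objective, and open-loop step size are illustrative assumptions, not the paper's method.

    import numpy as np

    def lmo_simplex(grad):
        # Linear minimization oracle over the unit simplex:
        # argmin_{v in simplex} <grad, v> is the vertex e_i with i = argmin grad.
        v = np.zeros_like(grad)
        v[np.argmin(grad)] = 1.0
        return v

    def frank_wolfe(grad_f, x0, n_iters=200):
        # Basic conditional gradient: one LMO call per iteration,
        # no projection onto the feasible set is ever needed.
        x = x0.copy()
        for k in range(n_iters):
            v = lmo_simplex(grad_f(x))
            gamma = 2.0 / (k + 2.0)   # standard open-loop step size
            x = (1 - gamma) * x + gamma * v
        return x

    # Example: minimize ||x - b||^2 over the simplex.
    b = np.array([0.2, 0.5, 0.9])
    x_star = frank_wolfe(lambda x: 2 * (x - b), np.ones(3) / 3)

The complexity statement in the abstract counts exactly these two resources separately: gradient evaluations (calls to grad_f) and linear objective optimizations (calls to the LMO).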

String-Averaging Methods for Best Approximation to Common Fixed Point Sets of Operators: The Finite and Infinite Cases

String-averaging is an algorithmic structure used when handling a family of operators in situations where the algorithm at hand requires employing the operators in a specific order. Sequential orderings are well known, and a simultaneous ordering means that all operators are used simultaneously (in parallel). String-averaging allows the use of strings of indices, constructed by … Read more
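As a concrete illustration of the structure, here is a minimal sketch of one string-averaging sweep, assuming the operators are projections onto halfspaces; the particular strings and weights are arbitrary illustrative choices, not anything prescribed by the paper.

    import numpy as np

    def project_halfspace(x, a, b):
        # Projection onto the halfspace {x : <a, x> <= b}.
        viol = a @ x - b
        return x if viol <= 0 else x - (viol / (a @ a)) * a

    def string_averaging_step(x, operators, strings, weights):
        # Apply the operators sequentially along each string of indices,
        # then take a convex combination of the strings' end points.
        ends = []
        for s in strings:
            y = x
            for i in s:               # sequential pass through one string
                y = operators[i](y)
            ends.append(y)
        return sum(w * e for w, e in zip(weights, ends))

    # Two halfspace projections arranged along two strings.
    ops = [lambda x: project_halfspace(x, np.array([1.0, 0.0]), 1.0),
           lambda x: project_halfspace(x, np.array([0.0, 1.0]), 1.0)]
    x = np.array([3.0, 3.0])
    for _ in range(20):
        x = string_averaging_step(x, ops, strings=[[0, 1], [1, 0]],
                                  weights=[0.5, 0.5])

The two orderings named in the abstract are the extreme cases of this scheme: a single string containing all indices recovers a sequential method, and one singleton string per operator recovers a fully simultaneous (parallel) method.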

Semi-Discrete Optimal Transport: Hardness, Regularization and Numerical Solution

Semi-discrete optimal transport problems, which evaluate the Wasserstein distance between a discrete and a generic (possibly non-discrete) probability measure, are believed to be computationally hard. Although such problems are ubiquitous in statistics, machine learning and computer vision, this perception has not yet received a theoretical justification. To fill this gap, we prove that … Read more
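The paper's own regularization scheme and hardness proof are truncated above; for orientation, here is a minimal sketch of the standard semi-discrete Kantorovich dual with stochastic supergradient ascent, assuming a squared-Euclidean cost, a Gaussian continuous marginal sampled by Monte Carlo, and illustrative atoms Y with weights nu.

    import numpy as np

    rng = np.random.default_rng(0)

    # Discrete marginal: atoms y_j with weights nu_j (illustrative choices).
    Y = rng.normal(size=(5, 2))
    nu = np.full(5, 0.2)

    def dual_ascent(n_steps=5000, lr=0.1):
        # Stochastic ascent on the semi-discrete Kantorovich dual
        #   max_g  E_{x~mu}[ min_j c(x, y_j) - g_j ] + sum_j g_j nu_j,
        # with mu a standard Gaussian sampled by Monte Carlo.
        g = np.zeros(len(nu))
        for _ in range(n_steps):
            x = rng.normal(size=2)                  # sample x ~ mu
            costs = ((Y - x) ** 2).sum(axis=1) - g  # c(x, y_j) - g_j
            j = int(np.argmin(costs))               # cell containing x
            grad = nu.copy()
            grad[j] -= 1.0                          # stochastic supergradient
            g += lr * grad
        return g

    g = dual_ascent()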

Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming

We study the problem of detecting infeasibility of large-scale linear programming problems using the primal-dual hybrid gradient method (PDHG) of Chambolle and Pock (2011). The literature on PDHG has mostly focused on settings where the problem at hand is assumed to be feasible. When the problem is not feasible, the iterates of the algorithm do … Read more
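A minimal sketch of the PDHG iteration for a standard-form LP, min c^T x subject to Ax = b, x >= 0, follows; the step sizes and the toy data are illustrative assumptions, and the paper's certificate-extraction analysis is not reproduced here. The behavior at stake is that on infeasible instances the iterates themselves diverge, while quantities such as the difference of successive iterates settle down, which is the kind of object an infeasibility-detection analysis can work with.

    import numpy as np

    def pdhg_lp(c, A, b, n_iters=10000, tau=None, sigma=None):
        # Chambolle-Pock PDHG applied to the LP saddle point
        #   min_{x >= 0} max_y  c^T x + y^T (b - A x).
        op_norm = np.linalg.norm(A, 2)
        tau = tau if tau is not None else 0.9 / op_norm
        sigma = sigma if sigma is not None else 0.9 / op_norm
        x, y = np.zeros(A.shape[1]), np.zeros(A.shape[0])
        for _ in range(n_iters):
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # primal step
            y = y + sigma * (b - A @ (2 * x_new - x))         # extrapolated dual step
            x = x_new
        return x, y

    # Feasible toy LP: min x1 + x2  s.t.  x1 + x2 = 1,  x >= 0.
    x, y = pdhg_lp(np.array([1.0, 1.0]), np.array([[1.0, 1.0]]),
                   np.array([1.0]))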

Sparse Approximations with Interior Point Methods

Large-scale optimization problems that seek sparse solutions have become ubiquitous. They are routinely solved with various specialized first-order methods. Although such methods are often fast, they usually struggle on ill-conditioned problems. In this paper, specialized variants of an interior point-proximal method of multipliers are proposed and analyzed for problems of this class. Computational experience … Read more
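For contrast, here is a minimal sketch of ISTA, the kind of specialized first-order method the abstract refers to, applied to an l1-regularized least-squares problem; the data and regularization weight are illustrative. Its step size depends on 1/||A||^2, so convergence degrades on ill-conditioned A, which is the gap the interior point-proximal variants target.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iters=500):
        # ISTA for min 0.5 * ||Ax - b||^2 + lam * ||x||_1; the step size
        # 1/L with L = ||A||^2 is what slows it down when A is ill-conditioned.
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 50))
    b = A @ (rng.normal(size=50) * (rng.random(50) < 0.1))  # sparse truth
    x_hat = ista(A, b, lam=0.1)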

An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems

In this paper, we propose a new method for a class of difference-of-convex (DC) optimization problems, whose objective is the sum of a smooth function and a possibly non-prox-friendly DC function. The method sequentially solves subproblems constructed from a quadratic approximation of the smooth function and a linear majorization of the concave part of the … Read more
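Writing the objective as $f + g_1 - g_2$ with $f$ smooth and $g_1, g_2$ convex (this decomposition and the step size $\eta_k$ are notational assumptions made here for concreteness, since the abstract is truncated), one step of such a method solves, approximately,
\[
x_{k+1} \approx \operatorname*{arg\,min}_{y}\; \langle \nabla f(x_k), y - x_k\rangle + \frac{1}{2\eta_k}\|y - x_k\|^2 + g_1(y) - \langle \xi_k,\, y - x_k\rangle, \qquad \xi_k \in \partial g_2(x_k),
\]
where the linearization of $g_2$ at $x_k$ minorizes the convex $g_2$, so subtracting it majorizes the concave part $-g_2$, giving a convex subproblem that upper-bounds the original objective.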

How do exponential size solutions arise in semidefinite programming?

Semidefinite programs (SDPs) are some of the most popular and broadly applicable optimization problems to emerge in the last thirty years. A curious pathology of SDPs, illustrated by a classical example of Khachiyan, is that their solutions may need exponential space to even write down. Exponential size solutions are the main obstacle to solving a … Read more
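The classical example alluded to (a hedged reconstruction of Khachiyan's construction; the constant 2 is chosen for illustration) chains $2 \times 2$ positive semidefinite blocks:
\[
\begin{pmatrix} 1 & x_{i-1} \\ x_{i-1} & x_i \end{pmatrix} \succeq 0 \quad (i = 2, \dots, n), \qquad x_1 \geq 2.
\]
Each block enforces $x_i \geq x_{i-1}^2$, so every feasible point satisfies $x_n \geq 2^{2^{n-1}}$, a number that takes about $2^{n-1}$ bits to write down, even though the SDP itself has size polynomial in $n$.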

Implicit Regularization of Sub-Gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers

It is well known that simple short-sighted algorithms, such as gradient descent, generalize well in over-parameterized learning tasks, due to their implicit regularization. However, it is unknown whether the implicit regularization of these algorithms can be extended to robust learning tasks, where a subset of samples may be grossly corrupted with noise. In this work, … Read more
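As a rough illustration of the setting, here is a minimal sketch of the subgradient method on an l1 recovery loss, assuming a symmetric factorization X = U U^T, Gaussian measurements, a handful of gross outliers, and illustrative step sizes; the specific over-parameterization and corruption regime studied in the paper are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    n, r, m = 10, 2, 200

    # Ground truth, Gaussian measurements, and a few grossly corrupted ones.
    U_true = rng.normal(size=(n, r))
    As = rng.normal(size=(m, n, n))
    y = np.einsum('mij,ij->m', As, U_true @ U_true.T)
    y[:10] += 100.0 * rng.normal(size=10)   # outliers

    def subgrad_step(U, lr):
        # One subgradient step on f(U) = (1/m) * sum_i |<A_i, U U^T> - y_i|.
        resid = np.einsum('mij,ij->m', As, U @ U.T) - y
        G = np.einsum('m,mij->ij', np.sign(resid), As) / m
        return U - lr * (G + G.T) @ U

    U = 0.1 * rng.normal(size=(n, r))           # small random initialization
    for k in range(2000):
        U = subgrad_step(U, lr=0.1 * 0.999 ** k)  # geometrically decaying step

The l1 loss is what makes the method robust: sign(resid) caps each outlier's influence on the subgradient, unlike the residual-weighted terms a least-squares loss would produce.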

Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods

For solving strongly convex optimization problems, we propose and study the global convergence of variants of the A-HPE and large-step A-HPE algorithms of Monteiro and Svaiter. We prove \emph{linear} and \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ global rates for the proposed variants of the A-HPE and large-step A-HPE methods, respectively. The parameter $p\geq 2$ appears in the (high-order) … Read more

On Solving Elliptic Obstacle Problems by Compact Abs-Linearization

We consider optimal control problems governed by an elliptic variational inequality of the first kind, namely the obstacle problem. The variational inequality is treated by penalization, which leads to optimization problems governed by a nonsmooth semilinear elliptic PDE. The CALi algorithm is then applied for the efficient solution of these nonsmooth optimization problems. The … Read more
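To make the penalization step concrete, here is a minimal 1D finite-difference sketch, with illustrative load f, obstacle psi, and penalty parameter gamma; the CALi algorithm itself is not reproduced. Penalizing the constraint u >= psi turns the obstacle problem into the nonsmooth semilinear equation -u'' = f + (1/gamma) * max(psi - u, 0), solved below by a semismooth Newton iteration.

    import numpy as np

    # 1D obstacle problem on (0,1):  -u'' = f,  u >= psi,  u(0) = u(1) = 0,
    # treated by penalization:  -u'' = f + (1/gamma) * max(psi - u, 0).
    N = 99
    h = 1.0 / (N + 1)
    L = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h ** 2   # discrete -d^2/dx^2
    f = -50.0 * np.ones(N)       # load pushing the membrane down
    psi = -0.1 * np.ones(N)      # constant obstacle from below
    gamma = 1e-6                 # penalty parameter

    # Semismooth Newton on F(u) = L u - f - (1/gamma) * max(psi - u, 0).
    u = np.zeros(N)
    for _ in range(50):
        active = (psi - u > 0).astype(float)   # where the penalty is "on"
        F = L @ u - f - (1.0 / gamma) * np.maximum(psi - u, 0.0)
        J = L + np.diag(active / gamma)        # generalized Jacobian
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du) < 1e-10:
            break

The max term is exactly the nonsmoothness the abstract refers to; the generalized Jacobian handles it by switching the penalty's contribution on and off element-wise across the active set.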