Regularized Nonlinear Acceleration

We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the …
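
A minimal numpy sketch of the two ingredients the abstract names: a nonlinear average of the iterates, with weights obtained from a small regularized linear system. The function name, the regularization value, and the toy quadratic are illustrative choices, not the paper's.

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-6):
    """Combine iterates xs[0..k] into one estimate of the optimum.

    The weights solve a regularized least-squares problem over the
    residuals r_i = x_{i+1} - x_i and are normalized to sum to one;
    this is the 'simple linear system' mentioned in the abstract.
    """
    X = np.asarray(xs)                      # (k+1, d): iterates as rows
    R = np.diff(X, axis=0)                  # residuals r_i = x_{i+1} - x_i
    RR = R @ R.T
    RR /= np.linalg.norm(RR)                # make lam scale-free
    k = RR.shape[0]
    z = np.linalg.solve(RR + lam * np.eye(k), np.ones(k))
    c = z / z.sum()                         # weights summing to one
    return c @ X[1:]                        # nonlinear average of iterates

# Toy usage: accelerate plain gradient descent on f(x) = 0.5*||Ax||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
L = np.linalg.norm(A, 2) ** 2               # gradient Lipschitz constant
x = np.ones(10); xs = [x.copy()]
for _ in range(12):
    x = x - (A.T @ (A @ x)) / L
    xs.append(x.copy())
print(np.linalg.norm(xs[-1]), np.linalg.norm(rna_extrapolate(xs)))
```

The extrapolation only reads the iterates, so it can run alongside any method without modifying it.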

Sharpness, Restart and Acceleration

The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Here, we show that sharpness directly controls the performance of restart schemes. The constants quantifying sharpness are of course unobservable, but we show that optimal restart strategies are fairly robust, and searching for the best scheme only increases …
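
As a concrete rendering of the robustness claim and the mild cost of searching, here is a generic fixed-period restart wrapper plus a logarithmic grid search; the interface `method(x, n)` (run n steps from x) and all names here are assumptions for illustration.

```python
import numpy as np

def restart(method, x0, period, total_iters):
    """Run `method` for `period` steps, restart from its output, repeat."""
    x = x0
    for _ in range(total_iters // period):
        x = method(x, period)
    return x

def grid_search_restart(method, x0, f, total_iters):
    """The sharpness constants are unobservable, so try a logarithmic
    grid of restart periods and keep the best point: the total cost
    grows only by the size of the grid (a log factor)."""
    best = None
    for period in (2 ** j for j in range(1, int(np.log2(total_iters)))):
        x = restart(method, x0, period, total_iters)
        if best is None or f(x) < f(best):
            best = x
    return best

# Toy usage: `method` is n steps of gradient descent on f(x) = 0.5*||x||^2.
f = lambda x: 0.5 * float(x @ x)
gd = lambda x, n: x * (0.9 ** n)            # closed form for this toy f
print(f(grid_search_restart(gd, np.ones(5), f, 64)))
```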

Integration Methods and Accelerated Optimization Algorithms

We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation. In comparison with recent advances in this vein, the differential equation considered here is the basic gradient flow and we show that multi-step schemes allow integration of this differential equation …
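
To make the correspondence concrete (under our own choice of coefficients, not the paper's), here is explicit Euler on the gradient flow $x'(t) = -\nabla f(x(t))$, which is exactly gradient descent, next to a consistent explicit two-step scheme that reproduces a heavy-ball-style momentum recursion:

```python
def euler(grad, x0, h, iters):
    """Explicit Euler on x' = -grad f(x): gradient descent with step h."""
    x = x0
    for _ in range(iters):
        x = x - h * grad(x)
    return x

def two_step(grad, x0, h, beta, iters):
    """A consistent explicit two-step integrator for the same ODE:
        x_{k+2} = (1+beta) x_{k+1} - beta x_k - h (1-beta) grad(x_{k+1}),
    i.e. gradient descent plus momentum; beta = 0 recovers Euler.
    (Consistency: rho(z) = z^2 - (1+beta) z + beta has rho(1) = 0 and
    rho'(1) = 1 - beta = sigma(1).)"""
    x_prev, x = x0, x0 - h * grad(x0)       # bootstrap with one Euler step
    for _ in range(iters - 1):
        x_prev, x = x, (1 + beta) * x - beta * x_prev - h * (1 - beta) * grad(x)
    return x
```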

A simplicial decomposition framework for large scale convex quadratic programming

In this paper, we analyze in depth a simplicial decomposition-like algorithmic framework for large scale convex quadratic programming. In particular, we first propose two tailored strategies for handling the master problem. Then, we describe a few techniques for speeding up the solution of the pricing problem. We report extensive numerical experiments on both real …
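
A self-contained sketch of the master/pricing loop for min 0.5 x'Qx + c'x over a compact polytope {x : Ax <= b}; the generic solvers used here (scipy's SLSQP for the master, linprog for the pricing LP) stand in for the tailored strategies the paper develops.

```python
import numpy as np
from scipy.optimize import linprog, minimize

def simplicial_decomposition(Q, c, A, b, x0, tol=1e-6, max_iter=50):
    """Inner approximation by the convex hull of generated vertices."""
    f = lambda x: 0.5 * x @ Q @ x + c @ x
    grad = lambda x: Q @ x + c
    V, x = [x0], x0
    for _ in range(max_iter):
        # Pricing: minimize the linearized objective over the polytope.
        y = linprog(grad(x), A_ub=A, b_ub=b,
                    bounds=[(None, None)] * len(x)).x
        if grad(x) @ (y - x) >= -tol:        # no improving vertex: stop
            break
        V.append(y)
        # Master: minimize f over convex combinations of the vertices.
        W = np.array(V).T                    # d x m matrix of vertices
        m = W.shape[1]
        w = minimize(lambda w: f(W @ w), np.ones(m) / m,
                     bounds=[(0, 1)] * m, method='SLSQP',
                     constraints=[{'type': 'eq',
                                   'fun': lambda w: w.sum() - 1}]).x
        x = W @ w
    return x

# Toy usage on the box [0,1]^2 (compact, so the pricing LP is bounded).
Q = 2.0 * np.eye(2); c = np.array([-2.0, -1.0])
A = np.vstack([np.eye(2), -np.eye(2)]); b = np.array([1.0, 1.0, 0.0, 0.0])
print(simplicial_decomposition(Q, c, A, b, x0=np.zeros(2)))  # ~ (1, 0.5)
```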

Oracle Complexity of Second-Order Methods for Smooth Convex Optimization

Second-order methods, which utilize gradients as well as Hessians to optimize a given function, are of major importance in mathematical optimization. In this work, we study the oracle complexity of such methods, or equivalently, the number of iterations required to optimize a function to a given accuracy. Focusing on smooth and convex functions, we derive …
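
The oracle model in question charges one unit per query of a (gradient, Hessian) pair; a bare Newton iteration instrumented with such a counter makes the quantity being bounded concrete. A sketch with our own toy function, with no line search or regularization:

```python
import numpy as np

def newton_oracle_count(grad, hess, x0, tol=1e-10, max_iter=100):
    """Newton's method; `calls` counts second-order oracle queries."""
    x, calls = x0, 0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)              # one oracle query
        calls += 1
        if np.linalg.norm(g) <= tol:
            break
        x = x - np.linalg.solve(H, g)        # Newton step
    return x, calls

# Toy usage: f(x) = sum(x_i^4) + ||x||^2, smooth and strictly convex.
g = lambda x: 4 * x ** 3 + 2 * x
H = lambda x: np.diag(12 * x ** 2 + 2)
x, calls = newton_oracle_count(g, H, np.full(3, 2.0))
print(np.round(x, 6), calls)
```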

Bad semidefinite programs with short proofs, and the closedness of the linear image of the semidefinite cone

Semidefinite programs (SDPs) — some of the most useful and pervasive optimization problems of the last few decades — often behave pathologically: the optimal values of the primal and dual problems may differ and may not be attained. Such SDPs are theoretically interesting and often impossible to solve. Yet, the pathological SDPs in the literature …
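
A classic three-by-three instance of the pathology (in the style of Vandenberghe and Boyd's survey examples; included here for orientation, it is not taken from the paper):

\[
\begin{array}{ll}
\mbox{minimize} & x_1 \\
\mbox{subject to} & \begin{pmatrix} 0 & x_1 & 0 \\ x_1 & x_2 & 0 \\ 0 & 0 & x_1 + 1 \end{pmatrix} \succeq 0.
\end{array}
\]

Positive semidefiniteness of the top-left $2 \times 2$ block forces $x_1 = 0$, so the primal optimal value is $0$, while a short computation shows the dual optimal value is $-1$: a finite duality gap on a tiny instance, certified by a few lines of algebra.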

An Investigation of Newton-Sketch and Subsampled Newton Methods

Sketching, a dimensionality reduction technique, has received much attention in the statistics community. In this paper, we study sketching in the context of Newton’s method for solving finite-sum optimization problems in which the number of variables and the number of data points are both large. We study two forms of sketching that perform dimensionality reduction in data space: …
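
One such flavor, Hessian subsampling, in a minimal numpy sketch on a finite-sum least-squares instance: the exact gradient is paired with a Hessian built from a random row subsample. All names and constants here are illustrative.

```python
import numpy as np

def subsampled_newton(A, b, x0, sample_size, steps, rng):
    """min (1/n) sum_i 0.5*(a_i'x - b_i)^2 with a subsampled Hessian."""
    n, d = A.shape
    x = x0
    for _ in range(steps):
        g = A.T @ (A @ x - b) / n                   # exact gradient
        S = rng.choice(n, size=sample_size, replace=False)
        H = A[S].T @ A[S] / sample_size             # subsampled Hessian
        x = x - np.linalg.solve(H + 1e-8 * np.eye(d), g)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 10))
b = A @ np.arange(10.0) + 0.01 * rng.standard_normal(2000)
print(np.round(subsampled_newton(A, b, np.zeros(10), 100, 10, rng), 2))
```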

The Many Faces of Degeneracy in Conic Optimization

Slater’s condition — existence of a “strictly feasible solution” — is a common assumption in conic optimization. Without strict feasibility, first-order optimality conditions may be meaningless, the dual problem may yield little information about the primal, and small changes in the data may render the problem infeasible. Hence, failure of strict feasibility can negatively impact …
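
A minimal illustration of this fragility (our example, not the paper's):

\[
\mathcal{F} \;=\; \{\, X \succeq 0 \;:\; X_{11} = 0 \,\}.
\]

No $X \in \mathcal{F}$ is positive definite, so Slater's condition fails: $X_{11} = 0$ together with $X \succeq 0$ forces the entire first row and column of $X$ to vanish. Moreover, perturbing the constraint to $X_{11} = -\varepsilon$ makes the problem infeasible for every $\varepsilon > 0$, exactly the sensitivity to small data changes described above.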

On the pointwise iteration-complexity of a dynamic regularized ADMM with over-relaxation stepsize

In this paper, we extend the improved pointwise iteration-complexity result of a dynamic regularized alternating direction method of multipliers (ADMM) to a new stepsize domain. In this complexity analysis, the stepsize parameter can even be chosen in the interval $(0,2)$ instead of the interval $(0,(1+\sqrt{5})/2)$. As usual, our analysis is established by interpreting this ADMM variant …
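
For orientation, the stepsize in question is the factor $\theta$ in the multiplier update of the standard ADMM for $\min f(x) + g(y)$ subject to $Ax + By = b$ with penalty $\beta > 0$, written here without the paper's dynamic regularization term:

\[
\begin{aligned}
x^{k+1} &\in \arg\min_x \; f(x) + \tfrac{\beta}{2}\,\big\|Ax + By^k - b + \lambda^k/\beta\big\|^2,\\
y^{k+1} &\in \arg\min_y \; g(y) + \tfrac{\beta}{2}\,\big\|Ax^{k+1} + By - b + \lambda^k/\beta\big\|^2,\\
\lambda^{k+1} &= \lambda^k + \theta\beta\,\big(Ax^{k+1} + By^{k+1} - b\big).
\end{aligned}
\]

Classical analyses (going back to Fortin and Glowinski) cover $\theta \in (0,(1+\sqrt{5})/2)$; the dynamic regularization is what lets the analysis above reach the full range $(0,2)$.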

ADMM for monotone operators: convergence analysis and rates

In this paper we propose a unifying scheme for several algorithms from the literature for solving monotone inclusion problems involving compositions with linear continuous operators in infinite-dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions, and also the classical ADMM numerical scheme for convex optimization problems, …
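
As one concrete member of the family covered by such a unifying scheme, here is the classical ADMM specialized to the lasso problem $\min_x \tfrac12\|Ax-b\|^2 + \mu\|x\|_1$ via the splitting $x - z = 0$; a standard textbook instance sketched in numpy, with illustrative parameter values.

```python
import numpy as np

def admm_lasso(A, b, mu, rho=1.0, iters=200):
    """Scaled-form ADMM: x-update (ridge solve), z-update (soft
    threshold), then a dual ascent step on the scaled multiplier u."""
    n, d = A.shape
    x = z = u = np.zeros(d)
    # Factor once: every x-update solves (A'A + rho I) x = A'b + rho(z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(d))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - mu / rho, 0.0)
        u = u + x - z                        # scaled dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.1 * rng.standard_normal(100)
print(np.round(admm_lasso(A, b, mu=5.0), 2)[:5])
```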