Solving Conic Systems via Projection and Rescaling

We propose a simple {\em projection and rescaling algorithm} to solve the feasibility problem \[ \text{ find } x \in L \cap \Omega, \] where $L$ and $\Omega$ are respectively a linear subspace and the interior of a symmetric cone in a finite-dimensional vector space $V$. This projection and rescaling algorithm is inspired by previous work … Read more
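To make the template concrete, here is a schematic toy sketch (ours, not the authors' exact method) for the special case $\Omega = \text{int}(\mathbb{R}^n_+)$ with $L$ given as the range of a matrix $B$: a perceptron-style basic phase on the projected problem, followed by a crude coordinate-doubling rescaling when the basic phase stalls. The function names and the rescaling rule are illustrative assumptions; the paper's step rules and complexity safeguards differ.

import numpy as np

def projector(M):
    """Orthogonal projector onto range(M)."""
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

def find_interior_point(B, max_outer=30, max_basic=1000):
    """Toy projection-and-rescaling scheme for: find x in L = range(B), x > 0.

    Basic phase: perceptron-style updates with columns of the projector P,
    exploiting that for u in range(P), (P e_i)' u = u_i.
    Rescaling phase: double the coordinate that blocked the basic phase.
    """
    n = B.shape[0]
    d = np.ones(n)                          # accumulated diagonal rescaling
    for _ in range(max_outer):
        P = projector(d[:, None] * B)       # projector onto the rescaled subspace D L
        u = P @ (np.ones(n) / n)            # start inside the rescaled subspace
        for _ in range(max_basic):
            i = int(np.argmin(u))
            if u[i] > 1e-12:                # u lies in D L and is positive: success
                return u / d                # undo rescaling; x in L with x > 0
            u = u + P[:, i]                 # perceptron step on the violated coordinate
        d[int(np.argmin(u))] *= 2.0         # crude rescaling heuristic (ours)
    return None                             # gave up: no interior point located

# Example: L spanned by two vectors admitting a strictly positive combination.
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(find_interior_point(B))  # a strictly positive vector in range(B)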

Perturbation of error bounds

Our aim in the current article is to extend the developments in Kruger, Ngai & Théra, SIAM J. Optim. 20(6), 3280–3296 (2010) and, more precisely, to characterize, in the Banach space setting, the stability under data perturbations of the local and global error bound property of inequalities determined by proper lower semicontinuous functions. We propose new … Read more
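For orientation, the property in question can be stated as follows (standard form; notation ours). For a proper lower semicontinuous function $f$ on a Banach space $X$ with $S := \{x \in X : f(x) \le 0\}$, a global error bound with modulus $c > 0$ holds when \[ d(x, S) \le c\, [f(x)]_+ \quad \text{for all } x \in X, \] where $[t]_+ := \max(t, 0)$; the local version requires this inequality only for $x$ near a reference point $\bar x \in S$. The article studies when this property survives perturbations of $f$.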

SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm

We introduce the Stochastic Monotone Aggregated Root-Finding (SMART) algorithm, a new randomized operator-splitting scheme for finding roots of finite sums of operators. The algorithm is similar to the growing class of incremental aggregated gradient algorithms, which minimize finite sums of functions; the difference is that we replace gradients of functions with black-boxes called operators, which … Read more
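As a hedged illustration of the aggregated idea (our own minimal SAGA-flavored sketch; the paper's scheme is more general, with its own sampling and step-size rules), consider finding a root of $S(x) = \frac{1}{n}\sum_i S_i(x)$ by storing the latest evaluation of each operator:

import numpy as np

def aggregated_root(ops, x0, step=0.1, iters=5000, seed=0):
    """SAGA-flavored stochastic aggregated root-finding for S(x) = mean_i S_i(x).

    Keeps a table of the most recent evaluation of each operator and steps
    against a variance-reduced combination of fresh and stored values.
    """
    rng = np.random.default_rng(seed)
    n = len(ops)
    x = np.asarray(x0, dtype=float).copy()
    table = np.array([op(x) for op in ops])      # stored operator values
    avg = table.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        fresh = ops[i](x)
        x = x - step * (fresh - table[i] + avg)  # variance-reduced update
        avg = avg + (fresh - table[i]) / n       # maintain the running average
        table[i] = fresh
    return x

# Example: S_i(x) = x - b_i, so the root of the mean is x* = mean(b_i).
b = np.array([1.0, 2.0, 6.0])
ops = [lambda x, bi=bi: x - bi for bi in b]
print(aggregated_root(ops, x0=np.zeros(1)))     # ~ [3.0]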

New Douglas-Rachford algorithmic structures and their convergence analyses

In this paper we study new algorithmic structures with Douglas-Rachford (DR) operators to solve convex feasibility problems. We propose to embed the basic two-set-DR algorithmic operator into the String-Averaging Projections (SAP) and into the Block-Iterative Projection (BIP) algorithmic structures, thereby creating new DR algorithmic schemes that include the recently proposed cyclic Douglas-Rachford … Read more
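A minimal sketch of the ingredients, assuming each set is given through its projector: the two-set DR operator $T = \frac{1}{2}(I + R_B R_A)$, and a simple averaged combination of such operators in the spirit of SAP. The combination rule and names below are our illustrative choices, not the paper's full family of structures.

import numpy as np

def reflect(P, x):
    """Reflection R = 2P - I for a projector P onto a convex set."""
    return 2.0 * P(x) - x

def dr_operator(PA, PB, x):
    """Two-set Douglas-Rachford operator T = (I + R_B R_A) / 2."""
    return 0.5 * (x + reflect(PB, reflect(PA, x)))

def averaged_dr(pairs, x, sweeps=200):
    """Average the two-set DR operators over all pairs (a simple SAP-style instance)."""
    for _ in range(sweeps):
        x = np.mean([dr_operator(PA, PB, x) for PA, PB in pairs], axis=0)
    return x

# Example: three halfspaces in R^2, combining the DR operators of two pairs.
def halfspace_proj(a, beta):
    a = np.asarray(a, float)
    def P(x):
        return x - max(a @ x - beta, 0.0) * a / (a @ a)
    return P

H1 = halfspace_proj([1, 0], 1.0)     # x1 <= 1
H2 = halfspace_proj([0, 1], 1.0)     # x2 <= 1
H3 = halfspace_proj([-1, -1], 0.0)   # x1 + x2 >= 0
x = averaged_dr([(H1, H2), (H2, H3)], np.array([5.0, -3.0]))
print(x)  # a common fixed point of the two DR operators

For two sets, projecting a fixed point of $T$ onto the first set yields a point of the intersection; the SAP/BIP embeddings generalize which operators act in each step and how their outputs are combined.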

Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the $\ell_1$-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first … Read more
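The per-iteration details are truncated above; for grounding, here is the textbook CG iteration for the strongly convex smooth case (minimize $\frac{1}{2}x^TQx + c^Tx$, i.e., solve $Qx = -c$), which the GCG methods build on. The names are ours.

import numpy as np

def conjugate_gradient(Q, c, tol=1e-10, max_iter=None):
    """Textbook CG for min 0.5 x'Qx + c'x with Q symmetric positive definite,
    i.e. for the linear system Q x = -c; finite termination in exact arithmetic."""
    n = len(c)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = -c - Q @ x                      # residual = negative gradient
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Qp = Q @ p
        alpha = rs / (p @ Qp)           # exact line search along p
        x += alpha * p
        r -= alpha * Qp
        rs_new = r @ r
        p = r + (rs_new / rs) * p       # Q-conjugate search direction
        rs = rs_new
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(conjugate_gradient(Q, c))         # solves Q x = -c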

Sparse Recovery via Partial Regularization: Models, Theory and Algorithms

In the context of sparse recovery, it is known that most existing regularizers, such as $\ell_1$, suffer from a bias incurred by the leading entries (in magnitude) of the associated vector. To neutralize this bias, we propose a class of models with partial regularizers for recovering a sparse solution of a linear system. We … Read more
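One natural instance of a partial regularizer (our reading of the idea; the paper studies a broader class) penalizes only the $n - r$ smallest magnitudes, leaving the $r$ leading entries unshrunk:

import numpy as np

def partial_l1(x, r):
    """Partial l1 regularizer: sum of all but the r largest magnitudes.

    The r leading entries (in magnitude) are unpenalized, so large entries
    are not shrunk by the penalty -- the bias-reduction idea."""
    mags = np.sort(np.abs(x))              # ascending magnitudes
    return mags[:max(len(x) - r, 0)].sum()

x = np.array([5.0, -3.0, 0.1, 0.02, 0.0])
print(partial_l1(x, r=2))   # 0.12: only the small entries are penalized
print(np.abs(x).sum())      # 8.12: full l1 also penalizes the leading entries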

Acceleration of the PDHGM on strongly convex subspaces

We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates, $O(1/N^2)$ with respect to initialisation and $O(1/N)$ with respect to the dual sequence, and the residual part of the primal … Read more
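For context, here is a sketch of the unaccelerated Chambolle-Pock iteration that the paper starts from, on an illustrative toy instance of our choosing ($f = \frac{1}{2}\|x - b\|^2$, $g = \lambda\|\cdot\|_1$, $K$ a difference matrix); the accelerated variants adapt the step parameters on the strongly convex subspaces.

import numpy as np

def pdhg(K, b, lam, tau, sigma, iters=2000, theta=1.0):
    """Basic (unaccelerated) Chambolle-Pock PDHG for
        min_x 0.5 * ||x - b||^2 + lam * ||K x||_1."""
    m, n = K.shape
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(iters):
        # dual step: prox of sigma * g^* is projection onto the box [-lam, lam]
        y = np.clip(y + sigma * (K @ x_bar), -lam, lam)
        # primal step: prox of tau * f for f = 0.5 ||. - b||^2
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)
        # extrapolation step
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# 1D total-variation-like example: K = forward differences.
n = 50
K = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # (n-1) x n difference matrix
rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(25), np.ones(25)]) + 0.05 * rng.standard_normal(n)
L = 2.0                                        # ||K|| <= 2 for differences
x = pdhg(K, b, lam=0.5, tau=0.5 / L, sigma=0.5 / L)
print(x[:5], x[-5:])                           # roughly piecewise constant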

Random Multi-Constraint Projection: Stochastic Gradient Methods for Convex Optimization with Many Constraints

Consider convex optimization problems subject to a large number of constraints. We focus on stochastic problems in which the objective takes the form of expected values and the feasible set is the intersection of a large number of convex sets. We propose a class of algorithms that perform both stochastic gradient descent and random feasibility … Read more
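A minimal sketch of the alternation, assuming projections onto the individual sets are cheap; the sampling scheme and the diminishing step size below are our illustrative choices rather than the paper's analyzed variants.

import numpy as np

def sgd_random_projection(grad, projections, x0, iters=5000, seed=0):
    """Alternate a stochastic gradient step with a projection onto one
    randomly chosen constraint set (feasible set = intersection of all)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    for k in range(1, iters + 1):
        step = 1.0 / np.sqrt(k)                 # diminishing step size
        x = x - step * grad(x, rng)             # stochastic gradient step
        P = projections[rng.integers(len(projections))]
        x = P(x)                                # random feasibility update
    return x

# Example: minimize E||x - xi||^2 with xi ~ N(c, I), subject to a box and a
# halfspace (two of potentially many constraint sets).
c = np.array([2.0, 2.0])
grad = lambda x, rng: 2.0 * (x - (c + rng.standard_normal(2)))   # unbiased
proj_box = lambda x: np.clip(x, -1.0, 1.0)
proj_half = lambda x: x - max(x.sum() - 1.0, 0.0) * np.ones(2) / 2.0  # x1+x2 <= 1
print(sgd_random_projection(grad, [proj_box, proj_half], np.zeros(2)))
# approaches [0.5, 0.5], the constrained minimizer for this objective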

First and second order optimality conditions for piecewise smooth objective functions

Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first and second order derivative vectors and matrices that represent a … Read more
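For instance (notation simplified), the piecewise linear function $f(x) = \max(x_1, x_2)$ is captured by a single switching variable: \[ z = x_1 - x_2, \qquad f(x) = \tfrac{1}{2}\left(x_1 + x_2 + |z|\right). \] In general, an evaluation procedure yields a system $z = F(x, |z|)$, $y = f(x, |z|)$ in which each $z_i$ depends only on $|z_1|, \dots, |z_{i-1}|$, and all nonsmoothness is concentrated in the absolute values.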

An Extended Frank-Wolfe Method with “In-Face” Directions, and its Application to Low-Rank Matrix Completion

We present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region. We present computational guarantees for the method that trade off efficiency in computing near-optimal solutions with upper bounds on the dimension of minimal faces of iterates. We apply our method to the low-rank … Read more
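For orientation, here is a sketch of the plain Frank-Wolfe baseline on the nuclear-norm-ball formulation of matrix completion (a standard setup; the toy data and names are ours). The paper's extension additionally takes "in-face" steps that move within the current face of the ball to keep iterates low-rank.

import numpy as np

def frank_wolfe_matcomp(M_obs, mask, delta, iters=200):
    """Plain Frank-Wolfe for min 0.5 * ||mask * (X - M_obs)||_F^2
    subject to ||X||_* <= delta (nuclear norm ball)."""
    X = np.zeros_like(M_obs)
    for k in range(iters):
        G = mask * (X - M_obs)                   # gradient of the loss
        U, s, Vt = np.linalg.svd(G)              # LMO: top singular pair of G
        S = -delta * np.outer(U[:, 0], Vt[0])    # minimizing vertex of the ball
        gamma = 2.0 / (k + 2.0)                  # standard FW step size
        X = (1 - gamma) * X + gamma * S          # convex-combination update
    return X

# Toy completion problem: rank-1 ground truth, about half the entries observed.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))
mask = (rng.random((8, 8)) < 0.5).astype(float)
X = frank_wolfe_matcomp(mask * M, mask, delta=np.linalg.norm(M, 'nuc'))
print(np.linalg.norm(mask * (X - M)) / np.linalg.norm(mask * M))
# masked fit error, driven toward zero as iterations grow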