Convergence Analysis of Processes with Valiant Projection Operators in Hilbert Space

Convex feasibility problems require finding a point in the intersection of a finite family of convex sets. We propose to solve such problems by performing set enlargements and applying a new kind of projection operator, called a valiant projector. A valiant projector onto a convex set implements a special relaxation strategy, proposed by Goffin in 1971, … Read more
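
The valiant projector itself is defined in the paper; as background, the classical relaxed projection underlying Goffin-type relaxation strategies is $P_{C,\lambda}(x) = x + \lambda\,(P_C(x) - x)$ with $\lambda \in (0,2)$. A minimal Python sketch (the ball projection and the fixed $\lambda$ here are illustrative assumptions, not the paper's strategy):

    import numpy as np

    def project_ball(x, center, radius):
        # Exact Euclidean projection onto the ball {y : ||y - center|| <= radius}.
        d = x - center
        n = np.linalg.norm(d)
        return x if n <= radius else center + radius * d / n

    def relaxed_projection(x, project, lam=1.5):
        # Relaxed projection x + lam * (P_C(x) - x), lam in (0, 2).
        # lam = 1 recovers the exact projection; lam > 1 over-relaxes.
        return x + lam * (project(x) - x)

    x = np.array([3.0, 0.0])
    p = relaxed_projection(x, lambda v: project_ball(v, np.zeros(2), 1.0))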

Proximal-Proximal-Gradient Method

In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method and ADMM and is applicable to minimization problems written as a sum of many differentiable and many non-differentiable convex functions. The non-differentiable functions can be coupled. We furthermore … Read more
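
For orientation, the classical proximal-gradient step that PPG generalizes (a standard fact, not the PPG update itself) solves $\min_x f(x) + g(x)$, with $f$ differentiable, via

\[ x^{k+1} = \operatorname{prox}_{\alpha g}\big(x^k - \alpha \nabla f(x^k)\big), \qquad \operatorname{prox}_{\alpha g}(v) = \operatorname*{arg\,min}_x \Big\{ g(x) + \tfrac{1}{2\alpha}\|x - v\|^2 \Big\}. \]

Per the abstract, PPG extends this setting to sums of many differentiable and many non-differentiable terms.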

Finding a best approximation pair of points for two polyhedra

Given two disjoint convex polyhedra, we look for a best approximation pair relative to them, i.e., a pair of points, one in each polyhedron, attaining the minimum distance between the sets. Cheney and Goldstein showed that alternating projections onto the two sets, starting from an arbitrary point, generate a sequence whose two interlaced subsequences converge … Read more
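
A minimal sketch of the Cheney-Goldstein alternating projection scheme, assuming projection oracles for the two sets are available (the half-space projections below are illustrative stand-ins for polyhedral projections):

    import numpy as np

    def project_halfspace(x, a, b):
        # Euclidean projection onto {y : a . y <= b}.
        viol = a @ x - b
        return x if viol <= 0 else x - viol * a / (a @ a)

    def alternating_projections(x0, proj_A, proj_B, iters=200):
        # Alternate x -> P_A(x) -> P_B(P_A(x)); for disjoint closed convex
        # sets the interlaced iterates approach a best approximation pair.
        x = x0
        for _ in range(iters):
            a = proj_A(x)
            x = proj_B(a)
        return a, x  # approximate best approximation pair (a in A, x in B)

    # Two disjoint half-planes: {y1 <= -1} and {y1 >= 1}.
    a_pt, b_pt = alternating_projections(
        np.array([0.3, 2.0]),
        lambda v: project_halfspace(v, np.array([1.0, 0.0]), -1.0),
        lambda v: project_halfspace(v, np.array([-1.0, 0.0]), -1.0),
    )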

Implementing the ADMM to Big Datasets: A Case Study of LASSO

The alternating direction method of multipliers (ADMM) has been widely used for a broad range of applications in the literature. When big datasets with high-dimensional variables are considered, the subproblems arising from the ADMM must be solved inexactly, even though theoretically they may have closed-form solutions. Such a scenario immediately poses mathematical ambiguities, such as how … Read more
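
As a reference point, a minimal exact-subproblem scaled-form ADMM for the LASSO $\min_x \tfrac12\|Ax - b\|^2 + \mu\|x\|_1$ might look as follows; the closed-form $x$-update below is exactly what becomes prohibitive on big, high-dimensional datasets:

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_lasso(A, b, mu, rho=1.0, iters=100):
        m, n = A.shape
        x = np.zeros(n)
        z = np.zeros(n)
        u = np.zeros(n)
        # Factor once; for large n this inversion is the expensive step.
        Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
        Atb = A.T @ b
        for _ in range(iters):
            x = Q @ (Atb + rho * (z - u))        # x-update: ridge-type solve
            z = soft_threshold(x + u, mu / rho)  # z-update: prox of mu*||.||_1
            u = u + x - z                        # scaled dual update
        return z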

Randomized Similar Triangles Method: A Unifying Framework for Accelerated Randomized Optimization Methods (Coordinate Descent, Directional Search, Derivative-Free Method)

In this paper, we consider smooth convex optimization problems with simple constraints and inexact oracle information, such as the value, partial derivatives, or directional derivatives of the objective function. We introduce a unifying framework that allows us to construct different types of accelerated randomized methods for such problems and to prove convergence rate theorems for them. … Read more
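
To make the randomized-oracle setting concrete, here is a plain (non-accelerated) randomized coordinate descent sketch for a smooth quadratic; the paper's framework would accelerate schemes of this kind and tolerate inexact partial derivatives:

    import numpy as np

    def rcd_quadratic(A, b, iters=5000, rng=np.random.default_rng(0)):
        # Minimize f(x) = 0.5 x'Ax - b'x (A symmetric positive definite)
        # by updating one random coordinate per iteration with step 1/L_i,
        # where L_i = A[i, i] is the coordinate Lipschitz constant.
        n = len(b)
        x = np.zeros(n)
        for _ in range(iters):
            i = rng.integers(n)
            g_i = A[i] @ x - b[i]   # i-th partial derivative
            x[i] -= g_i / A[i, i]   # exact coordinate minimization here
        return x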

Convergence of first-order methods via the convex conjugate

This paper gives a unified and succinct approach to the $O(1/\sqrt{k})$, $O(1/k)$, and $O(1/k^2)$ convergence rates of the subgradient, gradient, and accelerated gradient methods for unconstrained convex minimization. In all three cases, the proof of convergence follows from a generic bound defined by the convex conjugate of the objective function. Citation: Working Paper, Carnegie Mellon University. … Read more
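
For concreteness, the classical instances of these three rates (standard bounds, stated here independently of the paper's conjugate-based derivation) are, with $R = \|x_0 - x^*\|$ and $L$ the gradient's Lipschitz constant:

\[ f(x_k) - f^* \le \frac{L R^2}{2k} \ \text{(gradient, step } 1/L\text{)}, \qquad f(x_k) - f^* \le \frac{2 L R^2}{(k+1)^2} \ \text{(accelerated)}, \]

while the subgradient method with $\|g\| \le G$ and step $R/(G\sqrt{k})$ achieves $\min_{i \le k} f(x_i) - f^* \le G R / \sqrt{k}$.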

A Robust Multi-Batch L-BFGS Method for Machine Learning

This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which … Read more
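
For context, the standard L-BFGS two-loop recursion that any such implementation builds on (a textbook sketch; the paper's contribution concerns keeping the $(s, y)$ curvature pairs consistent across changing batches and faulty nodes):

    import numpy as np

    def lbfgs_direction(g, s_hist, y_hist):
        # Two-loop recursion: returns -H_k g, where H_k is the implicit
        # L-BFGS inverse-Hessian built from curvature pairs (s_i, y_i).
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_hist), reversed(y_hist)):
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((rho, a))
        if s_hist:  # initial scaling gamma = s'y / y'y
            s, y = s_hist[-1], y_hist[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), (rho, a) in zip(zip(s_hist, y_hist), reversed(alphas)):
            beta = rho * (y @ q)
            q += (a - beta) * s
        return -q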

Improved proximal ADMM with partially parallel splitting for multi-block separable convex programming

For a type of multi-block separable convex programming arising in machine learning and statistical inference, we propose a proximal alternating direction method of multipliers (PADMM) with partially parallel splitting, which has the following nice properties: (1) to alleviate the weight of the proximal terms, the restrictions imposed on the proximal parameters are substantially relaxed; (2) … Read more
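
In generic proximal-ADMM notation (a standard device, not necessarily the paper's exact scheme), each block subproblem of the augmented Lagrangian $\mathcal{L}_\beta$ is regularized by a proximal term,

\[ x^{k+1} = \operatorname*{arg\,min}_x \; \mathcal{L}_\beta(x, y^k, \lambda^k) + \tfrac{1}{2}\|x - x^k\|_P^2, \qquad P \succeq 0, \]

and relaxing the restrictions on the proximal parameters amounts, roughly, to allowing a smaller and hence cheaper $P$.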

Iteration complexity on the Generalized Peaceman-Rachford splitting method for separable convex programming

Recently, a generalized version of the Peaceman-Rachford splitting method (GPRSM) for solving a convex minimization model with a general separable structure was proposed by Sun et al., and its global convergence was proved. In this paper, we further study theoretical aspects of the generalized Peaceman-Rachford splitting method. We first establish the worst-case $\mathcal{O}(1/t)$ convergence … Read more
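
As background, the plain Peaceman-Rachford splitting iteration for $\min_x f(x) + g(x)$ (the generalized scheme, roughly speaking, introduces relaxation parameters into this update) reads

\[ z^{k+1} = \big(2\operatorname{prox}_{\gamma f} - I\big)\big(2\operatorname{prox}_{\gamma g} - I\big)(z^k), \qquad x^k = \operatorname{prox}_{\gamma g}(z^k). \]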

Inexact cuts for Deterministic and Stochastic Dual Dynamic Programming applied to convex nonlinear optimization problems

We introduce an extension of Dual Dynamic Programming (DDP) to solve convex nonlinear dynamic programming equations. We call this extension Inexact DDP (IDDP); it applies to situations where some or all of the primal and dual subproblems to be solved along the iterations of the method are solved with a bounded error. We show that any accumulation … Read more
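
One standard way to formalize such an inexact cut: if $\beta$ is an $\varepsilon$-subgradient of the convex cost-to-go function $\mathcal{Q}$ at a trial point $\bar{x}$, then

\[ \mathcal{Q}(x) \;\ge\; \mathcal{Q}(\bar{x}) + \langle \beta, x - \bar{x} \rangle - \varepsilon \quad \text{for all } x, \]

so cuts computed from $\varepsilon$-optimal subproblem solutions remain valid lower bounds up to the error $\varepsilon$.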