A Simplified Form of Block-Iterative Operator Splitting, and an Asynchronous Algorithm Resembling the Multi-Block ADMM

This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computing model inspired by modern HPC environments, which consist of interconnected nodes each having … Read more

1-Bit Compressive Sensing: Reformulation and RRSP-Based Sign Recovery Theory

Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since amplitude information of sparse signals is not available in 1-bit CS, it is often only the support or the sign of a signal that can be exactly recovered by a decoding method. In this paper, we first … Read more
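
A minimal sketch of the 1-bit CS setting may help fix ideas: only the signs y = sign(Ax) of linear measurements are observed, and one tries to recover the sign/support pattern of a sparse signal. The LP-based decoder below is one common illustrative heuristic (minimize the l1-norm subject to sign consistency), not the RRSP-based theory developed in the paper; all names and sizes are made up for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 1-bit CS setup: observe only y = sign(A x*) for a sparse x*.
rng = np.random.default_rng(0)
m, n, k = 80, 40, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)

# A common decoding heuristic (not the paper's RRSP-based decoder):
# minimize ||x||_1 subject to sign consistency y_i * (a_i^T x) >= 1,
# written as an LP in variables z = [x; u] with |x| <= u, objective sum(u).
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([
    [np.eye(n), -np.eye(n)],                  #  x - u <= 0
    [-np.eye(n), -np.eye(n)],                 # -x - u <= 0
    [-(y[:, None] * A), np.zeros((m, n))],    # -y_i a_i^T x <= -1
])
b_ub = np.concatenate([np.zeros(2 * n), -np.ones(m)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]

# Only the sign/support pattern is meaningful: amplitudes are lost in 1-bit CS.
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-6)[0])
print("true support:     ", np.nonzero(x_true)[0])
```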

A bound on the Carathéodory number

The Carathéodory number k(K) of a pointed closed convex cone K is the smallest k such that every element of K can be written as a nonnegative linear combination of at most k elements belonging to extreme rays of K. Carathéodory’s Theorem gives the bound k(K) <= dim(K). In this work we observe … Read more
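
Two standard examples (not taken from the paper) illustrate how loose or tight the bound can be: for the nonnegative orthant the bound is attained, while for the second-order cone the Carathéodory number is only 2.

```latex
% Standard worked examples (not results from the paper).
% Nonnegative orthant: the extreme rays of R^n_+ are the coordinate axes,
% a generic x >= 0 needs all n of them, so the bound is tight:
%   k(R^n_+) = n = dim R^n_+.
% Second-order (Lorentz) cone: the bound is far from tight, since
\[
  L^{n+1} = \bigl\{(x,t)\in\mathbb{R}^n\times\mathbb{R} : \|x\|_2 \le t\bigr\},
  \qquad
  (x,t) = \frac{t+\|x\|_2}{2}\Bigl(\tfrac{x}{\|x\|_2},\,1\Bigr)
        + \frac{t-\|x\|_2}{2}\Bigl(-\tfrac{x}{\|x\|_2},\,1\Bigr)
  \qquad (x \neq 0),
\]
% i.e. every element is a nonnegative combination of at most two extreme rays,
% so k(L^{n+1}) = 2 while dim L^{n+1} = n + 1.
```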

On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions

We consider the gradient (or steepest) descent method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also extend the result to a noisy … Read more
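
For a convex quadratic the exact line-search step has a closed form, which makes the worst-case behaviour easy to observe numerically. The sketch below is an illustration on a 2-by-2 quadratic, not the paper's analysis; the contraction factor ((L - mu)/(L + mu))^2 is the classical rate for quadratics that the paper shows is exact for the whole smooth strongly convex class.

```python
import numpy as np

def steepest_descent_exact_ls(A, x0, iters=30):
    """Gradient descent with exact line search on f(x) = 0.5 * x^T A x (A symmetric PD).

    For this quadratic, the exact line-search step has the closed form
    t = (g^T g) / (g^T A g), so no one-dimensional search is needed.
    """
    x = x0.astype(float).copy()
    vals = [0.5 * x @ A @ x]
    for _ in range(iters):
        g = A @ x                      # gradient of the quadratic
        t = (g @ g) / (g @ A @ g)      # exact minimizer along -g
        x = x - t * g
        vals.append(0.5 * x @ A @ x)
    return x, vals

# Eigenvalues mu = 1 and L = 10; the starting point below triggers the
# classical worst-case zigzag, so each per-step ratio in function value
# equals ((L - mu) / (L + mu))**2 = (9/11)**2 ≈ 0.669.
A = np.diag([1.0, 10.0])
x0 = np.array([10.0, 1.0])
_, vals = steepest_descent_exact_ls(A, x0, iters=10)
print([vals[k + 1] / vals[k] for k in range(10)])
```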

Exact Worst-case Performance of First-order Methods for Composite Convex Optimization

We provide a framework for computing the exact worst-case performance of any algorithm belonging to a broad class of oracle-based first-order methods for composite convex optimization, including those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps. We simultaneously obtain tight worst-case guarantees and explicit instances of optimization problems on which the algorithm reaches this … Read more

Convergence Analysis of ISTA and FISTA for “Strongly + Semi” Convex Programming

The iterative shrinkage/thresholding algorithm (ISTA) and its faster variant FISTA have been widely used in the literature. In this paper, we consider general versions of ISTA and FISTA in the broader “strongly + semi” convex setting, i.e., minimizing the sum of a strongly convex function and a semiconvex function, and conduct convergence analysis … Read more
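
As a reference point, here are textbook ISTA and FISTA for the standard l1-regularized least-squares problem; this is the usual convex instance, shown only to recall the two iterations, not the generalized "strongly + semi" convex setting analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=200):
    """Textbook ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def fista(A, b, lam, iters=200):
    """FISTA: ISTA plus Nesterov-style extrapolation on an auxiliary point y."""
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```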

Efficient Subgradient Methods for General Convex Optimization

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly feasible point is known. A feasible sequence of iterates is generated that converges to within a user-specified error of optimality. Feasibility is maintained with a line search at each iteration, avoiding the need for orthogonal projections onto the feasible region … Read more
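
The sketch below is only a rough illustration of the idea described in the abstract: take subgradient steps, and whenever a trial point leaves the feasible region, pull it back along the segment toward a known strictly feasible anchor instead of projecting. The radial bisection rule, step sizes, and all names here are assumptions for the example, not the paper's actual scheme.

```python
import numpy as np

def feasible_subgradient(f, subgrad_f, g, x0, e, iters=500, step0=1.0):
    """Illustrative feasible-point subgradient sketch (not the paper's exact method).

    Minimizes convex f over {x : g(x) <= 0}, given a strictly feasible anchor e
    with g(e) < 0.  A trial point that leaves the feasible region is pulled back
    toward e along the connecting segment by bisection, so every iterate stays
    feasible and no orthogonal projection is needed.
    """
    x = x0.copy()
    best_x, best_val = x.copy(), f(x)
    for k in range(1, iters + 1):
        s = subgrad_f(x)
        y = x - (step0 / np.sqrt(k)) / (np.linalg.norm(s) + 1e-12) * s
        if g(y) > 0:                       # infeasible trial point: radial line search
            lo, hi = 0.0, 1.0              # x(t) = e + t*(y - e); t = 0 is strictly feasible
            for _ in range(30):
                mid = 0.5 * (lo + hi)
                if g(e + mid * (y - e)) <= 0:
                    lo = mid
                else:
                    hi = mid
            y = e + lo * (y - e)
        x = y
        if f(x) < best_val:
            best_x, best_val = x.copy(), f(x)
    return best_x, best_val

# Example: minimize f(x) = ||x - c||_1 over the unit ball, with c outside the ball.
c = np.array([2.0, 0.5])
f = lambda x: np.abs(x - c).sum()
subgrad_f = lambda x: np.sign(x - c)
g = lambda x: np.linalg.norm(x) - 1.0
x_best, val = feasible_subgradient(f, subgrad_f, g, x0=np.zeros(2), e=np.zeros(2))
print(x_best, val)
```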

Application of Facial Reduction to $H_\infty$ State Feedback Control Problem

One often encounters numerical difficulties in solving linear matrix inequality (LMI) problems obtained from $H_\infty$ control problems. We discuss the reason from the viewpoint of optimization, and provide necessary and sufficient conditions for an LMI problem and its dual not to be strongly feasible. Moreover, we interpret these conditions in terms of the control system. In this analysis, … Read more

Local Convergence Properties of Douglas–Rachford and ADMM

The Douglas–Rachford (DR) and alternating direction method of multipliers (ADMM) are two proximal splitting algorithms designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local linear convergence behaviour of DR/ADMM when the involved functions are moreover … Read more
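
For reference, the standard Douglas–Rachford iteration for minimizing f + g via their proximity operators looks as follows; the lasso-type split used as the test instance is an assumption made for illustration, and the sketch shows the iteration itself rather than the local linear-rate analysis of the paper.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=300):
    """Standard Douglas–Rachford splitting for min f(x) + g(x).

    Iterates:  x = prox_{gamma f}(z);  y = prox_{gamma g}(2x - z);  z += y - x.
    The shadow sequence x converges to a minimizer when f, g are proper lsc
    convex and a minimizer exists.
    """
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2.0 * x - z, gamma)
        z = z + (y - x)
    return x

# Illustrative instance: f(x) = ||x||_1, g(x) = 0.5*||Ax - b||^2 (lasso-type split).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)

def prox_l1(v, gamma):
    # prox of gamma * ||.||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def prox_quad(v, gamma):
    # prox of gamma * 0.5*||Ax - b||^2: solve (I + gamma A^T A) x = v + gamma A^T b
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + gamma * A.T @ A, v + gamma * A.T @ b)

x_dr = douglas_rachford(prox_l1, prox_quad, z0=np.zeros(50))
```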

Barzilai-Borwein Step Size for Stochastic Gradient Descent

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size, or to tune a fixed step … Read more
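
To recall what a Barzilai–Borwein step is, the sketch below applies the classical BB1 formula in plain (deterministic) gradient descent; this is only background for the abstract, since the paper's SGD-BB variant instead recomputes a BB-type step once per epoch from averaged stochastic information, which this sketch does not reproduce.

```python
import numpy as np

def gd_with_bb_step(grad, x0, iters=100, step0=1e-3):
    """Gradient descent with the classical Barzilai–Borwein (BB1) step size.

    eta_k = (s^T s) / (s^T y),  with  s = x_k - x_{k-1},  y = g_k - g_{k-1}.
    Shown in the deterministic setting for illustration only.
    """
    x_prev = x0.copy()
    g_prev = grad(x_prev)
    x = x_prev - step0 * g_prev            # one bootstrap step with a fixed size
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        eta = (s @ s) / (s @ y) if abs(s @ y) > 1e-12 else step0
        x_prev, g_prev = x, g
        x = x - eta * g
    return x

# Example: a simple strongly convex quadratic with gradient A x.
A = np.diag([1.0, 5.0, 25.0])
grad = lambda x: A @ x
print(gd_with_bb_step(grad, x0=np.array([1.0, 1.0, 1.0])))
```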