Application of Facial Reduction to $H_\infty$ State Feedback Control Problem

One often encounters numerical difficulties in solving linear matrix inequality (LMI) problems obtained from $H_\infty$ control problems. We discuss the reason from the viewpoint of optimization, and provide necessary and sufficient conditions for the LMI problem and its dual not to be strongly feasible. Moreover, we interpret them in terms of the control system. In this analysis, … Read more
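As a point of reference (a classical textbook example, not taken from the paper), strong feasibility can fail even for very simple LMIs: the feasible set
\[
\mathcal{F} = \{\, X \succeq 0 \;:\; X_{11} = 0 \,\}
\]
is nonempty (it contains $X = 0$) yet has no positive definite member, since $X \succeq 0$ together with $X_{11} = 0$ forces the entire first row and column of $X$ to vanish. Facial reduction replaces such a problem by an equivalent one posed on the smaller face of the cone, where strict feasibility is restored.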

Local Convergence Properties of Douglas–Rachford and ADMM

The Douglas–Rachford (DR) and alternating direction method of multipliers (ADMM) are two proximal splitting algorithms designed to minimize the sum of two proper lower semi-continuous convex functions whose proximity operators are easy to compute. The goal of this work is to understand the local linear convergence behaviour of DR/ADMM when the involved functions are moreover … Read more
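As a minimal illustration of the DR iteration itself (the splitting $f(x) = \|x\|_1$, $g(x) = \tfrac12\|x-b\|^2$ and all constants below are toy choices, not the paper's setting):

```python
import numpy as np

# Minimal Douglas-Rachford sketch for min_x f(x) + g(x), assuming both
# proximity operators are cheap. Here f(x) = ||x||_1 and
# g(x) = 0.5 * ||x - b||^2 are illustrative choices only.

def prox_f(v, gamma):
    # Soft-thresholding: prox of gamma * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def prox_g(v, gamma, b):
    # Prox of gamma * 0.5 * ||. - b||^2 has a closed form
    return (v + gamma * b) / (1.0 + gamma)

b = np.array([3.0, -0.5, 0.2])
gamma = 1.0
z = np.zeros_like(b)
for _ in range(100):
    x = prox_f(z, gamma)             # shadow point through f
    y = prox_g(2 * x - z, gamma, b)  # reflected step through g
    z = z + y - x                    # fixed-point update on z
print(x)  # approaches soft-thresholding of b, the true minimizer
```

The local linear rate studied in the paper concerns exactly this fixed-point recursion in `z`, once the iterates identify the relevant structure of the two functions.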

Barzilai-Borwein Step Size for Stochastic Gradient Descent

One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size, or to tune a fixed step … Read more
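For reference, here is a sketch of the classical (deterministic) BB rule on a toy least-squares problem; adapting this rule to the stochastic setting is the subject of the paper, and none of the toy choices below come from it:

```python
import numpy as np

# Classical Barzilai-Borwein (BB) step size on a toy least-squares
# problem: eta_k = ||s||^2 / <s, y> with s = x_k - x_{k-1} and
# y = grad_k - grad_{k-1}. The paper adapts such a rule to SGD,
# which this deterministic sketch deliberately omits.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)

x_prev = np.zeros(10)
g_prev = grad(x_prev)
x = x_prev - 1e-3 * g_prev            # one small warm-up step
for _ in range(50):
    g = grad(x)
    s, y = x - x_prev, g - g_prev
    eta = (s @ s) / (s @ y)           # BB1 step size
    x_prev, g_prev = x, g
    x = x - eta * g
print(np.linalg.norm(grad(x)))        # gradient norm shrinks rapidly
```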

Chebyshev Inequalities for Products of Random Variables

We derive sharp probability bounds on the tails of a product of symmetric non-negative random variables using only information about their first two moments. If the covariance matrix of the random variables is known exactly, these bounds can be computed numerically using semidefinite programming. If only an upper bound on the covariance matrix is available, … Read more
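For orientation, the classical scalar analogue of such two-moment tail bounds is Cantelli's one-sided Chebyshev inequality (standard material, not the paper's multivariate product bound): for a random variable $X$ with mean $\mu$ and variance $\sigma^2$,
\[
\Pr(X - \mu \ge t) \;\le\; \frac{\sigma^2}{\sigma^2 + t^2}, \qquad t > 0,
\]
and this bound is sharp, being attained by a two-point distribution. The paper extends bounds of this flavour to products of random variables, with the extremal problem computed by semidefinite programming.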

A unified convergence bound for conjugate gradient and accelerated gradient

Nesterov’s accelerated gradient method for minimizing a smooth strongly convex function $f$ is known to reduce $f(x_k)-f(x^*)$ by a factor of $\epsilon\in(0,1)$ after $k\ge O(\sqrt{L/\ell}\log(1/\epsilon))$ iterations, where $\ell,L$ are the two parameters of smooth strong convexity. Furthermore, it is known that this is the best possible complexity in the function-gradient oracle model of computation. The … Read more
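A minimal sketch of one standard constant-momentum form of the accelerated iteration on a strongly convex quadratic (the test problem and all constants are illustrative assumptions, not from the paper):

```python
import numpy as np

# Nesterov's accelerated gradient for an L-smooth, ell-strongly convex
# quadratic. The constant momentum beta = (sqrt(kappa)-1)/(sqrt(kappa)+1),
# kappa = L/ell, yields the O(sqrt(L/ell) log(1/eps)) rate quoted above.

rng = np.random.default_rng(1)
Q = rng.standard_normal((20, 20))
H = Q.T @ Q + np.eye(20)              # positive definite Hessian
grad = lambda x: H @ x                # f(x) = 0.5 x^T H x, minimizer 0

eigs = np.linalg.eigvalsh(H)
ell, L = eigs[0], eigs[-1]
beta = (np.sqrt(L / ell) - 1) / (np.sqrt(L / ell) + 1)

x = y = np.ones(20)
for _ in range(200):
    x_next = y - grad(y) / L          # gradient step at extrapolated point
    y = x_next + beta * (x_next - x)  # momentum extrapolation
    x = x_next
print(np.linalg.norm(x))              # -> 0 at the accelerated linear rate
```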

A new customized proximal point algorithm for linearly constrained convex optimization

In this paper, we propose a new customized proximal point algorithm for the linearly constrained convex optimization problem, and further use it to solve the separable convex optimization problem with linear constraints. Different from the existing customized proximal point algorithms, the proposed algorithm does not involve any relaxation step but still ensures convergence. … Read more
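For context, the classical proximal point template that such customized methods specialize (standard background; the operator $T$ and metric $G$ below are illustrative, not the paper's construction) is
\[
z^{k+1} = (I + \gamma T)^{-1}(z^k), \qquad \gamma > 0,
\]
where, for $\min\{f(x) : Ax = b\}$, $T$ is typically the KKT operator pairing $\partial f(x) - A^{\top}\lambda$ with $Ax - b$. "Customized" variants replace the Euclidean metric by a tailored positive definite $G$ so that each subproblem admits a closed-form solution.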

Accelerated first-order methods for large-scale convex minimization

This paper discusses several (sub)gradient methods attaining the optimal complexity for smooth problems with Lipschitz continuous gradients, nonsmooth problems with bounded variation of subgradients, and weakly smooth problems with H\"older continuous gradients. The proposed schemes are optimal for smooth strongly convex problems with Lipschitz continuous gradients and optimal up to a logarithmic factor for nonsmooth problems … Read more

An optimal first order method based on optimal quadratic averaging

In a recent paper, Bubeck, Lee, and Singh introduced a new first order method for minimizing smooth strongly convex functions. Their geometric descent algorithm, largely inspired by the ellipsoid method, enjoys the optimal linear rate of convergence. Motivated by their work, we propose a close variant that iteratively maintains a quadratic global under-estimator of the … Read more
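As background for the phrase "quadratic global under-estimator" (the inequality below is the standard strong convexity bound, not a result specific to this paper): if $f$ is $\ell$-strongly convex, every gradient evaluation at $x$ certifies
\[
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x\rangle + \tfrac{\ell}{2}\|y - x\|^2 \qquad \text{for all } y,
\]
so a running combination of such quadratics yields a single quadratic lower bound on $f$ whose minimum value lower-bounds $f(x^*)$.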

On the Grassmann condition number

We give new insight into the Grassmann condition of the conic feasibility problem \[ x \in L \cap K \setminus\{0\}. \] Here $K\subseteq V$ is a regular convex cone and $L\subseteq V$ is a linear subspace of the finite dimensional Euclidean vector space $V$. The Grassmann condition of this problem is the reciprocal of the … Read more

Semi-Smooth Second-order Type Methods for Composite Convex Programs

The goal of this paper is to study approaches to bridge the gap between first-order and second-order type methods for composite convex programs. Our key observations are: i) Many well-known operator splitting methods, such as forward-backward splitting (FBS) and Douglas-Rachford splitting (DRS), actually define a possibly semi-smooth and monotone fixed-point mapping; ii) The optimal solutions … Read more
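A sketch of observation (i) for FBS follows; the toy problem and step size are assumptions, and the semi-smooth Newton step the paper builds on this residual is not shown:

```python
import numpy as np

# Fixed-point view of forward-backward splitting: the residual
# F(x) = x - prox_{t*g}(x - t * grad_f(x)) vanishes exactly at
# minimizers of f + g, so second-order (semi-smooth Newton) methods
# can target F(x) = 0. Toy data: f(x) = 0.5||Ax - b||^2, g = ||.||_1.

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
t = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 1/L

def residual(x):
    forward = x - t * A.T @ (A @ x - b)                  # gradient step on f
    backward = np.sign(forward) * np.maximum(np.abs(forward) - t, 0)
    return x - backward                                  # FBS residual F(x)

x = np.zeros(10)
for _ in range(300):                   # plain fixed-point (FBS) iteration;
    x = x - residual(x)                # a semi-smooth Newton method would
print(np.linalg.norm(residual(x)))     # instead solve F(x) = 0 directly
```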