Randomized Primal-Dual Proximal Block Coordinate Updates

In this paper we propose a randomized primal-dual proximal block coordinate updating framework for a general multi-block convex optimization model with coupled objective function and linear constraints. Assuming mere convexity, we establish its $O(1/t)$ convergence rate in terms of the objective value and feasibility measure. The framework includes several existing algorithms as special cases such … Read more
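
The basic ingredients of such a framework can be illustrated with a toy sketch. The following is a minimal illustration, not the paper's algorithm: the penalty rho, the dual step tau, and the uniform block sampling are generic choices made up for this example.

```python
import numpy as np

# Minimal sketch of a randomized primal-dual block update for
#   min_x  sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b.
# One randomly chosen block is minimized over the augmented Lagrangian,
# then a (small, illustrative) dual ascent step is taken.

rng = np.random.default_rng(0)
n_blocks, d, m = 4, 3, 2
A = [rng.standard_normal((m, d)) for _ in range(n_blocks)]
c = [rng.standard_normal(d) for _ in range(n_blocks)]
b = rng.standard_normal(m)

x = [np.zeros(d) for _ in range(n_blocks)]
lam = np.zeros(m)          # dual multiplier
rho, tau = 1.0, 0.25       # penalty and dual step (illustrative values)

for t in range(5000):
    i = rng.integers(n_blocks)                    # sample one block
    r = sum(A[j] @ x[j] for j in range(n_blocks)) - b
    r_minus_i = r - A[i] @ x[i]
    # Exact block minimization of the augmented Lagrangian in x_i:
    #   0.5*||x_i - c_i||^2 + lam' A_i x_i + (rho/2)*||A_i x_i + r_minus_i||^2
    H = np.eye(d) + rho * A[i].T @ A[i]
    g = c[i] - A[i].T @ (lam + rho * r_minus_i)
    x[i] = np.linalg.solve(H, g)
    r = sum(A[j] @ x[j] for j in range(n_blocks)) - b
    lam += tau * rho * r                          # dual ascent step

print("feasibility residual:", np.linalg.norm(r))
```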

Chebyshev Inequalities for Products of Random Variables

We derive sharp probability bounds on the tails of a product of symmetric non-negative random variables using only information about their first two moments. If the covariance matrix of the random variables is known exactly, these bounds can be computed numerically using semidefinite programming. If only an upper bound on the covariance matrix is available, … Read more
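
The simplest member of this family of bounds is the first-moment (Markov) bound $P(XY \ge a) \le E[XY]/a$, which needs only the means and the covariance, since $E[XY] = \mu_X \mu_Y + \mathrm{Cov}(X,Y)$. The snippet below is only a Monte Carlo sanity check of that crude bound on a made-up pair of independent gamma variables, not the paper's sharper SDP machinery.

```python
import numpy as np

# Check the Markov-type tail bound P(XY >= a) <= E[XY]/a for a product
# of non-negative random variables, with E[XY] assembled from the first
# two moments: E[XY] = mu_X * mu_Y + cov(X, Y).

rng = np.random.default_rng(1)
X = rng.gamma(shape=2.0, scale=1.0, size=10**6)   # mean 2
Y = rng.gamma(shape=3.0, scale=0.5, size=10**6)   # mean 1.5, independent

a = 10.0
empirical = np.mean(X * Y >= a)
markov = (X.mean() * Y.mean() + np.cov(X, Y)[0, 1]) / a
print(f"empirical tail {empirical:.4f} <= Markov bound {markov:.4f}")
```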

Virtuous smoothing for global optimization

In the context of global optimization and mixed-integer non-linear programming, generalizing a technique of D’Ambrosio, Fampa, Lee and Vigerske for handling the square-root function, we develop a virtuous smoothing method, using cubics, aimed at functions having some limited non-smoothness. Our results pertain to root functions ($w^p$ with $0 < p < 1$) … Read more
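
One plausible reading of such a smoothing recipe (an illustrative guess, not necessarily the authors' exact construction) is to replace $w^p$ below a threshold $\delta$ by the cubic through the origin that matches the function and its first two derivatives at $\delta$, removing the derivative blow-up at $w=0$:

```python
import numpy as np

# Illustrative cubic smoothing of f(w) = w**p (0 < p < 1). Below delta we
# use q(w) = a*w + b*w**2 + c*w**3 with q(0) = 0, matching f, f', f'' at
# delta, so the pieced-together function is C^2 on [0, inf).

p, delta = 0.5, 0.1

M = np.array([[delta, delta**2, delta**3],
              [1.0,   2*delta,  3*delta**2],
              [0.0,   2.0,      6*delta]])
rhs = np.array([delta**p,
                p * delta**(p - 1),
                p * (p - 1) * delta**(p - 2)])
a, b, c = np.linalg.solve(M, rhs)

def smoothed(w):
    w = np.asarray(w, dtype=float)
    cubic = a*w + b*w**2 + c*w**3
    return np.where(w < delta, cubic, w**p)

# The two pieces agree at the threshold:
print(smoothed(delta), delta**p)
```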

A unified convergence bound for conjugate gradient and accelerated gradient

Nesterov’s accelerated gradient method for minimizing a smooth strongly convex function $f$ is known to reduce $f(x_k)-f(x^*)$ by a factor of $\epsilon\in(0,1)$ after $k\ge O(\sqrt{L/\ell}\log(1/\epsilon))$ iterations, where $\ell,L$ are the two parameters of smooth strong convexity. Furthermore, it is known that this is the best possible complexity in the function-gradient oracle model of computation. The … Read more
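
The two methods in the title can be run side by side on a strongly convex quadratic, the setting in which their bounds are directly comparable. The snippet below uses only the textbook parameter choices (step $1/L$, momentum $(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$ with $\kappa=L/\ell$), nothing specific to this paper.

```python
import numpy as np

# Nesterov's accelerated gradient (strongly convex variant) next to
# conjugate gradient on f(x) = 0.5*x'Qx - b'x.

rng = np.random.default_rng(2)
n = 50
B = rng.standard_normal((n, n))
Q = B @ B.T + np.eye(n)            # positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(Q, b)

eigs = np.linalg.eigvalsh(Q)
ell, L = eigs[0], eigs[-1]
beta = (np.sqrt(L / ell) - 1) / (np.sqrt(L / ell) + 1)

# Accelerated gradient
x = y = np.zeros(n)
for k in range(200):
    x_new = y - (Q @ y - b) / L    # gradient step from the extrapolated point
    y = x_new + beta * (x_new - x) # momentum extrapolation
    x = x_new
print("AGD error:", np.linalg.norm(x - x_star))

# Conjugate gradient (standard recurrences)
x = np.zeros(n); r = b - Q @ x; p = r.copy()
for k in range(200):
    if np.linalg.norm(r) < 1e-12:
        break
    Qp = Q @ p
    alpha = (r @ r) / (p @ Qp)
    x = x + alpha * p
    r_new = r - alpha * Qp
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
print("CG error:", np.linalg.norm(x - x_star))
```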

A new customized proximal point algorithm for linearly constrained convex optimization

In this paper, we propose a new customized proximal point algorithm for the linearly constrained convex optimization problem, and further use it to solve the separable convex optimization problem with linear constraints. Unlike existing customized proximal point algorithms, the proposed algorithm does not involve any relaxation step but still ensures convergence. … Read more
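
For context, a generic customized-PPA-style primal-dual iteration for $\min f(x)$ s.t. $Ax=b$ looks as follows; it is written here for a simple quadratic $f$ with a closed-form proximal step, and the paper's scheme may differ in its metric and parameters.

```python
import numpy as np

# Sketch of a customized-PPA-style primal-dual iteration for
#   min 0.5*||x - c||^2  s.t.  Ax = b:
#   x+   = argmin_x f(x) - lam' (Ax - b) + (r/2)||x - x_k||^2
#   lam+ = lam - (1/s) * (A(2x+ - x_k) - b),   with r*s > ||A||^2.

rng = np.random.default_rng(3)
m, n = 3, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

r = s = np.linalg.norm(A, 2) * 1.01       # ensures r*s > ||A||^2

x, lam = np.zeros(n), np.zeros(m)
for k in range(3000):
    x_new = (c + A.T @ lam + r * x) / (1.0 + r)   # closed-form prox step
    lam = lam - (A @ (2 * x_new - x) - b) / s
    x = x_new

print("residual ||Ax - b||:", np.linalg.norm(A @ x - b))
```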

Accelerated first-order methods for large-scale convex minimization

This paper discusses several (sub)gradient methods attaining the optimal complexity for smooth problems with Lipschitz continuous gradients, nonsmooth problems with bounded variation of subgradients, and weakly smooth problems with Hölder continuous gradients. The proposed schemes are optimal for smooth strongly convex problems with Lipschitz continuous gradients and optimal up to a logarithmic factor for nonsmooth problems … Read more
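
As a baseline for what such schemes improve upon, the classical projected subgradient method with diminishing steps attains the unaccelerated $O(1/\sqrt{k})$ rate on nonsmooth problems with bounded subgradients. A minimal generic sketch (not taken from the paper):

```python
import numpy as np

# Projected subgradient method for f(x) = ||x - c||_1 over the Euclidean
# unit ball, with the classical ~1/sqrt(k) diminishing step size.

rng = np.random.default_rng(4)
n = 20
c = rng.standard_normal(n) * 2.0

x = np.zeros(n)
best = np.inf
for k in range(1, 20001):
    g = np.sign(x - c)                 # a subgradient of ||x - c||_1
    x = x - g / np.sqrt(k)             # diminishing step
    nrm = np.linalg.norm(x)
    if nrm > 1.0:                      # project back onto the unit ball
        x = x / nrm
    best = min(best, np.abs(x - c).sum())

print("best objective value:", best)
```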

An optimal first order method based on optimal quadratic averaging

In a recent paper, Bubeck, Lee, and Singh introduced a new first order method for minimizing smooth strongly convex functions. Their geometric descent algorithm, largely inspired by the ellipsoid method, enjoys the optimal linear rate of convergence. Motivated by their work, we propose a close variant that iteratively maintains a quadratic global under-estimator of the … Read more
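
The core operation behind quadratic averaging can be sketched as follows; this is a reconstruction under the stated strong-convexity model, not the paper's full algorithm. Each gradient evaluation yields a global quadratic under-estimator with curvature $\ell$, and two such quadratics are averaged with the weight that maximizes the resulting lower bound on $f^*$.

```python
import numpy as np

# Strong convexity gives, at any point x with gradient g, the global
# under-estimator
#   f(y) >= v + (ell/2)*||y - z||^2,  z = x - g/ell,  v = f(x) - ||g||^2/(2*ell).
# Two such quadratics with identical curvature average to another one of
# the same form; lam below maximizes the averaged minimum value.

def average_quadratics(v1, z1, v2, z2, ell):
    """Optimally average v_i + (ell/2)*||y - z_i||^2, i = 1, 2."""
    d2 = np.sum((z1 - z2) ** 2)
    if d2 == 0.0:
        return max(v1, v2), z1
    lam = np.clip(0.5 + (v1 - v2) / (ell * d2), 0.0, 1.0)
    v = lam * v1 + (1 - lam) * v2 + 0.5 * ell * lam * (1 - lam) * d2
    z = lam * z1 + (1 - lam) * z2
    return v, z

# Two valid lower bounds can combine into a strictly better one:
v, z = average_quadratics(0.0, np.array([0.0, 0.0]),
                          -1.0, np.array([2.0, 0.0]), ell=1.0)
print(v, z)   # v = 0.125 > max(0.0, -1.0)
```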

Pessimistic bilevel linear optimization

In this paper, we investigate the pessimistic bilevel linear optimization problem (PBLOP). Based on the lower-level optimal value function and duality, the PBLOP can be transformed into a single-level, though nonconvex and nonsmooth, optimization problem. Using linear optimization duality, we obtain a tractable and equivalent transformation and propose algorithms for computing global … Read more
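
For orientation, one standard way to write the pessimistic model and its value-function reformulation (a generic template, not necessarily the paper's exact formulation) is
\[
\min_{x \in X} \; \max_{y \in S(x)} \; d^\top y,
\qquad
S(x) = \operatorname*{argmin}_y \{\, c^\top y : A x + B y \le b \,\}.
\]
Introducing the lower-level value function $\varphi(x) = \min_y \{\, c^\top y : A x + B y \le b \,\}$, the optimal set becomes $S(x) = \{\, y : A x + B y \le b,\; c^\top y \le \varphi(x) \,\}$, and applying linear optimization duality to the inner maximization over this polyhedron collapses the model into the single-level, nonconvex and nonsmooth problem mentioned above.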

On the Grassmann condition number

We give new insight into the Grassmann condition of the conic feasibility problem \[ x \in L \cap K \setminus\{0\}. \] Here $K\subseteq V$ is a regular convex cone and $L\subseteq V$ is a linear subspace of the finite dimensional Euclidean vector space $V$. The Grassmann condition of this problem is the reciprocal of the … Read more
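
For readers meeting this notion for the first time, the definition the truncated sentence points to has the shape (recalled here informally, up to the precise metric the paper makes exact)
\[
\mathcal{C}(L) \;=\; \frac{1}{\operatorname{dist}(L,\,\Sigma)},
\]
where $\Sigma$ is the set of ill-posed subspaces in the Grassmann manifold of subspaces of $V$ of dimension $\dim L$, i.e. those $L'$ for which arbitrarily small perturbations can switch the instance between feasible ($L' \cap K \ne \{0\}$) and infeasible.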

Algorithms for stochastic optimization with expectation constraints

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with an expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with the expectation constraint on decision variables. We show that … Read more
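
The cooperative flavor of such a scheme can be caricatured as follows (a simplified sketch with generic step sizes and tolerances, not the paper's exact rules): when an estimate of the expectation constraint is violated, step along a stochastic subgradient of the constraint; otherwise step along the objective and keep the iterate for averaging.

```python
import numpy as np

# Sketch of a cooperative-SA-style method for
#   min E[F(x, xi)]  s.t.  E[G(x, xi)] <= 0,  x in a ball,
# with made-up problem data: a linear objective observed with noise and
# the constraint E[G] = ||x - u||^2 - 1 <= 0 evaluated noisily.

rng = np.random.default_rng(5)
n = 10
a_obj = rng.standard_normal(n)
u = rng.standard_normal(n) * 0.1

x = np.zeros(n)
kept = []
for k in range(1, 5001):
    gamma, eta = 1.0 / np.sqrt(k), 1.0 / np.sqrt(k)   # step and tolerance
    G_est = np.sum((x - u) ** 2) - 1.0 + 0.1 * rng.standard_normal()
    if G_est > eta:
        step_dir = 2 * (x - u)                        # subgradient of G
    else:
        step_dir = a_obj + 0.1 * rng.standard_normal(n)  # stochastic grad of F
        kept.append(x.copy())
    x = x - gamma * step_dir
    nrm = np.linalg.norm(x)
    if nrm > 5.0:                                     # project onto the ball
        x = x * (5.0 / nrm)

x_bar = np.mean(kept, axis=0)      # average only the 'objective' iterates
print("constraint value at x_bar:", np.sum((x_bar - u) ** 2) - 1.0)
```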