Graph topology invariant gradient and sampling complexity for decentralized and stochastic optimization

One fundamental problem in decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. We propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. For convex smooth deterministic problems, we propose a primal dual sliding (PDS) algorithm that computes an $\epsilon$-solution with $O((\tilde{L}/\epsilon)^{1/2})$ gradient …
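The central idea, keeping gradient evaluations independent of the graph while communication rounds absorb the topology dependence, can be conveyed with a toy sketch. The code below is not the paper's PDS algorithm: the gossip matrix W, the step size eta, and the loop counts are illustrative assumptions, chosen only to show one gradient evaluation per agent per outer iteration alongside several topology-dependent communication rounds.

```python
import numpy as np

# Toy sketch (not the paper's PDS algorithm): each outer iteration takes ONE
# local gradient per agent, then runs several gossip/communication rounds.
# W, eta, outer, and comm_rounds are illustrative assumptions.

def decentralized_sliding_sketch(grads, W, x0, eta=0.1, outer=200, comm_rounds=5):
    """grads: list of per-agent gradient callables; W: doubly stochastic mixing matrix."""
    X = np.tile(x0, (len(grads), 1))                    # one row per agent
    for _ in range(outer):
        G = np.stack([g(x) for g, x in zip(grads, X)])  # one gradient eval per agent
        X = X - eta * G
        for _ in range(comm_rounds):                    # extra communication, no new gradients
            X = W @ X
    return X.mean(axis=0)

# Example: 4 agents, f_i(x) = 0.5*||x - b_i||^2, mixing over a ring graph.
b = np.array([[1.0], [2.0], [3.0], [4.0]])
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
print(decentralized_sliding_sketch(grads, W, np.zeros(1)))  # ≈ mean of b_i = 2.5
```

A sparser graph would call for more mixing rounds (a larger comm_rounds) but, in this sliding pattern, no additional gradient evaluations.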

Mirror-prox sliding methods for solving a class of monotone variational inequalities

In this paper we propose new algorithms for solving a class of structured monotone variational inequality (VI) problems over compact feasible sets. By identifying the gradient components in the VI operator, we show that it is possible to skip gradient computations from time to time while still maintaining the optimal iteration …
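A minimal Euclidean mirror-prox (extragradient) sketch conveys the flavor of reusing the gradient part of the operator across iterations. This is not the paper's sliding method: the operator split into H and grad_f, the step size eta, and the loop counts are illustrative assumptions.

```python
import numpy as np

# Sketch: extragradient for a monotone VI <F(x*), x - x*> >= 0 over a box,
# with F(x) = H(x) + grad_f(x). The expensive gradient part is evaluated once
# per outer iteration and reused by the cheap inner steps, mimicking the idea
# of skipping gradient computations. All parameters are illustrative.

def project_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def mirror_prox_sliding_sketch(H, grad_f, x0, eta=0.1, outer=300, inner=3):
    x = x0.copy()
    for _ in range(outer):
        g = grad_f(x)                               # expensive gradient, computed once
        for _ in range(inner):                      # cheap inner steps reuse g
            y = project_box(x - eta * (H(x) + g))   # extrapolation step
            x = project_box(x - eta * (H(y) + g))   # correction step
    return x

# Example: rotation operator (monotone, not a gradient) plus grad of 0.5*||x||^2.
H = lambda x: np.array([x[1], -x[0]])
grad_f = lambda x: x
print(mirror_prox_sliding_sketch(H, grad_f, np.array([0.8, -0.6])))  # → ≈ [0, 0]
```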

Universal Conditional Gradient Sliding for Convex Optimization

In this paper, we present a first-order projection-free method, namely, the universal conditional gradient sliding (UCGS) method, for computing $\epsilon$-approximate solutions to convex differentiable optimization problems. For objective functions with Hölder continuous gradients, we show that UCGS is able to terminate with $\epsilon$-solutions using at most $O((1/\epsilon)^{2/(1+3\nu)})$ gradient evaluations and $O((1/\epsilon)^{4/(1+3\nu)})$ linear objective optimizations, where …
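To illustrate what "projection-free" and "sliding" mean here, the sketch below replaces the projection in a proximal gradient step with a few Frank-Wolfe steps driven only by a linear minimization oracle over an $\ell_1$ ball. It is a simplified conditional-gradient-sliding sketch, not UCGS itself; lmo_l1, the ball radius, L, and the step counts are illustrative assumptions.

```python
import numpy as np

def lmo_l1(g, radius=1.0):
    """Linear minimization oracle: argmin_{||s||_1 <= radius} <g, s>."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

def fw_subproblem(grad_at, x, fw_steps=10):
    """Approximately solve the prox subproblem via Frank-Wolfe (no projections)."""
    for t in range(fw_steps):
        s = lmo_l1(grad_at(x))
        x = x + 2.0 / (t + 2) * (s - x)    # classic FW step size
    return x

def cgs_sketch(grad_f, x0, L=1.0, outer=50):
    x = x0.copy()
    for _ in range(outer):
        g = grad_f(x)                      # one gradient per outer iteration
        # prox subproblem min_u <g,u> + (L/2)||u - x||^2, solved by FW only
        x = fw_subproblem(lambda u: g + L * (u - x), x)
    return x

# Example: minimize 0.5*||x - c||^2 over the l1 ball, with c outside the ball.
c = np.array([0.9, 0.7])
x = cgs_sketch(lambda x: x - c, np.zeros(2))
print(x, np.abs(x).sum())                  # near proj_{l1}(c) = (0.6, 0.4)
```

Note that the inner solver only ever calls the linear oracle, so the gradient count is governed by the outer loop alone, which is the accounting behind the separate gradient and linear-optimization complexity bounds.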

Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the sum of two smooth convex functions with different Lipschitz constants. We show …
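A stripped-down sketch of the sliding idea for $\phi(x) = f(x) + h(x)$: the expensive gradient of $f$ is computed once per outer iteration and reused across cheap inner steps on $h$. This is not the accelerated AGS scheme (no acceleration is used); the step sizes, Lipschitz constants, and loop counts are illustrative assumptions.

```python
import numpy as np

# Sketch: grad_f is expensive (Lipschitz constant L_f), grad_h is cheap with a
# larger constant L_h, so it pays to refresh grad_h many times per grad_f eval.

def sliding_sketch(grad_f, grad_h, x0, L_f=1.0, L_h=10.0, outer=100, inner=10):
    x = x0.copy()
    for _ in range(outer):
        gf = grad_f(x)                     # expensive gradient: one eval per outer step
        for _ in range(inner):             # cheap inner steps refresh grad_h only
            x = x - (gf + grad_h(x)) / (L_f + L_h)
    return x

# Example: f(x) = 0.5*||x||^2 (expensive), h(x) = 5*||x - b||^2 (cheap, L_h = 10).
b = np.array([1.0, -2.0])
x = sliding_sketch(lambda x: x, lambda x: 10.0 * (x - b), np.zeros(2))
print(x)  # minimizer is (10/11)*b ≈ [0.909, -1.818]
```

With inner=10, the sketch spends ten evaluations of the cheap gradient per evaluation of the expensive one, which is the trade the AGS analysis makes rigorous for mismatched Lipschitz constants.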