Graph topology invariant gradient and sampling complexity for decentralized and stochastic optimization

One fundamental problem in decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. We propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. For convex smooth deterministic problems, we propose a primal dual sliding (PDS) algorithm that computes an $\epsilon$-solution with $O((\tilde{L}/\epsilon)^{1/2})$ gradient … Read more
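
A standard template for this setting, assuming the usual consensus formulation with $m$ agents on a communication graph (our notation, not necessarily the authors'), is
\[
\min_{x\in\mathbb{R}^d}\; f(x) := \frac{1}{m}\sum_{i=1}^{m} f_i(x),
\]
where agent $i$ only evaluates gradients of its local $f_i$ and exchanges information with its neighbors. "Graph topology invariant" then means that the $O((\tilde{L}/\epsilon)^{1/2})$ gradient bound does not deteriorate with the connectivity (e.g., the spectral gap) of the graph, which instead enters only the communication complexity.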

Affine invariant convergence rates of the conditional gradient method

We show that the conditional gradient method for the convex composite problem \[\min_x\{f(x) + \Psi(x)\}\] generates primal and dual iterates with a duality gap converging to zero provided a suitable growth property holds and the algorithm makes a judicious choice of stepsizes. The rate of convergence of the duality gap to zero ranges from sublinear … Read more
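
For context, a standard duality gap in this setting, assuming the generalized conditional gradient setup in which the linear subproblem is solvable (our notation), is the Frank–Wolfe gap
\[
G(x_k) = \langle \nabla f(x_k),\, x_k - s_k\rangle + \Psi(x_k) - \Psi(s_k),
\qquad
s_k \in \arg\min_{s}\,\big\{\langle \nabla f(x_k), s\rangle + \Psi(s)\big\},
\]
which upper bounds the primal optimality gap $f(x_k)+\Psi(x_k)-\min_x\{f(x)+\Psi(x)\}$ by convexity of $f$, so driving it to zero certifies convergence.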

Unmatched Preconditioning of the Proximal Gradient Algorithm

This work addresses the resolution of penalized least-squares problems using the proximal gradient algorithm (PGA). It is known that PGA can be accelerated by preconditioning strategies. However, typical effective choices of preconditioners may correspond to intricate matrices that are not easily inverted, and lead to an increased complexity in the computation of the proximity step. … Read more
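
As a schematic of where the difficulty arises, consider the penalized least-squares problem $\min_x \tfrac12\|Ax-y\|_2^2 + g(x)$ and a variable-metric (preconditioned) forward–backward step with a symmetric positive definite matrix $B$ (our notation, not the paper's specific scheme):
\[
x_{k+1} = \operatorname{prox}^{B}_{g}\!\big(x_k - B^{-1}A^{\top}(Ax_k - y)\big),
\qquad
\operatorname{prox}^{B}_{g}(z) = \arg\min_{u}\;\Big\{ g(u) + \tfrac12\|u - z\|_{B}^{2} \Big\}.
\]
When $B$ is not diagonal, both applying $B^{-1}$ and evaluating the proximity operator in the $B$-metric can become expensive, which is the trade-off the abstract refers to.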

A Stochastic Bregman Primal-Dual Splitting Algorithm for Composite Optimization

We study a stochastic first-order primal-dual method for solving convex-concave saddle point problems over real reflexive Banach spaces using Bregman divergences and relative smoothness assumptions, in which we allow for stochastic error in the computation of gradient terms within the algorithm. We show ergodic convergence in expectation of the Lagrangian optimality gap with a … Read more
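
A finite-dimensional caricature of the template, assuming the usual bilinearly coupled saddle point $\min_x\max_y\, h(x) + \langle Kx, y\rangle - g^*(y)$ with $h$ relatively smooth with respect to a Legendre function $\phi$ (our notation, not necessarily the paper's exact setting), replaces Euclidean proximal steps with Bregman ones:
\[
D_\phi(u,v) = \phi(u) - \phi(v) - \langle \nabla\phi(v),\, u - v\rangle,
\qquad
x_{k+1} \in \arg\min_{x}\,\Big\{ \big\langle \widehat{\nabla h}(x_k) + K^{\top}y_k,\, x\big\rangle + \tfrac{1}{\tau}\,D_\phi(x, x_k) \Big\},
\]
where $\widehat{\nabla h}(x_k)$ denotes a stochastic estimate of the gradient term, matching the kind of error the abstract allows for.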

A nested primal–dual FISTA-like scheme for composite convex optimization problems

We propose a nested primal–dual algorithm with extrapolation on the primal variable suited for minimizing the sum of two convex functions, one of which is continuously differentiable. The proposed algorithm can be interpreted as an inexact inertial forward–backward algorithm equipped with a prefixed number of inner primal–dual iterations for the proximal evaluation and a “warm–start” … Read more
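
One setting where such a nested scheme is natural, assuming the nonsmooth term has the composite form $h(Ax)$ with a linear operator $A$ (our reading of a typical use case, not necessarily the paper's exact setting), is that the outer forward–backward step requires
\[
\operatorname{prox}_{\gamma\, h\circ A}(z) = \arg\min_{x}\;\Big\{ h(Ax) + \tfrac{1}{2\gamma}\|x - z\|^{2} \Big\},
\]
which generally has no closed form; a prefixed number of inner primal–dual iterations approximates it, and warm-starting the inner variables from the previous outer iteration keeps the inexactness under control.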

Training Structured Neural Networks Through Manifold Identification and Variance Reduction

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term that promotes desired structures. RMDA incurs no computation beyond that of proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we … Read more
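
Schematically, the training problem in question, assuming a structure-promoting regularizer $\psi$ such as a group-sparsity norm (our notation), is
\[
\min_{W}\; \mathbb{E}_{\xi}\big[\ell(W;\xi)\big] + \lambda\,\psi(W),
\]
where the expectation form, rather than a finite sum, is what the abstract emphasizes; manifold identification then refers to the iterates eventually sharing the structure (e.g., the zero/nonzero group pattern) of the point they converge to.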

An active signature method for constrained abs-linear minimization

In this paper we consider the solution of optimization tasks with a piecewise linear objective function and piecewise linear constraints. First, we state optimality conditions for that class of problems using the abs-linearization approach and prove that they can be verified in polynomial time. Subsequently, we propose an algorithm called Constrained Active Signature Method that … Read more
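
As a minimal illustration of the abs-linear idea (a generic identity, not the paper's construction), piecewise linear pieces built from maxima can be rewritten using absolute values of affine expressions:
\[
\max(u,v) = \tfrac12\big(u + v + |u - v|\big),
\]
so, for example, $\max(a_1^{\top}x + b_1,\, a_2^{\top}x + b_2)$ is an affine function of $x$ plus the absolute value of an affine function, and nesting this construction expresses a general continuous piecewise linear objective through affine maps and componentwise absolute values.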

Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization

For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable and the number of cutting planes invoked by traditional remedies is … Read more
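
The classical benchmark alluded to here: if $f$ is $\mu$-strongly convex with $L$-Lipschitz gradient, gradient descent with step size $1/L$ satisfies
\[
f(x_k) - f^{\star} \le \Big(1 - \tfrac{\mu}{L}\Big)^{k}\,\big(f(x_0) - f^{\star}\big),
\]
i.e., linear convergence in the number of gradient evaluations; the abstract's point is that no comparably clean statement is available once smoothness is dropped.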

Analysis non-sparse recovery for non-convex relaxed $\ell_q$ minimization

This paper studies the reconstruction, from underdetermined linear systems, of signals that are sparse or nearly sparse with respect to a tight frame $D$. In the paper, we propose a non-convex relaxed $\ell_q$ ($0<q<1$) minimization … Read more
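
One typical analysis-model formulation of this type, assuming noisy measurements $b = Ax + e$ with $\|e\|_2 \le \varepsilon$ (our notation), is
\[
\min_{x}\; \|D^{\top}x\|_q^q
\quad\text{subject to}\quad
\|Ax - b\|_2 \le \varepsilon,
\qquad 0 < q < 1,
\]
where $D^{\top}x$ collects the analysis coefficients of $x$ with respect to the tight frame $D$ and the non-convex $\ell_q$ quasi-norm promotes (near-)sparsity of those coefficients more aggressively than $\ell_1$.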

Nonlinear conjugate gradient for smooth convex functions

The method of nonlinear conjugate gradients (NCG) is widely used in practice for unconstrained optimization, but it satisfies weak complexity bounds at best when applied to smooth convex functions. In contrast, Nesterov’s accelerated gradient (AG) method is optimal up to constant factors for this class. However, when specialized to quadratic functions, conjugate gradient is optimal … Read more
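
For reference, the generic NCG iteration assumed here (Fletcher–Reeves as one concrete choice of $\beta_k$, our notation) is
\[
x_{k+1} = x_k + \alpha_k d_k,
\qquad
d_{k+1} = -\nabla f(x_{k+1}) + \beta_k d_k,
\qquad
\beta_k^{\mathrm{FR}} = \frac{\|\nabla f(x_{k+1})\|^{2}}{\|\nabla f(x_k)\|^{2}},
\]
with $d_0 = -\nabla f(x_0)$ and $\alpha_k$ chosen by line search; on a strongly convex quadratic with exact line search this reduces to linear conjugate gradient, which is the optimality the abstract refers to.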