Exact convergence rate of the last iterate in subgradient methods

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at each … Read more
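
For reference, the baseline under study is the classical subgradient iteration x_{t+1} = x_t − η_t g_t with g_t ∈ ∂f(x_t). Below is a minimal sketch of that method on a toy nonsmooth problem; the objective f(x) = ||Ax − b||_1 and the 1/√t step-size are illustrative choices, not the paper's construction.

```python
import numpy as np

# Minimal subgradient method on f(x) = ||A x - b||_1, a nonsmooth convex
# function with bounded subgradients. Problem and step-size schedule are
# illustrative, not the paper's setup.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

def f(x):
    return np.abs(A @ x - b).sum()

def subgrad(x):
    return A.T @ np.sign(A @ x - b)   # one valid subgradient of f at x

x = np.zeros(5)
T = 1000
for t in range(1, T + 1):
    x = x - (1.0 / np.sqrt(t)) * subgrad(x)   # classic O(1/sqrt(t)) step-size

print("last-iterate value f(x_T):", f(x))
```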

Provably Faster Gradient Descent via Long Steps

This work establishes provably faster convergence rates for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps potentially violating descent by analyzing the overall effect of many iterations at once rather than the typical one-iteration inductions used in most first-order method analyses. We … Read more
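
To fix ideas, here is a sketch of gradient descent with a nonconstant schedule containing occasional long steps (step-size above 2/L, which can break monotone descent). The periodic pattern below is a made-up illustration, not one of the paper's computer-certified step-size sequences.

```python
import numpy as np

# Gradient descent with a step-size schedule that inserts occasional
# "long steps" (> 2/L). The periodic pattern is purely illustrative.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
Q = M.T @ M                                   # smooth convex quadratic
L = np.linalg.eigvalsh(Q).max()               # smoothness constant

def grad(x):
    return Q @ x

x = rng.standard_normal(8)
schedule = [1.0 / L] * 7 + [3.0 / L]          # every 8th step is "long"
for k in range(400):
    x = x - schedule[k % len(schedule)] * grad(x)

print("f(x_K) =", 0.5 * x @ Q @ x)
```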

Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking

The backtracking line-search is an effective technique to automatically tune the step-size in smooth optimization. It guarantees performance comparable to that of the theoretically optimal step-size. Many approaches have been developed to instead tune per-coordinate step-sizes, also known as diagonal preconditioners, but none of the existing methods are provably competitive with the optimal per-coordinate step-sizes. We … Read more
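
As background, here is a minimal sketch of the scalar backtracking line-search (Armijo rule) that serves as the starting point for per-coordinate generalizations; the badly scaled quadratic and the constants are illustrative choices.

```python
import numpy as np

# Standard scalar backtracking line-search with the Armijo condition,
# the single-step-size baseline that per-coordinate methods generalize.
def backtracking_gd(f, grad, x, alpha0=1.0, beta=0.5, c=1e-4, iters=100):
    for _ in range(iters):
        g = grad(x)
        alpha = alpha0
        # Shrink alpha until the Armijo sufficient-decrease test holds.
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= beta
        x = x - alpha * g
    return x

Q = np.diag([1.0, 10.0, 100.0])               # badly scaled quadratic
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
print(backtracking_gd(f, grad, np.ones(3)))   # converges near the origin
```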

Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees

We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates. Our results shorten, streamline and strengthen existing convergence proofs for several asynchronous optimization … Read more
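
To make the notion of asynchrony concrete, here is a toy serial simulation of a delayed-gradient iteration x_{k+1} = x_k − α ∇f(x_{k−τ_k}) with bounded delays τ_k ≤ τ_max; the random delay model and the delay-dependent step-size below are illustrative, not the paper's sequence results.

```python
import numpy as np

# Serial simulation of an asynchronous (delayed) gradient iteration.
# tau_k models the degree of asynchrony; larger delays force smaller steps.
rng = np.random.default_rng(2)
Q = np.diag([1.0, 4.0, 9.0])
grad = lambda x: Q @ x

tau_max = 5
L = np.linalg.eigvalsh(Q).max()
alpha = 1.0 / (L * (tau_max + 1))             # shrink step with the delay bound
history = [np.ones(3)]
for k in range(300):
    tau = rng.integers(0, min(tau_max, k) + 1)   # random bounded delay
    x_stale = history[-1 - tau]                  # gradient at an old iterate
    history.append(history[-1] - alpha * grad(x_stale))

print("||x_K|| =", np.linalg.norm(history[-1]))
```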

The alternating simultaneous Halpern-Lions-Wittmann-Bauschke algorithm for finding the best approximation pair for two disjoint intersections of convex sets

Given two nonempty and disjoint intersections of closed and convex subsets, we look for a best approximation pair relative to them, i.e., a pair of points, one in each intersection, attaining the minimum distance between the disjoint intersections. We propose an iterative process based on projections onto the subsets which generate the intersections. The process … Read more
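
For intuition, here is a sketch of the classical alternating-projections iteration between two disjoint closed convex sets (single balls rather than intersections), which converges to a best approximation pair; the paper's alternating simultaneous HLWB scheme is designed for intersections of several sets.

```python
import numpy as np

# Alternating projections between two disjoint Euclidean balls, converging
# to a best approximation pair. A simplified stand-in for the paper's
# setting, where each set is an intersection of several convex subsets.
def proj_ball(x, center, r):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

cA, cB = np.array([0.0, 0.0]), np.array([5.0, 0.0])   # unit balls, gap = 3
x = np.array([1.0, 4.0])
for _ in range(100):
    y = proj_ball(x, cB, 1.0)     # point in B closest to current x
    x = proj_ball(y, cA, 1.0)     # point in A closest to y
print("pair:", x, y, "distance:", np.linalg.norm(x - y))   # distance -> 3
```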

Numerical Methods for Convex Multistage Stochastic Optimization

Optimization problems involving sequential decisions in a stochastic environment have been studied in Stochastic Programming (SP), Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP). In this paper we concentrate mainly on the SP and SOC modelling approaches. In these frameworks there are natural situations in which the considered problems are convex. The classical approach to sequential optimization … Read more

Balancing Communication and Computation in Gradient Tracking Algorithms for Decentralized Optimization

Gradient tracking methods have emerged as one of the most popular approaches for solving decentralized optimization problems over networks. In this setting, each node in the network has a portion of the global objective function, and the goal is to collectively optimize this function. At every iteration, gradient tracking methods perform two operations (steps): (1) … Read more
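
A minimal sketch of a standard gradient tracking recursion (DIGing-style) may help fix ideas: multiplying by the mixing matrix W is the communication operation, and evaluating local gradients is the computation operation. The ring network and quadratic local objectives below are toy choices, not the paper's setup.

```python
import numpy as np

# Gradient tracking for decentralized optimization:
#   X_{k+1} = W X_k - alpha * Y_k            (communicate + descend)
#   Y_{k+1} = W Y_k + grad(X_{k+1}) - grad(X_k)   (track average gradient)
n, d = 4, 3
rng = np.random.default_rng(3)
targets = rng.standard_normal((n, d))      # node i minimizes ||x - t_i||^2 / 2
grad = lambda X: X - targets               # row i: local gradient at x_i

W = np.zeros((n, n))                       # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

X = np.zeros((n, d))
Y = grad(X)                                # initialize tracker at local gradients
alpha = 0.2
for _ in range(200):
    X_new = W @ X - alpha * Y
    Y = W @ Y + grad(X_new) - grad(X)
    X = X_new
print("consensus point:", X[0], "vs mean target:", targets.mean(axis=0))
```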

Convergence analysis on a data-driven inexact proximal-indefinite stochastic ADMM

In this paper, we propose an Inexact Proximal-indefinite Stochastic ADMM (abbreviated as IPS-ADMM) to solve a class of separable convex optimization problems whose objective functions consist of two parts: one is an average of many smooth convex functions and the other is a convex but potentially nonsmooth function. The involved smooth subproblem is tackled by … Read more
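
For orientation, here is a sketch of plain, exact two-block ADMM on a problem of the stated form (a smooth least-squares term plus a nonsmooth ℓ1 term); the paper's IPS-ADMM differs by handling the smooth subproblem inexactly with stochastic gradients and an indefinite proximal term.

```python
import numpy as np

# Exact two-block ADMM for  min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z:
# a smooth convex part plus a nonsmooth convex part, the class in the abstract.
rng = np.random.default_rng(4)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
lam, rho = 0.1, 1.0

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
M = np.linalg.inv(A.T @ A + rho * np.eye(10))   # cached x-update system

x = z = u = np.zeros(10)
for _ in range(200):
    x = M @ (A.T @ b + rho * (z - u))           # smooth subproblem (solved exactly)
    z = soft(x + u, lam / rho)                  # proximal step on the l1 term
    u = u + x - z                               # dual (multiplier) update
print("primal residual:", np.linalg.norm(x - z))
```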

A successive centralized circumcentered-reflection method for the convex feasibility problem

In this paper, we present a successive centralization process for the circumcentered-reflection scheme with several control sequences for solving the convex feasibility problem in Euclidean space. Assuming that a standard error bound holds, we prove the linear convergence of the method with the most violated constraint control sequence. Moreover, under additional smoothness assumptions on the … Read more
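
A sketch of the basic circumcentered-reflection step for two sets may help: the next iterate is the circumcenter of the current point and its successive reflections onto the sets. Two hyperplanes stand in here for the paper's general setting with several sets, control sequences, and the centralization step.

```python
import numpy as np

# Circumcentered-reflection method (CRM) for the feasibility problem A ∩ B
# with two hyperplanes: iterate to the circumcenter of {x, R_A x, R_B R_A x},
# where R_S = 2 P_S - I is the reflection onto S.
def reflect(x, a, beta):                        # across {y : a·y = beta}
    return x - 2.0 * (a @ x - beta) / (a @ a) * a

def circumcenter(p0, p1, p2):
    # Point in the affine hull of {p0, p1, p2} equidistant from all three.
    V = np.column_stack([p1 - p0, p2 - p0])
    rhs = 0.5 * np.array([(p1 - p0) @ (p1 - p0), (p2 - p0) @ (p2 - p0)])
    coef, *_ = np.linalg.lstsq(V.T @ V, rhs, rcond=None)  # robust to degeneracy
    return p0 + V @ coef

a1, b1 = np.array([1.0, 0.0, 1.0]), 1.0
a2, b2 = np.array([0.0, 1.0, -1.0]), 0.5
x = np.array([3.0, -2.0, 5.0])
for _ in range(20):
    r1 = reflect(x, a1, b1)
    r2 = reflect(r1, a2, b2)
    x = circumcenter(x, r1, r2)
print("residuals:", a1 @ x - b1, a2 @ x - b2)   # both -> 0
```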

Projection-free methods on product domains

Projection-free block-coordinate methods avoid a high computational cost per iteration while exploiting the particular structure of product domains. Frank-Wolfe-like approaches rank among the most popular methods of this type. However, as observed in the literature, there has been a gap between the classical Frank-Wolfe theory and the block-coordinate case. Moreover, most of … Read more
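
As background, here is a minimal sketch of block-coordinate Frank-Wolfe on a product domain: each iteration updates one randomly chosen block via its linear minimization oracle. The product of two probability simplices, the quadratic objective, and the 2/(k+2) step-size are illustrative choices.

```python
import numpy as np

# Block-coordinate Frank-Wolfe on a product of two simplices: pick a block,
# call its linear minimization oracle (LMO), take a convex-combination step
# on that block only. Projection-free: only LMO calls, no projections.
rng = np.random.default_rng(5)
d = 6
M1, M2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
Q1, Q2 = M1.T @ M1, M2.T @ M2                 # convex quadratic per block

def lmo_simplex(g):                           # argmin of <g, s> over the simplex
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = np.full(d, 1.0 / d)
y = np.full(d, 1.0 / d)
for k in range(500):
    i = rng.integers(2)                       # random block choice
    g = Q1 @ x if i == 0 else Q2 @ y          # gradient of the chosen block
    s = lmo_simplex(g)
    gamma = 2.0 / (k + 2)                     # standard Frank-Wolfe step-size
    if i == 0:
        x = (1 - gamma) * x + gamma * s
    else:
        y = (1 - gamma) * y + gamma * s
print("f =", 0.5 * (x @ Q1 @ x + y @ Q2 @ y))
```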