Information Complexity of Mixed-integer Convex Optimization

We investigate the information complexity of mixed-integer convex optimization under different types of oracles. We establish new lower bounds for the standard first-order oracle, improving upon the previously best known lower bound. This leaves only a lower-order term, linear in the dimension, as the gap between the lower and upper bounds. This is derived … Read more
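
For readers unfamiliar with the oracle model, below is a minimal sketch of a standard first-order oracle; the quadratic objective is a hypothetical illustration, not an example from the paper. Information complexity counts how many such queries any algorithm must make to reach a target accuracy.

```python
import numpy as np

def first_order_oracle(f, grad_f, x):
    """Standard first-order oracle: report the function value and a
    (sub)gradient at the query point x. Lower bounds on information
    complexity count how many such queries any algorithm needs."""
    return f(x), grad_f(x)

# Hypothetical smooth convex example: f(x) = ||x||^2 / 2.
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x
value, gradient = first_order_oracle(f, grad_f, np.array([1.0, -2.0]))
```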

Duality of upper bounds in stochastic dynamic programming

For multistage stochastic programming problems with stagewise independent uncertainty, dynamic programming algorithms calculate polyhedral approximations for the value functions at each stage. The SDDP algorithm provides piecewise linear lower bounds, in the spirit of the L-shaped algorithm, and corresponding upper bounds took longer to appear. One strategy uses the primal dynamic programming recursion … Read more
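
A hypothetical sketch of the polyhedral lower bound that SDDP-type methods maintain: the stage value function is approximated from below by the maximum of affine (Benders-style) cuts. The class name and the 1-D cuts are illustrative, not taken from the paper.

```python
import numpy as np

class PolyhedralLowerBound:
    """Piecewise linear lower bound V(x) >= max_i (alpha_i + beta_i @ x),
    built from Benders-style cuts as in SDDP. Illustrative sketch only."""

    def __init__(self):
        self.alphas = []   # cut intercepts
        self.betas = []    # cut slopes (subgradients of the value function)

    def add_cut(self, alpha, beta):
        self.alphas.append(alpha)
        self.betas.append(np.asarray(beta))

    def evaluate(self, x):
        if not self.alphas:
            return -np.inf
        return max(a + b @ x for a, b in zip(self.alphas, self.betas))

# Hypothetical cuts for a 1-D value function.
vb = PolyhedralLowerBound()
vb.add_cut(0.0, [1.0])    # V(x) >= x
vb.add_cut(2.0, [-1.0])   # V(x) >= 2 - x
print(vb.evaluate(np.array([0.5])))  # max(0.5, 1.5) = 1.5
```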

Exact convergence rate of the last iterate in subgradient methods

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at … Read more
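
A minimal sketch of the subgradient method whose last iterate is the object of study, assuming the standard 1/sqrt(k) step sizes; the paper's proof technique and exact rates are not reproduced here, and the example objective is hypothetical.

```python
import numpy as np

def subgradient_method(subgrad, x0, num_iters):
    """Plain subgradient method x_{k+1} = x_k - t_k g_k with
    t_k ~ 1/sqrt(k+1); returns the LAST iterate rather than the
    best or averaged one."""
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = subgrad(x)
        t = 1.0 / np.sqrt(k + 1)
        x = x - t * g
    return x

# Hypothetical nonsmooth convex example: f(x) = ||x||_1, subgradient sign(x).
x_last = subgradient_method(np.sign, x0=[3.0, -2.0], num_iters=1000)
```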

Provably Faster Gradient Descent via Long Steps

This work establishes provably faster convergence rates for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps that may violate descent, by analyzing the overall effect of many iterations at once rather than the typical one-iteration inductions used in most first-order method analyses. We … Read more
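
A hedged sketch of gradient descent with a cyclic, nonconstant stepsize pattern containing occasional long steps; the specific pattern below is made up for illustration and is not the certified policy from the paper.

```python
import numpy as np

def gd_long_steps(grad, x0, L, pattern, cycles):
    """Gradient descent with a nonconstant, cyclic stepsize policy.
    Steps are multiples of 1/L; entries larger than 2 are 'long steps'
    that may violate monotone descent on a single iteration, yet the
    pattern can still make progress over a full cycle."""
    x = np.asarray(x0, dtype=float)
    for _ in range(cycles):
        for h in pattern:          # e.g. a short-short-long pattern
            x = x - (h / L) * grad(x)
    return x

# Hypothetical quadratic: f(x) = 0.5 * ||x||^2, so L = 1 and grad(x) = x.
x = gd_long_steps(lambda x: x, x0=[4.0, -1.0], L=1.0,
                  pattern=[1.5, 1.5, 3.0], cycles=50)
```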

Searching for Optimal Per-Coordinate Step-sizes with Multidimensional Backtracking

The backtracking line-search is an effective technique for automatically tuning the step-size in smooth optimization. It guarantees performance comparable to using the theoretically optimal step-size. Many approaches have been developed to instead tune per-coordinate step-sizes, also known as diagonal preconditioners, but none of the existing methods are provably competitive with the optimal per-coordinate step-sizes. We … Read more
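
For contrast with the multidimensional version the paper develops, here is a minimal sketch of the standard scalar backtracking line-search (Armijo sufficient-decrease condition); the function names and constants are illustrative.

```python
import numpy as np

def backtracking_step(f, grad, x, alpha0=1.0, shrink=0.5, c=1e-4):
    """Shrink a trial step-size until the Armijo sufficient-decrease
    condition f(x - a g) <= f(x) - c * a * ||g||^2 holds."""
    g = grad(x)
    a = alpha0
    while f(x - a * g) > f(x) - c * a * np.dot(g, g):
        a *= shrink
    return x - a * g, a

# Hypothetical smooth convex example: f(x) = 0.5 * ||x||^2.
x, a = backtracking_step(lambda x: 0.5 * np.dot(x, x), lambda x: x,
                         np.array([2.0, -3.0]))
```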

Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees

We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates. Our results shorten, streamline and strengthen existing convergence proofs for several asynchronous optimization … Read more
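
One common way to model such asynchronous iterations is a gradient step evaluated at a stale iterate with bounded delay; the sketch below is a generic simulation of this model, not the paper's sequence results.

```python
import numpy as np

def delayed_gradient_descent(grad, x0, stepsize, max_delay, num_iters, rng):
    """Model asynchronous updates as gradient steps evaluated at stale
    iterates: x_{k+1} = x_k - gamma * grad(x_{k - d_k}), with the delay
    d_k bounded by max_delay. Convergence rates degrade as the degree
    of asynchrony (max_delay) grows."""
    history = [np.asarray(x0, dtype=float)]
    for k in range(num_iters):
        d = rng.integers(0, min(max_delay, k) + 1)   # bounded, random delay
        stale = history[k - d]
        history.append(history[k] - stepsize * grad(stale))
    return history[-1]

x = delayed_gradient_descent(lambda x: x, x0=[1.0, 1.0], stepsize=0.1,
                             max_delay=5, num_iters=200,
                             rng=np.random.default_rng(0))
```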

The alternating simultaneous Halpern-Lions-Wittmann-Bauschke algorithm for finding the best approximation pair for two disjoint intersections of convex sets

Given two nonempty and disjoint intersections of closed and convex subsets, we look for a best approximation pair relative to them, i.e., a pair of points, one in each intersection, attaining the minimum distance between the disjoint intersections. We propose an iterative process based on projections onto the individual subsets that generate the intersections. The process … Read more
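
As a point of reference, a sketch of the classical alternating-projections baseline for two disjoint convex sets (here, two hypothetical disks); the paper's alternating simultaneous HLWB algorithm, which handles intersections of sets, is not reproduced here.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto a closed ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def best_approximation_pair(x0, ballA, ballB, num_iters=100):
    """Classical alternating projections between two disjoint convex
    sets; the limit pair (a, b) attains the minimum distance between
    the sets. Single-set baseline only, not the paper's method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        a = project_ball(x, *ballA)
        b = project_ball(a, *ballB)
        x = b
    return a, b

# Two hypothetical disjoint disks in the plane.
a, b = best_approximation_pair([0.0, 0.0],
                               ballA=(np.array([-2.0, 0.0]), 1.0),
                               ballB=(np.array([2.0, 0.0]), 1.0))
# a ~ (-1, 0), b ~ (1, 0): the closest pair of points.
```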

Numerical Methods for Convex Multistage Stochastic Optimization

Optimization problems involving sequential decisions in a stochastic environment have been studied in Stochastic Programming (SP), Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP). In this paper we mainly concentrate on the SP and SOC modelling approaches. In these frameworks there are natural situations where the considered problems are convex. The classical approach to sequential … Read more

Balancing Communication and Computation in Gradient Tracking Algorithms for Decentralized Optimization

Gradient tracking methods have emerged as one of the most popular approaches for solving decentralized optimization problems over networks. In this setting, each node in the network has a portion of the global objective function, and the goal is to collectively optimize this function. At every iteration, gradient tracking methods perform two operations (steps): (1) … Read more
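
A generic sketch of one standard gradient tracking update, separating the communication step (mixing with a matrix W) from the computation step (local gradient and tracker update); the network, objectives, and parameters below are hypothetical.

```python
import numpy as np

def gradient_tracking(grads, W, X0, stepsize, num_iters):
    """One standard form of gradient tracking over a network.
    Per iteration each node i performs:
      (1) communication: mix X and Y with neighbors via W,
      (2) computation:   a local gradient step and tracker update
          y_i^{k+1} = sum_j W_ij y_j^k + grad_i(x_i^{k+1}) - grad_i(x_i^k)."""
    X = np.asarray(X0, dtype=float)       # row i = node i's iterate
    G = np.stack([g(x) for g, x in zip(grads, X)])
    Y = G.copy()                          # trackers start at local gradients
    for _ in range(num_iters):
        X_new = W @ X - stepsize * Y
        G_new = np.stack([g(x) for g, x in zip(grads, X_new)])
        Y = W @ Y + G_new - G
        X, G = X_new, G_new
    return X

# Hypothetical 3-node network; node i holds f_i(x) = 0.5 * ||x - c_i||^2.
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
cs = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
grads = [lambda x, c=c: x - c for c in cs]
X = gradient_tracking(grads, W, X0=np.zeros((3, 1)), stepsize=0.2, num_iters=300)
# Every row converges to the minimizer of the average objective, x* = 2.
```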

An Inexact Proximal-indefinite Stochastic ADMM with applications in 3D CT reconstruction

In this paper, we develop an Inexact Proximal-indefinite Stochastic ADMM (abbreviated as IPS-ADMM) for solving a class of separable convex optimization problems whose objective functions consist of two parts: one is an average of many smooth convex functions and the other is a convex but possibly nonsmooth function. The involved smooth subproblem is tackled by an … Read more
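
A heavily simplified skeleton in the same spirit: linearized ADMM for a consensus problem where the smooth average term is handled by a stochastic gradient step and the nonsmooth term by its proximal operator. This is a generic sketch, not the paper's IPS-ADMM (which adds an inexact proximal-indefinite term); all names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the nonsmooth part)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_admm(stoch_grad, lam, x0, rho, eta, num_iters, rng):
    """Generic linearized stochastic ADMM sketch for
        min f(x) + lam * ||z||_1  s.t.  x = z,
    where f is an average of smooth terms and the x-subproblem is
    replaced by a stochastic gradient step."""
    x = np.asarray(x0, dtype=float)
    z = x.copy()
    u = np.zeros_like(x)             # scaled dual variable
    for _ in range(num_iters):
        g = stoch_grad(x, rng)       # stochastic gradient of the smooth part
        x = x - eta * (g + rho * (x - z + u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# Hypothetical least-squares terms: f(x) = (1/n) sum_i 0.5 * (a_i @ x - b_i)^2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
def stoch_grad(x, rng):
    i = rng.integers(len(b))
    return A[i] * (A[i] @ x - b[i])
z = stochastic_admm(stoch_grad, lam=0.1, x0=np.zeros(5),
                    rho=1.0, eta=0.05, num_iters=2000, rng=rng)
```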