Fast convergence of the primal-dual dynamical system and algorithms for a nonsmooth bilinearly coupled saddle point problem

This paper is devoted to studying the convergence rates of a second-order dynamical system and its corresponding discretizations associated with a nonsmooth bilinearly coupled convex-concave saddle point problem. We derive the convergence rate of the primal-dual gap for the second-order dynamical system with an asymptotically vanishing damping term. Based on the implicit discretization, we propose …
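For orientation, a bilinearly coupled convex-concave saddle point problem of this type is typically written as (a generic template; the paper's precise assumptions are its own):

\[
\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; \mathcal{L}(x, y) = f(x) + \langle K x, y \rangle - g(y),
\]

where \(f\) and \(g\) are proper, convex, lower semicontinuous (possibly nonsmooth) functions and \(K\) is a bounded linear operator; the primal-dual gap at a point \((\bar{x}, \bar{y})\) is \(\sup_{y} \mathcal{L}(\bar{x}, y) - \inf_{x} \mathcal{L}(x, \bar{y})\).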

Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

The (structured) saddle-point problem involving the infimal convolution in real Hilbert spaces finds applicability in many applied mathematics disciplines. To solve this generic problem, we develop a stochastic primal-dual splitting (PDS) algorithm with loopless variance reduction (VR). A PDS algorithm aims to overcome the well-known shortcomings of common splitting methods by solving the …
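As a self-contained illustration of the "loopless" variance-reduction idea, here is a generic L-SVRG-style gradient estimator on a least-squares toy problem; this is a sketch of the loopless VR mechanism only, not the paper's PDS algorithm, and all names and constants are illustrative:

```python
import numpy as np

# Loopless variance reduction: instead of a fixed-length inner loop (as in
# classic SVRG), the reference point w and its full gradient are refreshed
# with a small probability p at every step.
rng = np.random.default_rng(1)
n, d = 50, 3
A = rng.standard_normal((n, d))
b_vec = A @ np.array([1.0, -2.0, 0.5])   # consistent system: exact minimizer exists

def grad_i(z, i):
    # gradient of the i-th component (1/2)(a_i^T z - b_i)^2
    return (A[i] @ z - b_vec[i]) * A[i]

def full_grad(z):
    return A.T @ (A @ z - b_vec) / n

L_max = max(np.sum(A * A, axis=1))       # largest component smoothness constant
step, p = 1.0 / (6.0 * L_max), 0.1       # conservative step size, refresh probability

x = np.zeros(d)
w = x.copy()
gw = full_grad(w)
for _ in range(10000):
    i = rng.integers(n)
    g = grad_i(x, i) - grad_i(w, i) + gw  # unbiased, variance-reduced estimate
    x = x - step * g
    if rng.random() < p:                  # loopless reference refresh
        w = x.copy()
        gw = full_grad(w)

x_exact = np.linalg.lstsq(A, b_vec, rcond=None)[0]
```

The refresh-with-probability-`p` step replaces SVRG's nested loop structure, which is what "loopless" refers to; the estimator `g` remains unbiased because the correction `gw - grad_i(w, i)` has zero mean.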

Fast convergence of inertial primal-dual dynamics and algorithms for a bilinearly coupled saddle point problem

This paper is devoted to studying the convergence rates of a second-order dynamical system and its corresponding discretization associated with a continuously differentiable bilinearly coupled convex-concave saddle point problem. First, we consider the second-order dynamical system with an asymptotically vanishing damping term and show the existence and uniqueness of the trajectories as global twice continuously differentiable …
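For context, inertial primal-dual dynamics with asymptotically vanishing damping are usually modeled on the following template (a generic form for a saddle function \(\mathcal{L}\), not necessarily the exact system of this paper):

\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla_x \mathcal{L}\big(x(t), y(t)\big) = 0,
\qquad
\ddot{y}(t) + \frac{\alpha}{t}\,\dot{y}(t) - \nabla_y \mathcal{L}\big(x(t), y(t)\big) = 0,
\]

where the damping coefficient \(\alpha/t\) vanishes as \(t \to \infty\), mirroring the continuous-time dynamics behind Nesterov acceleration in the pure minimization setting.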

A generalized asymmetric forward-backward-adjoint algorithm for convex-concave saddle-point problem

The convex-concave minimax problem, known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity. In this paper, we propose a generalized asymmetric forward-backward-adjoint (G-AFBA) algorithm to solve such a problem by utilizing both proximal techniques and the interactive information of primal-dual updates. Besides enjoying …

Graph topology invariant gradient and sampling complexity for decentralized and stochastic optimization

One fundamental problem in decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. We propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. For convex smooth deterministic problems, we propose a primal-dual sliding (PDS) algorithm that computes an $\epsilon$-solution with $O((\tilde{L}/\epsilon)^{1/2})$ gradient …

On the Convergence of Projected Alternating Maximization for Equitable and Optimal Transport

This paper studies the equitable and optimal transport (EOT) problem, which has many applications, such as fair division problems and optimal transport with multiple agents. In the case of discrete distributions, the EOT problem can be formulated as a linear program (LP). Since this LP is prohibitively large for general LP solvers, Scetbon et al. \cite{scetbon2021equitable} …

Equivalent second-order cone programs for distributionally robust zero-sum games

We consider a two-player zero-sum game with stochastic linear constraints. The probability distributions of the vectors associated with the constraints are only partially known; the available information consists mainly of the first two moments. In this vein, we formulate the stochastic linear constraints as distributionally robust chance constraints. We …
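For orientation, the classical moment-based reformulation turns a chance constraint, enforced over all distributions with a given mean \(\mu\) and covariance \(\Sigma\), into a second-order cone constraint (this is the standard result for mean-covariance ambiguity sets; the constants in the paper's setting may differ):

\[
\inf_{\xi \sim (\mu,\, \Sigma)} \mathbb{P}\big(\xi^{\top} x \le b\big) \ge 1 - \varepsilon
\quad \Longleftrightarrow \quad
\mu^{\top} x + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,\big\|\Sigma^{1/2} x\big\|_2 \le b,
\]

which explains how distributionally robust chance constraints lead to second-order cone programs.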

A search direction inspired primal-dual method for saddle point problems

The primal-dual hybrid gradient algorithm (PDHG), which is in fact the Arrow-Hurwicz method, has been widely used in image processing. However, the convergence of PDHG has been established only under restrictive conditions in the literature, and it is still missing for the case without extra constraints. In this paper, from the perspective of the variational …
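For readers unfamiliar with the method, here is a minimal PDHG sketch on a toy strongly convex-concave bilinear saddle point problem; the extrapolation parameter $\theta = 1$ distinguishes PDHG (Chambolle-Pock) from the plain Arrow-Hurwicz method ($\theta = 0$). The toy problem and all names are illustrative, not taken from the paper:

```python
import numpy as np

# Toy bilinearly coupled saddle point:
#   min_x max_y  (1/2)||x - a||^2 + y^T K x - (1/2)||y - b||^2
rng = np.random.default_rng(0)
n, m = 5, 4
K = rng.standard_normal((m, n))
a = rng.standard_normal(n)
b = rng.standard_normal(m)

L = np.linalg.norm(K, 2)      # operator norm of K
tau = sigma = 0.9 / L         # step sizes satisfying tau * sigma * L^2 < 1

x = np.zeros(n)
y = np.zeros(m)
for _ in range(2000):
    x_old = x
    # prox of tau * (1/2)||. - a||^2 has the closed form (v + tau*a) / (1 + tau)
    x = (x - tau * K.T @ y + tau * a) / (1 + tau)
    x_bar = 2 * x - x_old     # extrapolation (theta = 1); dropping it gives Arrow-Hurwicz
    # prox of sigma * (1/2)||. - b||^2
    y = (y + sigma * K @ x_bar + sigma * b) / (1 + sigma)

# First-order optimality: y* = K x* + b and (I + K^T K) x* = a - K^T b
x_exact = np.linalg.solve(np.eye(n) + K.T @ K, a - K.T @ b)
```

Because both coupling functions here are strongly convex, the iterates converge to the unique saddle point, which can be checked against the closed-form solution `x_exact`.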

Stochastic model-based minimization under high-order growth

Given a nonsmooth, nonconvex minimization problem, we consider algorithms that iteratively sample and minimize stochastic convex models of the objective function. Assuming that the one-sided approximation quality and the variation of the models are controlled by a Bregman divergence, we show that the scheme drives a natural stationarity measure to zero at the rate $O(k^{-1/4})$. …
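Schematically, the one-sided model condition in this line of work requires the sampled convex models \(f_x(\cdot, \xi)\) of the objective \(f\) to satisfy (the paper's precise conditions may differ):

\[
\mathbb{E}_{\xi}\big[f_x(y, \xi)\big] - f(y) \;\le\; \tau\, D_{\Phi}(y, x) \qquad \text{for all } x, y,
\]

where \(D_{\Phi}\) is the Bregman divergence of a convex function \(\Phi\); each iteration then minimizes a sampled model plus a proximal Bregman term. Taking \(f_x(y, \xi) = f(x, \xi) + \langle \nabla f(x, \xi), y - x \rangle\) recovers stochastic gradient-type schemes, while prox-linear models handle composite nonsmooth structure.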

Interior Point Methods and Preconditioning for PDE-Constrained Optimization Problems Involving Sparsity Terms

PDE-constrained optimization problems with control or state constraints are challenging from both an analytical and a numerical perspective. The combination of these constraints with a sparsity-promoting $L^1$ term in the objective function requires sophisticated optimization methods. We propose the use of an interior point scheme applied to a smoothed reformulation of the discretized problem, and …