Accessible Theoretical Complexity of the Restarted Primal-Dual Hybrid Gradient Method for Linear Programs with Unique Optima

The restarted primal-dual hybrid gradient method (rPDHG) has recently emerged as an important tool for solving large-scale linear programs (LPs). For LPs with unique optima, we present an iteration bound of \(\widetilde{O}\left(\kappa\Phi\cdot\ln\left(\frac{\|w^*\|}{\varepsilon}\right)\right)\), where \(\varepsilon\) is the target tolerance, \(\kappa\) is the standard matrix condition number, \(\|w^*\|\) is the norm of the optimal solution, and … Read more
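
For intuition, here is a minimal, illustrative sketch of PDHG with restarts applied to a standard-form LP \(\min\{c^\top x : Ax = b,\ x \ge 0\}\), written against the saddle-point form \(\min_{x\ge 0}\max_y\ c^\top x + y^\top(b - Ax)\). It restarts from running averages at a fixed frequency purely for illustration; the rPDHG analyzed in the paper restarts adaptively, and all names and parameters below are placeholders rather than the paper's algorithm.

```python
import numpy as np

def restarted_pdhg(A, b, c, n_restarts=20, inner_iters=500):
    """Illustrative restarted-PDHG sketch for the LP
        min c^T x  s.t.  Ax = b, x >= 0,
    via the saddle point  min_{x>=0} max_y  c^T x + y^T (b - Ax).
    Fixed restart frequency for simplicity; the paper's rPDHG restarts
    adaptively based on progress measures."""
    m, n = A.shape
    tau = sigma = 0.9 / np.linalg.norm(A, 2)   # ensures tau * sigma * ||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_restarts):
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        for k in range(inner_iters):
            x_new = np.maximum(0.0, x - tau * (c - A.T @ y))  # primal step, projected onto x >= 0
            y = y + sigma * (b - A @ (2.0 * x_new - x))       # dual step with extrapolated primal
            x = x_new
            x_avg += (x - x_avg) / (k + 1)                    # running (ergodic) averages
            y_avg += (y - y_avg) / (k + 1)
        x, y = x_avg, y_avg                                   # restart from the averages
    return x, y
```

The step sizes satisfy the standard PDHG condition \(\tau\sigma\|A\|^2 \le 1\); restarting from the averaged iterates is the mechanism that restarted PDHG exploits to obtain linear convergence on LPs, which is the regime the bound above quantifies.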

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that a simple first-order method finds a feasible, \(\epsilon\)-stationary point at a convergence rate of \(O(\epsilon^{-4})\) without relying on compactness or Constraint Qualification (CQ). When CQ holds, this convergence is measured by … Read more

A first-order augmented Lagrangian method for constrained minimax optimization

In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be much simpler structured minimax problems that are suitably solved by a first-order method recently developed in [26] by the authors. Under some suitable assumptions, … Read more
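
As a rough illustration of the outer structure only (not the paper's algorithm), the sketch below applies an augmented Lagrangian loop to a minimax problem with equality constraints on \(x\), i.e. \(\min_x\max_y f(x,y)\) s.t. \(g(x)=0\). Each subproblem is then an unconstrained minimax problem, solved here crudely by gradient descent-ascent as a stand-in for the first-order subproblem solver of [26]. All function names, the penalty parameter, and the step sizes are hypothetical.

```python
import numpy as np

def al_minimax(f_grad_x, f_grad_y, g, g_jac, x0, y0,
               rho=10.0, step=1e-2, outer_iters=20, inner_iters=200):
    """Illustrative augmented Lagrangian (AL) outer loop for
        min_x max_y f(x, y)   s.t.   g(x) = 0,
    where g(x) returns a 1-D array of shape (m,) and g_jac(x) its (m, n) Jacobian.
    Each AL subproblem
        min_x max_y  f(x, y) + lam^T g(x) + (rho/2) ||g(x)||^2
    is an unconstrained minimax problem, solved here by plain gradient
    descent-ascent purely for illustration."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    lam = np.zeros_like(g(x))
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            gx = f_grad_x(x, y) + g_jac(x).T @ (lam + rho * g(x))  # grad_x of the AL subproblem
            gy = f_grad_y(x, y)                                    # grad_y of the AL subproblem
            x = x - step * gx                                      # descent step in x
            y = y + step * gy                                      # ascent step in y
        lam = lam + rho * g(x)                                     # multiplier (dual) update
    return x, y, lam
```

The point of the sketch is the structural claim in the abstract: the constraints are absorbed into the augmented Lagrangian, so each outer iteration only requires solving a simpler minimax problem with a first-order method.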

Accelerated first-order methods for a class of semidefinite programs

This paper introduces a new storage-optimal first-order method (FOM), CertSDP, for solving a special class of semidefinite programs (SDPs) to high accuracy. The class of SDPs that we consider, the exact QMP-like SDPs, is characterized by low-rank solutions, a priori knowledge of the restriction of the SDP solution to a small subspace, and standard … Read more

Limited-memory Common-directions Method for Large-scale Optimization: Convergence, Parallelization, and Distributed Optimization

In this paper, we present a limited-memory common-directions method for smooth optimization that interpolates between first- and second-order methods. At each iteration, a subspace of limited dimension is constructed using first-order information from previous iterations, and an efficient Newton method is deployed to find an approximate minimizer within this subspace. With … Read more
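
Below is a minimal sketch of the common-directions idea, under the assumption of a small dense problem where the Hessian can be formed explicitly: store a few recent directions as the columns of a matrix \(P\), solve the Newton problem restricted to \(\mathrm{span}(P)\), and step along \(Pt\). The paper's method additionally uses line search, careful management of the stored directions, and Hessian-vector products instead of explicit Hessians; everything named here is illustrative.

```python
import numpy as np

def limited_memory_common_directions(grad, hess, w0, memory=5, iters=50, tol=1e-8):
    """Illustrative common-directions sketch: keep a small set P of recent
    directions (past gradients and steps), solve the Newton problem restricted
    to span(P),
        min_t  grad(w)^T P t + 0.5 * t^T (P^T hess(w) P) t,
    and update w <- w + P t.  A full Hessian is formed here for clarity;
    practical variants only need the products P^T (H P)."""
    w = np.asarray(w0, float)
    dirs = []                                   # limited-memory store of directions
    for _ in range(iters):
        g = grad(w)
        if np.linalg.norm(g) < tol:
            break
        dirs = (dirs + [g])[-memory:]           # remember the current gradient
        P = np.stack(dirs, axis=1)              # n x m matrix of common directions
        H = hess(w)
        # Small m x m Newton system in the subspace spanned by the columns of P.
        t = np.linalg.lstsq(P.T @ H @ P, -P.T @ g, rcond=None)[0]
        step = P @ t
        w = w + step
        dirs = (dirs + [step])[-memory:]        # also remember the step taken
    return w

# Tiny usage example: minimize the strictly convex quadratic 0.5 w^T Q w - b^T w.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
w_star = limited_memory_common_directions(lambda w: Q @ w - b, lambda w: Q, np.zeros(2))
```

Because the search directions come from first-order information while the small subspace problem is solved with second-order (Newton) machinery, the iteration cost stays close to that of a first-order method while the per-iteration progress can approach that of Newton's method, which is the interpolation the abstract refers to.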