Negative Stepsizes Make Gradient-Descent-Ascent Converge

Efficient computation of min-max problems is a central question in optimization, learning, games, and control. Arguably the most natural algorithm is gradient-descent-ascent (GDA). However, since the 1970s, conventional wisdom has held that GDA fails to converge even on simple problems. This failure spurred an extensive literature on modifying GDA with additional building blocks such as … Read more
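
As a concrete illustration of the failure mode the abstract alludes to, here is a minimal numpy sketch (not from the paper) of simultaneous GDA on the classic bilinear saddle problem \(f(x,y)=xy\): every constant positive stepsize makes the iterates spiral away from the saddle point at the origin. The negative-stepsize schedule from the paper is not reproduced here.

    import numpy as np

    # Simultaneous GDA on f(x, y) = x * y:
    #   x <- x - eta * df/dx = x - eta * y   (descent on x)
    #   y <- y + eta * df/dy = y + eta * x   (ascent on y)
    # The update matrix [[1, -eta], [eta, 1]] scales every vector by
    # sqrt(1 + eta^2) > 1, so the iterates spiral outward.
    eta = 0.1
    x, y = 1.0, 1.0
    for _ in range(100):
        x, y = x - eta * y, y + eta * x

    print(np.hypot(x, y))  # ~ 2.33, up from the initial sqrt(2) ~ 1.41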

On the Acceleration of Proximal Bundle Methods

The proximal bundle method (PBM) is a fundamental and computationally effective algorithm for solving nonsmooth optimization problems. In this paper, we present the first variant of the PBM for smooth objectives, achieving an accelerated convergence rate of \(O(\frac{1}{\sqrt{\epsilon}}\log(\frac{1}{\epsilon}))\), where \(\epsilon\) is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of … Read more
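
For orientation, below is a minimal sketch of one classical proximal bundle iteration (the basic, unaccelerated method, not the paper's variant): the objective is underestimated by a bundle of cutting planes built from subgradients, and each candidate minimizes that model plus a proximal term. The subproblem is solved here with scipy's SLSQP purely for simplicity.

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: abs(x - 1.0)            # nonsmooth convex test function
    subgrad = lambda x: np.sign(x - 1.0)  # one of its subgradients

    rho, center, cand = 1.0, 5.0, 5.0     # proximal weight, stability center
    cuts = []                             # affine minorants (a, b): f(x) >= a + b*x

    for _ in range(20):
        cuts.append((f(cand) - subgrad(cand) * cand, subgrad(cand)))
        # Bundle subproblem over z = (x, t):
        #   min t + (rho/2)(x - center)^2   s.t.   t >= a + b*x for all cuts
        res = minimize(lambda z: z[1] + 0.5 * rho * (z[0] - center) ** 2,
                       x0=[cand, f(cand)], method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda z, a=a, b=b: z[1] - a - b * z[0]}
                                    for a, b in cuts])
        cand = res.x[0]
        if f(cand) < f(center):           # serious step: move the center
            center = cand
        # otherwise: null step, the new cut alone sharpens the model

    print(center)  # -> 1.0, the minimizer of f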

An iterative process for the feasibility-seeking problem with sets that are unions of convex sets

In this paper we deal with the feasibility-seeking problem for unions of convex sets (UCS) and propose an iterative process for its solution. Renewed interest in this problem stems from the fact that it was recently discovered to serve as a modeling approach in several fields of application and from ongoing research efforts … Read more
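
To make the setting concrete, the simplest heuristic for this problem is alternating nearest-point projections, where projecting onto a UCS means projecting onto each convex piece and keeping the closest result; since unions are nonconvex, such a scheme is at best locally convergent. The sketch below is illustrative only and is not the process proposed in the paper.

    def proj_interval(x, lo, hi):
        return min(max(x, lo), hi)

    def proj_union(x, intervals):
        # Nearest point of a union of intervals: project onto each
        # convex piece, keep the candidate closest to x.
        return min((proj_interval(x, lo, hi) for lo, hi in intervals),
                   key=lambda p: abs(p - x))

    A = [(-3.0, -1.0), (2.0, 4.0)]   # union of two intervals
    B = [(0.0, 1.0), (3.0, 5.0)]     # A and B intersect in [3, 4]

    x = 6.0
    for _ in range(50):
        x = proj_union(proj_union(x, A), B)

    print(x)  # -> 4.0, a point of the intersection (for this start)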

Extending Exact Convex Relaxations of Quadratically Constrained Quadratic Programs

A convex relaxation of a quadratically constrained quadratic program (QCQP) is called exact if it has a rank-$1$ optimal solution that corresponds to an optimal solution of the QCQP. Given a QCQP whose convex relaxation is exact, this paper investigates the incorporation of additional quadratic inequality constraints under a non-intersecting quadratic constraint condition while maintaining … Read more
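
A standard example of an exact relaxation is the Shor (SDP) relaxation of the trust-region subproblem. The sketch below (assuming cvxpy with its bundled SDP solver is available) solves the lifted problem and checks the rank-$1$ certificate numerically; the paper's procedure for adding further quadratic constraints is not reproduced.

    import cvxpy as cp
    import numpy as np

    # Trust-region subproblem: min x'Ax + 2 b'x  s.t.  ||x||^2 <= 1.
    # Its Shor relaxation is known to be exact, so the optimal lifted
    # matrix should have rank one.
    A = np.array([[1.0, 0.5], [0.5, -2.0]])
    b = np.array([0.3, -0.1])
    n = 2

    Y = cp.Variable((n + 1, n + 1), symmetric=True)  # lifted [[1, x'], [x, X]]
    X, x = Y[1:, 1:], Y[1:, 0]
    prob = cp.Problem(cp.Minimize(cp.trace(A @ X) + 2 * b @ x),
                      [Y >> 0, Y[0, 0] == 1, cp.trace(X) <= 1])
    prob.solve()

    # Rank-1 check: Y has one nonzero eigenvalue exactly when
    # X = x x', i.e. the relaxation is exact.
    print(np.linalg.eigvalsh(Y.value))  # one dominant eigenvalue, rest ~ 0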

qpBAMM: a parallelizable ADMM approach for block-structured quadratic programs

Block-structured quadratic programs (QPs) frequently arise in the context of the direct approach to solving optimal control problems. For the successful application of direct optimal control algorithms to many real-world problems, it is paramount that these QPs can be solved efficiently and reliably. Besides interior-point methods and active-set methods, ADMM-based quadratic programming approaches have gained popularity. … Read more
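
As background, here is a bare-bones dense ADMM iteration for a QP in the form \(\min \frac{1}{2}x^{\top}Px + q^{\top}x\) subject to \(l \le Ax \le u\) (the splitting popularized by OSQP); the block-structure exploitation and parallelism that qpBAMM targets are beyond this toy.

    import numpy as np

    # QP: min 0.5 x'Px + q'x   s.t.   l <= A x <= u
    P = np.array([[4.0, 1.0], [1.0, 2.0]])
    q = np.array([1.0, 1.0])
    A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    l = np.array([1.0, 0.0, 0.0])
    u = np.array([1.0, 0.7, 0.7])

    rho, sigma = 1.0, 1e-6
    m, n = A.shape
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)  # primal, slack, dual

    # The KKT matrix of the x-update never changes, so form it once.
    K = np.block([[P + sigma * np.eye(n), A.T],
                  [A, -np.eye(m) / rho]])

    for _ in range(300):
        rhs = np.concatenate([sigma * x - q, z - y / rho])
        sol = np.linalg.solve(K, rhs)
        x, nu = sol[:n], sol[n:]
        z_tilde = z + (nu - y) / rho
        z = np.clip(z_tilde + y / rho, l, u)   # project onto the box
        y = y + rho * (z_tilde - z)            # dual update

    print(x)  # -> approx [0.3, 0.7], the QP solution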

Local Linear Convergence of the Alternating Direction Method of Multipliers for Semidefinite Programming under Strict Complementarity

We investigate the local linear convergence properties of the Alternating Direction Method of Multipliers (ADMM) when applied to Semidefinite Programming (SDP). A longstanding belief suggests that ADMM is only capable of solving SDPs to moderate accuracy, primarily due to its sublinear worst-case complexity and empirical observations of slow convergence. We challenge this notion by introducing … Read more
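
For reference, the step that dominates each ADMM iteration on an SDP is the Euclidean projection onto the PSD cone, computed from an eigendecomposition (strict complementarity is, roughly, a condition on the eigenvalue structure at the solution). A minimal sketch of that projection follows; the paper's full splitting is not reproduced.

    import numpy as np

    def proj_psd(S):
        # Euclidean projection onto the PSD cone: symmetrize,
        # then keep the nonnegative part of the spectrum.
        w, V = np.linalg.eigh((S + S.T) / 2)
        return (V * np.maximum(w, 0.0)) @ V.T

    S = np.array([[1.0, 2.0], [2.0, -3.0]])   # indefinite input
    print(np.linalg.eigvalsh(proj_psd(S)))    # all eigenvalues >= 0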

A Rank-One-Update Method for the Training of Support Vector Machines

This paper considers convex quadratic programs associated with the training of support vector machines (SVM). Exploiting the special structure of the SVM problem, a new type of active-set method with long cycles and stable rank-one updates is proposed and tested (CMU: cycling method with updates). The structure of the problem allows for a repeated simple … Read more
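
Context for the rank-one updates: when the active set changes by one index, the inverse (or factorization) of the working matrix can be refreshed in \(O(n^2)\) instead of refactorized in \(O(n^3)\). The Sherman–Morrison identity below is the textbook version of this primitive, not necessarily the paper's own update scheme.

    import numpy as np

    def sherman_morrison(Ainv, u, v):
        # Inverse of A + u v' from Ainv = A^{-1} in O(n^2).
        Au = Ainv @ u
        return Ainv - np.outer(Au, v @ Ainv) / (1.0 + v @ Au)

    rng = np.random.default_rng(0)
    A = 5 * np.eye(4) + rng.standard_normal((4, 4))
    u, v = rng.standard_normal(4), rng.standard_normal(4)

    updated = sherman_morrison(np.linalg.inv(A), u, v)
    print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))  # True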

A Universally Optimal Primal-Dual Method for Minimizing Heterogeneous Compositions

This paper proposes a universal, optimal algorithm for convex minimization problems of the composite form $g_0(x)+h(g_1(x),\dots, g_m(x)) + u(x)$. We allow each $g_j$ to independently range from being nonsmooth Lipschitz to smooth, from convex to strongly convex, described by notions of H\"older continuous gradients and uniform convexity. Note that, although the objective is built from … Read more
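
Two familiar instances may help parse this composite form: choosing \(h(z_1,\dots,z_m)=\sum_{j} z_j\) recovers additive composite minimization, \(\min_x g_0(x)+\sum_{j} g_j(x)+u(x)\), while choosing \(h(z_1,\dots,z_m)=\max_{j} z_j\) recovers the min-max problem \(\min_x g_0(x)+\max_{1\le j\le m} g_j(x)+u(x)\).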

Interior-point algorithms with full Newton steps for nonsymmetric convex conic optimization

We design and analyze primal-dual, feasible interior-point algorithms (IPAs) employing full Newton steps to solve convex optimization problems in standard conic form. Unlike most nonsymmetric cone programming methods, the algorithms presented in this paper require only a logarithmically homogeneous self-concordant barrier (LHSCB) of the primal cone, but compute feasible and \(\varepsilon\)-optimal solutions to both the … Read more
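
To fix ideas, here is a toy full-Newton-step path-following iteration for linear programming over the nonnegative orthant, whose LHSCB is \(-\sum_i \log x_i\); the nonsymmetric-cone machinery of the paper goes well beyond this sketch.

    import numpy as np

    # Central path of  min c'x  s.t.  A x = b, x >= 0:
    #   x(mu) = argmin c'x - mu * sum_i log x_i   s.t.   A x = b.
    # Each outer step shrinks mu and takes one full (undamped)
    # Newton step on this barrier subproblem.
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([1.0, 2.0, 3.0])
    n, m = 3, 1

    x = np.full(n, 1.0 / 3.0)   # strictly feasible start (A x = b, x > 0)
    mu = 1.0
    for _ in range(40):
        g = c - mu / x                     # gradient of barrier objective
        H = np.diag(mu / x**2)             # Hessian of barrier objective
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        dx = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
        x = x + dx                         # full Newton step, no line search
        mu *= 0.8                          # cautious barrier-parameter decrease

    print(x)  # -> approaches the LP solution (1, 0, 0)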

Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent

This paper investigates the convergence properties of the hypergradient descent method (HDM), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of HDM using the online learning framework of [Gao24] and apply this analysis to develop new state-of-the-art adaptive gradient methods with empirical and … Read more
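
For reference, the classic hypergradient rule underlying HDM treats the stepsize itself as a parameter: since \(x_t = x_{t-1} - \alpha\,\nabla f(x_{t-1})\), the derivative of \(f(x_t)\) with respect to \(\alpha\) is \(-\nabla f(x_t)^{\top}\nabla f(x_{t-1})\), and \(\alpha\) is stepped against it. The numpy sketch below shows that rule on a toy quadratic; it is not the specific variant analyzed in the paper.

    import numpy as np

    grad = lambda x: x * np.array([1.0, 10.0])  # gradient of 0.5(x1^2 + 10 x2^2)

    x = np.array([1.0, 1.0])
    alpha, beta = 0.01, 1e-4    # initial stepsize, hyper-stepsize
    g_prev = grad(x)

    for _ in range(200):
        x = x - alpha * g_prev
        g = grad(x)
        alpha += beta * (g @ g_prev)  # hypergradient step on alpha
        g_prev = g

    print(x, alpha)  # x near the minimizer 0; alpha adapted upward online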