On the Convergence of Constrained Gradient Method

The constrained gradient method (CGM) has recently been proposed to solve convex optimization and monotone variational inequality (VI) problems with general functional constraints. While the existing literature has established convergence results for CGM, the assumptions employed therein are quite restrictive; in some cases, certain assumptions are mutually inconsistent, leaving gaps in the underlying analysis. This … Read more

Min-Max Optimization Is Strictly Easier Than Variational Inequalities

Classically, a mainstream approach for solving a convex-concave min-max problem is to instead solve the variational inequality problem arising from its first-order optimality conditions. Is it possible to solve min-max problems faster by bypassing this reduction? This paper initiates this investigation. We show that the answer is yes in the textbook setting of unconstrained quadratic … Read more
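
For context, the reduction referenced here is the standard one: a convex-concave saddle-point problem is recast as a monotone VI through its gradient operator. The sketch below is textbook material included only to fix ideas, not part of this paper's contribution.

\[
\min_{x}\max_{y}\, f(x,y)
\;\longrightarrow\;
\text{find } z^* = (x^*,y^*) \text{ with } \langle F(z^*),\, z - z^*\rangle \ge 0 \;\;\forall z,
\qquad
F(x,y) = \begin{pmatrix} \nabla_x f(x,y) \\ -\nabla_y f(x,y) \end{pmatrix},
\]

where \(F\) is monotone whenever \(f\) is convex in \(x\) and concave in \(y\); in the unconstrained case the VI condition reduces to \(F(z^*) = 0\). The question posed is whether algorithms that bypass this recasting can be strictly faster.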

A One-Extra Player Reduction of GNEPs to NEPs

It is a common opinion that generalized Nash equilibrium problems are harder than Nash equilibrium problems. In this work, we show that by adding a new player, it is possible to reduce many generalized problems to standard equilibrium problems. The reduction holds for linear problems and for smooth convex problems satisfying a Slater-type condition. We also derive … Read more
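
For readers unfamiliar with the distinction, the two problem classes can be contrasted compactly; these are the standard definitions, included here only for context.

\[
\text{NEP: } x_i^* \in \arg\min_{x_i \in X_i} \theta_i(x_i, x_{-i}^*) \;\;\forall i,
\qquad
\text{GNEP: } x_i^* \in \arg\min_{x_i \in X_i(x_{-i}^*)} \theta_i(x_i, x_{-i}^*) \;\;\forall i,
\]

so in a GNEP each player's feasible set \(X_i(x_{-i})\) depends on the rivals' strategies, whereas in an NEP it is fixed; it is this coupling of feasible sets that makes generalized problems harder in general.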

Lyapunov-based Analysis on First Order Method for Composite Strong-Weak Convex Functions

Nesterov’s accelerated gradient (NAG) method generalizes the classical gradient descent algorithm, improving the convergence rate from $\mathcal{O}\left(\frac{1}{t}\right)$ to $\mathcal{O}\left(\frac{1}{t^2}\right)$ in convex optimization. This study examines the proximal gradient framework for additively separable composite functions with smooth and non-smooth components. We demonstrate that Nesterov’s accelerated proximal gradient (NAPG$_\alpha$) method attains a convergence rate of … Read more
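
For reference, a standard accelerated proximal gradient iteration for $\min_x f(x) + g(x)$, with $f$ smooth and $g$ non-smooth but proximable, takes the following form. This is a generic sketch with the usual $\alpha$-parametrized momentum; the exact NAPG$_\alpha$ scheme analyzed in the paper may differ in its details.

\[
y_k = x_k + \frac{k-1}{k+\alpha-1}\,(x_k - x_{k-1}),
\qquad
x_{k+1} = \operatorname{prox}_{\lambda g}\!\bigl(y_k - \lambda \nabla f(y_k)\bigr),
\]

where $\operatorname{prox}_{\lambda g}(v) = \arg\min_u \{\, g(u) + \tfrac{1}{2\lambda}\|u-v\|^2 \,\}$ and $\lambda \le 1/L$ for $L$-smooth $f$. With $\alpha = 3$ this recovers the classical FISTA-type momentum weights.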

A Practical Adaptive Subgame Perfect Gradient Method

We present a performant gradient method for smooth convex optimization, drawing inspiration from several recent advances in the field. Our algorithm, the Adaptive Subgame Perfect Gradient Method (ASPGM), is based on the notion of subgame perfection, attaining a dynamic strengthening of minimax optimality. At each iteration, ASPGM makes a momentum-type update, optimized dynamically based on … Read more

Convexification of a Separable Function over a Polyhedral Ground Set

In this paper, we study the set \(\mathcal{S}^\kappa = \{ (x,y)\in\mathcal{G}\times\mathbb{R}^n : y_j = x_j^\kappa , j=1,\dots,n\}\), where \(\kappa > 1\) and the ground set \(\mathcal{G}\) is a nonempty polytope contained in \( [0,1]^n\). This nonconvex set is closely related to separable standard quadratic programming and appears as a substructure in potential-based network flow problems … Read more
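
A one-dimensional instance illustrates the kind of set being convexified; the computation below is standard and is shown only to fix ideas. For $n = 1$, $\kappa = 2$, and $\mathcal{G} = [0,1]$,

\[
\mathcal{S}^2 = \{(x,y) \in [0,1]\times\mathbb{R} : y = x^2\},
\qquad
\operatorname{conv}(\mathcal{S}^2) = \{(x,y) : x^2 \le y \le x\},
\]

the region between the curve $y = x^2$ and its secant $y = x$. The paper studies the analogous convexification when the ground set is a general polytope contained in $[0,1]^n$.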

Optimization in Theory and Practice

Algorithms for continuous optimization problems have a rich history of design and innovation over the past several decades, in which mathematical analysis of their convergence and complexity properties plays a central role. Beyond their theoretical properties, optimization algorithms are also interesting for their practical usefulness as computational tools for solving real-world problems. There are often … Read more

Moment-SOS and spectral hierarchies for polynomial optimization on the sphere and quantum de Finetti theorems

We revisit the convergence analysis of two approximation hierarchies for polynomial optimization on the unit sphere. The first is based on the moment-SOS approach and gives semidefinite bounds, for which Fang and Fawzi (2021) established an \(O(1/r^2)\) analysis of the r-th level bound using the polynomial kernel method. The second hierarchy was recently … Read more
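
For context, the r-th level sum-of-squares bound on the unit sphere \(S^{n-1}\) that such analyses concern can be written as follows; this is one standard formulation, and the paper's normalization may differ.

\[
p^{(r)} \;=\; \sup\Bigl\{\, \lambda \in \mathbb{R} \;:\; p - \lambda = \sigma + q\,\bigl(1 - \|x\|^2\bigr),\;\; \sigma \text{ SOS},\; \deg \sigma \le 2r \,\Bigr\}
\;\le\; \min_{x \in S^{n-1}} p(x),
\]

and the convergence question is how quickly the gap \(\min_{S^{n-1}} p - p^{(r)}\) shrinks as \(r\) grows; the cited analysis bounds this gap by \(O(1/r^2)\).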

Bregman Douglas-Rachford Splitting Method

In this paper, we propose the Bregman Douglas-Rachford splitting (BDRS) method and its variant, the Bregman Peaceman-Rachford splitting method, for solving maximal monotone inclusion problems. We show that BDRS is equivalent to a Bregman alternating direction method of multipliers (ADMM) when applied to the dual of the problem. A special case of the Bregman ADMM is … Read more
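
As a point of reference, the classical (Euclidean) Douglas-Rachford splitting iteration for finding $x$ with $0 \in A(x) + B(x)$, where $A$ and $B$ are maximal monotone, reads as follows; the Bregman variant studied here replaces the quadratic term in the resolvents with a Bregman distance, and this baseline is shown only for orientation.

\[
x^k = J_{\gamma B}(z^k),
\qquad
y^k = J_{\gamma A}\bigl(2x^k - z^k\bigr),
\qquad
z^{k+1} = z^k + y^k - x^k,
\]

where $J_{\gamma A} = (I + \gamma A)^{-1}$ denotes the resolvent. When $A$ and $B$ are subdifferentials, the resolvents become proximal operators, and the classical fact that DRS applied to the dual problem coincides with ADMM mirrors the BDRS–Bregman ADMM equivalence stated above.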

Gradient Methods with Online Scaling Part II. Practical Aspects

Part I of this work [Gao25] establishes online scaled gradient methods (OSGM), a framework that utilizes online convex optimization to adapt stepsizes in gradient methods. This paper focuses on the practical aspects of OSGM. We leverage the OSGM framework to design new adaptive first-order methods and provide insights into their empirical behavior. The resulting method, … Read more
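
Schematically, the framework adapts a scaling (stepsize) matrix by running online gradient descent on a per-iteration surrogate that measures progress. One simple instantiation, written here only to convey the structure and not necessarily the exact scheme of the paper, is:

\[
x_{k+1} = x_k - P_k \nabla f(x_k),
\qquad
P_{k+1} = P_k - \eta\, \nabla_P\, \ell_k(P_k),
\qquad
\ell_k(P) = f\bigl(x_k - P \nabla f(x_k)\bigr),
\]

so the scaling \(P_k\) is itself learned online from the observed one-step behavior, which is the sense in which online convex optimization is used to adapt stepsizes in gradient methods.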