A Practical Adaptive Subgame Perfect Gradient Method

We present a performant gradient method for smooth convex optimization, drawing inspiration from several recent advances in the field. Our algorithm, the Adaptive Subgame Perfect Gradient Method (ASPGM), is based on the notion of subgame perfection, attaining a dynamic strengthening of minimax optimality. At each iteration, ASPGM makes a momentum-type update, optimized dynamically based on …
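
The truncated abstract above mentions a momentum-type update with dynamically chosen parameters. As a rough illustration only (the actual ASPGM update rule and its dynamic parameter choice are not reproduced here), a generic momentum-type gradient step on a toy quadratic looks as follows; the stepsize alpha and momentum weight beta are fixed, hypothetical values.

import numpy as np

# Generic momentum-type update
#   x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1}),
# shown with fixed, hypothetical coefficients on f(x) = 0.5 x^T A x.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x

alpha, beta = 0.05, 0.9
x_prev = x = np.array([1.0, 1.0])
for _ in range(200):
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
print(x)  # approaches the minimizer x* = 0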

Convexification of a Separable Function over a Polyhedral Ground Set

In this paper, we study the set \(\mathcal{S}^\kappa = \{ (x,y)\in\mathcal{G}\times\mathbb{R}^n : y_j = x_j^\kappa , j=1,\dots,n\}\), where \(\kappa > 1\) and the ground set \(\mathcal{G}\) is a nonempty polytope contained in \( [0,1]^n\). This nonconvex set is closely related to separable standard quadratic programming and appears as a substructure in potential-based network flow problems …
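
For a one-dimensional illustration of the object being convexified (not an example taken from the paper), take \(n = 1\), \(\mathcal{G} = [0,1]\), and \(\kappa = 2\):

\[
\mathcal{S}^{2} = \bigl\{(x,y)\in[0,1]\times\mathbb{R} : y = x^{2}\bigr\},
\qquad
\operatorname{conv}\mathcal{S}^{2} = \bigl\{(x,y) : 0 \le x \le 1,\ x^{2} \le y \le x\bigr\},
\]

i.e., the region between the curve \(y = x^{2}\) and the secant \(y = x\). Describing \(\operatorname{conv}\mathcal{S}^{\kappa}\) for a general polytope \(\mathcal{G}\) and \(\kappa > 1\) is the question studied in the paper.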

Optimization in Theory and Practice

Algorithms for continuous optimization problems have a rich history of design and innovation over the past several decades, in which mathematical analysis of their convergence and complexity properties plays a central role. Beyond their theoretical properties, optimization algorithms are also interesting for their practical usefulness as computational tools for solving real-world problems. There are often …

Moment-SOS and spectral hierarchies for polynomial optimization on the sphere and quantum de Finetti theorems

We revisit the convergence analysis of two approximation hierarchies for polynomial optimization on the unit sphere. The first one is based on the moment-SOS approach and gives semidefinite bounds, for which Fang and Fawzi (2021) showed a convergence analysis in \(O(1/r^2)\) for the \(r\)-th level bound, using the polynomial kernel method. The second hierarchy was recently …
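
For context, one common way to write the level-\(r\) sum-of-squares bound for minimizing a polynomial \(p\) over the unit sphere (degree conventions vary across papers, so this is only a sketch) is

\[
p^{(r)} \;=\; \sup\Bigl\{\lambda\in\mathbb{R} : p - \lambda = \sigma + q\,\bigl(1 - \|x\|^{2}\bigr),\ \sigma \text{ a sum of squares},\ \deg\sigma \le 2r,\ q\in\mathbb{R}[x]\Bigr\}
\;\le\; \min_{\|x\|=1} p(x),
\]

and the \(O(1/r^{2})\) analysis referenced above bounds the gap \(\min_{\|x\|=1} p(x) - p^{(r)}\).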

Bregman Douglas-Rachford Splitting Method

In this paper, we propose the Bregman Douglas-Rachford splitting (BDRS) method and its variant, the Bregman Peaceman-Rachford splitting method, for solving maximal monotone inclusion problems. We show that BDRS is equivalent to a Bregman alternating direction method of multipliers (ADMM) when applied to the dual of the problem. A special case of the Bregman ADMM is …
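
For reference (this is the classical Euclidean method, not the Bregman variant proposed above), the Douglas–Rachford iteration for finding a zero of \(A + B\) reads

\[
x^{k} = J_{\gamma A}\bigl(z^{k}\bigr), \qquad
y^{k} = J_{\gamma B}\bigl(2x^{k} - z^{k}\bigr), \qquad
z^{k+1} = z^{k} + y^{k} - x^{k},
\]

where \(J_{\gamma A} = (I + \gamma A)^{-1}\) and \(J_{\gamma B} = (I + \gamma B)^{-1}\) are the resolvents with parameter \(\gamma > 0\). A Bregman version would replace these Euclidean resolvents with resolvents defined through a Bregman distance; the precise construction used in the paper is not reproduced here.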

Gradient Methods with Online Scaling Part II. Practical Aspects

Part I of this work [Gao25] establishes online scaled gradient methods (OSGM), a framework that utilizes online convex optimization to adapt stepsizes in gradient methods. This paper focuses on the practical aspects of OSGM. We leverage the OSGM framework to design new adaptive first-order methods and provide insights into their empirical behavior. The resulting method, …
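
The exact surrogate losses and scaling matrices of OSGM are developed in the papers; the following is only a scalar-stepsize caricature of the underlying idea of adapting a stepsize with an online (hypergradient-style) update, with all constants hypothetical.

import numpy as np

# Scalar-stepsize caricature of online stepsize adaptation (not the OSGM
# method itself): after each gradient step, nudge the stepsize by an online
# gradient step driven by the classical hypergradient signal g_k . g_{k-1}.
A = np.diag([1.0, 2.0, 4.0])
grad = lambda x: A @ x          # gradient of f(x) = 0.5 x^T A x

x = np.ones(3)
alpha, beta = 0.01, 1e-3        # hypothetical initial stepsize and meta-stepsize
g_prev = grad(x)
for _ in range(300):
    g = grad(x)
    alpha = min(max(alpha + beta * (g @ g_prev), 1e-6), 0.4)  # keep the stepsize in a safe range
    x = x - alpha * g
    g_prev = g
print(alpha, np.linalg.norm(x))  # the adapted stepsize and the (small) residual norm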

On the convergence rate of the Douglas-Rachford splitting algorithm

This work is concerned with the convergence rate analysis of the Douglas–Rachford splitting (DRS) method for finding a zero of the sum of two maximally monotone operators. We obtain an exact rate of convergence for the DRS algorithm and demonstrate its sharpness in the setting of convex feasibility problems. Furthermore, we investigate the …
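
As a small, self-contained illustration of the convex feasibility setting mentioned above (a generic sketch, unrelated to the exact rates in the paper), Douglas–Rachford splitting applied to two convex sets in \(\mathbb{R}^2\) uses the projections onto the sets as resolvents:

import numpy as np

# Douglas-Rachford splitting for the feasibility problem "find a point in the
# intersection of C1 and C2", with C1 the unit ball and C2 the line y = 0.5;
# here the resolvents reduce to Euclidean projections onto the two sets.
def proj_ball(z, radius=1.0):
    n = np.linalg.norm(z)
    return z if n <= radius else radius * z / n

def proj_line(z):
    return np.array([z[0], 0.5])

z = np.array([3.0, -2.0])
for _ in range(200):
    x = proj_ball(z)            # resolvent of the normal cone operator of C1
    y = proj_line(2 * x - z)    # resolvent of the normal cone operator of C2
    z = z + y - x
print(proj_ball(z))             # the "shadow" point, (approximately) in the intersection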

Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications

In Online Convex Optimization (OCO), when the stochastic gradient has finite variance, many algorithms provably work and guarantee sublinear regret. However, few results are known when the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite \(\mathsf{p}\)-th central moment for some \(\mathsf{p}\in\left(1,2\right]\). Motivated by this, the present work examines …
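
As a quick numerical illustration of the assumption (not taken from the paper): a Pareto distribution with tail index 1.5 has finite moments of order \(\mathsf{p} < 1.5\) but infinite variance, so it satisfies the finite \(\mathsf{p}\)-th moment condition with \(\mathsf{p}\in(1,1.5)\) while violating the usual finite-variance assumption.

import numpy as np

# Classical Pareto samples with tail index 1.5: finite p-th moment for p < 1.5,
# infinite variance. The empirical p-th central moment estimates a finite
# quantity, while the empirical variance diverges as the sample size grows
# because the true variance is infinite.
rng = np.random.default_rng(0)
p = 1.2
for n in (10**4, 10**5, 10**6):
    s = rng.pareto(1.5, size=n) + 1.0   # classical Pareto samples on [1, inf)
    print(n, np.mean(np.abs(s - s.mean()) ** p), s.var())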

On the Resolution of Ties in Fair Convex Allocation Problems

We study the emergence of indistinguishable, but structurally distinct, allocation outcomes in convex resource allocation models. Such outcomes occur when different users receive proportionally identical allocations despite differences in initial conditions, eligibility sets, or priority weights. We formalize this behavior and analyze the structural conditions under which it arises, with a focus on fairness-oriented objectives. …
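
As a minimal illustration of how such ties can arise (a standard proportional-fairness example, not one taken from the paper), consider allocating a divisible resource of size \(C\) by solving

\[
\max_{x \ge 0}\ \sum_{i} w_i \log x_i
\quad\text{s.t.}\quad \sum_{i} x_i \le C,
\qquad\text{with optimal solution}\qquad
x_i^{\star} = \frac{w_i}{\sum_j w_j}\, C.
\]

Any two users with equal weights therefore receive identical allocations even if their individual caps or eligibility sets differ, as long as those constraints are not active at the optimum.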

A Primal Approach to Facial Reduction for SDP Relaxations of Combinatorial Optimization Problems

We propose a novel facial reduction algorithm tailored to semidefinite programming relaxations of combinatorial optimization problems with quadratic objective functions. Our method leverages the specific structure of these relaxations, particularly the availability of feasible solutions that can often be generated efficiently in practice. By incorporating such solutions into the facial reduction process, we substantially simplify …
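
A hedged numpy sketch of the generic linear-algebra step that primal facial-reduction approaches build on (an illustration of the general idea, not the algorithm proposed above): feasible combinatorial solutions give feasible rank-one matrices \(xx^{\mathsf T}\) of the SDP relaxation, and the span of their ranges determines a face \(\{V W V^{\mathsf T} : W \succeq 0\}\) of the PSD cone onto which the relaxation can be restricted, provided these solutions span the minimal face containing the feasible region.

import numpy as np

# Collect (hypothetical) feasible 0/1 solutions of the combinatorial problem,
# average the corresponding rank-one matrices, and extract an orthonormal
# basis V of the resulting range. If these solutions span the minimal face,
# every feasible X of the relaxation can be written X = V W V^T with a
# smaller PSD variable W.
feasible_points = [np.array([1.0, 0.0, 1.0]),
                   np.array([1.0, 1.0, 0.0])]
X_bar = sum(np.outer(x, x) for x in feasible_points) / len(feasible_points)

vals, vecs = np.linalg.eigh(X_bar)   # eigendecomposition of the averaged matrix
V = vecs[:, vals > 1e-9]             # basis of range(X_bar)
print(V.shape)                       # (3, 2): W is 2x2 instead of 3x3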