Reduced Sample Complexity in Scenario-Based Control System Design via Constraint Scaling

The scenario approach is widely used in robust control system design and chance-constrained optimization because it preserves convexity and requires no assumptions about the probability distribution of the uncertain parameters. However, the approach can demand large sample sizes, making it intractable for safety-critical applications that require very low levels of constraint violation. To address this challenge, we propose a … Read more
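
As a point of reference for the sample-complexity issue the abstract raises, here is a minimal Python sketch of the classical scenario program: draw N samples of the uncertainty and enforce the corresponding constraint for every sample. The random linear constraints, the solver choice, and the sufficient sample-size bound N ≥ (2/ε)(ln(1/β) + d) are illustrative assumptions in the spirit of the scenario literature; the authors' constraint-scaling proposal is not reproduced here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

d = 2                        # decision-variable dimension
eps, beta = 0.05, 1e-6       # target violation level, confidence parameter
# A commonly quoted sufficient sample size for d-dimensional convex
# scenario programs (illustrative; the sharp conditions in the scenario
# literature are distribution-free binomial-tail bounds).
N = int(np.ceil((2.0 / eps) * (np.log(1.0 / beta) + d)))

# Hypothetical uncertainty: scenario i contributes the constraint a_i^T x <= 1.
A = rng.normal(size=(N, d))

x = cp.Variable(d)
problem = cp.Problem(cp.Maximize(cp.sum(x)), [A @ x <= 1])
problem.solve()
print(N, x.value)   # note how N grows like 1/eps as eps shrinks
```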

The Proximal Bundle Algorithm Under a Frank-Wolfe Perspective: an Improved Complexity Analysis

The proximal bundle algorithm (PBA) is a fundamental and computationally effective algorithm for solving optimization problems with nonsmooth components. We investigate its convergence rate, focusing on composite settings where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the … Read more
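
To make the serious-step/null-step mechanics concrete, the toy Python loop below runs a proximal bundle method on a one-dimensional piecewise-linear function: null steps only enrich the cutting-plane model around a fixed prox center, and a serious step moves the center once the descent test passes. The objective, the fixed prox stepsize t, and the descent parameter m are illustrative choices; the paper's Frank-Wolfe interpretation of the null-step sequence is not reproduced.

```python
import numpy as np
import cvxpy as cp

def f_and_subgrad(x):
    # toy piecewise-linear objective f(x) = max(x, -2x), minimized at 0
    return max(x[0], -2 * x[0]), (np.array([1.0]) if x[0] >= 0 else np.array([-2.0]))

def proximal_bundle(x0, t=1.0, m=0.1, iters=30):
    center = np.array(x0, dtype=float)
    f_c, g_c = f_and_subgrad(center)
    bundle = [(center.copy(), f_c, g_c)]          # (point, value, subgradient)
    for _ in range(iters):
        x = cp.Variable(center.shape[0])
        # cutting-plane model: max of the linearizations collected so far
        model = cp.max(cp.hstack([fi + gi @ (x - xi) for xi, fi, gi in bundle]))
        cp.Problem(cp.Minimize(model + cp.sum_squares(x - center) / (2 * t))).solve()
        cand = x.value
        f_cand, g_cand = f_and_subgrad(cand)
        predicted = f_c - model.value             # decrease promised by the model
        if f_c - f_cand >= m * predicted:         # serious step: move the center
            center, f_c = cand, f_cand
        bundle.append((cand, f_cand, g_cand))     # null step: just refine the model
    return center

print(proximal_bundle([5.0]))
```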

Jordan and isometric cone automorphisms in Euclidean Jordan algebras

Every symmetric cone K arises as the cone of squares in a Euclidean Jordan algebra V. As V is a real inner-product space, we may denote by Isom(V) its group of isometries. The groups JAut(V) of its Jordan-algebra automorphisms and Aut(K) of the linear cone automorphisms are then related. For certain inner products, JAut(V) = … Read more
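
A concrete instance may help fix ideas; the following is the standard matrix example, stated for illustration rather than as the paper's general result. Take V = S^n, the real symmetric n × n matrices:

```latex
\[
  X \circ Y = \tfrac{1}{2}(XY + YX), \qquad
  \langle X, Y \rangle = \operatorname{tr}(XY), \qquad
  K = \{\, X \circ X : X \in V \,\} = \mathcal{S}^n_+ ,
\]
\[
  \mathrm{JAut}(V) = \{\, X \mapsto QXQ^{\mathsf T} : Q \ \text{orthogonal} \,\}
  \subsetneq
  \mathrm{Aut}(K) = \{\, X \mapsto AXA^{\mathsf T} : A \ \text{invertible} \,\}.
\]
```

With this trace inner product, the cone automorphisms that are also isometries of V are exactly the Jordan-algebra automorphisms, which illustrates the kind of identity the abstract alludes to for certain inner products.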

Some new accelerated and stochastic gradient descent algorithms based on locally Lipschitz gradient constants

In this paper, we revisit the recently proposed stepsize for the gradient descent scheme, called NGD, introduced by [Hoai et al., A novel stepsize for gradient descent method, Operations Research Letters (2024) 53, doi: 10.1016/j.orl.2024.107072]. We first investigate the NGD stepsize in combination with two well-known acceleration techniques, the heavy-ball and Nesterov methods. In … Read more
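
The exact NGD formula sits behind the truncation; as a generic sketch of the same ingredients, the Python snippet below combines the heavy-ball update with a stepsize built from a locally estimated Lipschitz constant L_k = ‖∇f(x_k) − ∇f(x_{k−1})‖ / ‖x_k − x_{k−1}‖. The estimate, the momentum value, and the test problem are assumptions for illustration, not the authors' rule.

```python
import numpy as np

def heavy_ball_local_lipschitz(grad, x0, beta=0.9, iters=200):
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev - 1e-3 * grad(x_prev)        # small bootstrap step
    for _ in range(iters):
        g, g_prev = grad(x), grad(x_prev)
        # local Lipschitz estimate from successive gradients and iterates
        L = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-12)
        step = 1.0 / max(L, 1e-12)
        # heavy-ball update: gradient step plus momentum term
        x, x_prev = x - step * g + beta * (x - x_prev), x
    return x

# quadratic test problem: f(x) = 0.5 * (x_1^2 + 2 x_2^2)
print(heavy_ball_local_lipschitz(lambda x: np.array([1.0, 2.0]) * x, [3.0, -2.0]))
```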

Gradient Methods with Online Scaling

We introduce a framework to accelerate the convergence of gradient-based methods with online learning. The framework learns to scale the gradient at each iteration through an online learning algorithm and provably accelerates gradient-based methods asymptotically. In contrast with previous literature, where convergence is established based on worst-case analysis, our framework provides a strong convergence guarantee … Read more
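
A minimal sketch of the general idea, assuming the scaling is a diagonal matrix learned by a hypergradient-style online update (the paper's actual online learning algorithm and its guarantees are not reproduced):

```python
import numpy as np

def online_scaled_gd(grad, x0, eta=1e-3, iters=100):
    x = np.asarray(x0, dtype=float)
    p = np.ones_like(x)            # per-coordinate scaling, learned online
    g_prev = grad(x)
    for _ in range(iters):
        x = x - p * g_prev         # scaled gradient step
        g = grad(x)
        # online update: the derivative of f(x_k) w.r.t. p is -g_k * g_{k-1}
        # (elementwise), so descend on p and keep it positive
        p = np.maximum(p + eta * g * g_prev, 1e-8)
        g_prev = g
    return x

# ill-conditioned quadratic: the scaling adapts per coordinate
print(online_scaled_gd(lambda x: np.array([1.0, 10.0]) * x, [1.0, 1.0]))
```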

Convergence of subgradient extragradient methods with novel stepsizes for equilibrium problems in Hilbert spaces

In this paper, by combining the inertial technique and the subgradient extragradient method with a new stepsize-selection strategy, we propose a novel extragradient method for solving pseudomonotone equilibrium problems in real Hilbert spaces. Our method is designed so that the stepsize sequence becomes increasing after a finite number of iterations. This distinguishes our method … Read more
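
Equilibrium problems generalize variational inequalities through the bifunction f(x, y) = F(x)ᵀ(y − x). For that special case, the sketch below runs the plain (not subgradient) extragradient method with a simple adaptive stepsize that only ever decreases; the paper's rule, by contrast, is eventually increasing, and its subgradient variant replaces the second projection with a cheaper half-space projection.

```python
import numpy as np

def extragradient(F, project, x0, lam=0.5, mu=0.4, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - lam * F(x))        # extrapolation step
        x_new = project(x - lam * F(y))    # correction step
        # keep lam * ||F(x) - F(y)|| <= mu * ||x - y|| (nonincreasing rule)
        den = np.linalg.norm(F(x) - F(y))
        if den > 1e-12:
            lam = min(lam, mu * np.linalg.norm(x - y) / den)
        x = x_new
    return x

# toy monotone operator F(x) = A x, unconstrained (identity projection)
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
print(extragradient(lambda x: A @ x, lambda z: z, [1.0, 2.0]))
```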

Parameter-free proximal bundle methods with adaptive stepsizes for hybrid convex composite optimization problems

This paper develops a parameter-free adaptive proximal bundle method with two important features: 1) an adaptive choice of variable prox stepsizes that “closely fits” the instance under consideration; and 2) an adaptive criterion that makes serious steps easier to trigger. Computational experiments show that our method performs substantially fewer consecutive null steps (i.e., a shorter cycle) … Read more
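
The paper's precise adaptive rules sit behind the truncation; as a folklore illustration of how variable prox stepsizes are often managed, the helper below enlarges t after a serious step and shrinks it after a long null cycle. It would plug into a bundle loop like the one sketched earlier; the thresholds are hypothetical.

```python
def update_prox_stepsize(t, serious, null_run, grow=2.0, shrink=0.5, max_null_run=10):
    """Return the new prox stepsize and null-step counter."""
    if serious:
        return grow * t, 0            # model fits well locally: be bolder
    null_run += 1
    if null_run >= max_null_run:      # model fits poorly: be more conservative
        return shrink * t, 0
    return t, null_run
```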

An inexact ADMM for separable nonconvex and nonsmooth optimization

An Inexact Alternating Direction Method of Multipliers (I-ADMM) with an expansion linesearch step is developed for solving a family of separable minimization problems subject to linear constraints, where the objective function is the sum of a smooth but possibly nonconvex function and a possibly nonsmooth, nonconvex function. Global convergence and the linear convergence rate of the … Read more
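
For orientation, here is the standard exact, convex ADMM baseline on a separable problem, the lasso min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z; the inexact subproblem solves, the nonconvex setting, and the expansion linesearch step of the paper's I-ADMM are not reproduced.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # x-update solves the smooth subproblem (A^T A + rho I) x = A^T b + rho (z - u)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                   # scaled dual update
    return z

rng = np.random.default_rng(1)
A, b = rng.normal(size=(40, 10)), rng.normal(size=40)
print(admm_lasso(A, b, lam=0.5))
```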

Global non-asymptotic super-linear convergence rates of regularized proximal quasi-Newton methods on non-smooth composite problems

In this paper, we propose two regularized proximal quasi-Newton methods with a symmetric rank-one (SR1) update of the metric to solve non-smooth convex additive composite problems. Both algorithms avoid line searches and trust-region strategies. For each of them, we prove a super-linear convergence rate that is independent of the initialization of the … Read more
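
For reference, the classical SR1 update and a generic regularized proximal quasi-Newton step on a composite objective f + h (the specific regularization and safeguards behind the paper's global non-asymptotic rates are not reproduced; the placement of ρ_k below is an illustrative assumption). With s_k = x_{k+1} − x_k and y_k = ∇f(x_{k+1}) − ∇f(x_k):

```latex
\[
  B_{k+1} = B_k
  + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\mathsf T}}{(y_k - B_k s_k)^{\mathsf T} s_k},
\]
\[
  x_{k+1} \in \operatorname*{arg\,min}_{x}\;
  \langle \nabla f(x_k), x - x_k \rangle
  + \tfrac{1}{2} (x - x_k)^{\mathsf T} (B_k + \rho_k I)(x - x_k)
  + h(x).
\]
```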

Performance Estimation for Smooth and Strongly Convex Sets

We extend recent computer-assisted design and analysis techniques for first-order optimization over structured functions, known as performance estimation, to apply to structured sets. We prove “interpolation theorems” for smooth and strongly convex sets with Slater points and bounded diameter, showing that a wide range of extremal questions amount to structured mathematical programs. Prior function interpolation theorems are recovered … Read more
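
The prior function interpolation theorems the abstract refers to include the classical condition of Taylor, Hendrickx, and Glineur (2017): a data set {(x_i, g_i, f_i)} is interpolable by an L-smooth, μ-strongly convex function if and only if, for all pairs i, j,

```latex
\[
  f_i \;\ge\; f_j + \langle g_j,\, x_i - x_j \rangle
  + \frac{1}{2L}\,\lVert g_i - g_j \rVert^2
  + \frac{\mu}{2\,(1 - \mu/L)}\,
    \Bigl\lVert x_i - x_j - \tfrac{1}{L}\,(g_i - g_j) \Bigr\rVert^2 .
\]
```

The paper develops analogous interpolation conditions at the level of sets rather than functions.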