New insights and algorithms for optimal diagonal preconditioning

Preconditioning (scaling) is essential in many areas of mathematics, and in particular in optimization. In this work, we study the problem of finding an optimal diagonal preconditioner. We focus on minimizing two different notions of condition number: the classical, worst-case \(\kappa\)-condition number, and the more averaging-motivated \(\omega\)-condition number. We provide affine-based pseudoconvex … Read more
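
A minimal numerical sketch, assuming the standard definitions (\(\kappa\) as the ratio of extreme eigenvalues, \(\omega\) as the ratio of the arithmetic to the geometric mean of the eigenvalues): it compares both condition numbers before and after a simple Jacobi-type diagonal scaling, which is only a heuristic stand-in for the optimal diagonal preconditioners studied in the paper.

```python
import numpy as np

# Compare kappa- and omega-condition numbers of an SPD matrix before and
# after a heuristic diagonal (Jacobi) scaling; illustration only.

def kappa(A):
    """Classical condition number: ratio of extreme eigenvalues."""
    eig = np.linalg.eigvalsh(A)
    return eig[-1] / eig[0]

def omega(A):
    """omega-condition number: arithmetic over geometric mean of eigenvalues."""
    eig = np.linalg.eigvalsh(A)
    return eig.mean() / np.exp(np.mean(np.log(eig)))

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.diag(rng.random(5))    # SPD test matrix
D = np.diag(1.0 / np.sqrt(np.diag(A)))      # Jacobi (diagonal) scaling
As = D @ A @ D

print(kappa(A), kappa(As))
print(omega(A), omega(As))
```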

Approximating inequality systems within probability functions: studying implications for problems and consistency of first-order information

In this work, we are concerned with the study of optimization problems featuring so-called probability or chance constraints. Probability constraints measure the level of satisfaction of an underlying random inequality system and ensure that this level is high enough. Such an underlying inequality system could be expressed by an abstractly known or perhaps costly to … Read more
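
For readers unfamiliar with the object, a small sketch of a probability (chance) constraint checked by plain Monte Carlo sampling; the inequality map, probability level, and sample model below are illustrative assumptions, not the approximation scheme developed in the paper.

```python
import numpy as np

# Estimate the probability function phi(x) = P(g(x, xi) <= 0) of a random
# inequality system by Monte Carlo sampling, and check phi(x) >= p.
# The linear inequality and the level p are illustrative choices.

def prob_satisfaction(x, samples):
    """Fraction of scenarios for which the random inequality holds."""
    values = samples @ x - 1.0        # one scalar inequality per scenario
    return np.mean(values <= 0.0)

rng = np.random.default_rng(1)
samples = rng.normal(size=(10_000, 2))    # sampled realizations of xi
x = np.array([0.3, 0.3])
p = 0.9                                   # required probability level
phi = prob_satisfaction(x, samples)
print(phi, phi >= p)                      # estimated level and feasibility
```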

On the convergence rate of the Douglas-Rachford splitting algorithm

This work is concerned with the convergence rate analysis of the Douglas–Rachford splitting (DRS) method for finding a zero of the sum of two maximally monotone operators. We obtain an exact rate of convergence for the DRS algorithm and demonstrate its sharpness in the setting of convex feasibility problems. Furthermore, we investigate the … Read more
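
As a point of reference, a minimal sketch of the DRS iteration applied to a two-set convex feasibility problem (the setting in which sharpness is demonstrated), with projections playing the role of the resolvents; the sets, step size, and iteration count are illustrative.

```python
import numpy as np

# Douglas-Rachford splitting for feasibility: find a point in C1 (unit ball)
# intersected with C2 (a halfspace). Projections act as the resolvents.

def proj_ball(z, radius=1.0):
    """Projection onto the Euclidean ball of the given radius."""
    n = np.linalg.norm(z)
    return z if n <= radius else radius * z / n

def proj_halfspace(z, a, b):
    """Projection onto the halfspace {x : <a, x> <= b}."""
    viol = a @ z - b
    return z if viol <= 0 else z - viol * a / (a @ a)

a, b = np.array([1.0, 1.0]), 0.5
z = np.array([5.0, -3.0])
for _ in range(200):
    x = proj_ball(z)                      # resolvent of the first operator
    y = proj_halfspace(2 * x - z, a, b)   # applied to the reflected point
    z = z + (y - x)                       # DRS update with unit relaxation
print(np.round(x, 4), np.linalg.norm(x) <= 1 + 1e-6, a @ x <= b + 1e-6)
```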

Alternating Iteratively Reweighted \(\ell_1\) and Subspace Newton Algorithms for Nonconvex Sparse Optimization

This paper presents a novel hybrid algorithm for minimizing the sum of a continuously differentiable loss function and a nonsmooth, possibly nonconvex, sparsity‑promoting regularizer. The proposed method adaptively switches between solving a reweighted \(\ell_1\)-regularized subproblem and performing an inexact subspace Newton step. The reweighted \(\ell_1\)-subproblem admits an efficient closed-form solution via the soft-thresholding operator, thereby … Read more
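
A small sketch of the closed-form soft-thresholding step referred to above, i.e. the solution of the weighted \(\ell_1\) proximal subproblem \(\min_x \tfrac12\|x-v\|^2 + \sum_i w_i |x_i|\); the alternation with the inexact subspace Newton step is not shown, and the data below are illustrative.

```python
import numpy as np

# Closed-form solution of a (re)weighted l1 proximal subproblem via
# componentwise soft-thresholding with per-coordinate thresholds w >= 0.

def soft_threshold(v, w):
    """Shrink each component of v by its own threshold w_i."""
    return np.sign(v) * np.maximum(np.abs(v) - w, 0.0)

v = np.array([1.5, -0.2, 0.7, -3.0])
w = np.array([0.5, 0.5, 1.0, 1.0])     # reweighted l1 penalties
print(soft_threshold(v, w))            # zeros appear wherever |v_i| <= w_i
```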

Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications

In Online Convex Optimization (OCO), when the stochastic gradient has finite variance, many algorithms provably work and guarantee sublinear regret. However, limited results are known if the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite \(\mathsf{p}\)-th central moment for some \(\mathsf{p}\in\left(1,2\right]\). Motivated by this, this work examines … Read more
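
As a generic illustration of the heavy-tailed setting (not the paper's algorithms or regret analysis), the sketch below runs online gradient descent with gradient clipping, a standard device when only a finite \(\mathsf{p}\)-th moment is available; the loss stream, step size, and clipping radius are assumptions.

```python
import numpy as np

# Online gradient descent on noisy quadratic losses whose gradient noise is
# heavy-tailed (Student t with df = 1.8: infinite variance, finite p-th
# moment only for p < 1.8). Clipping keeps the updates bounded.

rng = np.random.default_rng(2)
d, T, eta, tau = 3, 1000, 0.05, 5.0
x = np.zeros(d)
theta = np.ones(d)                              # hidden comparator
for t in range(T):
    noise = rng.standard_t(df=1.8, size=d)      # heavy-tailed gradient noise
    grad = (x - theta) + noise                  # noisy gradient of 0.5*||x-theta||^2
    clipped = grad * min(1.0, tau / (np.linalg.norm(grad) + 1e-12))
    x = x - (eta / np.sqrt(t + 1)) * clipped    # decaying step size
print(np.round(x, 2))                           # final iterate, near theta if clipping tames the noise
```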

Second order directional derivative of optimal solution function in parametric programming problem

In this paper, the second-order directional derivatives of the optimal value function and the optimal solution function are obtained for a strongly stable parametric problem with non-unique Lagrange multipliers. Some properties of the Lagrange multipliers are proved. It is shown that the second-order directional derivative of the optimal solution function for the parametric problem can … Read more
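
A toy example, purely to fix the objects involved (and not taken from the paper): a one-dimensional parametric problem in which the optimal value and solution functions are smooth, so their directional derivatives reduce to ordinary Taylor terms.

```latex
% Toy parametric problem: v(t) = \min_x \tfrac12 x^2 - t x.
\[
  x^*(t) = t, \qquad v(t) = -\tfrac12 t^2,
\]
\[
  \mathrm{D}x^*(t;d) = d, \quad \mathrm{D}^2 x^*(t;d) = 0, \qquad
  \mathrm{D}v(t;d) = -t\,d, \quad \mathrm{D}^2 v(t;d) = -d^2 .
\]
```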

A strongly convergent projection and contraction algorithm with extrapolations from the past

This paper introduces a projection and contraction-type algorithm that features an extrapolation from the past, reducing the two evaluations of the cost operator required by the original projection and contraction algorithm to a single evaluation at the current iteration. Strong convergence results of the proposed algorithm are proved in Hilbert spaces. Experimental results on testing … Read more
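
To illustrate the "extrapolation from the past" device in isolation, the sketch below uses its classical Popov form for a monotone variational inequality, where the cost operator is evaluated once per iteration and the previous value is reused; this is not the paper's projection and contraction algorithm or its strongly convergent modification, and the operator, set, and step size are assumptions.

```python
import numpy as np

# Popov-style iteration for VI(F, C) with a monotone (skew) operator F on a
# box C: one new evaluation of F per iteration, the past one is reused.

A = np.array([[0.0, 1.0], [-1.0, 0.0]])        # monotone skew operator
F = lambda x: A @ x
proj = lambda x: np.clip(x, -1.0, 1.0)         # projection onto the box C
tau = 0.2
x = np.array([1.0, 1.0])
F_old = F(x)                                   # stored past evaluation
for _ in range(1000):
    y = proj(x - tau * F_old)                  # extrapolation from the past
    F_old = F(y)                               # the single new evaluation
    x = proj(x - tau * F_old)
print(np.round(x, 4))                          # approaches the unique solution x* = 0
```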

Swapping objectives accelerates Davis-Yin splitting

In this work, we investigate the application of Davis–Yin splitting (DYS) to convex optimization problems and demonstrate that swapping the roles of the two nonsmooth convex functions can result in a faster convergence rate. Such a swap typically yields a different sequence of iterates, but its impact on convergence behavior has been largely understudied or … Read more
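
A minimal sketch of the DYS iteration and of the swap discussed above, on an illustrative three-term problem (box indicator, \(\ell_1\) norm, and a smooth quadratic); the paper's rate comparison is not reproduced, and the instance and step size are assumptions.

```python
import numpy as np

# Davis-Yin splitting for min f(x) + g(x) + h(x) with h smooth, plus the
# "swapped" variant exchanging the roles of the two nonsmooth terms.

prox_box = lambda z, s: np.clip(z, -1.0, 1.0)                       # indicator of a box (s unused)
prox_l1 = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s, 0.0)  # l1 norm
grad_h = lambda x: x - np.array([2.0, -0.5])                        # h(x) = 0.5*||x - b||^2

def dys(proxA, proxB, gamma=0.5, iters=300):
    z = np.zeros(2)
    for _ in range(iters):
        xB = proxB(z, gamma)
        xA = proxA(2 * xB - z - gamma * grad_h(xB), gamma)
        z = z + (xA - xB)
    return xB

print(dys(prox_box, prox_l1))    # one ordering of the nonsmooth terms
print(dys(prox_l1, prox_box))    # swapped ordering: same problem, different iterates
```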

Preconditioning for rational approximation

In this paper, we show that minimax rational approximations can be enhanced by introducing a controlling parameter on the denominator of the rational function. This is implemented by adding a small set of linear constraints to the underlying optimization problem. The modification integrates naturally into approximation models formulated as linear programming problems. We demonstrate our … Read more
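
A schematic version of the kind of constraint system involved, written under the usual linearization and with a lower-bound parameter \(\delta\) on the denominator as one way such control can enter; this is an illustration of the idea, not the paper's exact model.

```latex
% Schematic linearized minimax conditions for a rational approximant p/q to f
% on sample points x_i: for a fixed error level \varepsilon the system is
% linear in the coefficients of p and q, and \varepsilon can then be driven
% down, e.g. by bisection. The parameter \delta controls the denominator.
\[
\begin{aligned}
  & -\varepsilon\, q(x_i) \;\le\; f(x_i)\, q(x_i) - p(x_i) \;\le\; \varepsilon\, q(x_i),
  && i = 1,\dots,m,\\
  & q(x_i) \;\ge\; \delta, && i = 1,\dots,m.
\end{aligned}
\]
```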

A Variational Analysis Approach for Bilevel Hyperparameter Optimization with Sparse Regularization

We study a bilevel optimization framework for hyperparameter learning in variational models, with a focus on sparse regression and classification tasks. In particular, we consider a weighted elastic-net regularizer, where feature-wise regularization parameters are learned through a bilevel formulation. A key novelty of our approach is the use of a Forward-Backward (FB) reformulation of the … Read more
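
A schematic bilevel formulation with feature-wise regularization weights, written in assumed notation to make the setup concrete; whether the \(\ell_2\) term is also weighted, and the exact training and validation losses, are details not specified here.

```latex
% Schematic bilevel hyperparameter learning with a weighted elastic-net
% regularizer (notation assumed for illustration).
\[
\begin{aligned}
  \min_{\lambda \ge 0}\ & \mathcal{L}_{\mathrm{val}}\bigl(x^*(\lambda)\bigr)\\
  \text{s.t.}\ & x^*(\lambda) \in \arg\min_{x}\;
     \mathcal{L}_{\mathrm{train}}(x)
     + \sum_{i} \lambda_i\, |x_i|
     + \frac{\mu}{2}\, \|x\|_2^2 .
\end{aligned}
\]
```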