A Parametric Approach for Solving Convex Quadratic Optimization with Indicators Over Trees

This paper investigates convex quadratic optimization problems involving $n$ indicator variables, each associated with a continuous variable, particularly focusing on scenarios where the matrix $Q$ defining the quadratic term is positive definite and its sparsity pattern corresponds to the adjacency matrix of a tree graph. We introduce a graph-based dynamic programming algorithm that solves this …
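For context, a common way to write this problem class is the following mixed-integer formulation; the notation here is generic, and the paper's exact objective and constraints may differ:

$$\min_{x \in \mathbb{R}^n,\ z \in \{0,1\}^n} \; x^\top Q x + a^\top x + b^\top z \quad \text{s.t.} \quad x_i (1 - z_i) = 0, \quad i = 1, \dots, n,$$

where $Q \succ 0$ and $Q_{ij} \neq 0$ only if $i = j$ or $(i,j)$ is an edge of the underlying tree.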

A Proximal-Gradient Method for Constrained Optimization

We present a new algorithm for solving optimization problems whose objective is the sum of a smooth function and a (potentially) nonsmooth regularization function, subject to nonlinear equality constraints. The algorithm may be viewed as an extension of the well-known proximal-gradient method, which applies when constraints are not present. To account for nonlinear …
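As a reference point, here is a minimal sketch of the classical unconstrained proximal-gradient iteration that the abstract builds on, specialized to an $\ell_1$ regularizer; this is not the paper's constrained algorithm, and the problem data and step size are purely illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, lam, x0, step, iters=200):
    # classical proximal-gradient iteration for min_x f(x) + lam * ||x||_1
    x = x0.copy()
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

# toy usage: f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: A.T @ (A @ x - b)
x_star = proximal_gradient(grad_f, lam=0.1, x0=np.zeros(2), step=0.05)
print(x_star)
```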

A Bilevel Hierarchy of Strengthened Complex Moment Relaxations for Complex Polynomial Optimization

This paper proposes a bilevel hierarchy of strengthened complex moment relaxations for complex polynomial optimization. The key trick entails considering a class of positive semidefinite conditions that arise naturally in characterizing the normality of the so-called shift operators. The relaxation problem in this new hierarchy is parameterized by the usual relaxation order as well as …
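For orientation, a complex polynomial optimization problem in its generic form (not the paper's specific strengthened relaxation) reads

$$\min_{z \in \mathbb{C}^n} \; f(z, \bar{z}) \quad \text{s.t.} \quad g_j(z, \bar{z}) \ge 0, \quad j = 1, \dots, m,$$

where $f$ and the $g_j$ are real-valued polynomials in $z$ and its conjugate $\bar{z}$; the complex moment hierarchy replaces this problem with a sequence of semidefinite relaxations indexed by the relaxation order.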

Revisiting the fitting of the Nelson-Siegel and Svensson models

The Nelson-Siegel and the Svensson models are two of the most widely used models for the term structure of interest rates. Even though the models are quite simple and intuitive, fitting them to market data is numerically challenging and various difficulties have been reported. In this paper, a novel mathematical analysis of the fitting problem …
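For reference, the standard Nelson-Siegel zero-coupon yield formula can be evaluated as below; the parameter names are the conventional ones rather than the paper's, and the numbers are purely illustrative:

```python
import numpy as np

def nelson_siegel_yield(tau, beta0, beta1, beta2, lam):
    """Standard Nelson-Siegel zero-coupon yield at maturity tau (in years)."""
    x = tau / lam
    loading1 = (1 - np.exp(-x)) / x        # slope factor loading
    loading2 = loading1 - np.exp(-x)       # curvature factor loading
    return beta0 + beta1 * loading1 + beta2 * loading2

# example: a gently upward-sloping curve
maturities = np.array([0.25, 1, 2, 5, 10, 30])
print(nelson_siegel_yield(maturities, beta0=0.04, beta1=-0.02, beta2=0.01, lam=1.8))
```

Note that for a fixed decay parameter lam the model is linear in the betas, which is why the fit is commonly set up as a least-squares problem in the betas nested inside a search over lam; the Svensson model adds a second curvature term with its own decay parameter.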

Slow convergence of the moment-SOS hierarchy for an elementary polynomial optimization problem

We describe a parametric univariate quadratic optimization problem for which the moment-SOS hierarchy has finite but increasingly slow convergence when the parameter tends to its limit value. We estimate the order of finite convergence as a function of the parameter.
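To fix ideas, for a polynomial problem $\min \{p(x) : g(x) \ge 0\}$ the order-$d$ moment-SOS (Lasserre) bound can be written as the sum-of-squares program

$$\rho_d = \sup \left\{ \lambda \in \mathbb{R} \;:\; p - \lambda = \sigma_0 + \sigma_1 g, \ \sigma_0, \sigma_1 \text{ SOS}, \ \deg \sigma_0 \le 2d, \ \deg(\sigma_1 g) \le 2d \right\},$$

where $\rho_d$ increases toward the true minimum as $d$ grows under standard assumptions; this is the generic construction, not the paper's specific parametric example.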

Stochastic Aspects of Dynamical Low-Rank Approximation in the Context of Machine Learning

The central challenges of today’s neural network architectures are the prohibitive memory footprint and training costs associated with determining optimal weights and biases. A large portion of research in machine learning is therefore dedicated to constructing memory-efficient training methods. One promising approach is dynamical low-rank training (DLRT), which represents and trains parameters as a low-rank …
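As a rough illustration of the memory argument (not the DLRT integrator itself), the low-rank parameterization of a single weight matrix and its parameter count look like this; the dimensions and rank are made-up examples:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 512, 512, 16                             # layer size and chosen rank (illustrative)

# low-rank parameterization W = U @ S @ V.T maintained throughout training
U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # orthonormal basis for the column space
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal basis for the row space
S = rng.standard_normal((r, r))                    # small coefficient core

W = U @ S @ V.T                                    # represented weight matrix (never stored densely in practice)

print("dense parameters:   ", m * n)               # 262144
print("low-rank parameters:", r * (m + n + r))     # 16640
```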

Model Construction for Convex-Constrained Derivative-Free Optimization

We develop a new approximation theory for linear and quadratic interpolation models, suitable for use in convex-constrained derivative-free optimization (DFO). Most existing model-based DFO methods for constrained problems assume the ability to construct sufficiently accurate approximations via interpolation, but the standard notions of accuracy (designed for unconstrained problems) may not be achievable by only sampling …
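For context, the textbook linear interpolation model that the standard accuracy notions refer to can be built from $n+1$ sample points as sketched below; the paper's convex-constrained constructions are not reproduced here, and the function and sample set are illustrative:

```python
import numpy as np

def linear_interp_model(Y, fvals):
    """Fit m(x) = c + g.T @ (x - Y[0]) through n+1 sample points (rows of Y)."""
    y0, f0 = Y[0], fvals[0]
    D = Y[1:] - y0                        # displacement matrix, shape (n, n)
    g = np.linalg.solve(D, fvals[1:] - f0)
    return f0, g                          # model value at y0 and model gradient

# toy usage on f(x) = x0^2 + 3*x1 around the origin
Y = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
f = lambda x: x[0]**2 + 3 * x[1]
c, g = linear_interp_model(Y, np.array([f(y) for y in Y]))
print(c, g)   # g approximates the true gradient (0, 3)
```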

A Unified Approach for Maximizing Continuous $\gamma$-weakly DR-submodular Functions

This paper presents a unified approach for maximizing continuous $\gamma$-weakly DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the convex feasible region. We consider settings where the oracle provides access to either the …
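A minimal sketch of a Frank-Wolfe / continuous-greedy style iteration of the kind the abstract describes, assuming a monotone objective and a simple budget-constrained feasible region whose linear maximization oracle has a closed form; this is a generic sketch, not the paper's algorithm:

```python
import numpy as np

def lmo_budget(grad, k):
    """Linear maximization oracle over {v in [0,1]^n : sum(v) <= k}:
    put mass 1 on the k coordinates with the largest positive gradient."""
    v = np.zeros_like(grad)
    idx = np.argsort(grad)[::-1][:k]
    v[idx[grad[idx] > 0]] = 1.0
    return v

def continuous_greedy(grad_F, n, k, T=100):
    """Frank-Wolfe / continuous-greedy iteration for a monotone DR-submodular F."""
    x = np.zeros(n)
    for _ in range(T):
        v = lmo_budget(grad_F(x), k)
        x = x + v / T                 # move a 1/T fraction toward the oracle point
    return x

# toy usage: F(x) = sum(log(1 + x)) is monotone DR-submodular on [0,1]^n
grad_F = lambda x: 1.0 / (1.0 + x)
print(continuous_greedy(grad_F, n=5, k=2))
```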

The stochastic Ravine accelerated gradient method with general extrapolation coefficients

In a real Hilbert space setting, we study the convergence properties of the stochastic Ravine accelerated gradient method for convex differentiable optimization. We consider the general form of this algorithm, where the extrapolation coefficients can vary with each iteration and the evaluation of the gradient is subject to random errors. This general …
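A minimal sketch of a Ravine-type accelerated iteration with iteration-dependent extrapolation coefficients and noisy gradient evaluations, assuming a Nesterov-style choice of coefficient; the paper's general coefficient rules and convergence analysis are not captured here:

```python
import numpy as np

def stochastic_ravine(grad, x0, step, noise_std, iters=500, rng=None):
    """Ravine-type accelerated iteration: gradient step at the extrapolated
    point, then extrapolation with an iteration-dependent coefficient."""
    rng = rng or np.random.default_rng(0)
    y_prev = x0.copy()
    z = x0.copy()
    for k in range(1, iters + 1):
        g = grad(z) + noise_std * rng.standard_normal(z.shape)  # noisy gradient
        y = z - step * g                                        # gradient step at extrapolated point
        alpha = (k - 1) / (k + 2)                               # Nesterov-style extrapolation coefficient
        z = y + alpha * (y - y_prev)
        y_prev = y
    return y_prev

# toy usage: strongly convex quadratic with minimizer at the origin
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
print(stochastic_ravine(grad, x0=np.array([5.0, 5.0]), step=0.05, noise_std=0.01))
```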

A Jacobi-type Newton method for Nash equilibrium problems with descent guarantees

A common strategy for solving an unconstrained two-player Nash equilibrium problem with continuous variables is applying Newton’s method to the system obtained from the corresponding first-order necessary optimality conditions. However, when taking into account the game dynamics, it is not clear what the goal of each player is when considering that they are taking their current …
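For reference, the plain Newton-on-first-order-conditions baseline that the abstract refers to can be sketched on a toy two-player quadratic game as follows; the game and all values are made up, and this is not the paper's Jacobi-type variant:

```python
import numpy as np

# Two-player game: player i minimizes f_i over its own variable x_i.
#   f1(x1, x2) = 0.5*x1**2 + x1*x2 - x1
#   f2(x1, x2) = 0.5*x2**2 + 0.5*x1*x2 - 2*x2
def F(x):
    """Concatenated first-order conditions: grad_{x1} f1 and grad_{x2} f2."""
    x1, x2 = x
    return np.array([x1 + x2 - 1.0, 0.5 * x1 + x2 - 2.0])

def JF(x):
    """Jacobian of F (constant here because the game is quadratic)."""
    return np.array([[1.0, 1.0], [0.5, 1.0]])

def newton_nep(x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(JF(x), r)   # full Newton step on the joint system
    return x

print(newton_nep([0.0, 0.0]))   # Nash equilibrium of the toy game: [-2., 3.]
```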