Progressively Sampled Equality-Constrained Optimization

An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which …
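As a rough illustration of that idea (my sketch, not the paper's algorithm), the snippet below solves a sequence of subproblems in which the equality constraint is the average of a growing random sample of constraint terms, warm-starting each solve at the previous solution. The data matrix, the linear terms $c_i(x) = a_i^\top x - 1$, and the sample-size schedule are all hypothetical.

```python
# Sketch of progressive sampling for an equality-constrained problem:
# each subproblem enforces only the average of a sampled subset of the
# constraint terms, and the sample grows across the sequence.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 1000                                    # total number of constraint terms
A = rng.standard_normal((N, 2))             # hypothetical data: c_i(x) = a_i^T x - 1

def f(x):                                   # smooth objective to minimize
    return 0.5 * float(x @ x)

x = np.array([1.0, 1.0])
for m in (10, 100, N):                      # hypothetical sample-size schedule
    idx = rng.choice(N, size=m, replace=False)
    cons = {"type": "eq",                   # average of the sampled terms
            "fun": lambda x, idx=idx: float(np.mean(A[idx] @ x - 1.0))}
    x = minimize(f, x, constraints=[cons], method="SLSQP").x  # warm start
print(x, np.mean(A @ x - 1.0))              # final point, full-sample residual
```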

A Riemannian AdaGrad-Norm Method

We propose a manifold AdaGrad-Norm method (\textsc{MAdaGrad}), which extends the norm version of AdaGrad (AdaGrad-Norm) to Riemannian optimization. In contrast to line-search schemes, which may require several exponential map computations per iteration, \textsc{MAdaGrad} requires only one. Assuming the objective function $f$ has Lipschitz continuous Riemannian gradient, we show that the method requires at most $\mathcal{O}(\varepsilon^{-2})$ …
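For concreteness, here is what an AdaGrad-Norm-style step looks like on the unit sphere, a toy stand-in (this sketches the mechanism, not the paper's \textsc{MAdaGrad}): one exponential map per iteration, with the base step size divided by the square root of the accumulated squared Riemannian gradient norms. The test problem, step size, and accumulator initialization are hypothetical.

```python
# AdaGrad-Norm-style iteration on the sphere S^{n-1} for f(x) = 0.5 x^T M x,
# whose minimizer over the sphere is the eigenvector of the smallest eigenvalue.
import numpy as np

def exp_map(x, v):                      # exponential map on the sphere at x
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5)); M = (M + M.T) / 2
x = rng.standard_normal(5); x /= np.linalg.norm(x)

eta, b2 = 1.0, 1e-2                     # base step size, accumulator b_0^2 (hypothetical)
for _ in range(200):
    egrad = M @ x                       # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
    b2 += rgrad @ rgrad                 # AdaGrad-Norm accumulator
    x = exp_map(x, -(eta / np.sqrt(b2)) * rgrad)   # one exp map per iteration

print(x @ M @ x)                        # should approach the smallest eigenvalue of M
```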

On the Convergence and Properties of a Proximal-Gradient Method on Hadamard Manifolds

In this paper, we address composite optimization problems on Hadamard manifolds, where the objective function is given by the sum of a smooth term (not necessarily convex) and a convex term (not necessarily differentiable). To solve this problem, we develop a proximal gradient method defined directly on the manifold, employing a strategy that enforces monotonicity …
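Since $\mathbb{R}^n$ with the Euclidean metric is itself a Hadamard manifold, the flat special case of such a scheme is the classical proximal gradient iteration $x_{k+1} = \operatorname{prox}_{tg}(x_k - t\nabla f(x_k))$. A minimal sketch for $f(x) = \tfrac12\|Ax-b\|^2$ and $g = \lambda\|\cdot\|_1$, with hypothetical data, follows.

```python
# Flat (zero-curvature) special case: classical proximal gradient / ISTA.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10)); b = rng.standard_normal(20)
lam = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2     # step 1/L, L = Lipschitz const of grad f

x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - b)            # gradient of the smooth term
    z = x - t * grad                    # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # prox of lam*||.||_1
```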

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to …

Sequential test sampling for stochastic derivative-free optimization

In many derivative-free optimization algorithms, a sufficient decrease condition decides whether to accept a trial step in each iteration. This condition typically requires that the potential decrease in the objective function value offered by the trial step, i.e., the true reduction that would be achieved by moving from the current point to the …
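One generic way to make such an acceptance decision under noise (a sketch of the general idea, not the paper's test) is to sample sequentially until a confidence interval around the estimated decrease lies entirely above or entirely below the sufficient-decrease threshold. The oracle noisy_f, the z-value, and the budget below are all hypothetical.

```python
# Sequential acceptance test: draw paired noisy evaluations at the current
# and trial points until the decrease estimate confidently clears (or fails
# to clear) the threshold theta.
import numpy as np

def sequential_accept(noisy_f, x, x_trial, theta, z=2.0, n0=5, n_max=1000):
    diffs = [noisy_f(x) - noisy_f(x_trial) for _ in range(n0)]
    while len(diffs) < n_max:
        mean = np.mean(diffs)
        half = z * np.std(diffs, ddof=1) / np.sqrt(len(diffs))  # CI half-width
        if mean - half > theta:          # decrease confidently exceeds threshold
            return True
        if mean + half < theta:          # decrease confidently falls short
            return False
        diffs.append(noisy_f(x) - noisy_f(x_trial))  # draw one more sample
    return np.mean(diffs) > theta        # budget exhausted: decide on the mean
```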

New insights and algorithms for optimal diagonal preconditioning

Preconditioning (scaling) is essential in many areas of mathematics, and in particular in optimization. In this work, we study the problem of finding an optimal diagonal preconditioner. We focus on minimizing two different notions of condition number: the classical, worst-case-type \(\kappa\)-condition number, and the more averaging-motivated \(\omega\)-condition number. We provide affine-based pseudoconvex …
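For reference, under the standard definitions (\(\kappa\) as the ratio of extreme eigenvalues, \(\omega\) as the arithmetic over the geometric mean of the eigenvalues), both quantities are easy to compute; the sketch below also applies the classical Jacobi-style two-sided scaling \(DAD\) as a baseline, which is a common heuristic and not the paper's optimal preconditioner.

```python
# kappa- and omega-condition numbers of an SPD matrix, before and after a
# heuristic diagonal scaling D A D with D = diag(1/sqrt(a_ii)).
import numpy as np

def kappa(A):                           # worst-case condition number
    ev = np.linalg.eigvalsh(A)
    return ev[-1] / ev[0]

def omega(A):                           # averaging-motivated condition number
    ev = np.linalg.eigvalsh(A)
    return ev.mean() / np.exp(np.mean(np.log(ev)))  # arithmetic / geometric mean

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5)); A = B @ B.T + 5 * np.eye(5)  # hypothetical SPD matrix
D = np.diag(1.0 / np.sqrt(np.diag(A))) # Jacobi-style scaling (a heuristic)
print(kappa(A), kappa(D @ A @ D))
print(omega(A), omega(D @ A @ D))
```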

A linearly convergent algorithm for variational inequalities based on fiber bundle

The variational inequality (VI) problem is a fundamental mathematical framework for many classical problems. This paper introduces an algorithm that applies to arbitrary finite-dimensional VIs with general compact convex sets and general continuous functions. The algorithm guarantees global linear convergence to an approximate solution without requiring any assumptions, not even the typical monotonicity assumption. Our approach adapts …
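As background (not the paper's fiber-bundle machinery), the standard yardstick for "approximate solution" in this setting is the natural-map residual \(\|x - \Pi_C(x - F(x))\|\), which vanishes exactly at solutions of VI\((F, C)\). A sketch for a box set and a continuous but non-monotone \(F\):

```python
# Natural-map residual for VI(F, C) with C the box [lo, hi]^n.
import numpy as np

def residual(F, x, lo=0.0, hi=1.0):
    return np.linalg.norm(x - np.clip(x - F(x), lo, hi))  # clip = box projection

F = lambda x: -x                        # continuous but not monotone
for x in (np.zeros(2), np.ones(2), np.full(2, 0.5)):
    print(x, residual(F, x))            # zero at the solutions 0 and (1,1)
```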

MDP modeling for multi-stage stochastic programs

We study a class of multi-stage stochastic programs, which incorporate modeling features from Markov decision processes (MDPs). This class includes structured MDPs with continuous state and action spaces. We extend policy graphs to include decision-dependent uncertainty for one-step transition probabilities as well as a limited form of statistical learning. We focus on the expressiveness of …
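As a toy illustration of decision-dependent one-step transition probabilities (hypothetical, not the paper's policy-graph formalism): the kernel over the next node depends on the action taken at the current one.

```python
# Decision-dependent transition kernel: the action shifts the distribution
# over the next node.
def transition_probs(state, action):
    p_good = 0.5 + 0.4 * action        # action in [0, 1] raises P(next = "good")
    return {"good": p_good, "bad": 1.0 - p_good}

print(transition_probs("good", 0.0))   # {'good': 0.5, 'bad': 0.5}
print(transition_probs("good", 1.0))   # {'good': 0.9, 'bad': 0.1}
```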

An extension of an interior-point method to include risk aversion in large-scale multistage stochastic optimization

In the earlier paper “On solving large-scale multistage stochastic optimization problems with a new specialized interior-point approach” (European Journal of Operational Research, 310 (2023), 268–285), the authors presented a novel approach based on a specialized interior-point method (IPM) for solving (risk-neutral) large-scale multistage stochastic optimization problems. The method computed the Newton direction by combining …

Approximating value functions via corner Benders’ cuts

We introduce a novel technique to generate Benders’ cuts from a conic relaxation (“corner”) derived from a basis of a higher-dimensional polyhedron that we aim to outer approximate in a lower-dimensional space. To generate facet-defining inequalities for the epigraph associated with this corner, we develop a computationally efficient algorithm based on a compact reverse polar formulation …
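For background, the classical Benders cut that such constructions strengthen comes from LP duality: with value function \(Q(x) = \min_y \{c^\top y : Wy = h - Tx,\ y \ge 0\}\), any dual-feasible \(\pi\) yields the valid underestimate \(Q(x') \ge \pi^\top(h - Tx')\) for all \(x'\). A minimal sketch with hypothetical data (this is the textbook cut, not the corner cut of the paper):

```python
# Generate a classical Benders optimality cut from the subproblem dual.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0]); W = np.array([[1.0, 1.0]])  # hypothetical subproblem data
h = np.array([3.0]);      T = np.array([[1.0]])

def benders_cut(x):
    res = linprog(c, A_eq=W, b_eq=h - T @ x,
                  bounds=[(0, None)] * len(c), method="highs")
    pi = res.eqlin.marginals            # dual multipliers of the equality rows
    return pi, pi @ (h - T @ x)         # cut: Q(x') >= pi @ (h - T x') for all x'

pi, value = benders_cut(np.array([1.0]))
print(pi, value)                        # dual vector and Q(x) at the query point
```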