Progressively Sampled Equality-Constrained Optimization

An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which …
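
A minimal sketch of the progressive-sampling idea described in the abstract, assuming hypothetical linear constraint terms c_i(x) = a_i^T x - b_i and using SciPy's generic SLSQP solver for each sampled subproblem; the sampling schedule, data, and solver are illustrative stand-ins, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 10_000                        # total number of constraint terms
A = rng.normal(size=(N, 2))       # hypothetical data defining each term
b = rng.normal(size=N)

def f(x):                         # smooth objective
    return 0.5 * np.dot(x, x)

def c_sample(x, idx):             # sampled average of the terms c_i(x) = a_i^T x - b_i
    return np.mean(A[idx] @ x - b[idx])

x = np.zeros(2)
for n in [100, 1_000, N]:         # progressively larger constraint samples
    idx = rng.choice(N, size=n, replace=False)
    res = minimize(f, x, method='SLSQP',
                   constraints={'type': 'eq',
                                'fun': lambda z, idx=idx: c_sample(z, idx)})
    x = res.x                     # warm-start the next, larger subproblem
print(x)
```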

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to …
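
For concreteness, a minimal compass (direct) search, one of the classical derivative-free baselines the abstract refers to; the objective and parameters here are illustrative.

```python
import numpy as np

def blackbox(x):                       # stand-in for a costly, gradient-free query
    return np.sum((x - 1.0) ** 2)

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    x, fx, evals = np.asarray(x0, float), f(x0), 1
    d = np.vstack([np.eye(len(x0)), -np.eye(len(x0))])  # +/- coordinate directions
    while step > tol and evals < max_evals:
        for v in d:                    # poll all 2n directions
            if (fc := f(x + step * v)) < fx:
                x, fx = x + step * v, fc
                break
        else:
            step *= 0.5                # no improvement: shrink the step
        evals += len(d)
    return x, fx

print(compass_search(blackbox, np.zeros(3)))
```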

New insights and algorithms for optimal diagonal preconditioning

Preconditioning (scaling) is essential in many areas of mathematics, and in particular in optimization. In this work, we study the problem of finding an optimal diagonal preconditioner. We focus on minimizing two different notions of condition number: the classical worst-case \(\kappa\)-condition number, and the more averaging-motivated \(\omega\)-condition number. We provide affine-based pseudoconvex …
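
A small numerical illustration of the two notions, assuming the common definitions \(\kappa(A) = \lambda_{\max}/\lambda_{\min}\) and \(\omega(A) = (\mathrm{trace}(A)/n)\,/\,\det(A)^{1/n}\) (the ratio of arithmetic to geometric mean of the eigenvalues) for symmetric positive definite A, with simple Jacobi scaling as a diagonal preconditioner that is not necessarily optimal for either measure.

```python
import numpy as np

def kappa(A):                          # worst-case condition number
    w = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
    return w[-1] / w[0]

def omega(A):                          # arithmetic / geometric mean of eigenvalues
    w = np.linalg.eigvalsh(A)
    return w.mean() / np.exp(np.mean(np.log(w)))

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
A = B @ B.T + 1e-3 * np.eye(50)        # SPD test matrix

d = 1.0 / np.sqrt(np.diag(A))          # Jacobi scaling D = diag(A)^{-1/2}
P = d[:, None] * A * d[None, :]        # symmetrically preconditioned matrix D A D

print(f"kappa: {kappa(A):.2e} -> {kappa(P):.2e}")
print(f"omega: {omega(A):.2e} -> {omega(P):.2e}")
```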

A Minimalist Bayesian Framework for Stochastic Optimization

The Bayesian paradigm offers principled tools for sequential decision-making under uncertainty, but its reliance on a probabilistic model for all parameters can hinder the incorporation of complex structural constraints. We introduce a minimalist Bayesian framework that places a prior only on the component of interest, such as the location of the optimum. Nuisance parameters are …

Active-Set Identification in Noisy and Stochastic Optimization

Identifying active constraints from a point near an optimal solution is important both theoretically and practically in constrained continuous optimization, as it can help identify optimal Lagrange multipliers and essentially reduces an inequality-constrained problem to an equality-constrained one. Traditional active-set identification guarantees have been proved under assumptions of smoothness and constraint qualifications, and assume exact …
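
A minimal sketch of the basic identification rule: given inequality constraints c_i(x) ≤ 0 and a point near a solution, estimate the active set by thresholding (possibly noisy) constraint values. The fixed tolerance below is illustrative and is not the paper's identification guarantee.

```python
import numpy as np

def estimated_active_set(c_vals, tol):
    """Indices i with c_i(x) >= -tol are declared active (feasible form c_i(x) <= 0)."""
    return {i for i, v in enumerate(c_vals) if v >= -tol}

rng = np.random.default_rng(1)
c_exact = np.array([0.0, -0.7, -1e-5, -2.3])     # constraints 0 and 2 are (nearly) active
noise = 1e-4 * rng.normal(size=c_exact.shape)    # stochastic evaluation error
print(estimated_active_set(c_exact + noise, tol=1e-3))   # expect {0, 2}
```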

A Data-Driven County-Level Budget Allocation Model for Opioid Crisis Management: Insights from West Virginia

Problem definition. The opioid crisis has remained a major public health challenge in the United States for many years. This study develops a data-driven decision support framework to guide policymakers in allocating county-level budgets across multiple expenditure categories in order to address the opioid crisis. Methodology/results. We compile and curate a detailed dataset on fiscal …

Pareto-optimal trees and Pareto forest: a bi-objective optimization model for binary classification

As inherently transparent models, classification trees play a central role in interpretable machine learning by providing easily traceable decision paths that allow users to understand how input features contribute to specific predictions. In this work, we introduce a new class of interpretable binary classification models, named Pareto-optimal trees, which aim at combining the complementary strengths …
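
A hedged sketch of the bi-objective view behind such models: trade off misclassification error against tree size and keep only the nondominated points. Plain scikit-learn CART trees serve as stand-ins here; this is not the paper's Pareto-optimal tree construction.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# (test error, number of leaves) for trees of increasing depth
pts = []
for depth in range(1, 10):
    t = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(Xtr, ytr)
    pts.append((1.0 - t.score(Xte, yte), t.get_n_leaves()))

# keep the Pareto-nondominated (error, size) pairs
pareto = [p for p in pts
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]
print(sorted(pareto))
```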

On the Convergence and Complexity of Proximal Gradient and Accelerated Proximal Gradient Methods under Adaptive Gradient Estimation

In this paper, we propose a proximal gradient method and an accelerated proximal gradient method for solving composite optimization problems, where the objective function is the sum of a smooth and a convex, possibly nonsmooth, function. We consider settings where the smooth component is either a finite-sum function or an expectation of a stochastic function, …
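
A minimal sketch of a stochastic proximal gradient iteration for a finite-sum least-squares term plus an \(\ell_1\) regularizer, where the mini-batch used for the gradient estimate grows over iterations. The geometric batch-growth schedule is an illustrative stand-in for the paper's adaptive gradient estimation rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, lam = 5_000, 20, 0.1
A = rng.normal(size=(N, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
step = N / np.linalg.norm(A, 2) ** 2       # ~1/L for f(x) = (1/2N)||Ax - b||^2

def soft_threshold(z, t):                  # prox of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, batch = np.zeros(d), 8
for k in range(200):
    idx = rng.choice(N, size=min(batch, N), replace=False)
    g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)   # mini-batch gradient estimate
    x = soft_threshold(x - step * g, step * lam)      # proximal gradient step
    batch = int(batch * 1.05) + 1                     # grow the sample over time
print(np.count_nonzero(x))
```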

Faster stochastic cubic regularized Newton methods with momentum

Cubic regularized Newton (CRN) methods have attracted significant research interest because they offer stronger solution guarantees and lower iteration complexity than first-order methods. With the rise of the big-data era, there is growing interest in developing stochastic cubic regularized Newton (SCRN) methods that do not require exact gradient and Hessian evaluations. In this paper, we propose faster SCRN …
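
A minimal sketch of one SCRN-style step with a momentum-averaged gradient estimate: at each iteration the cubic model \(m(s) = g^\top s + \tfrac12 s^\top H s + \tfrac{\sigma}{3}\|s\|^3\) is minimized. The stochastic oracle, test function, and generic subproblem solver below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 5

def grad_hess_sample(x, n=32):          # hypothetical stochastic oracle
    noise = rng.normal(size=d) / np.sqrt(n)
    g = 4 * x * (x @ x - 1.0) + noise   # noisy gradient of (||x||^2 - 1)^2
    H = 8 * np.outer(x, x) + 4 * (x @ x - 1.0) * np.eye(d)
    return g, H

x, m, beta, sigma = rng.normal(size=d), np.zeros(d), 0.9, 1.0
for k in range(30):
    g, H = grad_hess_sample(x)
    m = beta * m + (1 - beta) * g       # momentum on the gradient estimate
    model = lambda s: m @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s) ** 3
    s = minimize(model, np.zeros(d)).x  # cubic subproblem via a generic solver
    x = x + s
print(np.linalg.norm(x))                # should approach 1, the sphere of minimizers
```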

Stochastic Approximation with Block Coordinate Optimal Stepsizes

We consider stochastic approximation with block-coordinate stepsizes and propose adaptive stepsize rules that aim to minimize the expected distance from the next iterate to an optimal point. These stepsize rules employ online estimates of the second moment of the search direction along each block coordinate. The popular Adam algorithm can be interpreted as a particular …
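
A sketch of the general pattern: per-block stepsizes scaled by an online second-moment estimate of the search direction, which with one coordinate per block and bias correction is exactly the familiar Adam recursion. The update below is standard Adam, not the paper's optimal stepsize formula.

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha, b1, b2, eps = 10, 0.1, 0.9, 0.999, 1e-8
target = rng.normal(size=d)

def stoch_grad(x):                     # noisy gradient of 0.5 * ||x - target||^2
    return (x - target) + 0.1 * rng.normal(size=d)

x, m, v = np.zeros(d), np.zeros(d), np.zeros(d)
for t in range(1, 1001):
    g = stoch_grad(x)
    m = b1 * m + (1 - b1) * g          # first-moment estimate of the direction
    v = b2 * v + (1 - b2) * g * g      # per-coordinate second-moment estimate
    mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)   # bias correction
    x -= alpha * mhat / (np.sqrt(vhat) + eps)  # coordinate-wise adaptive stepsize
print(np.linalg.norm(x - target))
```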