Machine Learning Algorithms for Assisting Solvers for Constraint Satisfaction Problems

This survey proposes a unifying conceptual framework and taxonomy that systematically integrates Machine Learning (ML) and Reinforcement Learning (RL) with classical paradigms for Constraint Satisfaction and Boolean Satisfiability solving. Unlike prior reviews that focus on individual applications, we organize the literature around solver architecture, linking each major phase—constraint propagation, heuristic decision-making, conflict analysis, and meta-level … Read more

Machine Learning Algorithms for Assisting Solvers for Decision Optimization Problems

Combinatorial decision problems lie at the intersection of Operations Research (OR) and Artificial Intelligence (AI), encompassing structured optimization tasks such as submodular selection, dynamic programming, planning, and scheduling. These problems exhibit exponential growth in decision complexity, driven by interdependent choices coupled through logical, temporal, and resource constraints. Classical optimization frameworks—including integer programming, submodular optimization, and … Read more

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to … Read more
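
To ground the classical direct-search backbone mentioned above, the sketch below shows a minimal coordinate direct-search loop for a black-box objective f, queried through function values only. It is an illustrative sketch, not any specific solver: the coordinate poll set, the step-halving rule, and all parameter names are assumptions.

```python
import numpy as np

def direct_search(f, x0, step=1.0, tol=1e-6, max_evals=1000):
    """Minimal coordinate direct search for a black-box objective f.

    Polls the 2n points x +/- step*e_i, moves to the first improving
    point, and halves the step after an unsuccessful poll.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
            y = x + step * d
            fy = f(y)
            evals += 1
            if fy < fx:              # accept the first improving poll point
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5              # unsuccessful poll: shrink the step
    return x, fx
```

For example, direct_search(lambda x: float(np.sum(x**2)), [2.0, -1.5]) drives the iterate toward the origin without ever seeing a gradient.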

Direct-search methods for decentralized blackbox optimization

Derivative-free optimization algorithms are particularly useful for tackling blackbox optimization problems where the objective function arises from complex and expensive procedures that preclude the use of classical gradient-based methods. In contemporary decentralized environments, such functions are defined locally on different computational nodes due to technical or privacy constraints, introducing additional challenges within the optimization process. … Read more
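
As a concrete, hypothetical picture of the decentralized setting, the sketch below assumes the global objective is a sum of node-local terms f_i, each evaluable only on its own node, and that a coordinator runs one poll step while only scalar values cross node boundaries. All names and the coordinate poll set are illustrative assumptions, not the paper's method.

```python
import numpy as np

def decentralized_poll(local_fns, x, step):
    """One direct-search poll step for f(x) = sum_i f_i(x), where each
    f_i is held by a separate node. In a real system the aggregation
    below would be a broadcast of z followed by a reduction of scalars.
    """
    def global_eval(z):
        return sum(f(z) for f in local_fns)   # aggregate node-local values

    fx = global_eval(x)
    for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
        y = x + step * d
        if global_eval(y) < fx:
            return y, True                    # successful poll point
    return x, False                           # caller shrinks the step
```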

Restarting nonlinear conjugate gradient methods

In unconstrained optimization, due to the nonlinearity of the objective function or rounding errors in finite-precision arithmetic, it can happen that NaN or infinite step sizes appear in the nonlinear conjugate gradient (NCG) method, or that the step violates the sufficient descent condition (SDC). In this case the conjugate gradient (CG) direction must often … Read more
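
The restart safeguard described above can be sketched in a few lines. The code below uses a Fletcher-Reeves coefficient for concreteness (the truncated abstract does not specify the paper's update) and the sufficient descent condition g^T d <= -c ||g||^2 with an illustrative constant c; both choices are assumptions.

```python
import numpy as np

def ncg_direction(g, g_prev, d_prev, c=1e-4):
    """Compute an NCG direction with a Fletcher-Reeves coefficient,
    restarting with steepest descent when the direction is not finite
    or fails the sufficient descent condition g.T @ d <= -c * ||g||^2.
    """
    beta = (g @ g) / (g_prev @ g_prev)     # Fletcher-Reeves coefficient
    d = -g + beta * d_prev
    if not np.all(np.isfinite(d)) or g @ d > -c * (g @ g):
        d = -g                             # restart: fall back to -gradient
    return d
```

Note that a vanishing previous gradient makes beta non-finite, which the same check catches, so the restart also covers the NaN/infinite case the abstract mentions.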

A class of diagonal quasi-Newton penalty decomposition algorithms for sparse bound-constrained nonconvex optimization

This paper discusses an improved quasi-Newton penalty decomposition algorithm for cardinality-constrained optimization problems with simple bounds on the variables, which are assumed to be finite. Until an approximate stationary point is found, the algorithm approximates the solutions of a sequence of penalty subproblems by a two-block decomposition scheme. This scheme finds an approximate solution … Read more
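
To illustrate the two-block decomposition scheme, here is a toy iteration under loudly stated assumptions: a plain projected-gradient step on the penalized first block stands in for the paper's diagonal quasi-Newton update, the penalty parameter rho is held fixed rather than driven upward, and grad_f, lb, ub are illustrative names.

```python
import numpy as np

def hard_threshold(z, s):
    """Project z onto {y : ||y||_0 <= s} by keeping the s largest entries."""
    y = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    y[idx] = z[idx]
    return y

def penalty_decomposition(grad_f, x0, s, lb, ub, rho=1.0, lr=0.1, iters=200):
    """Illustrative two-block scheme for min f(x), lb<=x<=ub, ||x||_0<=s.

    Block 1: projected-gradient step on f(x) + (rho/2)||x - y||^2
             over the box [lb, ub] (stand-in for a quasi-Newton step).
    Block 2: closed-form update of y by hard thresholding.
    """
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    y = hard_threshold(x, s)
    for _ in range(iters):
        g = grad_f(x) + rho * (x - y)          # gradient of penalized block
        x = np.clip(x - lr * g, lb, ub)        # projected gradient step
        y = hard_threshold(x, s)               # exact second-block solve
    return x, y
```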

Probabilistic Iterative Hard Thresholding for Sparse Learning

In statistical modeling where the data regime is unfavorable, with high dimensionality relative to the sample size, finding hidden sparsity in the ground truth can be critical to formulating an accurate statistical model. The so-called “ℓ0 norm”, which counts the number of non-zero components of a vector, is a strong and reliable mechanism for enforcing … Read more
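
For context on how the ℓ0 constraint is enforced in practice, the sketch below implements the plain deterministic iterative hard-thresholding step for sparse least squares; the paper's probabilistic variant replaces the exact gradient with a stochastic estimate, which this sketch does not attempt, and the step-size rule is an illustrative assumption.

```python
import numpy as np

def iht(A, b, s, lr=None, iters=300):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= s.

    Alternates a gradient step on the least-squares loss with projection
    onto the s-sparse set (keep the s largest-magnitude entries).
    """
    m, n = A.shape
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1/L with L = ||A||_2^2
    x = np.zeros(n)
    for _ in range(iters):
        x = x - lr * A.T @ (A @ x - b)         # gradient step
        idx = np.argsort(np.abs(x))[:-s]       # indices of the n - s smallest
        x[idx] = 0.0                           # hard-threshold projection
    return x
```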

The stochastic Ravine accelerated gradient method with general extrapolation coefficients

In a real Hilbert space setting, we study the convergence properties of the stochastic Ravine accelerated gradient method for convex differentiable optimization. We consider the general form of this algorithm, where the extrapolation coefficients can vary with each iteration and where the evaluation of the gradient is subject to random errors. This general … Read more
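
For orientation, the sketch below shows one plausible form of the Ravine iteration with iteration-dependent extrapolation coefficients and a noisy gradient oracle; the additive Gaussian noise model and all parameter names are assumptions made for illustration.

```python
import numpy as np

def stochastic_ravine(grad, y0, step, beta, noise=0.0, iters=100, seed=0):
    """Sketch of the Ravine accelerated gradient method with
    iteration-dependent extrapolation coefficients beta(k) and a
    gradient oracle perturbed by additive noise (random errors).
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y0, dtype=float)
    x_prev = y.copy()
    for k in range(1, iters + 1):
        g = grad(y) + noise * rng.standard_normal(y.shape)  # inexact gradient
        x = y - step * g                    # gradient step from y_k
        y = x + beta(k) * (x - x_prev)      # extrapolation: the "ravine" step
        x_prev = x
    return y
```

With the classical schedule beta = lambda k: (k - 1) / (k + 2) and noise = 0 this reduces to a deterministic accelerated method; the abstract's generality lies in allowing arbitrary coefficient sequences and random gradient errors.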

Inexact Direct-Search Methods for Bilevel Optimization Problems

In this work, we introduce new direct-search schemes for the solution of bilevel optimization (BO) problems. Our methods rely on a fixed-accuracy black-box oracle for the lower-level problem and handle both smooth and potentially nonsmooth true objectives. We thus analyze, for the first time in the literature, direct-search schemes in … Read more
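
To make the fixed-accuracy oracle concrete, here is a hypothetical direct-search loop in which every upper-level evaluation F_approx(x) is only guaranteed to lie within eps of the true value; the acceptance threshold 2*eps + step**2 is an illustrative sufficient-decrease rule, not the paper's exact condition, and all names are assumptions.

```python
import numpy as np

def bilevel_direct_search(F_approx, x0, step=1.0, eps=1e-3, tol=1e-5, iters=500):
    """Direct search on an upper-level objective evaluated through a
    fixed-accuracy lower-level oracle: |F_approx(x) - F(x)| <= eps.
    A poll point is accepted only if it improves by more than
    2*eps + step**2, so decreases caused purely by oracle error
    are rejected.
    """
    x = np.asarray(x0, dtype=float)
    fx = F_approx(x)
    for _ in range(iters):
        if step < tol:
            break
        success = False
        for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
            y = x + step * d
            fy = F_approx(y)
            if fy < fx - (2 * eps + step ** 2):   # beats worst-case error
                x, fx, success = y, fy, True
                break
        step = step * 2.0 if success else step * 0.5
    return x
```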

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results. The algorithm differs from other interior-point methods for solving smooth (nonconvex) optimization problems in that its search directions are computed using stochastic gradient estimates. It is also unique … Read more
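
A minimal sketch of the idea follows, under loud assumptions: a fixed barrier parameter mu, a plain log-barrier gradient in place of the paper's analyzed direction, and illustrative names throughout. Here sgrad(x) stands for a stochastic estimate of the objective gradient, and a fraction-to-boundary rule keeps iterates strictly inside the bounds.

```python
import numpy as np

def sgd_interior_point(sgrad, l, u, x0, mu=0.1, lr=0.01, tau=0.995, iters=1000):
    """Sketch: stochastic-gradient step on the log-barrier reformulation
    of min f(x) s.t. l <= x <= u, with a fraction-to-boundary rule so
    iterates stay strictly interior. x0 must satisfy l < x0 < u.
    """
    l, u = np.asarray(l, dtype=float), np.asarray(u, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # stochastic gradient estimate plus gradient of the log barrier
        g = sgrad(x) - mu / (x - l) + mu / (u - x)
        d = -lr * g
        # fraction-to-boundary: largest alpha <= 1 keeping x + alpha*d interior
        alpha = 1.0
        pos, neg = d > 0, d < 0
        if np.any(pos):
            alpha = min(alpha, tau * np.min((u[pos] - x[pos]) / d[pos]))
        if np.any(neg):
            alpha = min(alpha, tau * np.min((l[neg] - x[neg]) / d[neg]))
        x = x + alpha * d
    return x
```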