Machine Learning Algorithms for Assisting Solvers for Constraint Satisfaction Problems

This survey proposes a unifying conceptual framework and taxonomy that systematically integrates Machine Learning (ML) and Reinforcement Learning (RL) with classical paradigms for Constraint Satisfaction and Boolean Satisfiability solving. Unlike prior reviews that focus on individual applications, we organize the literature around solver architecture, linking each major phase—constraint propagation, heuristic decision-making, conflict analysis, and meta-level …

Machine Learning Algorithms for Assisting Solvers for Decision Optimization Problems

Combinatorial decision problems lie at the intersection of Operations Research (OR) and Artificial Intelligence (AI), encompassing structured optimization tasks such as submodular selection, dynamic programming, planning, and scheduling. These problems exhibit exponential growth in decision complexity, driven by interdependent choices coupled through logical, temporal, and resource constraints. Classical optimization frameworks—including integer programming, submodular optimization, and …

Machine Learning Algorithms for Improving Black Box Optimization Solvers

Black-box optimization (BBO) addresses problems where objectives are accessible only through costly queries without gradients or explicit structure. Classical derivative-free methods—line search, direct search, and model-based solvers such as Bayesian optimization—form the backbone of BBO, yet often struggle in high-dimensional, noisy, or mixed-integer settings. Recent advances use machine learning (ML) and reinforcement learning (RL) to …

Restarting nonlinear conjugate gradient methods

In unconstrained optimization, due to the nonlinearity of the objective function or rounding errors in finite precision arithmetic, it can happen that NaN or infinite step sizes appear in the nonlinear conjugate gradient (NCG) method, or that a step violates the sufficient descent condition (SDC). In this case the conjugate gradient (CG) direction must often …
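
As an illustration of the restart logic this abstract describes (a minimal sketch, not the paper's actual method), the loop below falls back to the steepest descent direction whenever the current direction fails a sufficient descent test or a non-finite step size appears. The Polak-Ribiere+ update, the Armijo backtracking, and names such as ncg_with_restart and sdc_tol are assumptions made for the sketch.

import numpy as np

def ncg_with_restart(f, grad, x0, max_iter=200, sdc_tol=1e-4, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    d = -g                          # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # sufficient descent test: g^T d <= -sdc_tol * ||g|| * ||d||
        if (not np.all(np.isfinite(d))) or g @ d > -sdc_tol * np.linalg.norm(g) * np.linalg.norm(d):
            d = -g                  # restart: replace the CG direction
        t = backtracking(f, x, g, d)
        if not np.isfinite(t):
            d = -g                  # restart on a NaN/infinite step size
            t = backtracking(f, x, g, d)
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ (one common choice)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

def backtracking(f, x, g, d, t=1.0, c=1e-4, rho=0.5):
    # simple Armijo backtracking line search
    while f(x + t * d) > f(x) + c * t * (g @ d):
        t *= rho
        if t < 1e-14:
            break
    return t

For example, ncg_with_restart(lambda x: x @ x, lambda x: 2 * x, np.ones(5)) drives the gradient to zero within a few iterations.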

A class of diagonal quasi-Newton penalty decomposition algorithms for sparse bound-constrained nonconvex optimization

This paper discusses an improved quasi-Newton penalty decomposition algorithm for cardinality-constrained optimization problems whose simple bounds on the variables are assumed to be finite. Until an approximate stationary point is found, the algorithm approximates the solutions of a sequence of penalty subproblems by a two-block decomposition scheme. This scheme finds an approximate solution …
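
Loosely, a two-block penalty decomposition for min f(x) subject to a cardinality bound ||x||_0 <= s and box constraints lo <= x <= hi alternates a smooth step on the penalty subproblem with a closed-form projection step. The sketch below uses a plain gradient step where the paper uses a diagonal quasi-Newton step; all names and parameter values are illustrative assumptions.

import numpy as np

def penalty_decomposition(f, grad, x0, s, lo, hi, rho=1.0, rho_growth=2.0,
                          outer=20, inner=50, step=1e-2, tol=1e-6):
    """Sketch of a two-block scheme for min f(x) s.t. ||x||_0 <= s, lo <= x <= hi."""
    x = np.clip(x0, lo, hi)
    y = x.copy()
    for _ in range(outer):
        for _ in range(inner):
            # x-block: gradient step on the penalty subproblem f(x) + rho/2 ||x - y||^2
            x = x - step * (grad(x) + rho * (x - y))
            # y-block, in closed form: clip to the bounds, then keep the s largest entries
            z = np.clip(x, lo, hi)
            y = np.zeros_like(z)
            idx = np.argsort(np.abs(z))[-s:]
            y[idx] = z[idx]
        if np.linalg.norm(x - y) <= tol:   # blocks agree: approximate stationary point
            break
        rho *= rho_growth                  # tighten the penalty and repeat
    return y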

Heuristic methods for noisy derivative-free bound-constrained mixed-integer optimization

This paper introduces MATRS, a novel matrix adaptation trust-region strategy designed to solve noisy derivative-free mixed-integer optimization problems with simple bounds in low dimensions. MATRS operates through a repeated cycle of five phases: mutation, selection, recombination, trust-region, and mixed-integer, executed in this sequence. However, if in the mutation phase a new best point (the point …
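
The five-phase cycle can be pictured with the toy loop below. Each phase is a deliberately simplified stand-in (for example, the trust-region phase is reduced to an accept/reject test with step size adaptation), so this is a schematic reading of the abstract, not MATRS itself; int_mask is a boolean array marking the integer variables.

import numpy as np

def matrs_like_cycle(f, x0, lb, ub, int_mask, iters=50, pop=8, sigma=0.3):
    """Schematic five-phase cycle (mutation, selection, recombination,
    trust-region, mixed-integer); a toy stand-in, not MATRS itself."""
    rng = np.random.default_rng(0)
    n = len(x0)
    x_best, f_best = x0.copy(), f(x0)
    M = np.eye(n)                                   # adapted scaling matrix
    for _ in range(iters):
        # 1) mutation: sample trial points around the current best point
        trials = x_best + sigma * (rng.standard_normal((pop, n)) @ M.T)
        trials = np.clip(trials, lb, ub)
        trials[:, int_mask] = np.round(trials[:, int_mask])
        # 2) selection: keep the better half of the trials
        vals = np.array([f(t) for t in trials])
        elite = trials[np.argsort(vals)[: pop // 2]]
        # 3) recombination: average the elite points
        x_rec = elite.mean(axis=0)
        x_rec[int_mask] = np.round(x_rec[int_mask])
        f_rec = f(x_rec)
        # 4) "trust-region": accept only on improvement, adapt sigma
        if f_rec < f_best:
            x_best, f_best, sigma = x_rec, f_rec, sigma * 1.2
        else:
            sigma *= 0.7
        # 5) mixed-integer: +/-1 probes on the integer coordinates
        for i in np.where(int_mask)[0]:
            for step in (-1.0, 1.0):
                y = x_best.copy()
                y[i] = np.clip(y[i] + step, lb[i], ub[i])
                fy = f(y)
                if fy < f_best:
                    x_best, f_best = y, fy
    return x_best, f_best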

An active set method for bound-constrained optimization

In this paper, a class of algorithms is developed for bound-constrained optimization. The new scheme uses a gradient-free line search along bent search paths. Unlike traditional algorithms for bound-constrained optimization, our algorithm ensures that the reduced gradient becomes arbitrarily small. It is also proved that all strongly active variables are found and fixed after finitely …
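
The phrase "bent search paths" refers to search rays that are bent by projection onto the bound constraints. A minimal derivative-free version of such a search might look as follows; the geometric step size grid and all names are assumptions for the sketch.

import numpy as np

def bent_path_search(f, x, d, lo, hi, ts=None):
    """Evaluate f along the bent path P(t) = proj_[lo,hi](x + t d) and
    return the best point tried (a gradient-free surrogate line search)."""
    if ts is None:
        ts = 2.0 ** np.arange(-10, 4)       # geometric grid of trial step sizes
    best_x, best_f = x, f(x)
    for t in ts:
        y = np.clip(x + t * d, lo, hi)      # the path bends at the bounds
        fy = f(y)
        if fy < best_f:
            best_x, best_f = y, fy
    return best_x, best_f

After such a search, the coordinates of the best point sitting exactly at lo or hi are natural candidates for the strongly active set that the abstract says is eventually identified and fixed.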

A subspace inertial method for derivative-free nonlinear monotone equations

We introduce SILSA, a subspace inertial line search algorithm, for finding solutions of nonlinear monotone equations (NME). At each iteration, a new point is generated in a subspace spanned by the previous points. Among the points forming the subspace, the one with the largest residual norm is replaced by the new point to update …
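
A rough sketch of the ingredients named in the abstract: an inertial extrapolation step, a derivative-free line search along -F, and replacement of the stored subspace point with the largest residual norm. The specific line search test, constants, and initialization below are assumptions, not SILSA's actual rules.

import numpy as np

def silsa_like(F, x0, m=5, max_iter=500, tol=1e-8, beta=0.5, sigma=1e-4, theta=0.5):
    """Sketch of a subspace inertial scheme for monotone F(x) = 0."""
    mem = [x0 + 0.01 * k * np.ones_like(x0) for k in range(m)]  # arbitrary initial subspace points
    x_prev, x = mem[0], mem[1]
    for _ in range(max_iter):
        w = x + theta * (x - x_prev)            # inertial extrapolation step
        Fw = F(w)
        if np.linalg.norm(Fw) <= tol:
            return w
        d = -Fw
        t = 1.0
        # backtrack until a simple derivative-free descent test holds
        while -F(w + t * d) @ d < sigma * t * np.linalg.norm(d) ** 2:
            t *= beta
            if t < 1e-12:
                break
        x_new = w + t * d
        # replace the stored point with the largest residual norm
        worst = max(range(m), key=lambda i: np.linalg.norm(F(mem[i])))
        mem[worst] = x_new
        x_prev, x = x, x_new
    return x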

Effective matrix adaptation strategy for noisy derivative-free optimization

In this paper, we introduce MADFO, a new and effective matrix adaptation evolution strategy for noisy derivative-free optimization problems. Like every MAES solver, MADFO consists of three phases: mutation, selection, and recombination. MADFO improves the mutation phase by generating good step sizes, neither too small nor too large, which increase the probability of selecting mutation points …
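
The step size control described above (keeping sigma "neither too small nor too large") can be caricatured by clamping sigma to a band and adapting it on success or failure, as in the sketch below. The rank-one matrix update and all constants are illustrative assumptions, not MADFO's actual rules.

import numpy as np

def madfo_like(f, x0, iters=200, pop=6, sigma=0.5, sig_lo=1e-6, sig_hi=10.0):
    """Sketch of a mutation/selection/recombination cycle with a clamped step size."""
    rng = np.random.default_rng(1)
    n = len(x0)
    M = np.eye(n)                                # adapted transformation matrix
    x, fx = x0.copy(), f(x0)
    for _ in range(iters):
        z = rng.standard_normal((pop, n))
        trials = x + sigma * (z @ M.T)           # mutation
        vals = np.array([f(t) for t in trials])
        order = np.argsort(vals)                 # selection
        if vals[order[0]] < fx:
            x, fx = trials[order[0]], vals[order[0]]
            sigma = min(sigma * 1.3, sig_hi)     # success: allow larger steps
        else:
            sigma = max(sigma * 0.8, sig_lo)     # failure: shrink, but never to zero
        # recombination: rank-one pull of M toward the best sampled direction
        best_z = z[order[0]][:, None]
        M = 0.9 * M + 0.1 * (best_z @ best_z.T)
    return x, fx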

New subspace method for unconstrained derivative-free optimization

This paper defines an efficient subspace method, called SSDFO, for unconstrained derivative-free optimization problems where the gradients of the objective function are Lipschitz continuous but only exact function values are available. SSDFO employs line searches along directions constructed from quadratic models. These approximate the objective function in a subspace spanned by some …
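
As a sketch of the idea (not SSDFO itself): along each direction in a small spanning set, fit a one-dimensional quadratic interpolation model from three exact function values, step toward its minimizer, and safeguard the step with a backtracking test on the true function. The subspace update rule and all names below are assumptions made for the sketch.

import numpy as np

def ssdfo_like(f, x0, iters=50, h=1e-2, mem=3):
    """Sketch: line searches along a spanning set of directions, each step
    taken from a 1-D quadratic interpolation model built from exact
    function values only (no gradients)."""
    x = x0.astype(float)
    n = len(x)
    dirs = [np.eye(n)[i] for i in range(min(mem, n))]   # initial spanning set
    for _ in range(iters):
        x_old = x.copy()
        for d in dirs:
            f0, fp, fm = f(x), f(x + h * d), f(x - h * d)
            g = (fp - fm) / (2 * h)                     # model slope along d
            c = (fp - 2 * f0 + fm) / h ** 2             # model curvature along d
            t = -g / c if c > 1e-12 else -np.sign(g) * h
            while f(x + t * d) > f0 and abs(t) > 1e-12:
                t *= 0.5                                # backtrack on the true f
            if f(x + t * d) < f0:
                x = x + t * d
        move = x - x_old
        if np.linalg.norm(move) > 1e-12:
            # refresh the spanning set with the latest displacement direction
            dirs = dirs[1:] + [move / np.linalg.norm(move)]
    return x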