Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test …
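
The core device is easy to sketch: by reusing the same random realization for both points of each finite difference, much of the noise cancels. Below is a minimal Python illustration, assuming a hypothetical stochastic oracle F(x, rng); the names and sampling scheme are illustrative, not the paper's actual interface.

```python
import numpy as np

def cr_gradient(F, x, h=1e-4, n_samples=8, seed=0):
    """Forward-difference gradient estimate of E[F(x, xi)] using common
    random numbers: the same seed is shared between F(x) and F(x + h*e_i),
    so correlated noise largely cancels in the difference.
    F(x, rng) is a hypothetical stochastic oracle, not the paper's API."""
    d = len(x)
    g = np.zeros(d)
    seeds = range(seed, seed + n_samples)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = h
        diffs = [
            F(x + e_i, np.random.default_rng(s)) - F(x, np.random.default_rng(s))
            for s in seeds
        ]
        g[i] = np.mean(diffs) / h
    return g

# Toy noisy quadratic: the noise depends only on the rng, so common
# random numbers remove it from each difference entirely.
F = lambda x, rng: np.sum(x**2) + 0.1 * rng.standard_normal()
print(cr_gradient(F, np.array([1.0, -2.0])))  # ~ [2, -4]
```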

Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates

We present a stochastic extension of the mesh adaptive direct search (MADS) algorithm originally developed for deterministic blackbox optimization. The algorithm, called StoMADS, considers the unconstrained optimization of an objective function f whose values can be computed only through a blackbox corrupted by some random noise following an unknown distribution. The proposed method is based …
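
For a rough feel for the method: at each iteration a poll over mesh points is judged with noisy estimates rather than exact values. The sketch below substitutes sample averages for the paper's probabilistic estimates and a fixed sufficient-decrease test for its accuracy conditions; it is a caricature of MADS under noise, not StoMADS itself.

```python
import numpy as np

def poll_step(f_noisy, x, delta, n_avg=16, gamma=1e-3, rng=None):
    """One MADS-style poll on a coordinate mesh, with sample-averaged
    estimates standing in for StoMADS's probabilistic estimates."""
    rng = rng or np.random.default_rng(0)
    est = lambda z: np.mean([f_noisy(z, rng) for _ in range(n_avg)])
    fx = est(x)
    d = len(x)
    # Positive spanning set: +/- coordinate directions, scaled by mesh size.
    for direction in np.vstack([np.eye(d), -np.eye(d)]):
        trial = x + delta * direction
        if est(trial) < fx - gamma * delta**2:   # sufficient decrease
            return trial, delta * 2.0            # success: coarsen the mesh
    return x, delta / 2.0                        # failure: refine the mesh

f_noisy = lambda z, rng: np.sum((z - 1.0)**2) + 0.01 * rng.standard_normal()
x, delta = np.array([0.0, 0.0]), 1.0
for _ in range(20):
    x, delta = poll_step(f_noisy, x, delta)
print(x)  # approaches [1, 1]
```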

On the use of polynomial models in multiobjective directional direct search

Polynomial interpolation or regression models are an important tool in Derivative-free Optimization, acting as surrogates of the real function. In this work, we propose the use of these models in the multiobjective framework of directional direct search, namely the one of Direct Multisearch. Previously evaluated points are used to build quadratic polynomial models, which are …
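
To illustrate the surrogate-building step, here is a minimal least-squares quadratic regression model fitted to previously evaluated points, for one objective at a time. The helper names are hypothetical, and the sketch ignores Direct Multisearch's model management and poll ordering.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_model(points, values):
    """Fit a full quadratic (1, x_i, x_i*x_j monomials) to sample data
    by least squares and return the model as a callable."""
    points = np.asarray(points)
    n, d = points.shape
    def basis(x):
        feats = [1.0] + list(x)
        feats += [x[i] * x[j] for i, j in combinations_with_replacement(range(d), 2)]
        return np.array(feats)
    A = np.vstack([basis(p) for p in points])
    coef, *_ = np.linalg.lstsq(A, np.asarray(values), rcond=None)
    return lambda x: basis(np.asarray(x)) @ coef

pts = np.random.default_rng(0).uniform(-1, 1, size=(12, 2))
f = lambda x: (x[0] - 0.3)**2 + 2 * (x[1] + 0.1)**2
m = quadratic_model(pts, [f(p) for p in pts])
print(m([0.3, -0.1]))  # ~0: the model recovers this noiseless quadratic
```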

HyperNOMAD: Hyperparameter optimization of deep neural networks using mesh adaptive direct search

The performance of deep neural networks is highly sensitive to the choice of the hyperparameters that define the structure of the network and the learning process. When facing a new application, tuning a deep neural network is a tedious and time-consuming process that is often described as a “dark art”. This explains the necessity …

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization

In this paper, we analyze several methods for approximating gradients of noisy functions using only function values. These methods include finite differences, linear interpolation, Gaussian smoothing and smoothing on a unit sphere. The methods differ in the number of functions sampled, the choice of the sample points, and the way in which the gradient approximations …
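
Two of the estimators compared in the paper can be stated in a few lines. The sketch below implements forward finite differences and the standard Monte Carlo estimator of the Gaussian-smoothed gradient; the sample size and smoothing parameters are arbitrary choices for illustration.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite differences: d evaluations beyond f(x)."""
    fx = f(x)
    return np.array([(f(x + h * e) - fx) / h for e in np.eye(len(x))])

def gaussian_smoothing_gradient(f, x, sigma=1e-3, m=64, rng=None):
    """Monte Carlo estimate of the gradient of the smoothed function
    f_sigma(x) = E[f(x + sigma*u)], u ~ N(0, I):
        grad f_sigma(x) ~ (1/m) sum_k (f(x + sigma*u_k) - f(x)) / sigma * u_k
    """
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    U = rng.standard_normal((m, len(x)))
    return np.mean([(f(x + sigma * u) - fx) / sigma * u for u in U], axis=0)

f = lambda x: np.sum(x**2)
x = np.array([1.0, -0.5, 2.0])
print(fd_gradient(f, x))                   # ~ [2, -1, 4]
print(gaussian_smoothing_gradient(f, x))   # noisier estimate of the same
```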

Trust-region methods for the derivative-free optimization of nonsmooth black-box functions

In this paper we study the minimization of a nonsmooth black-box type function, without assuming any access to derivatives or generalized derivatives and without any knowledge about the analytical origin of the function nonsmoothness. Directional methods have been derived for such problems but to our knowledge no model-based method like a trust-region one has yet …
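
For orientation, here is one generic model-based trust-region iteration driven only by function values: fit a local model to sampled points, take a step within the radius, and accept or reject by the usual ratio test. This is the standard smooth-case template only; the paper's contribution is precisely the machinery needed when f is nonsmooth.

```python
import numpy as np

def tr_step(f, x, radius, eta=0.1, rng=None):
    """One derivative-free trust-region iteration with a linear model
    fitted by least squares to sampled function values."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    S = radius * rng.standard_normal((2 * d, d))   # sample displacements
    fx = f(x)
    y = np.array([f(x + s) for s in S])
    g, *_ = np.linalg.lstsq(S, y - fx, rcond=None)  # model gradient
    step = -radius * g / (np.linalg.norm(g) + 1e-12)
    pred = -(g @ step)                              # predicted decrease
    actual = fx - f(x + step)
    if pred > 0 and actual / pred >= eta:
        return x + step, radius * 1.5   # good agreement: accept, expand
    return x, radius / 2.0              # poor agreement: reject, shrink

f = lambda z: abs(z[0]) + (z[1] - 1.0)**2   # mildly nonsmooth test function
x, radius = np.array([2.0, -1.0]), 1.0
for _ in range(30):
    x, radius = tr_step(f, x, radius)
print(x)  # roughly [0, 1]
```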

A Method for Convex Black-Box Integer Global Optimization

We study the problem of minimizing a convex function on the integer lattice when the function cannot be evaluated at noninteger points. We propose a new underestimator that does not require access to (sub)gradients of the objective but, rather, uses secant linear functions that interpolate the objective function at previously evaluated points. These linear mappings …
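
The secant idea is simplest to see in one dimension: for convex f, the line through two evaluated points lies below f outside the interval between them, so the pointwise maximum of such extensions is a valid underestimator. A 1-D sketch of this principle (the paper works in general dimension, with its own construction):

```python
def secant_underestimator(evals):
    """Pointwise-maximum secant underestimator for a convex 1-D function
    known only at the integer points in `evals` (a dict z -> f(z))."""
    pts = sorted(evals.items())
    def lb(z):
        best = float("-inf")
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                (x1, f1), (x2, f2) = pts[i], pts[j]
                if not (x1 < z < x2):  # secant underestimates only outside (x1, x2)
                    slope = (f2 - f1) / (x2 - x1)
                    best = max(best, f1 + slope * (z - x1))
        return best
    return lb

f = lambda z: (z - 2.7)**2            # convex test function
lb = secant_underestimator({z: f(z) for z in [-2, 0, 1, 5]})
for z in range(-2, 6):
    assert lb(z) <= f(z) + 1e-9       # valid lower bound at every integer
print(lb(3), f(3))
```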

Escaping local minima with derivative-free methods: a numerical investigation

We apply a state-of-the-art, local derivative-free solver, Py-BOBYQA, to global optimization problems, and propose an algorithmic improvement that is beneficial in this context. Our numerical findings are illustrated on a commonly-used test set of global optimization problems and associated noisy variants, and on hyperparameter tuning for a machine learning test set. As Py-BOBYQA is a …
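
In recent versions of Py-BOBYQA, the restarts mechanism studied in the paper is exposed through the seek_global_minimum flag. A usage sketch on a standard multimodal test function (the bounds, starting point, and test function are illustrative choices):

```python
import numpy as np
import pybobyqa

# Rastrigin-type multimodal test function.
def objfun(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x0 = np.array([2.5, -1.5])
bounds = (np.full(2, -5.0), np.full(2, 5.0))

# seek_global_minimum=True turns on the multiple-restarts heuristic
# investigated in the paper.
soln = pybobyqa.solve(objfun, x0, bounds=bounds, seek_global_minimum=True)
print(soln.x, soln.f)
```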

Efficient global unconstrained black box optimization

For the unconstrained optimization of black box functions, this paper introduces a new randomized algorithm called VRBBO. In practice, VRBBO matches the quality of other state-of-the-art algorithms for finding, in small and large dimensions, a local minimizer with reasonable accuracy. Although our theory guarantees only local minimizers, our heuristic techniques turn VRBBO into an efficient …
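
For context, here is a minimal caricature of randomized black-box descent: random directions with an opportunistic extrapolating line search. This generic sketch is not the actual VRBBO algorithm, whose direction choices, line searches, and heuristics are considerably more refined.

```python
import numpy as np

def random_direction_search(f, x, n_iters=200, step=1.0, rng=None):
    """Try a random direction; while it improves, double the step;
    otherwise shrink the base step and try a new direction."""
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    for _ in range(n_iters):
        d = rng.standard_normal(len(x))
        d *= step / np.linalg.norm(d)
        fy = f(x + d)
        while fy < fx:            # extrapolate while the direction helps
            x, fx = x + d, fy
            d *= 2.0
            fy = f(x + d)
        step = max(step * 0.7, 1e-8)
    return x, fx

f = lambda z: np.sum((z - 3.0)**2)
print(random_direction_search(f, np.zeros(4)))  # near [3, 3, 3, 3]
```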

A Merit Function Approach for Evolution Strategies

In this paper, we extend a class of globally convergent evolution strategies to handle general constrained optimization problems. The proposed framework handles relaxable constraints using a merit function approach combined with a specific restoration procedure. The unrelaxable constraints in our framework, when present, are treated either by using the extreme barrier function or through a …
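
The two constraint treatments are straightforward to sketch: an extreme barrier that rejects any violation of unrelaxable constraints outright, and a penalty-type merit function for the relaxable ones. A minimal illustration, with the restoration procedure omitted and the penalty weight mu an arbitrary choice:

```python
import numpy as np

def extreme_barrier(f, unrelaxable, x):
    """Extreme barrier: any violated unrelaxable constraint c(x) > 0
    makes the point maximally unattractive to the search."""
    if any(c(x) > 0 for c in unrelaxable):
        return np.inf
    return f(x)

def merit(f, relaxable, x, mu=10.0):
    """Quadratic-penalty merit function for relaxable constraints
    c(x) <= 0; the paper couples this with a restoration procedure
    not shown here."""
    violation = sum(max(c(x), 0.0) ** 2 for c in relaxable)
    return f(x) + mu * violation

f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
relaxable = [lambda x: x[0] + x[1] - 2.0]   # x0 + x1 <= 2 (may be violated)
unrelaxable = [lambda x: -x[0]]             # x0 >= 0 must always hold
x = np.array([0.5, 0.5])
print(extreme_barrier(lambda z: merit(f, relaxable, z), unrelaxable, x))
```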