A view of algorithms for optimization without derivatives

Let the least value of a function of many variables be required. If its gradient is available, then one can tell whether search directions are downhill, and first-order conditions help to identify the solution. It seems in practice, however, that the vast majority of unconstrained calculations do not employ any derivatives. A view of this situation is given, attention being restricted to methods that are suitable for noisy functions, and that change the variables in ways that are not random. Particular topics include the McKinnon (1998) example of the failure of the Nelder and Mead (1965) simplex method, some convergence properties of pattern search algorithms, and my own recent research on using quadratic models of the objective function. We find that least values of functions of more than 100 variables can be calculated.
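As an illustration of the pattern search class discussed in the report, here is a minimal compass (coordinate) search in Python. This is a sketch, not code from the report: the function name `compass_search`, its parameters, and the quadratic test function are all illustrative choices. It polls the 2n coordinate directions and halves the step size after an unsuccessful poll, which is the mesh-contraction mechanism underlying the convergence properties of pattern search methods.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_evals=10_000):
    """Minimize f without derivatives by compass (coordinate) search.

    Polls the 2n points x +/- step * e_i; if no poll point improves
    on f(x), the step size is halved. Stops when the step falls
    below tol or the evaluation budget is exhausted.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    n = x.size
    while step > tol and evals < max_evals:
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                evals += 1
                if ft < fx:          # accept the first improving poll point
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        if not improved:             # unsuccessful poll: contract the mesh
            step *= 0.5
    return x, fx

if __name__ == "__main__":
    # Hypothetical smooth test function with minimizer (1, -2).
    f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
    x, fx = compass_search(f, [0.0, 0.0])
    print(x, fx)
```

The fixed contraction factor of one half is only one choice; the convergence theory for pattern search constrains how the mesh size may be updated, and practical implementations vary in their polling order and expansion rules.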

Citation

M.J.D. Powell. Report DAMTP 2007/NA03, Centre for Mathematical Sciences, University of Cambridge, Cambridge CB3 0WA, UK. April 2007.
