A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling

In this paper we consider the optimization of a functional $F$ defined as the convolution of a function $f$ with a Gaussian kernel. We propose this type of objective function for the optimization of the output of complex computational simulations, which often exhibit some form of deterministic noise and need to be smoothed for the results to be meaningful. We introduce a derivative-free algorithm that computes trial points by minimizing a regression model of the noisy function $f$ over a trust region. The regression model is constructed from function values at sample points chosen randomly around the iterates and trial points of the algorithm. The weights given to the individual sample points in the regression problem are obtained according to an adaptive multiple importance sampling strategy. This has two advantages. First, it makes it possible to re-use all noisy function values collected over the course of the optimization. Second, the resulting regression model converges to the second-order Taylor approximation of the convolution functional $F$. We prove that, with probability one, each limit point of the iterates is a stationary point of $F$. Computational experiments on a set of benchmark problems with noisy functions compare the proposed algorithm with the deterministic derivative-free trust-region method on which it is based. The experiments demonstrate that the proposed algorithm is similarly efficient in the early stages of the optimization and is able to overcome the convergence problems of the original method, which can become trapped in spurious local minima induced by the noise.
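For concreteness, the smoothed objective can be written as the expectation of $f$ under Gaussian perturbations of its argument; the smoothing parameter $\sigma$ below is our notation, as the paper may parameterize the kernel differently:

$$F(x) = (f * \phi_\sigma)(x) = \int_{\mathbb{R}^n} f(x - u)\,\phi_\sigma(u)\,\mathrm{d}u = \mathbb{E}_{u \sim \mathcal{N}(0,\,\sigma^2 I)}\big[f(x + u)\big],$$

where $\phi_\sigma$ denotes the density of $\mathcal{N}(0, \sigma^2 I)$. Since the kernel is symmetric, the substitution $u \mapsto -u$ shows that the integral and the expectation agree.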
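The adaptive multiple importance sampling idea is what allows all past evaluations of $f$ to be re-used: samples drawn from Gaussians centered at earlier iterates can be re-weighted toward the Gaussian centered at the current point. Below is a minimal Python sketch of this re-weighting using the standard balance heuristic (deterministic mixture weights); the paper employs such weights inside a weighted regression problem rather than the plain weighted average shown here, and all function and variable names are illustrative, not taken from the paper.

    import numpy as np
    from scipy.stats import multivariate_normal

    def smoothed_estimate(x, samples, proposals, f_values, sigma):
        """Estimate F(x) = E_{u ~ N(x, sigma^2 I)}[f(u)] by re-weighting
        previously drawn samples with balance-heuristic (mixture) weights."""
        target = multivariate_normal(mean=x, cov=sigma**2 * np.eye(len(x)))
        # Density of the equal-weight mixture of all past proposals.
        mixture = np.mean([p.pdf(samples) for p in proposals], axis=0)
        w = target.pdf(samples) / mixture   # balance-heuristic weights
        w /= w.sum()                        # self-normalize
        return float(np.dot(w, f_values))

    # Toy usage: f is a noisy quadratic, samples come from two past iterates.
    rng = np.random.default_rng(0)
    d, n, sigma = 2, 500, 0.5
    centers = [np.zeros(d), np.ones(d)]
    proposals = [multivariate_normal(mean=c, cov=sigma**2 * np.eye(d)) for c in centers]
    samples = np.vstack([p.rvs(n // 2, random_state=rng) for p in proposals])
    f_values = np.sum(samples**2, axis=1) + 0.01 * rng.standard_normal(n)
    print(smoothed_estimate(np.ones(d), samples, proposals, f_values, sigma))

Self-normalizing the weights keeps the estimate stable when many samples lie far from the current target, at the cost of a small bias that vanishes as the sample set grows.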

Citation

Technical Report, IEMS Department, Northwestern University
