Using tail bounds, we introduce a new probabilistic condition for function estimation in stochastic derivative-free optimization that reduces the number of samples required and simplifies algorithmic analyses. Moreover, we develop simple stochastic direct-search and trust-region methods for the optimization of a potentially non-smooth function whose values can only be estimated via stochastic observations. For trial points to be accepted, these algorithms require the estimated function values to yield a sufficient decrease, measured in terms of a power larger than 1 of the algorithmic stepsize.
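To make the acceptance mechanism concrete, here is a minimal Python sketch of one stochastic direct-search iteration with such a sufficient decrease test. The coordinate poll set, the estimator interface f_est, and the constants c, p, and eps are illustrative assumptions, not the report's exact algorithm.

```python
import math

import numpy as np


def direct_search_step(f_est, x, delta, c=1.0, p=1.5, eps=0.1):
    """One stochastic direct-search iteration (illustrative sketch).

    f_est(x, n) is assumed to average n noisy observations of f at x;
    rho(delta) = c * delta**p with p > 1 is the sufficient decrease
    threshold from the abstract.
    """
    # Per-iteration sample count, O(delta^{-2-eps}) as discussed below.
    n_samples = max(1, math.ceil(delta ** -(2 + eps)))
    dim = x.size
    # Poll the coordinate directions and their negatives.
    directions = np.vstack([np.eye(dim), -np.eye(dim)])
    fx = f_est(x, n_samples)
    for d in directions:
        trial = x + delta * d
        # Accept only if the *estimated* decrease beats c * delta**p.
        if fx - f_est(trial, n_samples) >= c * delta ** p:
            return trial, 2.0 * delta   # successful poll: expand stepsize
    return x, 0.5 * delta               # unsuccessful poll: shrink stepsize


# Usage on a hypothetical noisy quadratic, estimated by sample averaging.
rng = np.random.default_rng(0)
def f_est(x, n):
    return np.mean(x @ x + rng.standard_normal(n))

x, delta = np.array([1.0, -2.0]), 1.0
for _ in range(50):
    x, delta = direct_search_step(f_est, x, delta)
```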
Our new tail-bound condition is imposed precisely on the reduction estimate used to achieve such a sufficient decrease. This condition allows us to select the stepsize power used for sufficient decrease so as to reduce the number of samples needed per iteration. In previous works, the number of samples necessary at every iteration k for global convergence of algorithms of this type was O(Delta_k^{-4}), where Delta_k is the stepsize or trust-region radius. However, using the new tail-bound condition, and under mild assumptions on the noise, one can prove that such a number of samples is only O(Delta_k^{-2-eps}), where eps > 0 can be made arbitrarily small by selecting the power of the stepsize in the sufficient decrease test arbitrarily close to 1. The global convergence properties of the stochastic direct-search and trust-region algorithms are established under the new tail-bound condition.
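A quick numerical illustration of the savings follows; the leading constant and the choice eps = 0.1 are placeholders for illustration only, not values from the report.

```python
import math


def samples_per_iteration(delta, eps=0.1, const=1.0):
    """Compare the classical O(delta^-4) per-iteration sample count with
    the O(delta^{-2-eps}) count under the new tail-bound condition.
    const and eps are illustrative placeholders."""
    classical = math.ceil(const * delta ** -4)
    tail_bound = math.ceil(const * delta ** -(2 + eps))
    return classical, tail_bound


for delta in (0.5, 0.1, 0.01):
    old, new = samples_per_iteration(delta)
    print(f"Delta_k = {delta}: classical {old} vs tail-bound {new}")
```

As Delta_k shrinks, the gap widens rapidly: at Delta_k = 0.01 the classical bound asks for about 10^8 samples while the new one asks for roughly 1.6 * 10^4 under these placeholder constants.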
Citation
F. Rinaldi, L. N. Vicente, and D. Zeffiro, A weak tail-bound probabilistic condition for function estimation in stochastic derivative-free optimization, ISE Technical Report 22T-002, Lehigh University.