Full-low evaluation methods for bound and linearly constrained derivative-free optimization

Derivative-free optimization (DFO) consists of finding the best value of an objective function without relying on derivatives. To tackle such problems, one may build approximate derivatives, for instance through finite-difference estimates. One may also design algorithmic strategies that explore the space and seek improvement over the current point. The first type of strategy often performs well on smooth problems, but at the expense of more function evaluations. The second type is cheaper per iteration and typically handles non-smoothness or noise in the objective better. Recently, full-low evaluation methods have been proposed as a hybrid class of DFO algorithms that combine both strategies, respectively denoted Full-Eval and Low-Eval. In the unconstrained case, these methods have shown promising numerical performance.
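
To make the evaluation cost of the first strategy concrete, here is a minimal sketch (in Python with NumPy, our own choice rather than anything prescribed by the paper) of a forward-difference gradient estimate, the kind of approximation a Full-Eval step builds: it spends n additional function evaluations at every point.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate of f at x.

    Costs n extra evaluations of f on top of f(x), which is why
    derivative-building (Full-Eval) strategies are comparatively expensive.
    """
    n = x.size
    fx = f(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g
```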

In this paper, we extend the full-low evaluation framework to bound and linearly constrained derivative-free optimization. We derive convergence results for an instance of this framework that combines finite-difference quasi-Newton steps with probabilistic direct search steps. The former are projected onto the feasible set, while the latter are defined within tangent cones identified by nearby active constraints. We illustrate the practical performance of our instance on standard linearly constrained problems, which we adapt to introduce noisy evaluations as well as non-smoothness. In all cases, our method performs favorably compared to algorithms that rely solely on Full-Eval or Low-Eval iterations.
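
To give a flavor of how the two step types interact, the sketch below (our own simplified illustration, not the paper's algorithm) runs one hybrid iteration on a bound-constrained problem: it first tries a quasi-Newton step projected onto the box, and if that fails to decrease the objective, it polls coordinate directions lying in the tangent cone of nearby active bounds. The paper's instance instead uses probabilistic polling directions, general linear constraints, and a sufficient-decrease condition; all names and parameters here are hypothetical.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def feasible_poll_dirs(x, lo, hi, tol):
    """Coordinate directions in the tangent cone of nearby active bounds:
    skip +e_i when x_i is within tol of its upper bound, and -e_i when
    x_i is within tol of its lower bound."""
    n = x.size
    dirs = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        if hi[i] - x[i] > tol:
            dirs.append(e)
        if x[i] - lo[i] > tol:
            dirs.append(-e)
    return dirs

def full_low_iteration(f, x, lo, hi, g, H, alpha, tol=1e-8):
    """One hybrid iteration: a projected quasi-Newton (Full-Eval) attempt,
    followed by a direct-search poll (Low-Eval) if no decrease is found."""
    fx = f(x)
    # Full-Eval: quasi-Newton step from a gradient estimate g and a
    # Hessian approximation H, projected onto the feasible box.
    x_full = project_box(x - np.linalg.solve(H, g), lo, hi)
    if f(x_full) < fx:
        return x_full
    # Low-Eval: poll feasible coordinate directions with step size alpha.
    for d in feasible_poll_dirs(x, lo, hi, tol):
        x_poll = project_box(x + alpha * d, lo, hi)
        if f(x_poll) < fx:
            return x_poll
    return x  # unsuccessful iteration; a full method would shrink alpha
```

In a complete method, alpha would be reduced after unsuccessful iterations and (g, H) updated from finite-difference information; the sketch is only meant to convey the division of labor between Full-Eval and Low-Eval steps.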

Citation

C. W. Royer, O. Sohab, and L. N. Vicente, Full-low evaluation methods for bound and linearly constrained derivative-free optimization, ISE Technical Report 23T-025, Lehigh University.
