A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization

We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of \(\mathcal{O}\!\left(n\left(\frac{L}{\sigma}\epsilon\right)^{-2}\right)\) function evaluations for the method to reach an \(\left(\frac{L}{\sigma}\epsilon\right)\)-approximate stationary point, where \(n\) is the number of variables, \(L\) is the Lipschitz constant of the gradient, and \(\sigma\) is a user-defined estimate of \(L\). If the objective function is convex, the complexity of reducing the functional residual below \((L/\sigma)\epsilon\) is shown to be \(\mathcal{O}\!\left(n\left(\frac{L}{\sigma}\epsilon\right)^{-1}\right)\) function evaluations, while for Polyak-Łojasiewicz functions in the unconstrained setting the bound further improves to \(\mathcal{O}\!\left(n\log\left(\left(\frac{L}{\sigma}\epsilon\right)^{-1}\right)\right)\). Numerical experiments on benchmark problems and a model-fitting application demonstrate the method’s efficiency relative to state-of-the-art derivative-free solvers for both unconstrained and bound-constrained problems.
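To make the ingredients of the abstract concrete, the sketch below shows a basic finite-difference trust-region loop for the unconstrained case in Python. It is only an illustration under simplifying assumptions: the names (\texttt{fd\_gradient}, \texttt{sigma}, \texttt{delta0}) and the specific acceptance and radius-update rules are chosen for exposition and are not the algorithm analyzed in the paper, and constraint handling is omitted.

\begin{verbatim}
import numpy as np

def fd_gradient(f, x, h):
    # Forward-difference gradient estimate: costs n + 1 function evaluations.
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g, fx

def fd_trust_region(f, x0, sigma=1.0, delta0=1.0, delta_min=1e-8, max_evals=10_000):
    # Schematic finite-difference trust-region loop (unconstrained case only).
    # sigma mimics the user-supplied estimate of the gradient Lipschitz constant L.
    x = np.asarray(x0, dtype=float)
    delta, evals = delta0, 0
    while evals < max_evals and delta > delta_min:
        h = delta / np.sqrt(x.size)                 # tie the FD step to the radius
        g, fx = fd_gradient(f, x, h)
        evals += x.size + 1
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            break
        # Cauchy step for the model m(s) = fx + g^T s + (sigma/2)||s||^2
        # restricted to the trust region ||s|| <= delta.
        t = min(delta, gnorm / sigma)
        s = -(t / gnorm) * g
        predicted = t * gnorm - 0.5 * sigma * t**2  # model decrease, always > 0 here
        f_trial = f(x + s)
        evals += 1
        rho = (fx - f_trial) / predicted            # actual vs. predicted reduction
        if rho >= 0.1:
            x, delta = x + s, 2.0 * delta           # accept the step, expand radius
        else:
            delta *= 0.5                            # reject the step, shrink radius
    return x, evals

# Example: minimize a smooth quadratic without derivatives.
if __name__ == "__main__":
    f = lambda z: np.sum((z - 1.0) ** 2)
    x_best, used = fd_trust_region(f, np.zeros(5))
    print(x_best, used)
\end{verbatim}

In the convexly constrained case the trial step would instead be chosen over the intersection of the trust region with the feasible set, for instance by projecting candidate points onto bound constraints; the sketch above omits this to keep the iteration structure visible.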