We present an algorithm for the minimization of f : R^n → R, assumed to be locally Lipschitz and continuously differentiable in an open dense subset D of R^n. The objective f may be non-smooth and/or non-convex. The method is based on the gradient sampling (GS) algorithm of Burke et al. [A robust gradient sampling algorithm for nonsmooth, nonconvex optimization, SIAM J. Optim. 15 (2005), pp. 751–779]. It differs, however, from previously proposed versions of GS in that it is variable-metric and only O(1) (not O(n)) gradient evaluations are required per iteration. Numerical experiments illustrate that the algorithm is more efficient than GS in that it consistently makes more progress towards a solution within a given number of gradient evaluations. In addition, the adaptive sampling procedure allows for warm-starting of the quadratic subproblem solver so that the average number of subproblem iterations per nonlinear iteration is also consistently reduced. Global convergence of the algorithm is proved assuming that the Hessian approximations are positive definite and bounded, an assumption shown to be true for the proposed Hessian approximation updating strategies.
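To make the underlying idea concrete, the sketch below implements one step of a basic (non-adaptive, identity-metric) gradient sampling iteration: gradients are sampled at points near the current iterate, the minimum-norm element of their convex hull is found by solving a small quadratic program, and its negative is used as a search direction. This is not the adaptive, variable-metric method of the paper; it only illustrates the classical GS machinery the paper builds on, and all names, sample counts, and parameter values are illustrative assumptions.

```python
# Sketch of one classical gradient-sampling (GS) step (identity metric).
# Illustrative only; not the adaptive, variable-metric algorithm of the paper.
import numpy as np
from scipy.optimize import minimize

def gs_step(f, grad, x, eps=1e-2, m=None, rng=None):
    """One gradient-sampling step for a locally Lipschitz f (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = 2 * n if m is None else m                     # classic GS samples O(n) gradients
    # Sample m points uniformly from the closed eps-ball around x.
    dirs = rng.standard_normal((m, n))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = eps * rng.random(m) ** (1.0 / n)
    pts = x + radii[:, None] * dirs
    # Stack the gradient at x with the sampled gradients.
    G = np.vstack([grad(x)] + [grad(p) for p in pts])  # shape (m+1, n)
    k = G.shape[0]
    # QP: minimize ||G^T lam||^2 subject to lam >= 0, sum(lam) = 1.
    obj = lambda lam: 0.5 * np.dot(G.T @ lam, G.T @ lam)
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, None)] * k,
                   constraints=cons, method='SLSQP')
    d = -(G.T @ res.x)                                 # approximate steepest-descent direction
    # Backtracking (Armijo-type) line search along d.
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t * np.dot(d, d) and t > 1e-12:
        t *= 0.5
    return x + t * d

# Usage example on a simple non-smooth convex function, f(x) = ||x||_1.
if __name__ == "__main__":
    f = lambda x: np.abs(x).sum()
    grad = lambda x: np.sign(x)                        # gradient a.e. (on the dense set D)
    x = np.array([1.0, -2.0])
    for _ in range(20):
        x = gs_step(f, grad, x)
    print(x, f(x))
```

The paper's contribution can be read against this template: the sample count per iteration is reduced to O(1) by reusing and adaptively augmenting previously sampled gradients (which also permits warm-starting the QP solver), and the identity metric above is replaced by bounded, positive-definite Hessian approximations.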
Citation
F. E. Curtis and X. Que, "An Adaptive Gradient Sampling Algorithm for Non-smooth Optimization," Optimization Methods and Software, 28(6): 1302–1324, 2013.