A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid of a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and, together with the adaptive GS strategy, it ensures that convergence to such a point is attained. Under suitable assumptions, it is proved that the algorithm converges globally with probability one. The algorithm has been implemented in C++, and the results of numerical experiments illustrate the efficacy of the proposed approach.
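To make the gradient sampling component concrete, below is a minimal C++ sketch of one GS-style step; it is not the authors' implementation. It samples gradients at randomly perturbed points near the iterate, approximates the minimum-norm element of their convex hull with a few Frank–Wolfe iterations (a simplification; the paper's method solves a quadratic subproblem and hybridizes with BFGS), and backtracks along the negated result. The test function, sampling radius, and all tolerances are illustrative assumptions.

```cpp
// Sketch of one gradient sampling step on the nonsmooth function
// f(x) = |x0| + 2|x1|, which is differentiable almost everywhere.
// All names and parameters here are illustrative, not from the paper.
#include <array>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

using Vec = std::array<double, 2>;

double f(const Vec& x) { return std::fabs(x[0]) + 2.0 * std::fabs(x[1]); }

// Gradient of f, valid wherever f is differentiable (x_i != 0).
Vec grad(const Vec& x) {
    return { x[0] >= 0 ? 1.0 : -1.0, x[1] >= 0 ? 2.0 : -2.0 };
}

// Approximate the minimum-norm element of the convex hull of the sampled
// gradients via Frank-Wolfe iterations on the simplex of weights.
Vec minNormElement(const std::vector<Vec>& g) {
    std::vector<double> w(g.size(), 1.0 / g.size());
    Vec p{0.0, 0.0};
    for (int it = 0; it < 500; ++it) {
        p = {0.0, 0.0};                       // current convex combination
        for (size_t i = 0; i < g.size(); ++i) {
            p[0] += w[i] * g[i][0];
            p[1] += w[i] * g[i][1];
        }
        // Simplex vertex minimizing the linearized objective <p, g_i>.
        size_t best = 0;
        double bestDot = p[0] * g[0][0] + p[1] * g[0][1];
        for (size_t i = 1; i < g.size(); ++i) {
            double d = p[0] * g[i][0] + p[1] * g[i][1];
            if (d < bestDot) { bestDot = d; best = i; }
        }
        double step = 2.0 / (it + 2.0);       // standard Frank-Wolfe step size
        for (size_t i = 0; i < w.size(); ++i) w[i] *= (1.0 - step);
        w[best] += step;
    }
    return p;
}

int main() {
    std::mt19937 rng(42);
    Vec x{1.3, -0.7};
    double eps = 0.1;                         // sampling radius (assumed value)
    for (int k = 0; k < 30; ++k) {
        // Sample gradients at x and at randomly perturbed nearby points.
        std::vector<Vec> g{grad(x)};
        std::uniform_real_distribution<double> u(-eps, eps);
        for (int s = 0; s < 6; ++s)
            g.push_back(grad({x[0] + u(rng), x[1] + u(rng)}));
        Vec d = minNormElement(g);            // search direction is -d
        double nd = std::hypot(d[0], d[1]);
        if (nd < 1e-8) { eps *= 0.5; continue; }  // shrink radius near stationarity
        // Backtracking (Armijo-style) line search along -d, with a cap.
        double t = 1.0;
        for (int bt = 0;
             bt < 50 && f({x[0] - t*d[0], x[1] - t*d[1]}) > f(x) - 1e-4 * t * nd * nd;
             ++bt)
            t *= 0.5;
        x = {x[0] - t*d[0], x[1] - t*d[1]};
        std::printf("k=%2d  f=%.6f\n", k, f(x));
    }
}
```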
Citation
F. E. Curtis and X. Que. A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees. Mathematical Programming Computation, 7(4):399–428, 2015.