An Optimal Interpolation Set for Model-Based Derivative-Free Optimization Methods

This paper demonstrates the optimality of an interpolation set employed in derivative-free trust-region methods. This set is optimal in the sense that it minimizes the constant of well-poisedness in a ball centred at the starting point. It is chosen as the default initial interpolation set by many derivative-free trust-region methods based on underdetermined quadratic interpolation, … Read more
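For concreteness, the default initial set referred to here is, in many implementations based on underdetermined quadratic interpolation (e.g. Powell-style solvers using 2n+1 interpolation points), the base point together with steps of length Δ along each coordinate direction. The sketch below builds such a set; the identification with the particular set studied in the paper is our assumption, and `initial_interpolation_set` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def initial_interpolation_set(x0, delta):
    """Build the classical (2n + 1)-point initial interpolation set:
    the base point x0 together with x0 +/- delta * e_i along each
    coordinate direction e_i, where delta is the initial radius."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    points = [x0.copy()]
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        points.append(x0 + delta * e_i)
        points.append(x0 - delta * e_i)
    return np.array(points)  # shape (2n + 1, n)

# Example: 5 interpolation points in R^2 around the origin with radius 0.5
print(initial_interpolation_set([0.0, 0.0], 0.5))
```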

A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general, in the sense that it combines the possibility of explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a … Read more
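As a rough illustration of the two kinds of iterations mentioned, the sketch below contrasts an explicit subgradient step with an implicit proximal-point step on the weakly convex function f(x) = |x² − 1|; the function, step size, and iteration count are illustrative choices, not the setting analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative weakly convex function: f(x) = |x^2 - 1|
f = lambda x: abs(x**2 - 1.0)

def subgradient_step(x, t):
    """Explicit iteration: move along a subgradient of f at the current iterate."""
    g = np.sign(x**2 - 1.0) * 2.0 * x  # a subgradient of |x^2 - 1|
    return x - t * g

def proximal_step(x, t):
    """Implicit iteration: x+ = argmin_y f(y) + (1/(2t)) (y - x)^2,
    solved numerically for this one-dimensional example."""
    obj = lambda y: f(y) + (y - x)**2 / (2.0 * t)
    return minimize_scalar(obj, bounds=(x - 5.0, x + 5.0), method="bounded").x

x_exp, x_imp, t = 3.0, 3.0, 0.1
for _ in range(50):
    x_exp = subgradient_step(x_exp, t)
    x_imp = proximal_step(x_imp, t)
# Both sequences land near the minimizer at x = 1; the implicit (proximal)
# iterates settle on the kink exactly, the explicit ones hover around it.
print(f"explicit: {x_exp:.4f}, implicit: {x_imp:.4f}")
```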

Escaping strict saddle points of the Moreau envelope in nonsmooth optimization

Recent work has shown that stochastically perturbed gradient methods can efficiently escape strict saddle points of smooth functions. We extend this body of work to nonsmooth optimization by analyzing an inexact analogue of a stochastically perturbed gradient method applied to the Moreau envelope. The main conclusion is that a variety of algorithms for nonsmooth optimization … Read more
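A minimal sketch of the mechanism being analyzed: the gradient of the Moreau envelope is obtained from a proximal point, a small random perturbation is added, and a gradient step is taken. The weakly convex test function f(x1, x2) = |x1| − (1/2) x2² (which has a strict saddle at the origin), the envelope parameter, and the noise scale are our illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Illustrative weakly convex function with a strict saddle at the origin:
#   f(x1, x2) = |x1| - (1/2) * x2**2   (hypothetical example, not from the paper)

lam = 0.5      # Moreau-envelope parameter; must satisfy lam < 1 for this f
step = 0.1     # gradient step size
sigma = 1e-3   # scale of the stochastic perturbation
rng = np.random.default_rng(0)

def prox(x, lam):
    """Proximal point of f at x (computed componentwise, in closed form)."""
    y1 = np.sign(x[0]) * max(abs(x[0]) - lam, 0.0)  # soft-thresholding for |x1|
    y2 = x[1] / (1.0 - lam)                         # prox of -(1/2) x2^2
    return np.array([y1, y2])

def envelope_grad(x, lam):
    """Gradient of the Moreau envelope: (x - prox_{lam f}(x)) / lam."""
    return (x - prox(x, lam)) / lam

x = np.zeros(2)  # start exactly at the strict saddle (0, 0)
for _ in range(60):
    g = envelope_grad(x, lam) + sigma * rng.standard_normal(2)  # perturbed gradient
    x = x - step * g
# The x2 coordinate has drifted away from 0: the perturbation lets the
# iterates escape the strict saddle of the envelope, while x1 stays near 0.
print(x)
```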