Globally Solving the Trust Region Subproblem Using Simple First-Order Methods

We consider the trust region subproblem, which is the minimization of a quadratic, not necessarily convex, function over the Euclidean ball. Based on the well-known second-order necessary and sufficient optimality conditions for this problem, we present two sufficient optimality conditions defined solely in terms of the primal variables. Each of these conditions corresponds to one of the two possible scenarios that occur in this problem, commonly referred to in the literature as the presence or absence of the "hard case." We consider a family of first-order methods, which includes the projected and conditional gradient methods, and show that any method in this family generates a sequence guaranteed to converge to a stationary point of the trust region subproblem. Based on this result and the established sufficient optimality conditions, we show that convergence to an optimal solution can also be guaranteed as long as the method is properly initialized. In particular, if the method is initialized with the zero vector and reinitialized with a randomly generated feasible point, then the better of the two resulting vectors is an optimal solution of the problem with probability 1.
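As a concrete illustration of the scheme described in the abstract, below is a minimal Python sketch of the projected gradient method with the zero-vector/random-point double initialization. The objective normalization f(x) = 0.5·xᵀAx + bᵀx, the fixed step size 1/L with L the spectral norm of A, the stopping rule, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def projected_gradient_trs(A, b, radius=1.0, max_iter=5000, tol=1e-8):
    """Projected gradient sketch for min 0.5*x^T A x + b^T x s.t. ||x|| <= radius.

    Assumes A is symmetric (possibly indefinite); objective form and step
    size are illustrative choices, not necessarily the paper's setup.
    """
    n = b.shape[0]
    L = np.linalg.norm(A, 2)  # spectral norm: Lipschitz constant of the gradient
    t = 1.0 / L               # fixed step size (an assumed choice)

    def run(x):
        for _ in range(max_iter):
            g = A @ x + b                 # gradient of the quadratic
            y = x - t * g                 # gradient step
            nrm = np.linalg.norm(y)
            if nrm > radius:              # project onto the Euclidean ball
                y = (radius / nrm) * y
            if np.linalg.norm(y - x) <= tol:
                return y
            x = y
        return x

    f = lambda x: 0.5 * x @ (A @ x) + b @ x

    # Double initialization described in the abstract: run from the zero
    # vector, rerun from a random feasible point, keep the better result.
    x_zero = run(np.zeros(n))
    z = np.random.standard_normal(n)
    z *= radius * np.random.rand() ** (1.0 / n) / np.linalg.norm(z)  # uniform in the ball
    x_rand = run(z)
    return x_zero if f(x_zero) <= f(x_rand) else x_rand

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = (M + M.T) / 2   # symmetric, possibly indefinite quadratic term
    b = rng.standard_normal(5)
    x = projected_gradient_trs(A, b)
    print(x, 0.5 * x @ (A @ x) + b @ x)
```

Note that any random feasible point would do for the second run; uniform sampling in the ball is just one convenient choice for this sketch.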