Convergence of trust-region methods based on probabilistic models

In this paper we consider the use of probabilistic or random models within a classical trust-region framework for the optimization of deterministic, smooth, general nonlinear functions. Our method and setting differ from many stochastic optimization approaches in two principal ways. First, we assume that the value of the function itself can be computed without noise, in … Read more

The Trust Region Subproblem with Non-Intersecting Linear Constraints

This paper studies an extended trust region subproblem (eTRS) in which the unit-ball trust region is intersected with m linear inequality constraints. When m=0, m=1, or m=2 and the linear constraints are parallel, it is known that the eTRS optimal value equals the optimal value of a particular convex relaxation, which is solvable in polynomial … Read more
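For reference, the eTRS described above can be written (in illustrative notation of our own, not necessarily the paper's) as

$$\min_{x\in\mathbb{R}^n}\ x^T A x + 2b^T x \quad\text{s.t.}\quad \|x\|\le 1,\ \ c_i^T x \le d_i,\ \ i=1,\dots,m,$$

so that m = 0 recovers the classical trust-region subproblem, and the abstract's claim concerns when a convex relaxation of this nonconvex program has the same optimal value.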

MSS: MATLAB software for L-BFGS trust-region subproblems for large-scale optimization

A MATLAB implementation of the More’-Sorensen sequential (MSS) method is presented. The MSS method computes the minimizer of a quadratic function defined by a limited-memory BFGS matrix subject to a two-norm trust-region constraint. This solver is an adaptation of the More’-Sorensen direct method into an L-BFGS setting for large-scale optimization. The MSS method makes use … Read more
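To make the underlying idea concrete, here is a minimal dense sketch of the Moré-Sorensen iteration: a safeguarded Newton root-finding on the secular equation that drives the step norm to the trust-region radius. This is illustrative only; the paper's MSS solver instead exploits the compact structure of limited-memory BFGS matrices for large-scale problems, and the so-called hard case is not handled here.

```python
import numpy as np

def more_sorensen(B, g, delta, tol=1e-8, max_iter=100):
    """Minimize g^T p + (1/2) p^T B p subject to ||p|| <= delta.

    Dense sketch of the More-Sorensen Newton iteration on the secular
    equation; the hard case (g orthogonal to the leftmost eigenspace)
    is deliberately not handled in this illustration.
    """
    n = len(g)
    eigmin = np.linalg.eigvalsh(B)[0]
    if eigmin > 0:
        p = np.linalg.solve(B, -g)
        if np.linalg.norm(p) <= delta:
            return p  # unconstrained minimizer is already feasible
    lam_floor = max(0.0, -eigmin) + 1e-10  # keep B + lam*I positive definite
    lam = lam_floor
    for _ in range(max_iter):
        L = np.linalg.cholesky(B + lam * np.eye(n))
        p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        norm_p = np.linalg.norm(p)
        if abs(norm_p - delta) <= tol * delta:
            break
        q = np.linalg.solve(L, p)
        # Newton step on 1/delta - 1/||p(lam)|| = 0
        lam += (norm_p / np.linalg.norm(q)) ** 2 * (norm_p - delta) / delta
        lam = max(lam, lam_floor)
    return p
```

The Newton update on the reciprocal-norm form of the secular equation is what gives the classical method its fast local convergence in the easy case.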

Adaptive Regularized Self-Consistent Field Iteration with Exact Hessian for Electronic Structure Calculation

The self-consistent field (SCF) iteration has been used ubiquitously for solving the Kohn-Sham (KS) equation or the minimization of the KS total energy functional with respect to orthogonality constraints in electronic structure calculations. Although SCF with heuristics such as charge mixing often works remarkably well on many problems, it is well known that its convergence … Read more

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

We consider the minimization of a convex function on a compact polyhedron defined by linear equality constraints and nonnegative variables. We define the Levenberg-Marquardt (L-M) and central trajectories starting at the analytic center and using the same parameter, and show that they satisfy a primal-dual relationship, being close to each other for large values of … Read more

Scalable Nonlinear Programming Via Exact Differentiable Penalty Functions and Trust-Region Newton Methods

We present an approach for nonlinear programming (NLP) based on the direct minimization of an exact differentiable penalty function using trust-region Newton techniques. As opposed to existing algorithmic approaches to NLP, the approach provides all the features required for scalability: it can efficiently detect and exploit directions of negative curvature, it is superlinearly convergent, and … Read more

On optimizing the sum of the Rayleigh quotient and the generalized Rayleigh quotient on the unit sphere

Given symmetric matrices $B,D\in R^{n\times n}$ and a symmetric positive definite matrix $W\in R^{n\times n},$ maximizing the sum of the Rayleigh quotient $x^T Dx$ and the generalized Rayleigh quotient $x^T Bx/x^TWx$ on the unit sphere not only is of mathematical interest in its own right, but also finds applications in practice. In this paper, we … Read more
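As a rough illustration of the problem just described (with our own notation, not the paper's method), the sketch below evaluates $f(x) = x^T D x + x^T B x / x^T W x$ and runs a simple projected-gradient ascent on the unit sphere, tracking the best iterate seen; the paper itself develops a dedicated analysis for this nonconvex problem.

```python
import numpy as np

def obj(x, B, D, W):
    # f(x) = x^T D x + (x^T B x) / (x^T W x), defined for x != 0 with W SPD
    return x @ D @ x + (x @ B @ x) / (x @ W @ x)

def obj_grad(x, B, D, W):
    w, b = x @ W @ x, x @ B @ x
    # quotient rule for the generalized Rayleigh term
    return 2 * D @ x + (2 * (B @ x) * w - 2 * (W @ x) * b) / w**2

def maximize_on_sphere(B, D, W, x0, step=1e-2, iters=2000):
    """Naive projected-gradient ascent on the unit sphere.

    Illustrative only: it keeps the best iterate seen and makes no
    claim of finding the global maximizer of this nonconvex problem.
    """
    x = x0 / np.linalg.norm(x0)
    best, best_val = x, obj(x, B, D, W)
    for _ in range(iters):
        g = obj_grad(x, B, D, W)
        g_tan = g - (g @ x) * x        # project gradient onto tangent space
        x = x + step * g_tan
        x = x / np.linalg.norm(x)      # retract back to the sphere
        val = obj(x, B, D, W)
        if val > best_val:
            best, best_val = x, val
    return best, best_val
```

Because the feasible set is the sphere, the only geometry needed for this sketch is tangent-space projection followed by renormalization.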

Global Convergence of Radial Basis Function Trust Region Derivative-Free Algorithms

We analyze globally convergent derivative-free trust region algorithms relying on radial basis function interpolation models. Our results extend the recent work of Conn, Scheinberg, and Vicente to fully linear models that have a nonlinear term. We characterize the types of radial basis functions that fit in our analysis and thus show global convergence to first-order … Read more
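The models in question pair a linear polynomial tail with a nonlinear radial term. A minimal sketch of such an interpolant, using the cubic kernel as one of the radial basis functions that fits this mold (our choice for illustration, not a statement of the paper's requirements):

```python
import numpy as np

def rbf_interpolant(X, fvals):
    """Cubic RBF interpolant with a linear polynomial tail.

    Illustrates the "fully linear model with a nonlinear term" shape:
    s(x) = sum_i lam_i * ||x - x_i||^3 + c^T [x; 1].
    Requires the points to be unisolvent for linear polynomials.
    """
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = r**3                               # cubic radial kernel matrix
    P = np.hstack([X, np.ones((n, 1))])      # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([fvals, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    lam, c = coef[:n], coef[n:]

    def s(x):
        rx = np.linalg.norm(X - x, axis=1)
        return lam @ rx**3 + c @ np.append(x, 1.0)

    return s
```

The zero block and the side constraint $P^T \lambda = 0$ are what make the saddle system nonsingular for conditionally positive definite kernels such as the cubic.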

A surrogate management framework using rigorous trust-region steps

Surrogate models and heuristics are frequently used in the engineering optimization community as convenient approaches to deal with functions whose evaluations are expensive or noisy, or that lack convexity. These methodologies typically do not guarantee any type of convergence under reasonable assumptions and often converge slowly. In this paper we will show how to … Read more

A Note on the Implementation of an Interior-Point Algorithm for Nonlinear Optimization with Inexact Step Computations

This paper describes an implementation of an interior-point algorithm for large-scale nonlinear optimization. It is based on the algorithm proposed by Curtis et al. (SIAM J Sci Comput 32:3447–3475, 2010), a method that possesses global convergence guarantees to first-order stationary points with the novel feature that inexact search direction calculations are allowed in order to … Read more