A shifted Steihaug-Toint method for computing a trust-region step.

Trust-region methods are very convenient in connection with the Newton method for unconstrained optimization. The Moré-Sorensen direct method and the Steihaug-Toint iterative method are most commonly used for solving trust-region subproblems. We propose a method that combines both of these approaches. Using the small-size Lanczos matrix, we apply the Moré-Sorensen method to a small-size trust-region …
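
For orientation, here is a minimal sketch of the classical (unshifted) Steihaug-Toint truncated conjugate-gradient iteration for the subproblem min_p g^T p + (1/2) p^T H p subject to ||p|| <= Delta; the shifted variant and the Moré-Sorensen step on the Lanczos matrix described above are not reproduced here.

```python
import numpy as np

def steihaug_toint(H, g, delta, tol=1e-8, max_iter=None):
    """Truncated CG (Steihaug-Toint) for min g'p + 0.5 p'Hp, ||p|| <= delta.
    Classical unshifted variant; illustrative sketch only."""
    n = g.size
    max_iter = max_iter or 2 * n
    p = np.zeros(n)
    r = -g.copy()          # residual = -(Hp + g)
    d = r.copy()           # CG search direction
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Hd = H @ d
        curv = d @ Hd
        if curv <= 0:
            # negative curvature: follow d to the trust-region boundary
            return p + _tau_to_boundary(p, d, delta) * d
        alpha = (r @ r) / curv
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # step leaves the region: truncate at the boundary
            return p + _tau_to_boundary(p, d, delta) * d
        r_next = r - alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = r_next + beta * d
        p, r = p_next, r_next
    return p

def _tau_to_boundary(p, d, delta):
    # positive root of ||p + tau*d|| = delta
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```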

A modified nearly exact method for solving low-rank trust region subproblem

In this paper we present a modified nearly exact (MNE) method for solving the low-rank trust-region (LRTR) subproblem. The LRTR subproblem is to minimize a quadratic function, whose Hessian is a positive diagonal matrix plus an explicit low-rank update, subject to a Dikin-type ellipsoidal constraint, whose scaling matrix is positive definite and has a similar structure …
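
In notation assumed here rather than taken from the paper, the LRTR subproblem has the form

```latex
\min_{x \in \mathbb{R}^n} \; g^{T}x + \tfrac{1}{2}\, x^{T}\!\left(D + U C U^{T}\right) x
\qquad \text{s.t.} \qquad x^{T} B x \le \Delta^{2},
```

where D is positive diagonal, U C U^T is the explicit low-rank update (U has few columns), and the scaling matrix B of the Dikin-type ellipsoid is positive definite with a similar diagonal-plus-low-rank structure.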

A limited memory algorithm for inequality constrained minimization

A method for solving inequality constrained minimization problems is described. The algorithm is based on a primal-dual interior point approach, with a line search globalization strategy. A quasi-Newton technique (BFGS) with limited memory storage is used to approximate the second derivatives of the functions. The method is especially intended for solving problems with a large …
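
As a reminder of the limited-memory BFGS machinery such a method relies on (an illustrative sketch, not the paper's implementation), the standard two-loop recursion applies the implicit inverse-Hessian approximation to a vector using only the m most recent pairs (s_i, y_i):

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns H_k @ grad, where
    H_k is built from stored pairs s_i = x_{i+1}-x_i, y_i = g_{i+1}-g_i
    (lists ordered oldest to newest, assumed nonempty)."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # initial scaling gamma_k = s'y / y'y (a common choice)
    s, y = s_list[-1], y_list[-1]
    r = (s @ y) / (y @ y) * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return r  # the quasi-Newton search direction is -r
```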

Gradient Projection Methods for Quadratic Programs and Applications in Training Support Vector Machines

Gradient projection methods based on the Barzilai-Borwein spectral steplength choices are considered for quadratic programming problems with simple constraints. Well-known nonmonotone spectral projected gradient methods and variable projection methods are discussed. For both approaches the behavior of different combinations of the two spectral steplengths is investigated. A new adaptive steplength alternating rule is proposed, …
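
A minimal sketch of a projected gradient iteration with the two Barzilai-Borwein steplengths for a box-constrained quadratic program (the paper's adaptive alternating rule is not reproduced; here the two choices simply alternate):

```python
import numpy as np

def bb_projected_gradient(A, b, lo, hi, x0, iters=200):
    """Projected gradient for min 0.5 x'Ax - b'x s.t. lo <= x <= hi,
    alternating the two Barzilai-Borwein steplengths.  Illustrative
    sketch only; robust codes add nonmonotone line searches and
    safeguards on the steplength."""
    proj = lambda x: np.clip(x, lo, hi)
    x = proj(x0)
    g = A @ x - b
    alpha = 1.0
    for k in range(iters):
        x_new = proj(x - alpha * g)
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 0:
            # BB1 = s's / s'y, BB2 = s'y / y'y, used alternately
            alpha = (s @ s) / sy if k % 2 == 0 else sy / (y @ y)
        x, g = x_new, g_new
    return x
```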

SIAG/Opt Views-and-News Vol 14 No 1

SIAM’s SIAG/Opt Newsletter special issue on Large Scale Nonconvex Optimization. Guest editors: Sven Leyffer and Jorge Nocedal, with contributions by Gould, Sachs, Biegler, Waechter, Leyffer, Bussieck, and Pruessner. Citation: SIAG/Opt Views-and-News, Volume 14, Number 1, April 2003. http://fewcal.uvt.nl/sturm/siagopt/

A globally convergent linearly constrained Lagrangian method for nonlinear optimization

For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints”. Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. The well-known software package MINOS has proven effective on many …
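
In generic notation (assumed here), the k-th LCL subproblem for minimizing f(x) subject to c(x) = 0 takes the form

```latex
\min_{x}\; f(x) - y_k^{T} c(x) + \tfrac{\rho_k}{2}\, \| c(x) \|_2^{2}
\qquad \text{s.t.} \qquad c(x_k) + J(x_k)\,(x - x_k) = 0,
```

where y_k is the current multiplier estimate, rho_k >= 0 is a penalty parameter, and J is the Jacobian of the constraints.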

Semidefinite optimization, a spectral approach

This thesis is about mathematical optimization. Mathematical optimization involves the construction of methods to solve optimization problems, which arise when real-life problems in applied science are mathematically modeled. Examples come from electrical design, engineering, control theory, telecommunication, environment, finance, and logistics. This thesis deals especially with semidefinite optimization problems. Semidefinite programming is …
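
For reference, the standard primal form of a semidefinite program is

```latex
\min_{X \in \mathbb{S}^{n}} \; \langle C, X \rangle
\qquad \text{s.t.} \qquad \langle A_i, X \rangle = b_i, \; i = 1, \dots, m, \qquad X \succeq 0,
```

where the inner product is the trace inner product, \(\langle C, X \rangle = \operatorname{trace}(CX)\), and \(X \succeq 0\) means X is symmetric positive semidefinite.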

Interior-Point Methods for Nonconvex Nonlinear Programming: Complementarity Constraints

In this paper, we present the formulation and solution of optimization problems with complementarity constraints using an interior-point method for nonconvex nonlinear programming. We identify possible difficulties that could arise, such as unbounded faces of dual variables, linear dependence of constraint gradients and initialization issues. We suggest remedies. We include encouraging numerical results on the …
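
In generic notation (assumed here, not the paper's), a mathematical program with complementarity constraints has the form

```latex
\min_{x}\; f(x) \qquad \text{s.t.} \qquad c(x) \ge 0, \qquad 0 \le x_1 \perp x_2 \ge 0,
```

where \(x_1 \perp x_2\) abbreviates \(x_1^{T} x_2 = 0\). Such problems violate standard constraint qualifications at every feasible point, which is why interior-point codes commonly relax the complementarity condition to \(x_{1,i}\, x_{2,i} \le \theta\) componentwise and drive the parameter \(\theta\) to zero.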

A Comparative Study of Large-Scale Nonlinear Optimization Algorithms

In recent years, much work has been done on implementing a variety of algorithms in nonlinear programming software. In this paper, we analyze the performance of several state-of-the-art optimization codes on large-scale nonlinear optimization problems. Extensive numerical results are presented on different classes of problems, and features of each code that make it efficient or …

Automatic Differentiation Tools in Optimization Software

We discuss the role of automatic differentiation tools in optimization software. We emphasize issues that are important to large-scale optimization and that have proved useful in the installation of nonlinear solvers in the NEOS Server. Our discussion centers on the computation of the gradient and Hessian matrix for partially separable functions and shows that the …
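
As an illustration of partial separability (a toy example assumed for exposition, not drawn from the paper): a function f(x) = sum_i f_i(x[S_i]) whose element functions each depend on only a few variables, so the full gradient is assembled by scattering small element gradients:

```python
import numpy as np

def grad_partially_separable(x, elements):
    """Assemble the gradient of f(x) = sum_i f_i(x[idx_i]) from element
    gradients.  `elements` is a list of (idx, grad_fi) pairs, where
    grad_fi maps the small subvector x[idx] to its gradient."""
    g = np.zeros_like(x)
    for idx, grad_fi in elements:
        g[idx] += grad_fi(x[idx])   # scatter the element gradient
    return g

# Chained-Rosenbrock-style example: f = sum_i 100(x_{i+1}-x_i^2)^2 + (x_i-1)^2,
# each element depending only on the pair (x_i, x_{i+1}).
n = 5
elements = [
    (np.array([i, i + 1]),
     lambda v: np.array([400 * v[0] * (v[0]**2 - v[1]) + 2 * (v[0] - 1),
                         -200 * (v[0]**2 - v[1])]))
    for i in range(n - 1)
]
x = np.zeros(n)
print(grad_partially_separable(x, elements))
```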