A quasisecant method for minimizing nonsmooth functions

In this paper, a new algorithm for the local minimization of nonsmooth, nonconvex functions is developed. We introduce the notions of secants and quasisecants for nonsmooth functions; the quasisecants are used to find descent directions of locally Lipschitz functions. We design a minimization algorithm built on these quasisecant-based descent directions. We prove that this algorithm converges …
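
For context, the following is a hedged sketch of how these objects are usually formalized; the notation (direction g on the unit sphere S_1, length h > 0) is assumed here and may differ from the paper's. A secant matches the difference quotient exactly, while a quasisecant only has to overestimate it:

```latex
\[
  \text{secant:}\quad      f(x + hg) - f(x) =   h\,\langle v, g\rangle,
  \qquad
  \text{quasisecant:}\quad f(x + hg) - f(x) \le h\,\langle v, g\rangle,
\]
\[
  \text{for a locally Lipschitz } f,\quad x \in \mathbb{R}^n,\ g \in S_1,\ h > 0 .
\]
```

A descent direction can then be sought from a collection of quasisecants at the current point, in the spirit of bundle methods; the precise rule used in the paper is not reproduced here.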

A practical method for solving large-scale TRS

We present a nearly-exact method for the large-scale trust-region subproblem (TRS) based on the properties of the minimal-memory BFGS method. Our study concentrates on the case where the initial BFGS matrix can be any scaled identity matrix. The proposed method is a variant of the Moré-Sorensen method that exploits the eigenstructure of …
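
As background (standard material, not specific to this paper), the trust-region subproblem and its classical global-optimality characterization read, in the usual notation (g gradient, B symmetric model Hessian, Δ radius):

```latex
\[
  \min_{p \in \mathbb{R}^n}\; g^{\top} p + \tfrac{1}{2}\, p^{\top} B p
  \quad \text{subject to} \quad \|p\| \le \Delta ,
\]
\[
  p^* \text{ is a global solution iff } \exists\, \lambda \ge 0:\;
  (B + \lambda I)\, p^* = -g,\quad
  \lambda\,(\Delta - \|p^*\|) = 0,\quad
  B + \lambda I \succeq 0 .
\]
```

Moré-Sorensen-type methods search for this λ via factorizations of B + λI; how the eigenstructure of the minimal-memory BFGS matrix is exploited for that search is the subject of the paper and is not sketched here.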

Interior-point method for nonlinear programming with complementarity constraints

In this report, we propose an algorithm for solving nonlinear programming problems with complementarity constraints, based on the interior-point approach. The main theoretical results concern direction determination and step-length selection. We use an exact penalty function to remove the complementarity constraints; thus a new indefinite linear system is defined, with a tridiagonal lower-right submatrix. Inexact …
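
One standard way such an exact-penalty reformulation of complementarity constraints is written (whether it matches the report's exact formulation is an assumption) is to move the bilinear complementarity term into the objective:

```latex
\[
  \min_x\; f(x) \quad \text{s.t.} \quad h(x) = 0,\; g(x) \ge 0,\;
  0 \le G(x) \perp H(x) \ge 0
\]
\[
  \leadsto \qquad
  \min_x\; f(x) + \rho\, G(x)^{\top} H(x) \quad \text{s.t.} \quad
  h(x) = 0,\; g(x) \ge 0,\; G(x) \ge 0,\; H(x) \ge 0 ,
\]
```

with a penalty parameter ρ > 0; the remaining inequality constraints can then be handled by the interior-point machinery.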

Transformations enabling to construct limited-memory Broyden class methods

The Broyden class of quasi-Newton updates for inverse Hessian approximation is transformed to the formal BFGS update, which makes it possible to generalize the well-known Nocedal method based on the Strang recurrences to the scaled limited-memory Broyden family, using the same number of stored vectors as for the limited-memory BFGS method. Two variants are given, the …
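
For reference, the Nocedal method mentioned above is the classical L-BFGS two-loop recursion built on the Strang recurrences. The sketch below shows only that standard BFGS version (with a scaled initial matrix H_0 = gamma * I); the paper's transformation to the scaled Broyden family is not reproduced:

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list, gamma=1.0):
    """Classical L-BFGS two-loop recursion (Strang recurrences).

    Returns r ~= H_k @ grad, where H_k is the inverse-Hessian approximation
    implied by the stored pairs (s_i, y_i), oldest first, and the scaled
    initial matrix H_0 = gamma * I.  The paper's Broyden-family variants
    are not implemented here.
    """
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: run from the newest pair to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q -= alpha * y
    r = gamma * q                                  # apply H_0 = gamma * I
    # Second loop: run from the oldest pair back to the newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * y.dot(r)
        r += (alpha - beta) * s
    return r                                       # search direction: -r
```

The stored vector pairs (s_i, y_i) are what the abstract refers to when comparing storage with limited-memory BFGS.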

Limited-memory projective variable metric methods for unconstrained minimization

A new family of limited-memory variable metric or quasi-Newton methods for unconstrained minimization is given. The methods are based on a positive definite inverse Hessian approximation in the form of the sum of the identity matrix and two low-rank matrices, obtained by the standard scaled Broyden class update. To reduce the rank of the matrices, various …

Computational experience with modified conjugate gradient methods for unconstrained optimization

In this report, several modifications of the nonlinear conjugate gradient method are described and investigated. Theoretical properties of these modifications are proved and their practical performance is demonstrated using extensive numerical experiments. Citation: Technical report No. 1038, Institute of Computer Science, Pod Vodarenskou Vezi 2, 18207 Praha 8, December 2008.
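
The report's specific modifications are not reproduced here; as a reminder of what is being modified, the basic nonlinear conjugate gradient iteration and the most common choices of the parameter β are:

```latex
\[
  x_{k+1} = x_k + \alpha_k d_k, \qquad
  d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0,
\]
\[
  \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
  \beta_k^{PR} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{\|g_k\|^2}, \qquad
  \beta_k^{HS} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{d_k^{\top}(g_{k+1} - g_k)} ,
\]
```

where g_k denotes the gradient at x_k and α_k is obtained by a line search.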

An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective's Hessian. A worst-case complexity analysis in …
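
For orientation (a standard description of adaptive cubic regularisation, not a quotation from the paper), each iteration approximately minimizes a cubic model of the objective, here written for the unconstrained case:

```latex
\[
  m_k(s) \;=\; f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s
           + \tfrac{\sigma_k}{3}\, \|s\|^3 ,
\]
```

where σ_k > 0 is the adaptive regularisation weight that replaces an explicit trust-region radius; the constrained variant studied in the paper additionally restricts the step so that x_k + s stays in the convex feasible set.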

On solving trust-region and other regularised subproblems in optimization

The solution of trust-region and regularisation subproblems which arise in unconstrained optimization is considered. Building on the pioneering work of Gay, Moré and Sorensen, methods which obtain the solution of a sequence of parametrized linear systems by factorization are used. Enhancements using high-order polynomial approximation and inverse iteration ensure that the resulting method is both …
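
The sketch below illustrates the basic building block such methods iterate on: Newton's method for the secular equation φ(λ) = 1/‖p(λ)‖ − 1/Δ = 0 via Cholesky factorizations of B + λI, i.e. the "easy case" of the classical Gay/Moré-Sorensen approach. The safeguarding, hard case, high-order polynomial approximation and inverse iteration described in the paper are deliberately omitted:

```python
import numpy as np
from scipy.linalg import cholesky, cho_solve, solve_triangular

def trs_secular_newton(B, g, delta, lam=0.0, tol=1e-10, max_iter=50):
    """Newton iteration on the secular equation for the trust-region
    subproblem: find lam >= 0 with ||p(lam)|| = delta, where
    (B + lam*I) p(lam) = -g.  Assumes B + lam*I stays safely positive
    definite (the 'easy case'); no safeguarding is included.
    """
    n = len(g)
    p = np.zeros(n)
    for _ in range(max_iter):
        L = cholesky(B + lam * np.eye(n), lower=True)   # B + lam*I = L L^T
        p = cho_solve((L, True), -g)                    # p(lam)
        p_norm = np.linalg.norm(p)
        if lam == 0.0 and p_norm <= delta:
            break                                       # interior solution suffices
        if abs(p_norm - delta) <= tol * delta:
            break                                       # boundary solution found
        q = solve_triangular(L, p, lower=True)          # ||q||^2 = p^T (B+lam I)^{-1} p
        # Standard More'-Sorensen Newton correction for the secular equation.
        lam = max(lam + (p_norm / np.linalg.norm(q))**2
                        * (p_norm - delta) / delta, 0.0)
    return p, lam
```

This is only meant to make "a sequence of parametrized linear systems solved by factorization" concrete; the paper's contribution lies in the enhancements built around this basic iteration.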

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

We propose a numerical algorithm for solving smooth nonlinear programming problems with a large number of constraints, but a moderate number of variables. The active set method proceeds from a given bound mw for the maximum number of expected violated constraints, where mw is a user-provided parameter less than the total number of constraints. A …
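
Purely as an illustration of the working-set idea of keeping at most mw constraints in play (a generic sketch, not the authors' algorithm; solve_reduced_nlp is a hypothetical placeholder for a solver applied to the reduced problem):

```python
import numpy as np

def bounded_working_set_loop(x0, objective, constraints, mw, solve_reduced_nlp,
                             tol=1e-8, max_outer=100):
    """Generic outer loop that only ever exposes at most mw constraints to
    the inner solver.  'constraints' is a list of callables c_i, feasibility
    meaning c_i(x) >= 0; 'solve_reduced_nlp' is a hypothetical placeholder
    and not part of the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        violation = np.array([max(-c(x), 0.0) for c in constraints])
        if violation.max() <= tol:
            return x                              # all constraints satisfied
        worst = np.argsort(violation)[::-1][:mw]  # indices of the mw worst
        working_set = [constraints[i] for i in worst]
        x = solve_reduced_nlp(objective, working_set, x)
    return x
```

The point of such a strategy is that each subproblem involves at most mw constraints, even when the full problem has hundreds of millions of them.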

The Constant Rank Condition and Second Order Constraint Qualifications

The Constant Rank condition for feasible points of nonlinear programming problems was defined by Janin in Ref. 1. In that paper, the author proved that the condition is a first-order constraint qualification. In this work, we prove that the Janin Constant Rank condition is, in addition, a second-order constraint qualification. We also define …
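
For readers unfamiliar with the condition, here is a hedged statement of Janin's constant rank constraint qualification in standard NLP notation (the paper's exact notation may differ): with inequality constraints g_i(x) ≤ 0, equality constraints h_j(x) = 0, j = 1,…,p, and active set A(x*) = {i : g_i(x*) = 0},

```latex
\[
\begin{aligned}
  &\text{CRCQ at } x^*:\ \text{for every } J \subseteq A(x^*)
     \text{ and every } K \subseteq \{1,\dots,p\},\\
  &\qquad \operatorname{rank}\bigl(\{\nabla g_i(x)\}_{i \in J}
     \cup \{\nabla h_j(x)\}_{j \in K}\bigr)
     \text{ is constant for all } x \text{ in a neighbourhood of } x^* .
\end{aligned}
\]
```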