Using Partial Separability of Functions in Generating Set Search Methods for Unconstrained Optimisation

Generating Set Search (GSS) methods, a class of derivative-free methods for unconstrained optimisation, are in general robust but converge slowly. It has been shown that the performance of these methods can be enhanced by utilising accumulated information about the objective function as well as a priori knowledge such as partial separability. This paper introduces a …
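
A minimal sketch of how partial separability can be exploited inside a GSS-type method, using compass search (a basic GSS instance) for concreteness. The element representation as (f_i, S_i) pairs and all names below are illustrative assumptions, not the paper's algorithm: since f(x) = sum_i f_i(x[S_i]) with small index sets S_i, a poll step along coordinate j only requires re-evaluating the elements whose index set contains j.

```python
import numpy as np

def compass_search_ps(elements, x0, step=1.0, tol=1e-6, max_polls=10_000):
    """elements: list of (f_i, S_i) pairs; f_i maps the subvector x[S_i] to R."""
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    # cache per-element values so unchanged elements are never recomputed
    vals = np.array([f(x[S]) for f, S in elements])
    # which elements each coordinate touches
    touching = [[i for i, (_, S) in enumerate(elements) if j in S]
                for j in range(n)]
    polls = 0
    while step > tol and polls < max_polls:
        improved = False
        for j in range(n):
            for s in (+step, -step):
                xj_old = x[j]
                x[j] = xj_old + s
                # only elements touching coordinate j need re-evaluation
                new_vals = {i: elements[i][0](x[elements[i][1]])
                            for i in touching[j]}
                delta = sum(new_vals[i] - vals[i] for i in touching[j])
                if delta < 0:                  # accept improving poll step
                    for i, v in new_vals.items():
                        vals[i] = v
                    improved = True
                    break
                x[j] = xj_old                  # reject, restore coordinate
            polls += 1
        if not improved:
            step *= 0.5                        # contract the step size
    return x, vals.sum()
```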

Second-order convergence properties of trust-region methods using incomplete curvature information, with an application to multigrid optimization

Convergence properties of trust-region methods for unconstrained nonconvex optimization are considered in the case where information on the objective function’s local curvature is incomplete, in the sense that it may be restricted to a fixed set of “test directions” and may not be available at every iteration. It is shown that convergence to local “weak” …
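
As a hedged illustration of the setting, the sketch below performs one trust-region step whose model curvature comes only from finite-difference probes along a fixed set of test directions D (assumed orthonormal here, so the reduced model is diagonal). It is a didactic simplification with made-up parameter choices, not the paper's algorithm.

```python
import numpy as np

def tr_step_incomplete(f, grad, x, D, delta, h=1e-5):
    """One trust-region step; curvature is probed only along the columns of D."""
    g = grad(x)
    # finite-difference estimates of d^T H d along each test direction
    curv = np.array([((grad(x + h * d) - g) @ d) / h for d in D.T])
    gD = D.T @ g
    # minimize the reduced (diagonal) model over span(D)
    a = np.zeros(D.shape[1])
    for i in range(D.shape[1]):
        if curv[i] > 0:
            a[i] = -gD[i] / curv[i]             # unconstrained 1-d minimizer
        else:
            a[i] = -np.sign(gD[i]) * delta      # nonconvex direction: go to boundary
    s = D @ a
    ns = np.linalg.norm(s)
    if ns > delta:
        s *= delta / ns                         # project back into the trust region
    # standard acceptance test on actual vs. predicted reduction
    pred = -(gD @ a + 0.5 * (curv * a**2).sum())
    rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
    return (x + s, 2.0 * delta) if rho > 0.1 else (x, 0.5 * delta)
```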

On the behavior of the conjugate-gradient method on ill-conditioned problems

We study the behavior of the conjugate-gradient method for solving a set of linear equations, where the matrix is symmetric and positive definite with one set of eigenvalues that are large while the remaining eigenvalues are small. We characterize the behavior of the residuals associated with the large eigenvalues throughout the iterations, and also characterize the …
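
A small numerical experiment in the spirit of the abstract (the eigenvalue clusters and sizes are arbitrary choices for illustration): run plain CG on an SPD system with a few large eigenvalues and many small ones, and print the norms of the residual components associated with each group.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5
eigs = np.concatenate([np.full(k, 1e4) + rng.uniform(0, 10, k),  # large cluster
                       rng.uniform(1e-2, 1.0, n - k)])           # small eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)

x = np.zeros(n); r = b.copy(); p = r.copy()
for it in range(60):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    # residual components in the eigenbasis, split by eigenvalue size
    c = Q.T @ r
    print(it, np.linalg.norm(c[:k]), np.linalg.norm(c[k:]))
```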

An efficient conjugate direction method with orthogonalization for large-scale quadratic optimization problems

A new conjugate direction method is proposed, which is based on an orthogonalization procedure and does not make use of line searches for the construction of the conjugate vector set. This procedure prevents the set of conjugate vectors from degenerating and eliminates the high sensitivity to computational errors typical of conjugate direction methods, resulting in an efficient …
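
For orientation, here is a generic sketch of orthogonalization-based conjugation on a quadratic: each new direction is the current residual with its components along all earlier conjugate directions explicitly stripped off, and on a quadratic the step length is available in closed form, so no line search is needed. This illustrates the general idea, not the paper's specific procedure.

```python
import numpy as np

def conjugate_directions_gs(A, b, x0, tol=1e-10):
    """Minimize 1/2 x^T A x - b^T x by Gram-Schmidt A-conjugation."""
    x = np.asarray(x0, dtype=float).copy()
    P, AP = [], []                       # stored directions and their A-images
    for _ in range(b.size):
        r = b - A @ x                    # negative gradient of the quadratic
        if np.linalg.norm(r) < tol:
            break
        p = r.copy()
        for pj, Apj in zip(P, AP):       # strip components along earlier p_j
            p -= (r @ Apj) / (pj @ Apj) * pj
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)       # closed-form step, no line search
        x += alpha * p
        P.append(p); AP.append(Ap)
    return x
```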

Trust-region interior-point method for large sparse l_1 optimization

In this paper, we propose an interior-point method for large sparse l_1 optimization. After a short introduction, the complete algorithm is introduced and some implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions, so that nonconvex problems can also be solved successfully. The results of computational experiments given in this …
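
The paper's nonlinear interior-point algorithm is not reproduced here; as a hedged illustration of the problem class, the snippet below solves the linear special case min_x ||Ax - b||_1 with an off-the-shelf interior-point LP solver, via the standard reformulation min 1^T t subject to -t <= Ax - b <= t. The data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 40, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

c = np.concatenate([np.zeros(n), np.ones(m)])          # minimize sum of t_i
A_ub = np.block([[A, -np.eye(m)],                      #  Ax - b <= t
                 [-A, -np.eye(m)]])                    #  b - Ax <= t
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m          # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs-ipm")
x_l1 = res.x[:n]
print("l1 residual:", np.abs(A @ x_l1 - b).sum())
```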

Sequential Subspace Optimization Method for Large-Scale Unconstrained Problems

We present the Sequential Subspace Optimization (SESOP) method for large-scale smooth unconstrained problems. At each iteration we search for a minimum of the objective function over a subspace spanned by the current gradient and by the directions of a few previous steps. We also include in this subspace the direction from the starting point to the …
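
A minimal SESOP-style loop, assuming only a smooth f and its gradient: at each outer iteration the objective is minimized over the subspace spanned by the current gradient, a few previous steps, and the direction from the starting point. The inner minimization is delegated to a generic solver here, whereas the actual method uses a dedicated inner solver, so this is an illustrative sketch only.

```python
import numpy as np
from scipy.optimize import minimize

def sesop(f, grad, x0, mem=2, iters=50):
    """Minimize f over a sequence of low-dimensional subspaces."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    steps = []
    for _ in range(iters):
        dirs = [grad(x)]
        dirs += steps[-mem:]                 # a few previous steps
        dirs.append(x - x0)                  # direction from the starting point
        D = np.array([d for d in dirs if np.linalg.norm(d) > 0]).T
        if D.size == 0:
            break
        # inner problem: minimize phi(a) = f(x + D a) over the small vector a
        phi = lambda a: f(x + D @ a)
        dphi = lambda a: D.T @ grad(x + D @ a)
        a = minimize(phi, np.zeros(D.shape[1]), jac=dphi, method="BFGS").x
        s = D @ a
        x = x + s
        steps.append(s)
    return x
```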

Support Vector Machine via Sequential Subspace Optimization

We present an optimization engine for large-scale pattern recognition using the Support Vector Machine (SVM). Our treatment is based on conversion of the soft-margin SVM constrained optimization problem to an unconstrained form, solved using the newly developed Sequential Subspace Optimization (SESOP) method. SESOP is a general tool for large-scale smooth unconstrained optimization. At each iteration …
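
A hedged sketch of one common unconstrained reformulation of the soft-margin SVM, here with the squared hinge loss so the objective is smooth; the paper's exact conversion may differ.

```python
import numpy as np

# f(w) = 1/2 ||w||^2 + C * sum_i max(0, 1 - y_i <x_i, w>)^2
def svm_objective(w, X, y, C):
    viol = np.maximum(1.0 - y * (X @ w), 0.0)
    return 0.5 * w @ w + C * (viol ** 2).sum()

def svm_gradient(w, X, y, C):
    viol = np.maximum(1.0 - y * (X @ w), 0.0)
    return w - 2.0 * C * X.T @ (viol * y)

# Any smooth unconstrained solver applies, e.g. the SESOP sketch above:
#   w_star = sesop(lambda w: svm_objective(w, X, y, C),
#                  lambda w: svm_gradient(w, X, y, C),
#                  np.zeros(X.shape[1]))
```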

A New Low Rank Quasi-Newton Update Scheme for Nonlinear Programming

A new quasi-Newton scheme for updating a low-rank positive semi-definite Hessian approximation is described, primarily for use in sequential quadratic programming methods for nonlinear programming. Where possible, the symmetric rank-one update formula is used, but when this is not possible a new rank-two update is used, which is not in the Broyden …
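
The paper's new rank-two formula is not given in this excerpt, so the sketch below only illustrates the surrounding logic: apply the symmetric rank-one (SR1) update when it provably preserves positive semidefiniteness, and otherwise fall back to a damped BFGS update, which is a stand-in for the paper's fallback, not its formula.

```python
import numpy as np

def update_hessian(B, s, y, eps=1e-8):
    """Update a PSD Hessian approximation B with step s and gradient change y."""
    r = y - B @ s
    denom = r @ s
    # SR1 adds a PSD rank-one term exactly when denom > 0
    if denom > eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B + np.outer(r, r) / denom
    # fallback: damped BFGS (Powell damping) keeps the update PSD;
    # the paper uses a different, non-Broyden rank-two formula here
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sBs <= eps:                         # degenerate low-rank case: skip update
        return B
    theta = 1.0 if sy >= 0.2 * sBs else (0.8 * sBs) / (sBs - sy)
    yt = theta * y + (1.0 - theta) * Bs    # damped secant target
    return B - np.outer(Bs, Bs) / sBs + np.outer(yt, yt) / (s @ yt)
```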

An Extension of the Conjugate Directions Method With Orthogonalization to Large-Scale Problems With Bound Constraints

In our reports at GAMM-04 and ECCOMAS-04 we presented a new conjugate directions method for large-scale unconstrained minimization problems. The high efficiency of this method is ensured by an orthogonalization procedure: when constructing the next conjugate vector, one uses the component of the gradient that is orthogonal to the subspace of the preceding …
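
As a hedged illustration of how a conjugate-directions method can be carried over to bound constraints, the sketch below follows the generic active-set pattern: fix the variables at active bounds, run a conjugate-direction inner loop (plain CG here) on the free variables, and restart whenever a bound is hit. This shows the general pattern, not the paper's scheme.

```python
import numpy as np

def clip(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def projected_cd(A, b, lo, hi, x0, outer=20, tol=1e-8):
    """min 1/2 x^T A x - b^T x  s.t.  lo <= x <= hi, A symmetric positive definite."""
    x = clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(outer):
        g = A @ x - b
        # a bound is active when the descent direction points out of the box
        active = ((x <= lo) & (g > 0)) | ((x >= hi) & (g < 0))
        free = ~active
        if np.linalg.norm(g[free]) < tol:
            break
        # subproblem on the free variables, active variables held fixed
        Af = A[np.ix_(free, free)]
        bf = b[free] - A[np.ix_(free, active)] @ x[active]
        xf = x[free].copy()
        r = bf - Af @ xf; p = r.copy()
        for _ in range(free.sum()):
            Ap = Af @ p
            alpha = (r @ r) / (p @ Ap)
            step = xf + alpha * p
            # stop at the first bound hit; the outer loop re-detects the active set
            if np.any(step < lo[free]) or np.any(step > hi[free]):
                xf = clip(step, lo[free], hi[free]); break
            xf = step
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        x[free] = xf
    return x
```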

On the convergence rate of the Cauchy algorithm in the l2 norm

This paper presents a convergence rate for the sequence generated by the Cauchy algorithm. The method is applied to a convex quadratic function with exact line search. Instead of using the norm induced by the Hessian matrix, q-linear convergence is shown in the l2 (or Euclidean) norm. Citation: Technical Report, Dep. Mathematics, Federal University …
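
For context, the classical rate for the Cauchy (steepest-descent) algorithm with exact line search on a convex quadratic f(x) = (1/2) x^T A x - b^T x is stated in the norm induced by the Hessian. A sketch of that classical statement follows; the paper's contribution is an analogous q-linear bound in the l2 norm.

```latex
% Classical q-linear rate in the A-norm (Kantorovich bound), where
% \lambda_{\max}, \lambda_{\min} are the extreme eigenvalues of A:
\|x_{k+1} - x^*\|_A \;\le\;
  \frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}
  \,\|x_k - x^*\|_A,
\qquad \|z\|_A := \sqrt{z^\top A z}.
```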