A Proximal Point Algorithm with phi-Divergence to Quasiconvex Programming

We use the proximal point method with the $\phi$-divergence given by $\phi(t) = t - \log t - 1$ for the minimization of quasiconvex functions subject to nonnegativity constraints. We establish that the sequence generated by our algorithm is well-defined in the sense that it exists and it is not cyclical. Without any assumption of … Read more
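
As a rough illustration of the kind of scheme this abstract describes, the sketch below takes proximal steps regularized by the $\phi$-divergence $d_\phi(x, y) = \sum_i y_i\,\phi(x_i/y_i)$ (one common convention; the paper may scale or define it differently), solving each subproblem numerically with SciPy. The test function, step parameter, and bounds are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def phi_div(x, y):
    # d_phi(x, y) = sum_i y_i * phi(x_i / y_i), with phi(t) = t - log(t) - 1
    t = x / y
    return np.sum(y * (t - np.log(t) - 1.0))

def prox_step(f, xk, lam):
    # One step: x^{k+1} in argmin_{x > 0} f(x) + lam * d_phi(x, x^k).
    # The divergence acts as a barrier, keeping iterates in the
    # interior of the nonnegative orthant.
    obj = lambda x: f(x) + lam * phi_div(x, xk)
    return minimize(obj, xk, bounds=[(1e-10, None)] * len(xk)).x

# Illustrative quasiconvex objective (its level sets are balls)
c = np.array([2.0, 1.0])
f = lambda x: -1.0 / (1.0 + np.sum((x - c) ** 2))

x = np.ones(2)
for _ in range(30):
    x = prox_step(f, x, lam=0.5)
print(x)  # tends toward the minimizer (2, 1)
```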

Variational Problems in Quasi-Newton Methods

It has been known since the early 1970s that the Hessian matrices in quasi-Newton methods can be updated by variational means, in several different ways. The usual formulation of these variational problems uses a coordinate system, and the symmetry of the Hessian matrices is enforced through explicit constraints. As a result, the variational problems seem … Read more
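
For context, the classical example of such a variational characterization is the BFGS update, whose least-change problem is posed in terms of the inverse Hessian approximation: the new matrix is the closest one (in a weighted norm) satisfying the secant condition and symmetry. A minimal sketch of the resulting closed-form update, with illustrative names:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of a Hessian approximation B.

    B is symmetric positive definite, s = x_{k+1} - x_k, and
    y = grad f(x_{k+1}) - grad f(x_k).  The formula is the closed-form
    solution of a weighted least-change problem (posed for the inverse
    of B) subject to the secant condition B_plus @ s = y and symmetry.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Quick check of the secant condition on synthetic data
rng = np.random.default_rng(0)
H_true = np.diag([1.0, 2.0, 3.0])   # SPD, so the curvature condition y@s > 0 holds
s = rng.standard_normal(3)
y = H_true @ s
B_plus = bfgs_update(np.eye(3), s, y)
print(np.allclose(B_plus @ s, y))   # True: secant condition satisfied
```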

Computing Minimum Volume Enclosing Axis-Aligned Ellipsoids

Given a set of points $\mathcal{S} = \{x^1,\ldots,x^m\} \subset \mathbb{R}^n$ and $\varepsilon > 0$, we propose and analyze an algorithm for the problem of computing a $(1 + \varepsilon)$-approximation to the minimum volume axis-aligned ellipsoid enclosing $\mathcal{S}$. We establish that our algorithm is polynomial for fixed $\varepsilon$. In addition, the algorithm returns a small … Read more
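
The paper's approximation algorithm is not reproduced here, but for orientation, the axis-aligned problem admits a convex reformulation that small instances can be solved with directly: writing the ellipsoid as $\{x : \sum_i d_i (x_i - c_i)^2 \le 1\}$ and substituting $b_i = d_i c_i$ makes the containment constraints jointly convex in $(d, b)$. A sketch using CVXPY (an illustrative tool choice, not the paper's method):

```python
import numpy as np
import cvxpy as cp

def axis_aligned_mvee(X):
    """Minimum-volume axis-aligned enclosing ellipsoid of the rows of X.

    Ellipsoid model: {x : sum_i d_i (x_i - c_i)^2 <= 1}.  With b = d * c
    the containment constraints are jointly convex in (d, b), and the
    volume is proportional to prod_i d_i^(-1/2).
    """
    m, n = X.shape
    d = cp.Variable(n, pos=True)   # inverse squared semi-axis lengths
    b = cp.Variable(n)             # b_i = d_i * c_i
    cons = [
        cp.sum(cp.multiply(d, x ** 2)) - 2 * b @ x
        + sum(cp.quad_over_lin(b[i], d[i]) for i in range(n)) <= 1
        for x in X
    ]
    cp.Problem(cp.Minimize(-cp.sum(cp.log(d))), cons).solve()
    center = b.value / d.value
    semi_axes = 1.0 / np.sqrt(d.value)
    return center, semi_axes

pts = np.random.default_rng(1).standard_normal((20, 3))
center, semi_axes = axis_aligned_mvee(pts)
print(center, semi_axes)
```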

Numerical Experience with a Recursive Trust-Region Method for Multilevel Nonlinear Optimization

We consider an implementation of the recursive multilevel trust-region algorithm proposed by Gratton, Sartenaer, Toint (2004), and provide significant numerical experience on multilevel test problems. A suitable choice of the algorithm’s parameters is identified on these problems, yielding a very satisfactory compromise between reliability and efficiency. The resulting default algorithm is then compared to alternative … Read more

An Extension of the Proximal Point Method for Quasiconvex Minimization

In this paper we propose an extension of the proximal point method to solve minimization problems with quasiconvex objective functions on the Euclidean space and the nonnegative orthant. For the unconstrained minimization problem, assuming that the function is bounded from below and lower semicontinuous, we prove that the iterations $\{x^k\}$ given by $0 \in \hat{\partial}\big(f(\cdot) + (\lambda_k/2)\,\|\cdot - x^{k-1}\|^2\big)(x^k)$ are … Read more
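
Concretely, each iteration solves the quadratically regularized subproblem whose optimality condition the abstract states. A minimal sketch, realizing each step with a numerical minimizer on an illustrative quasiconvex test function (the parameter choices are assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, lam=1.0, iters=40):
    # Each iterate solves min_z f(z) + (lam/2) * ||z - x_{k-1}||^2,
    # the subproblem whose optimality condition the abstract states.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        obj = lambda z, xk=x: f(z) + 0.5 * lam * np.sum((z - xk) ** 2)
        x = minimize(obj, x).x
    return x

# Illustrative quasiconvex function: an increasing transform of a convex one
target = np.array([3.0, -1.0])
f = lambda z: np.log1p(np.sum((z - target) ** 2))
print(proximal_point(f, np.zeros(2)))  # approaches (3, -1)
```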

Convergence analysis of Riemannian trust-region methods

A general scheme for trust-region methods on Riemannian manifolds is proposed and analyzed. Among the various approaches available to (approximately) solve the trust-region subproblems, particular attention is paid to the truncated conjugate-gradient technique. The method is illustrated on problems from numerical linear algebra.
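
The truncated conjugate-gradient technique the abstract highlights is, in its Euclidean form, the Steihaug-Toint iteration sketched below; in the Riemannian setting the vectors live in a tangent space and the inner products and Hessian action are Riemannian. A sketch with simplified stopping rules:

```python
import numpy as np

def truncated_cg(g, hess_vec, delta, tol=1e-8, max_iters=100):
    """Steihaug-Toint truncated CG for min g's + 0.5 s'Hs, ||s|| <= delta."""
    s = np.zeros_like(g)
    r = g.copy()          # residual g + H s, starting from s = 0
    d = -r                # first search direction
    for _ in range(max_iters):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0:      # negative curvature: follow d to the boundary
            return s + _to_boundary(s, d, delta) * d
        alpha = (r @ r) / dHd
        if np.linalg.norm(s + alpha * d) >= delta:
            return s + _to_boundary(s, d, delta) * d
        s = s + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) <= tol:
            return s
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return s

def _to_boundary(s, d, delta):
    # positive tau with ||s + tau * d|| = delta
    a, b, c = d @ d, 2 * (s @ d), s @ s - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

# Toy quadratic model: g = (1, 1), H = diag(1, 10), radius 0.4
H = np.diag([1.0, 10.0])
print(truncated_cg(np.ones(2), lambda v: H @ v, 0.4))
```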

An Accelerated Newton Method for Equations with Semismooth Jacobians and Nonlinear Complementarity Problems: Extended Version

We discuss local convergence of Newton's method to a singular solution $x^*$ of the nonlinear equations $F(x) = 0$, for $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$. It is shown that an existing proof by Griewank, concerning linear convergence to a singular solution $x^*$ from a starlike domain around $x^*$ for $F$ twice Lipschitz continuously differentiable and $x^*$ satisfying … Read more

On the divergence of line search methods

We discuss the convergence of line search methods for minimization. We explain how Newton’s method and the BFGS method can fail even if the restrictions of the objective function to the search lines are strictly convex functions, the level sets of the objective functions are compact, the line searches are exact and the Wolfe conditions … Read more

Perturbed projections and subgradient projections for the multiple-sets split feasibility problem

We study the multiple-sets split feasibility problem, which requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation is closest to another family of closed convex sets in the image space. By casting the problem into an equivalent problem in … Read more
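
For a concrete picture, one standard way to attack this problem (related to, though simpler than, the perturbed and subgradient projection methods the paper develops) is gradient descent on a weighted sum of squared distances to the two families of sets. A sketch with hypothetical projection callables:

```python
import numpy as np

def msfp_gradient_method(A, proj_C, proj_Q, x0, step=0.1, iters=500):
    """Gradient sketch for the multiple-sets split feasibility problem.

    proj_C / proj_Q are lists of projection operators onto the closed
    convex sets C_i (source space) and Q_j (image space).  Each step
    descends the proximity function
        p(x) = 0.5 * sum_i ||x - P_Ci(x)||^2
             + 0.5 * sum_j ||A x - P_Qj(A x)||^2.
    The step size should be tied to ||A||^2 for convergence; the fixed
    value here is an illustrative assumption.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = sum(x - P(x) for P in proj_C)
        grad = grad + sum(A.T @ (A @ x - Q(A @ x)) for Q in proj_Q)
        x = x - step * grad
    return x

# Toy instance: one box in each space
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = [lambda x: np.clip(x, 0.0, 1.0)]
proj_Q = [lambda y: np.clip(y, 0.5, 2.0)]
print(msfp_gradient_method(A, proj_C, proj_Q, np.zeros(2)))
```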

Recognizing Underlying Sparsity in Optimization

Exploiting sparsity is essential to improve the efficiency of solving large optimization problems. We present a method for recognizing the underlying sparsity structure of a nonlinear partially separable problem, and show how the sparsity of the Hessian matrices of the problem’s functions can be improved by performing a nonsingular linear transformation in the space corresponding … Read more
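
To see the idea on a toy case: a function like $(x_1 + x_2)^2 + x_3^2$ couples $x_1$ and $x_2$ in its Hessian, but the nonsingular change of variables $y = Px$ with $y_1 = x_1 + x_2$ makes the Hessian diagonal. A small numerical check (finite-difference Hessian; all names illustrative):

```python
import numpy as np

def hessian(f, x, h=1e-5):
    # Forward-difference approximation of the Hessian of f at x
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i], np.eye(n)[j]
            H[i, j] = (f(x + h * e_i + h * e_j) - f(x + h * e_i)
                       - f(x + h * e_j) + f(x)) / h ** 2
    return H

f = lambda x: (x[0] + x[1]) ** 2 + x[2] ** 2        # dense 2x2 Hessian block
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])                     # y = P x, y1 = x1 + x2
g = lambda y: f(np.linalg.solve(P, y))              # f in transformed variables

x0 = np.ones(3)
print(np.round(hessian(f, x0), 3))                  # off-diagonal coupling
print(np.round(hessian(g, P @ x0), 3))              # (nearly) diagonal
```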