Riemannian Interior Point Methods for Constrained Optimization on Manifolds

We extend the classical primal-dual interior point method from the Euclidean setting to the Riemannian one. Our method, named the Riemannian interior point method (RIPM), solves Riemannian constrained optimization problems. We establish its local superlinear and quadratic convergence under standard assumptions. Moreover, we show its global convergence when it is combined with … Read more
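For orientation, the Euclidean primal-dual interior point method that RIPM generalizes applies Newton's method to a perturbed KKT system; a hedged sketch with inequality constraints $g(x) \ge 0$, multipliers $z$, and barrier parameter $\mu > 0$ (notation assumed here, not taken from the abstract) is

$$\nabla f(x) - \nabla g(x)\, z = 0, \qquad Z\, g(x) = \mu e, \qquad g(x) > 0, \; z > 0,$$

where $Z = \mathrm{diag}(z)$ and $e = (1,\dots,1)^{\top}$; the Riemannian version replaces $\nabla f$ and $\nabla g$ with their Riemannian gradients and carries out the Newton step in the tangent space.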

A Constraint-Reduced Algorithm for Semidefinite Optimization Problems with Superlinear Convergence

Constraint reduction is an important technique because it can substantially reduce the computational cost of interior point methods. Park and O’Leary proposed a constraint-reduced predictor-corrector algorithm for semidefinite programming with polynomial global convergence, but they did not show its superlinear convergence. We first develop a constraint-reduced algorithm for semidefinite programming having both polynomial global … Read more

A Trust Region Algorithm with a Worst-Case Iteration Complexity of ${\cal O}(\epsilon^{-3/2})$ for Nonconvex Optimization

We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $\bar\epsilon \in (0,\infty)$, the algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $\epsilon \in (0,\bar\epsilon]$. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for … Read more

On local convergence of the method of alternating projections

The method of alternating projections is a classical tool to solve feasibility problems. Here we prove local convergence of alternating projections between subanalytic sets $A, B$ under a mild regularity hypothesis on one of the sets. We show that the speed of convergence is $\mathcal{O}(k^{-\rho})$ for some $\rho\in(0,\infty)$. Citation: Université de Toulouse, Institut de Mathématiques, December … Read more
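As an illustration of the underlying iteration, here is a minimal sketch with two simple sets (a circle and a line), not the subanalytic sets treated in the paper:

```python
import numpy as np

# Minimal sketch of the method of alternating projections (illustrative sets,
# not the subanalytic setting of the paper): A is the unit circle in R^2,
# B is the horizontal line y = 0.5, so A and B intersect.

def proj_circle(x):
    """Project x onto the unit circle {||x|| = 1}."""
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0])

def proj_line(x):
    """Project x onto the line {y = 0.5}."""
    return np.array([x[0], 0.5])

x = np.array([2.0, 2.0])            # arbitrary starting point
for k in range(100):
    x = proj_line(proj_circle(x))   # alternate the two projections
print(x)                            # approaches a point of the intersection
```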

Algebraic rules for quadratic regularization of Newton’s method

In this work we propose a class of quasi-Newton methods to minimize a twice differentiable function with Lipschitz continuous Hessian. These methods are based on the quadratic regularization of Newton’s method, with explicit algebraic rules for computing the regularizing parameter. The convergence properties of this class of methods are analysed. We show that if the … Read more
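A minimal sketch of a quadratically regularized Newton step follows; the choice of the regularization parameter below is a placeholder and is not the algebraic rule proposed in the paper.

```python
import numpy as np

def regularized_newton(grad, hess, x, iters=50, L_est=1.0, tol=1e-10):
    """Sketch of a quadratically regularized Newton iteration: solve
    (H(x) + sigma*I) d = -g(x). The rule sigma = sqrt(L_est * ||g(x)||)
    is a placeholder, not the paper's algebraic rules."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        sigma = np.sqrt(L_est * np.linalg.norm(g))
        d = np.linalg.solve(H + sigma * np.eye(len(x)), -g)
        x = x + d
    return x

# Example: minimize f(x) = sum(x**4) + 0.5*||x||^2
f_grad = lambda x: 4 * x**3 + x
f_hess = lambda x: np.diag(12 * x**2 + 1)
print(regularized_newton(f_grad, f_hess, np.array([2.0, -1.5])))
```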

Abstract Newtonian Frameworks and Their Applications

We unify and extend some Newtonian iterative frameworks developed earlier in the literature, which results in a collection of convenient tools for local convergence analysis of various algorithms under various sets of assumptions including strong metric regularity, semistability, or upper-Lipschitz stability, the latter allowing for nonisolated solutions. These abstract schemes are further applied for deriving … Read more

Nonmonotone Filter Method for Nonlinear Optimization

We propose a new nonmonotone filter method to promote global and fast local convergence for sequential quadratic programming algorithms. Our method uses two filters: a global g-filter for global convergence, and a local nonmonotone l-filter that allows us to establish fast local convergence. We show how to switch between the two filters efficiently, and we … Read more
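For context, here is a hedged sketch of the generic filter acceptance test that filter methods build on; the paper's g-filter/l-filter construction and switching rule are not reproduced here.

```python
# A filter stores pairs (h, f) = (constraint violation, objective value).
# A trial point is acceptable if it is not dominated by any stored pair,
# up to a small margin gamma (generic rule; the paper's two-filter
# switching logic is not reproduced here).

def acceptable(filter_entries, h_new, f_new, gamma=1e-5):
    for h, f in filter_entries:
        if h_new >= (1 - gamma) * h and f_new >= f - gamma * h:
            return False        # dominated by an existing entry
    return True

def add_to_filter(filter_entries, h_new, f_new):
    # Remove entries that the new pair dominates, then insert it.
    kept = [(h, f) for (h, f) in filter_entries
            if not (h_new <= h and f_new <= f)]
    kept.append((h_new, f_new))
    return kept

flt = [(0.5, 3.0), (0.1, 5.0)]
print(acceptable(flt, 0.05, 4.0))   # True: sufficiently improves violation
flt = add_to_filter(flt, 0.05, 4.0)
print(flt)
```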

A Derivative-Free Algorithm for Least-Squares Minimization

We develop a framework for a class of derivative-free algorithms for the least-squares minimization problem. These algorithms are based on polynomial interpolation models and are designed to take advantage of the problem structure. Under suitable conditions, we establish the global convergence and local quadratic convergence properties of these algorithms. Promising numerical results indicate the algorithm … Read more
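A minimal sketch of the kind of ingredient such frameworks use: linear interpolation models of the residuals built from function values only, combined into a Gauss–Newton-like step (illustrative only; sample-set management, trust regions, and the paper's structure-exploiting details are omitted).

```python
import numpy as np

def linear_models(residual, x, pts):
    """Fit linear models r_i(x + s) ~ r_i(x) + J_hat[i] @ s from sampled
    residual values only (no derivatives), by solving the interpolation
    conditions on the sample set `pts`."""
    r0 = residual(x)
    S = np.array([p - x for p in pts])            # n x n displacement matrix
    R = np.array([residual(p) - r0 for p in pts])
    J_hat = np.linalg.solve(S, R).T               # rows are model gradients
    return r0, J_hat

# Example: residuals of a small nonlinear least-squares problem
residual = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - x[0]**2)])
x = np.array([0.0, 0.0])
pts = [x + 1e-4 * e for e in np.eye(2)]           # simple sample set
r0, J_hat = linear_models(residual, x, pts)
step = np.linalg.lstsq(J_hat, -r0, rcond=None)[0] # Gauss-Newton-like step
print(x + step)
```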

Convergence analysis of Riemannian trust-region methods

A general scheme for trust-region methods on Riemannian manifolds is proposed and analyzed. Among the various approaches available to (approximately) solve the trust-region subproblems, particular attention is paid to the truncated conjugate-gradient technique. The method is illustrated on problems from numerical linear algebra. Citation: 19 June 2006 … Read more
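A hedged Euclidean sketch of the truncated (Steihaug–Toint) conjugate-gradient routine for the trust-region subproblem $\min_s\, g^{\top}s + \tfrac12 s^{\top}Hs$ subject to $\|s\| \le \Delta$ follows; the Riemannian method applies this in the tangent space with the Riemannian gradient and Hessian.

```python
import numpy as np

def truncated_cg(H, g, delta, tol=1e-8, max_iter=100):
    """Steihaug-Toint truncated CG for min g@s + 0.5*s@H@s, ||s|| <= delta
    (Euclidean sketch; the Riemannian method works in a tangent space)."""
    s = np.zeros_like(g)
    r, d = g.copy(), -g.copy()
    for _ in range(max_iter):
        Hd = H @ d
        curv = d @ Hd
        if curv <= 0:                                # negative curvature:
            return s + _to_boundary(s, d, delta) * d # follow d to the boundary
        alpha = (r @ r) / curv
        if np.linalg.norm(s + alpha * d) >= delta:   # step leaves the region
            return s + _to_boundary(s, d, delta) * d
        s = s + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return s
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return s

def _to_boundary(s, d, delta):
    """Positive tau with ||s + tau*d|| = delta."""
    a, b, c = d @ d, 2 * s @ d, s @ s - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

H = np.array([[4.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])
print(truncated_cg(H, g, delta=0.5))
```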

Local Analysis of the Feasible Primal-Dual Interior-Point Method

In this paper we analyze the rate of local convergence of the Newton primal-dual interior-point method when the iterates are kept strictly feasible with respect to the inequality constraints. It is shown under the classical conditions that the rate is q-quadratic when the functions associated with the inequality constraints define a locally concave feasible region. … Read more
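For reference, a hedged sketch of the Newton system a feasible primal-dual method solves at each iteration for $\min f(x)$ subject to $c(x) \ge 0$, with multipliers $z > 0$, $C(x) = \mathrm{diag}(c(x))$, $Z = \mathrm{diag}(z)$, and barrier parameter $\mu$ (notation assumed, not taken from the paper):

$$
\begin{pmatrix} \nabla^2_{xx} L(x,z) & -\nabla c(x) \\ Z\,\nabla c(x)^{\top} & C(x) \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta z \end{pmatrix}
= -\begin{pmatrix} \nabla f(x) - \nabla c(x)\, z \\ C(x)\,z - \mu e \end{pmatrix},
$$

with strict feasibility of the iterates maintained by a fraction-to-the-boundary step-length rule.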