All roads lead to Newton: Feasible second-order methods for equality-constrained optimization

This paper considers the connection between the intrinsic Riemannian Newton method and other more classically inspired optimization algorithms for equality-constrained optimization problems. We consider the feasibly-projected sequential quadratic programming (FP-SQP) method and show that it yields the same update step as the Riemannian Newton method, subject to a minor assumption on the choice of multiplier vector. … Read more
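
As a point of reference (an illustrative sketch of our own, not the paper's algorithm), the Python snippet below takes Riemannian Newton steps for the simplest equality-constrained instance, minimizing f(x) = 0.5 x'Ax on the unit sphere {x : x'x = 1}; all names are ours, and the final renormalization is the retraction that plays the role of the feasibility projection in an FP-SQP-type step.

    import numpy as np

    def riemannian_newton_step(A, x):
        # Projector onto the tangent space of the unit sphere at x
        P = np.eye(len(x)) - np.outer(x, x)
        grad = P @ (A @ x)                     # Riemannian gradient of f(x) = 0.5*x'Ax
        lam = x @ A @ x                        # multiplier-like curvature term
        # Riemannian Hessian on the tangent space, padded with x*x' so the
        # n-by-n system is nonsingular while the computed step stays tangent
        H = P @ A @ P - lam * P + np.outer(x, x)
        v = np.linalg.solve(H, -grad)          # Newton direction, tangent to the sphere
        x_new = x + v
        return x_new / np.linalg.norm(x_new)   # retraction back onto the feasible set

    A = np.diag([3.0, 2.0, 1.0])
    x = np.ones(3) / np.sqrt(3.0)
    for _ in range(5):
        x = riemannian_newton_step(A, x)
    print(x)  # settles at an eigenvector of A, i.e. a critical point on the sphere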

Local and superlinear convergence of a primal-dual interior point method for nonlinear semidefinite programming

In this paper, we consider a primal-dual interior point method for solving nonlinear semidefinite programming problems. We propose primal-dual interior point methods based on the unscaled and scaled Newton methods, which correspond to the AHO, HRVW/KSH/M and NT search directions in linear SDP problems. We analyze the local behavior of our proposed methods and show their … Read more
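
For background (the standard picture from linear SDP, not material taken from this paper): the named directions arise from symmetrizing the perturbed complementarity condition XS = \mu I as
\[
H_P(XS) := \tfrac{1}{2}\bigl( P X S P^{-1} + (P X S P^{-1})^{\mathsf T} \bigr) = \mu I,
\]
where P = I gives the AHO direction, P = S^{1/2} gives the HRVW/KSH/M direction, and P = W^{-1/2}, with W the Nesterov-Todd scaling matrix satisfying WSW = X, gives the NT direction.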

Standard Bi-Quadratic Optimization Problems and Unconstrained Polynomial Reformulations

A so-called Standard Bi-Quadratic Optimization Problem (StBQP) consists in minimizing a bi-quadratic form over the Cartesian product of two simplices (so this is different from a Bi-Standard QP where a quadratic function is minimized over the same set). An application example arises in portfolio selection. In this paper we present a bi-quartic formulation of StBQP, … Read more
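
Written out explicitly (in illustrative notation that need not match the paper's), an StBQP is
\[
\min_{x \in \Delta_n,\; y \in \Delta_m} \ \sum_{i,j=1}^{n} \sum_{k,l=1}^{m} a_{ijkl}\, x_i x_j y_k y_l,
\qquad
\Delta_n = \{\, x \in \mathbb{R}^n : x \ge 0,\ \textstyle\sum_{i=1}^{n} x_i = 1 \,\},
\]
so the objective is quadratic in x for every fixed y and quadratic in y for every fixed x.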

An Augmented Lagrangian Approach for Sparse Principal Component Analysis

Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction with numerous applications in science and engineering. However, standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, and it is thus often difficult to interpret the PCs. To … Read more

On the global convergence of interior-point nonlinear programming algorithms

Carathéodory’s lemma states that if we have a linear combination of vectors in R^n, we can rewrite this combination using a linearly independent subset. This result has been successfully applied in nonlinear optimization in many contexts. In this work we present a new version of this celebrated theorem, in which we obtain new bounds for … Read more
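
For reference, the version of the lemma usually invoked in this context (stated in our notation, which need not match the paper's) reads: if
\[
v = \sum_{i=1}^{m} \lambda_i v_i \quad \text{with } \lambda_i \ge 0 \text{ for all } i,
\]
then there exists an index set I \subseteq \{1, \dots, m\} such that the vectors \{v_i\}_{i \in I} are linearly independent and v = \sum_{i \in I} \bar{\lambda}_i v_i with \bar{\lambda}_i \ge 0 for all i \in I.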

Switching stepsize strategies for PDIP

In this chapter we present a primal-dual interior point algorithm for solving constrained nonlinear programming problems. Switching rules are implemented that aim at exploiting the merits and avoiding the drawbacks of three different merit functions. The penalty parameter is determined using an adaptive penalty strategy that ensures a descent property for the merit function. The … Read more
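
As one concrete example of the kind of merit function involved (purely illustrative; the chapter's three merit functions need not include this one), the l_1 exact penalty function for constraints c(x) = 0 is
\[
\phi_{\rho}(x) = f(x) + \rho\, \| c(x) \|_{1},
\]
and for a direction d satisfying the linearized constraints its directional derivative is \nabla f(x)^{\mathsf T} d - \rho\, \| c(x) \|_{1}, so an adaptive penalty strategy increases \rho until this quantity is negative, ensuring descent for \phi_{\rho}.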

A practical method for solving large-scale TRS

We present a nearly-exact method for the large-scale trust region subproblem (TRS) based on the properties of the minimal-memory BFGS method. Our study concentrates on the case where the initial BFGS matrix can be any scaled identity matrix. The proposed method is a variant of the Moré-Sorensen method that exploits the eigenstructure of … Read more
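
For context (standard background rather than the paper's contribution): the trust region subproblem is
\[
\min_{s \in \mathbb{R}^{n}} \; g^{\mathsf T} s + \tfrac{1}{2}\, s^{\mathsf T} B s
\quad \text{subject to} \quad \| s \| \le \Delta,
\]
and the Moré-Sorensen method seeks \lambda \ge 0 such that (B + \lambda I) s = -g, B + \lambda I is positive semidefinite, and \lambda (\Delta - \| s \|) = 0, typically by applying Newton's method to the secular equation 1/\| s(\lambda) \| = 1/\Delta.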

Interior-point method for nonlinear programming with complementarity constraints

In this report, we propose an algorithm for solving nonlinear programming problems with complementarity constraints, which is based on the interior-point approach. The main theoretical results concern direction determination and step-length selection. We use an exact penalty function to remove the complementarity constraints. Thus a new indefinite linear system is defined with a tridiagonal lower-right submatrix. Inexact … Read more
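
For orientation (a standard formulation, not necessarily the report's exact notation), a nonlinear program with complementarity constraints has the form
\[
\min_{x} \; f(x) \quad \text{subject to} \quad h(x) = 0, \quad g(x) \ge 0, \quad 0 \le G(x) \perp H(x) \ge 0,
\]
and one common exact-penalty treatment removes the complementarity condition by minimizing f(x) + \rho\, G(x)^{\mathsf T} H(x) subject to the remaining constraints (keeping G(x) \ge 0 and H(x) \ge 0), for a sufficiently large penalty parameter \rho.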

An adaptive cubic regularisation algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in … Read more
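
For reference (this is the standard adaptive cubic regularisation model from the unconstrained setting, not a new ingredient of this paper): at an iterate x_k the trial step s approximately minimizes
\[
m_k(s) = f(x_k) + g_k^{\mathsf T} s + \tfrac{1}{2}\, s^{\mathsf T} B_k s + \tfrac{\sigma_k}{3}\, \| s \|^{3},
\]
where \sigma_k > 0 is an adaptive regularisation weight that is increased or decreased in the spirit of a trust-region radius; the natural adaptation to the constrained setting restricts the step so that x_k + s stays in the convex domain.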

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

We propose a numerical algorithm for solving smooth nonlinear programming problems with a large number of constraints, but a moderate number of variables. The active set method proceeds from a given bound m_w for the maximum number of expected violated constraints, where m_w is a user-provided parameter less than the total number of constraints. A … Read more
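
A minimal sketch of the working-set idea (our own illustration in Python; the names and the inner solver are placeholders, not the authors' code): at each outer iteration only the at most m_w currently most violated constraints are passed to the subproblem solver.

    import numpy as np

    def most_violated(constraints, x, m_w):
        # violation of each constraint g_i(x) <= 0 at the current point x
        viol = np.array([max(g(x), 0.0) for g in constraints])
        order = np.argsort(-viol)                  # largest violation first
        return [int(i) for i in order[:m_w] if viol[i] > 0.0]

    # tiny demo: three constraints, working set capped at m_w = 2
    constraints = [lambda x: x[0] + x[1] - 1.0,    # violated at x0
                   lambda x: -x[0],                # satisfied at x0
                   lambda x: x[1] - 0.2]           # violated at x0
    x0 = np.array([0.6, 0.7])
    print(most_violated(constraints, x0, m_w=2))   # -> [2, 0]

    # Schematically, an outer loop then alternates
    #   W = most_violated(constraints, x, m_w)
    #   x = solve_reduced_nlp(objective, [constraints[i] for i in W], x)  # placeholder solver
    # until no constraint outside W is violated and the reduced problem is solved.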