Exact computation of an error bound for a generalized linear complementarity problem with unique solution

This paper considers a generalized form of the standard linear complementarity problem with a unique solution and provides a more precise expression of an upper error bound discovered by Chen and Xiang in 2006. This expression has at least two advantages. It makes the exact computation of the error bound factor possible, and it provides a … Read more
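To fix ideas, the Chen-Xiang (2006) bound for an LCP(q, M) with a P-matrix M reads ||x - x*||_inf <= max over d in [0,1]^n of ||(I - D + DM)^{-1}||_inf times ||min(x, Mx + q)||_inf, with D = diag(d). The sketch below only estimates this factor by sampling the box; the exact computation of the factor is precisely what the paper addresses, and the code is an illustration of the quantity involved, not of the paper's formula.

import numpy as np

def natural_residual(x, M, q):
    """Componentwise natural residual r(x) = min(x, Mx + q)."""
    return np.minimum(x, M @ x + q)

def chen_xiang_constant_estimate(M, n_samples=2000, rng=None):
    """Lower estimate of max_d ||(I - D + D M)^{-1}||_inf by sampling d in [0,1]^n."""
    rng = np.random.default_rng(rng)
    n = M.shape[0]
    I = np.eye(n)
    best = 0.0
    for _ in range(n_samples):
        d = rng.random(n)                       # a point in the box [0,1]^n
        A = I - np.diag(d) + np.diag(d) @ M
        best = max(best, np.linalg.norm(np.linalg.inv(A), ord=np.inf))
    return best

# Toy usage with a small M-matrix (hence a P-matrix):
M = np.array([[2.0, -1.0], [-1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = np.array([0.8, 0.1])                        # an approximate solution
bound = chen_xiang_constant_estimate(M) * np.linalg.norm(natural_residual(x, M, q), ord=np.inf)
print("estimated error bound on ||x - x*||_inf:", bound)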

Scalable adaptive cubic regularization methods

Adaptive cubic regularization (ARC) methods for unconstrained optimization compute steps from linear systems involving a shifted Hessian, in the spirit of the Levenberg-Marquardt and trust-region methods. The standard approach consists in an iterative search for the shift, akin to solving the secular equation in trust-region methods. Such a search requires computing the Cholesky factorization of … Read more
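A simplified sketch of that shift search (my own, not the paper's scalable variant): the cubic model g^T s + 0.5 s^T H s + (sigma/3)||s||^3 has a minimizer satisfying (H + lam I) s = -g with lam = sigma ||s|| and H + lam I positive semidefinite. Since ||s(lam)|| decreases with lam, the secular equation sigma ||s(lam)|| - lam = 0 can be bracketed and bisected, at the cost of one Cholesky factorization per trial shift; hard-case safeguards are omitted here.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def arc_step(H, g, sigma, tol=1e-8, max_iter=100):
    n = len(g)
    lam_lo = max(0.0, -np.linalg.eigvalsh(H).min()) + 1e-12   # make H + lam*I positive definite
    s = lambda lam: cho_solve(cho_factor(H + lam * np.eye(n)), -g)
    # Bracket the root of phi(lam) = sigma*||s(lam)|| - lam (phi decreases with lam).
    lam_hi = lam_lo + 1.0
    while sigma * np.linalg.norm(s(lam_hi)) - lam_hi > 0:
        lam_hi *= 2.0
    for _ in range(max_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        phi = sigma * np.linalg.norm(s(lam)) - lam
        if abs(phi) < tol:
            break
        if phi > 0:
            lam_lo = lam
        else:
            lam_hi = lam
    return s(lam), lam

# Toy usage with an indefinite Hessian:
H = np.array([[4.0, 1.0], [1.0, -2.0]])
g = np.array([1.0, -1.0])
s, lam = arc_step(H, g, sigma=1.0)
print("step:", s, "shift:", lam)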

A lower bound on the iterative complexity of the Harker and Pang globalization technique of the Newton-min algorithm for solving the linear complementarity problem

The plain Newton-min algorithm for solving the linear complementarity problem (LCP) 0 ≤ x ⊥ (Mx+q) ≥ 0 can be viewed as an instance of the plain semismooth Newton method on the equational version min(x,Mx+q) = 0 of the problem. This algorithm converges for any q when M is an M-matrix, but not when it … Read more
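A minimal sketch of the plain iteration described here (no globalization, hence none of the Harker-Pang safeguards the paper analyzes), assuming the principal submatrices encountered are nonsingular, as they are when M is an M-matrix: at each iterate, the components where x_i <= (Mx + q)_i are set to zero and the remaining equations (Mx + q)_i = 0 are solved exactly.

import numpy as np

def newton_min(M, q, x0, max_iter=50, tol=1e-10):
    x = np.asarray(x0, dtype=float).copy()
    n = len(q)
    for _ in range(max_iter):
        w = M @ x + q
        if np.linalg.norm(np.minimum(x, w), ord=np.inf) < tol:
            break
        A = x <= w                 # components forced to zero at the next iterate
        I = ~A                     # components where (Mx + q)_i = 0 is enforced
        x_new = np.zeros(n)
        if I.any():
            x_new[I] = np.linalg.solve(M[np.ix_(I, I)], -q[I])
        x = x_new
    return x

# Toy usage with an M-matrix, for which the plain iteration converges for any q:
M = np.array([[2.0, -1.0], [-1.0, 2.0]])
q = np.array([-1.0, 1.0])
print(newton_min(M, q, x0=np.zeros(2)))    # solution (0.5, 0)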

The New Butterfly Relaxation Methods for Mathematical Program with Complementarity Constraints

We propose a new family of relaxation schemes for mathematical programs with complementarity constraints that extends the relaxation of Kadrani, Dussault, and Benchakroun from 2009 and that of Kanzow & Schwartz from 2011. We discuss the properties of the sequence of relaxed non-linear programs as well as stationarity properties of limiting points. A sub-family of … Read more
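To illustrate the general relaxation idea (using the classical Scholtes-type relaxation G(x) H(x) <= t for concreteness, not the butterfly relaxation itself): the complementarity constraint 0 <= G(x) perp H(x) >= 0 is replaced by a relaxed inequality controlled by a parameter t, and a sequence of ordinary non-linear programs is solved while t is driven to zero. The toy problem and starting point below are my own.

import numpy as np
from scipy.optimize import minimize

def solve_relaxed_sequence(t0=0.1, shrink=0.1, n_outer=5):
    # Toy MPCC:  min (x1 - 1)^2 + (x2 - 1)^2   s.t.  x1 >= 0, x2 >= 0, x1 * x2 = 0.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
    x, t = np.array([0.8, 0.2]), t0
    for _ in range(n_outer):
        cons = [{'type': 'ineq', 'fun': lambda x, t=t: t - x[0] * x[1]}]  # relaxed complementarity
        res = minimize(f, x, method='SLSQP', bounds=[(0, None), (0, None)], constraints=cons)
        x, t = res.x, shrink * t
    return x

# From this starting point the iterates typically approach the MPCC solution (1, 0);
# which stationary points can be limits in general is exactly what the relaxation scheme governs.
print(solve_relaxed_sequence())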

How to Compute an M-stationary point of the MPCC

We discuss here the convergence of relaxation methods for MPCCs that compute a sequence of approximate stationary points, by presenting a general framework to study these methods. It has been pointed out in the literature (Kanzow and Schwartz, 2015) that relaxation methods with approximate stationary points fail to guarantee convergence. We show that by defining a new strong … Read more
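For readers unfamiliar with the notion, a small sketch of what an "approximate (epsilon-)stationary point" of a relaxed NLP can mean, stated here for inequality constraints g(x) <= 0 only (an illustrative definition, not the paper's framework): the KKT residuals are required to vanish only up to a tolerance epsilon driven to zero along with the relaxation parameter.

import numpy as np

def is_eps_stationary(grad_f, g, jac_g, x, lam, eps):
    """Check epsilon-KKT conditions of  min f(x)  s.t.  g(x) <= 0  at (x, lam)."""
    stat = np.linalg.norm(grad_f(x) + jac_g(x).T @ lam, ord=np.inf) <= eps   # stationarity
    feas = np.all(g(x) <= eps)                                               # eps-feasibility
    comp = np.all(np.abs(lam * g(x)) <= eps) and np.all(lam >= -eps)         # eps-complementarity
    return stat and feas and comp

# Toy usage:  min x1^2 + x2^2  s.t.  1 - x1 <= 0, near the solution x = (1, 0) with lam = 2.
grad_f = lambda x: 2 * x
g = lambda x: np.array([1.0 - x[0]])
jac_g = lambda x: np.array([[-1.0, 0.0]])
print(is_eps_stationary(grad_f, g, jac_g, np.array([1.0, 1e-5]), np.array([2.0]), eps=1e-3))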

A note on robust descent in differentiable optimization

In this note, we recall two remedies for the catastrophic cancellation that occurs when comparing function values in descent algorithms. The automatic finite-differencing approach (Dussault and Hamelin) was shown to be useful for trust-region and line-search variants. The main original contribution is to successfully adapt the line-search strategy of Hager and Zhang for … Read more
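The cancellation in question, and the gradient-only acceptance test of Hager and Zhang that avoids it, can be illustrated with a short sketch (a minimal illustration with the usual parameters delta and sigma, not the note's actual adaptation):

# Near a minimizer, the computed decrease f(x + alpha*d) - f(x) is dominated by round-off:
f = lambda x: (x - 1.0) ** 2 + 10.0
x, d, alpha = 1.0 + 1e-8, -1.0, 1e-8
print("computed decrease:", f(x + alpha * d) - f(x))                       # 0.0 in double precision
print("cancellation-free:", (x + alpha * d - 1.0) ** 2 - (x - 1.0) ** 2)   # about -1e-16

def standard_wolfe(f0, ft, g0, gt, alpha, delta=0.1, sigma=0.9):
    """Classical Wolfe test: sufficient decrease (prone to cancellation) + curvature."""
    return ft <= f0 + delta * alpha * g0 and gt >= sigma * g0

def approximate_wolfe(g0, gt, delta=0.1, sigma=0.9):
    """Hager-Zhang approximate Wolfe test: uses only the directional derivatives
    g0 = f'(0) and gt = f'(alpha), so no nearly equal function values are subtracted."""
    return sigma * g0 <= gt <= (2.0 * delta - 1.0) * g0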

Solving trajectory optimization problems via nonlinear programming: the brachistochrone case study

This note discusses reformulations of the brachistochrone problem suitable for solution via nonlinear programming (NLP). The availability of solvers and modeling languages such as AMPL makes it tempting to discretize trajectory optimization problems and solve the resulting finite-dimensional versions directly. We use the famous brachistochrone problem to warn that the resulting solutions may be … Read more
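A deliberately naive direct transcription (my toy formulation, shown only to illustrate the discretize-then-optimize approach the note warns about): the path from (0, 0) to (xf, yf), with y measured downward, is parametrized by its heights on a uniform x-grid, the speed comes from energy conservation v = sqrt(2 g y), and each segment's travel time uses the average of the endpoint speeds. The quality of the answer depends strongly on N and on the formulation, which is precisely the note's point.

import numpy as np
from scipy.optimize import minimize

g, xf, yf, N = 9.81, 1.0, 0.3, 50
xs = np.linspace(0.0, xf, N + 1)

def travel_time(y_free):
    y = np.concatenate(([0.0], y_free, [yf]))      # endpoints fixed
    v = np.sqrt(2.0 * g * np.maximum(y, 0.0))      # speed from energy conservation
    ds = np.hypot(np.diff(xs), np.diff(y))         # segment lengths
    return np.sum(ds / (0.5 * (v[:-1] + v[1:]) + 1e-12))

y0 = np.linspace(0.0, yf, N + 1)[1:-1]             # straight-line initial guess
res = minimize(travel_time, y0, method='L-BFGS-B', bounds=[(1e-6, None)] * (N - 1))
print("approximate minimal time:", res.fun)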

A new family of high order directions for unconstrained optimization inspired by Chebyshev and Shamanskii methods

The Newton-Raphson method, dating from 1669-1670, is still used to solve systems of equations and unconstrained optimization problems. Since then, other algorithms inspired by Newton's method have been proposed: in 1839 Chebyshev developed a high-order algorithm with cubic convergence, and in 1967 Shamanskii proposed an acceleration of Newton's method. By considering Newton-type methods as displacement directions, … Read more
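A minimal sketch of the Shamanskii acceleration for unconstrained minimization (the Chebyshev direction needs third-order derivatives and is omitted; this illustrates only the second method named above, with no globalization): the Hessian is factorized once per outer iteration and reused for m cheap gradient back-solves.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shamanskii(grad, hess, x0, m=2, tol=1e-10, max_outer=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        if np.linalg.norm(grad(x)) < tol:
            break
        factors = lu_factor(hess(x))          # one factorization per outer iteration
        for _ in range(m):                    # m inner steps reuse the factorization
            x = x - lu_solve(factors, grad(x))
    return x

# Toy usage on the smooth convex function f(x) = sum(exp(x) - x), minimized at x = 0:
grad = lambda x: np.exp(x) - 1.0
hess = lambda x: np.diag(np.exp(x))
print(shamanskii(grad, hess, x0=np.array([1.0, -0.5])))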

Local path-following property of inexact interior methods in nonlinear programming

We study the local behavior of a primal-dual inexact interior-point method for solving the nonlinear systems arising from the solution of nonlinear optimization problems or, more generally, from nonlinear complementarity problems. The algorithm is based on the Newton method applied to a sequence of perturbed systems obtained by perturbing the complementarity equations of … Read more
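A schematic single iteration, written for the LCP 0 <= x perp (Mx + q) >= 0 as Mx + q - z = 0, XZe = mu e, x, z > 0 (my simplified setting; the paper treats general nonlinear systems): the complementarity equations are perturbed by the parameter mu and one Newton step is taken on the perturbed system, here solved only approximately with a truncated Krylov method to mimic the inexact aspect.

import numpy as np
from scipy.sparse.linalg import gmres

def perturbed_newton_step(M, q, x, z, mu, krylov_iters=5):
    n = len(q)
    r_feas = M @ x + q - z                       # residual of  Mx + q - z = 0
    r_comp = x * z - mu                          # residual of the perturbed complementarity XZe = mu*e
    J = np.block([[M, -np.eye(n)],
                  [np.diag(z), np.diag(x)]])     # Jacobian in the variables (x, z)
    rhs = -np.concatenate([r_feas, r_comp])
    d, _ = gmres(J, rhs, maxiter=krylov_iters)   # truncated solve => inexact Newton step
    dx, dz = d[:n], d[n:]
    # Fraction-to-the-boundary step size to keep (x, z) strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (z, dz)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, z + alpha * dz

# Toy usage: one step from a strictly feasible interior point.
M = np.array([[2.0, -1.0], [-1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = z = np.ones(2)
print(perturbed_newton_step(M, q, x, z, mu=0.1))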