Newton’s Method for Multiobjective Optimization

We propose an extension of Newton's method to unconstrained multiobjective optimization (multicriteria optimization). The method does not scalarize the original vector optimization problem, i.e., it does not rely on any of the classical techniques that transform a multiobjective problem into a family of standard scalar optimization problems. Neither ordering information nor weighting factors for the different objective functions need to be known. The objective functions are assumed to be twice continuously differentiable. Under these hypotheses, the method, as in the classical case, is locally superlinearly convergent to optimal points. Again as in the scalar case, if the second derivatives are Lipschitz continuous, the rate of convergence is quadratic. To our knowledge, this is the first time that convergence results of this order have been established for a method applied to an optimization problem whose objective function takes values in a partially ordered vector space. Our convergence analysis uses a Kantorovich-like technique. As a byproduct, existence of optima is obtained under semilocal assumptions.
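To make the scalarization-free Newton step concrete, the following is a minimal illustrative sketch, not the paper's implementation. It uses one standard way to define a common Newton direction for several objectives: minimize over steps s the worst-case second-order model max_j { g_j^T s + (1/2) s^T H_j s }, which requires no weights or ordering a priori. The bi-objective test problem (two convex quadratics sharing the Hessian 2I) and all names are assumptions chosen so the min-max subproblem has a closed-form dual solution over a single convex weight w.

```python
import numpy as np

# Hypothetical bi-objective test problem (not from the paper):
# f1, f2 are convex quadratics sharing the Hessian H = 2*I.
def f1(x): return np.sum((x - 1.0) ** 2)
def f2(x): return np.sum((x + 1.0) ** 2)
def g1(x): return 2.0 * (x - 1.0)
def g2(x): return 2.0 * (x + 1.0)

def newton_direction(x):
    """Solve min_s max_j { g_j^T s + 0.5 s^T H s } with H = 2I via its
    dual: minimize phi(w) = ||w*g1 + (1-w)*g2||^2 / 4 over w in [0, 1],
    then set s = -H^{-1} gbar for the optimal gbar = w*g1 + (1-w)*g2."""
    a, b = g1(x), g2(x)
    d = a - b
    denom = d @ d
    # phi is a convex quadratic in w; clip its unconstrained minimizer to [0, 1]
    w = np.clip(-(b @ d) / denom, 0.0, 1.0) if denom > 0 else 0.5
    gbar = w * a + (1.0 - w) * b
    return -gbar / 2.0  # s = -H^{-1} gbar with H = 2I

x = np.array([2.0, -0.5])
for _ in range(50):
    s = newton_direction(x)
    if np.linalg.norm(s) < 1e-10:
        break  # Pareto critical: 0 lies in the convex hull of the gradients
    x = x + s  # full Newton step; a line search would be added for global use
```

On this toy problem the iteration stops at a Pareto-critical point with all components in [-1, 1], where some convex combination of the two gradients vanishes; no scalarization weights were fixed in advance, since w is recomputed from the subproblem at every iterate.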


