A conjugate directions-type procedure for quadratic multiobjective optimization

We propose an extension of the real-valued conjugate directions method for unconstrained quadratic multiobjective problems. As in its scalar counterpart, the procedure requires a set of directions that are simultaneously conjugate with respect to the positive definite matrices of all quadratic objective components. Likewise, the multicriteria version computes the steplength by means of the unconstrained …
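
The excerpt stops before the steplength rule, so the following is only a rough sketch: for two quadratic objectives, generalized eigenvectors supply directions that are simultaneously conjugate with respect to both matrices, and the steplength is chosen, purely as a stand-in for the paper's rule, as the unconstrained minimizer of the worst objective change along the direction. All data (A1, A2, b1, b2) are made up.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar

# Two quadratic objectives f_i(x) = 0.5 x^T A_i x + b_i^T x with A_i positive definite.
A1 = np.array([[4.0, 1.0], [1.0, 3.0]])
A2 = np.array([[2.0, 0.5], [0.5, 5.0]])
b1 = np.array([-1.0, 2.0])
b2 = np.array([3.0, -1.0])

def f(x):
    return np.array([0.5 * x @ A1 @ x + b1 @ x,
                     0.5 * x @ A2 @ x + b2 @ x])

# For two objectives, the generalized eigenvectors of (A2, A1) are simultaneously
# conjugate with respect to both matrices: d_j^T A1 d_k = d_j^T A2 d_k = 0 for j != k.
_, D = eigh(A2, A1)                      # columns of D are the conjugate directions

x = np.zeros(2)
for k in range(D.shape[1]):
    d = D[:, k]
    # Stand-in steplength rule (the paper's rule is truncated above): minimize the
    # worst objective change along the direction.
    phi = lambda t: np.max(f(x + t * d) - f(x))
    t = minimize_scalar(phi).x
    if phi(t) < 0:                       # move only if every objective decreases
        x = x + t * d
print("final point:", x, "objective values:", f(x))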

A barrier-type method for multiobjective optimization

For solving constrained multicriteria problems, we introduce the multiobjective barrier method (MBM), which extends the scalar-valued internal penalty method. This multiobjective version of the classical method also requires a barrier function for the feasible set and a sequence of nonnegative penalty parameters. Unlike the scalar-valued procedure, MBM is implemented by means of an auxiliary …
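
The auxiliary problem that MBM actually solves is cut off above, so the following is only a minimal sketch of the barrier ingredients named in the abstract: a logarithmic barrier for a toy feasible set, penalty parameters driven to zero, and, purely as an illustrative stand-in for the paper's auxiliary subproblem, a max-type scalarization of the barrier-augmented objectives. The functions f1, f2, g and the parameter schedule are made up.

import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem: minimize (f1, f2) subject to g(x) <= 0.
f1 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 3.0) ** 2
g  = lambda x: x[0] + x[1] - 2.0                 # feasible set: x1 + x2 <= 2

def barrier(x):
    # Logarithmic barrier: finite on the interior of the feasible set, +inf outside.
    return -np.log(-g(x)) if g(x) < 0 else np.inf

x = np.array([0.0, 0.0])                         # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 0.001]:               # penalty parameters driven to zero
    # Illustrative stand-in for the auxiliary subproblem: minimize the worst
    # barrier-augmented objective, warm-started from the previous iterate.
    aux = lambda y, mu=mu: max(f1(y), f2(y)) + mu * barrier(y)
    x = minimize(aux, x, method="Nelder-Mead").x
print("approximate solution:", x, "g(x) =", g(x))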

An external penalty-type method for multicriteria

We propose an extension of the classical real-valued external penalty method to the multicriteria optimization setting. Like its single-objective counterpart, it requires an external penalty function for the constraint set, as well as an exogenous divergent sequence of nonnegative real numbers, the so-called penalty parameters; but, unlike the scalar procedure, the vector-valued …
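
The vector-valued subproblem solved at each iteration is cut off above; the sketch below only illustrates the two ingredients named in the abstract, an external penalty function that vanishes on the constraint set and a divergent sequence of penalty parameters, with a max-type scalarization of the penalized objectives used purely as an illustrative stand-in for the vector-valued step. The functions f1, f2, g and the parameter values are made up.

import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem: minimize (f1, f2) subject to g(x) <= 0.
f1 = lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] + 1.0) ** 2
g  = lambda x: 1.0 - x[0] - x[1]                 # feasible set: x1 + x2 >= 1

# External penalty function: zero on the constraint set, positive outside it.
P = lambda x: max(0.0, g(x)) ** 2

x = np.array([0.0, 0.0])                         # the starting point may be infeasible
for rho in [1.0, 10.0, 100.0, 1000.0]:           # divergent penalty parameters
    # Illustrative stand-in for the vector-valued subproblem: minimize the worst
    # penalized objective, warm-started from the previous iterate.
    pen = lambda y, rho=rho: max(f1(y), f2(y)) + rho * P(y)
    x = minimize(pen, x, method="Nelder-Mead").x
print("approximate solution:", x, "constraint violation:", max(0.0, g(x)))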

A quadratically convergent Newton method for vector optimization

We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Graña Drummond and Svaiter for multicriteria, which in turn is an extension of the classical Newton method for scalar optimization. The steplength is chosen by …
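
The subproblem defining the Newton direction is not shown in the excerpt. As a rough sketch, and only under the simplifying assumption of a polyhedral ordering cone whose dual cone is spanned by finitely many generators (the rows of W below), one can weight the objectives by each generator and minimize the worst second-order model; with the Pareto cone (identity generators) this reduces to the multicriteria direction of Fliege, Graña Drummond and Svaiter. This is an illustrative formulation, not a reproduction of the paper's algorithm, and the steplength rule is omitted.

import numpy as np
from scipy.optimize import minimize

def newton_direction(W, grads, hessians):
    # Direction minimizing max_j { (w_j^T JF(x)) s + 0.5 s^T (sum_i w_ji H_i) s },
    # for dual-cone generators w_j given as the rows of W. Illustrative sketch only.
    n = grads[0].size
    J = np.vstack(grads)                              # Jacobian of F at the current point
    models = [(w @ J, sum(wi * H for wi, H in zip(w, hessians))) for w in W]
    # Epigraph reformulation: minimize t subject to every second-order model <= t.
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, H=H: z[-1] - (g @ z[:n] + 0.5 * z[:n] @ H @ z[:n])}
            for g, H in models]
    res = minimize(lambda z: z[-1], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n]

# Two convex quadratic objectives on R^2, ordered by the Pareto cone (W = identity).
x = np.array([2.0, -1.0])
grads = [2 * (x - np.array([1.0, 0.0])), 2 * (x - np.array([0.0, 1.0]))]
hessians = [2 * np.eye(2), 2 * np.eye(2)]
print("Newton-type direction at x:", newton_direction(np.eye(2), grads, hessians))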

Inexact projected gradient method for vector optimization

In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method we present extends the exact one proposed by Graña Drummond and Iusem, since it admits relative …
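
The exact subproblem and the relative-error criterion that makes the method inexact are cut off above. Below is only a sketch of a projected-gradient-type direction computation for a box-constrained bi-objective toy problem: a candidate point minimizes the worst linearized objective plus a proximal term over the feasible set, and the search direction is its difference to the current iterate. The problem data, the bounds and the parameter beta are made up; an inexact variant would accept approximate minimizers of this subproblem.

import numpy as np
from scipy.optimize import minimize

def direction_subproblem(x, grads, bounds, beta=1.0):
    # Search direction v(x) = y - x, where y minimizes, over the feasible box,
    # max_i grad_i^T (y - x) + 0.5 * beta * ||y - x||^2 (illustrative form only).
    n = x.size
    obj = lambda z: z[-1] + 0.5 * beta * np.sum((z[:n] - x) ** 2)
    cons = [{"type": "ineq", "fun": lambda z, g=g: z[-1] - g @ (z[:n] - x)}
            for g in grads]
    z0 = np.concatenate([x, [0.0]])
    res = minimize(obj, z0, constraints=cons,
                   bounds=list(bounds) + [(None, None)], method="SLSQP")
    return res.x[:n] - x

# Bi-objective toy problem on the box C = [0, 2] x [0, 2].
x = np.array([1.8, 0.2])
grads = [np.array([2 * (x[0] - 1.0), 2 * x[1]]),        # gradient of f1(x) = (x1-1)^2 + x2^2
         np.array([2 * x[0], 2 * (x[1] - 1.5)])]        # gradient of f2(x) = x1^2 + (x2-1.5)^2
v = direction_subproblem(x, grads, bounds=[(0.0, 2.0), (0.0, 2.0)])
print("projected-gradient-type direction:", v)          # v = 0 only at stationary points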

On the convergence of the projected gradient method for vector optimization

In 2004, Graña Drummond and Iusem proposed an extension of the projected gradient method for constrained vector optimization problems. In that method, an Armijo-like rule, implemented with a backtracking procedure, was used to determine the steplengths. The authors only showed stationarity of all cluster points and, for another version of the algorithm (with …
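
The Armijo-like backtracking rule mentioned in the abstract can be illustrated with a componentwise sufficient-decrease test; the sketch below is a generic variant of such a rule on a made-up bi-objective example, where the direction used is just a toy descent direction rather than the projected gradient direction of the paper.

import numpy as np

def armijo_backtracking(F, JF, x, v, beta=1e-4, t0=1.0, shrink=0.5, max_iter=50):
    # Accept the largest tested t with F(x + t v) <= F(x) + beta * t * JF(x) v,
    # componentwise (an illustrative Armijo-like rule for vector-valued objectives).
    Fx, Jv = F(x), JF(x) @ v
    t = t0
    for _ in range(max_iter):
        if np.all(F(x + t * v) <= Fx + beta * t * Jv):
            return t
        t *= shrink
    return t

# Made-up bi-objective example; v is a direction along which both components decrease here.
F  = lambda x: np.array([(x[0] - 1.0) ** 2 + x[1] ** 2, x[0] ** 2 + (x[1] - 1.0) ** 2])
JF = lambda x: np.array([[2 * (x[0] - 1.0), 2 * x[1]], [2 * x[0], 2 * (x[1] - 1.0)]])
x = np.array([2.0, 2.0])
v = -(JF(x).T @ np.ones(2))
print("accepted steplength:", armijo_backtracking(F, JF, x, v))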

Newton’s Method for Multiobjective Optimization

We propose an extension of Newton’s method for unconstrained multiobjective optimization (multicriteria optimization). The method does not scalarize the original vector optimization problem, i.e., we do not make use of any of the classical techniques that transform a multiobjective problem into a family of standard optimization problems. Neither ordering information nor weighting factors for the …
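
The excerpt ends before the search direction is defined. As a minimal illustration of a scalarization-free Newton-type step, the sketch below takes the direction as the minimizer of the worst second-order model of the objectives (no weights or ordering information are chosen beforehand) and runs full steps on a made-up convex two-objective problem; it is not a reproduction of the paper's algorithm.

import numpy as np
from scipy.optimize import minimize

def newton_step(grads, hessians):
    # Direction s minimizing max_i grad_i^T s + 0.5 s^T H_i s, via its epigraph form;
    # for positive definite H_i the optimal value theta is <= 0 and vanishes only at
    # Pareto critical points.
    n = grads[0].size
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, H=H: z[-1] - (g @ z[:n] + 0.5 * z[:n] @ H @ z[:n])}
            for g, H in zip(grads, hessians)]
    res = minimize(lambda z: z[-1], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n], res.x[-1]

# Made-up objectives f1(x) = ||x - a1||^2 and f2(x) = ||x - a2||^2.
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([3.0, 3.0])
for _ in range(10):                                   # full Newton steps, no line search
    s, theta = newton_step([2 * (x - a1), 2 * (x - a2)],
                           [2 * np.eye(2), 2 * np.eye(2)])
    if theta > -1e-6:                                 # (approximate) Pareto criticality
        break
    x = x + s
print("approximately Pareto critical point:", x)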