Worst-Case Complexity of High-Order Algorithms for Pareto-Front Reconstruction

In this paper, we are concerned with a worst-case complexity analysis of a-posteriori algorithms for unconstrained multiobjective optimization.
Specifically, we propose an algorithmic framework that generates sets of points by means of $p$th-order models regularized with a power $p+1$ of the norm of the step.
Through a tailored search procedure, several trial points are generated at each iteration; a trial point can be added to the current set whenever it achieves a decrease in at least one objective function.
Building upon this idea, we devise two algorithmic versions: at every iteration, the first tries to update all points in the current set, while the second tries to update only one point.
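The set-update idea described above can be illustrated with a minimal sketch for the first-order case ($p=1$), where the regularized model of each objective along a step $d$ is $f_i(x) + \nabla f_i(x)^\top d + \frac{\sigma}{2}\|d\|^2$, minimized at $d = -\nabla f_i(x)/\sigma$. The toy objectives, the regularization weight `sigma`, and the one-trial-per-objective search are assumptions made for illustration, not the paper's actual procedure:

```python
import numpy as np

# Toy instance with m = 2 quadratic objectives (chosen for illustration;
# the paper's framework is general and uses pth-order models).
def f(x):
    return np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,      # minimum at (1, 0)
                     x[0] ** 2 + (x[1] + 1.0) ** 2])     # minimum at (0, -1)

def grad(x):
    return np.array([[2.0 * (x[0] - 1.0), 2.0 * x[1]],
                     [2.0 * x[0], 2.0 * (x[1] + 1.0)]])

def trial_points(x, sigma=4.0):
    # For p = 1, the model of objective i regularized by (sigma/2)*||d||^2
    # is minimized at d = -grad_i(x)/sigma.  Generating one trial per
    # objective stands in for the paper's search procedure, which may
    # produce several trial points per iteration.
    return [x - grad(x)[i] / sigma for i in range(2)]

def nondominated(points):
    # Keep only points whose objective vector is not dominated by another.
    vals = [f(p) for p in points]
    keep = []
    for i, vi in enumerate(vals):
        if not any(np.all(vj <= vi) and np.any(vj < vi)
                   for j, vj in enumerate(vals) if j != i):
            keep.append(points[i])
    return keep

def iterate_all(X, sigma=4.0):
    # "First version": at every iteration, try to update ALL points in the
    # current set; a trial is accepted if it decreases at least one objective.
    new = list(X)
    for x in X:
        for y in trial_points(x, sigma):
            if np.any(f(y) < f(x)):
                new.append(y)
    return nondominated(new)

X = [np.array([0.0, 0.0]), np.array([0.5, -0.5])]
for _ in range(5):
    X = iterate_all(X)
```

The "second version" would instead pick a single point of `X` per iteration and attempt to update only that one, trading per-iteration work for more iterations.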
Under Lipschitz continuity of the derivatives of the objective functions, we derive worst-case complexity bounds for both versions.
For the first version, we show that at most $\mathcal O(\epsilon^{-m(p+1)/p})$ iterations are needed to generate a set in which every point is $\epsilon$-approximate Pareto-stationary, where $m$ is the number of objective functions; this requires at most $\mathcal O(|X(\epsilon)|\,\epsilon^{-m(p+1)/p})$ function evaluations, where $X(\epsilon)$ is the largest set of points computed by the algorithm. Moreover, at most $\mathcal O(\epsilon^{-(p+1)/p})$ iterations are needed to generate at least one $\epsilon$-approximate Pareto-stationary point, requiring at most $\mathcal O(|X(\epsilon)|\,\epsilon^{-(p+1)/p})$ function evaluations.
For the second version, we establish $\mathcal O(\epsilon^{-m(p+1)/p})$ worst-case bounds on both the number of iterations and the number of function evaluations needed to generate at least one $\epsilon$-approximate Pareto-stationary point.
Our results align with those for single-objective optimization and generalize those obtained for methods that produce a single Pareto-stationary point.
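As an illustrative instantiation of these rates (not stated above), fixing the model order gives the following iteration bounds for the first version:

```latex
% p = 1: first-order models with quadratic regularization
\mathcal O\!\left(\epsilon^{-2m}\right) \text{ iterations},
\qquad
% p = 2: second-order models with cubic regularization
\mathcal O\!\left(\epsilon^{-3m/2}\right) \text{ iterations}.
```

For $m = 1$ these reduce to the familiar single-objective rates $\mathcal O(\epsilon^{-2})$ for gradient-type methods and $\mathcal O(\epsilon^{-3/2})$ for cubic regularization.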
