Stochastic alternating algorithms for bi-objective optimization are considered when optimizing two conflicting functions for which optimization steps have to be applied separately for each function. Such algorithms consist of applying a certain number of stochastic gradient or subgradient descent steps to each individual objective at every iteration. In this paper, we show that, under strong convexity, stochastic alternating algorithms achieve a sublinear convergence rate of O(1/T) for determining a minimizer of a weighted sum of the two functions, where the weighting is parameterized by the number of steps applied to each of them. An extension to the merely convex case is presented, for which the rate weakens to O(1/sqrt(T)). These rates also hold in the non-smooth case. Importantly, by varying the proportion of steps applied to each function, one can determine an approximation to the Pareto front.
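As a rough illustration of the idea (not the paper's exact algorithm), the following sketch alternates n1 stochastic gradient steps on one convex quadratic and n2 steps on another, with a diminishing step size as in the strongly convex setting. The objectives f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2, the noise level, and the step-size schedule are all illustrative assumptions; with equal step proportions the iterates are loosely expected to settle near the minimizer of the equally weighted sum, i.e. the midpoint of a and b.

```python
import random

def sgrad(x, target, noise=0.01):
    # Stochastic gradient of f(x) = ||x - target||^2; additive Gaussian
    # noise stands in for the sampling error of a stochastic oracle.
    return [2 * (xi - ti) + random.gauss(0, noise) for xi, ti in zip(x, target)]

def stochastic_alternating(a, b, n1, n2, T=2000, seed=0):
    # Illustrative stochastic alternating scheme: at each outer iteration,
    # apply n1 stochastic (sub)gradient steps on f1, then n2 steps on f2.
    random.seed(seed)
    x = [0.0] * len(a)
    t = 0
    for _ in range(T):
        for target, n in ((a, n1), (b, n2)):
            for _ in range(n):
                t += 1
                alpha = 1.0 / (t + 10)  # diminishing step size (strongly convex regime)
                g = sgrad(x, target)
                x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# With n1 = n2 = 1 the iterates should approach the midpoint of a and b,
# the minimizer of the equally weighted sum of the two quadratics.
a, b = [0.0, 0.0], [1.0, 1.0]
x = stochastic_alternating(a, b, n1=1, n2=1)
```

Sweeping the proportion n1/(n1 + n2) over several values and recording the resulting limit points is one way to trace out an approximation to the Pareto front, as the abstract describes.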

## Citation

S. Liu and L. N. Vicente, Convergence rates of the stochastic alternating algorithm for bi-objective optimization, ISE Technical Report 22T-004, Lehigh University.
