Steepest descent method using novel adaptive stepsizes for unconstrained nonlinear multiobjective programming

We propose new adaptive strategies for computing stepsizes in the steepest descent method for unconstrained nonlinear multiobjective optimization problems, without employing any linesearch procedure. The resulting algorithms apply to a wide class of nonconvex unconstrained multi-criteria optimization problems whose objectives all have globally Lipschitz continuous gradients. In this general setting, we prove that from some iteration onwards the stepsize sequences increase to a positive limit and the objective-value sequences decrease, and that every accumulation point of the iterates is a Pareto critical point. When the objective functions are convex, the iterates generated by our algorithms are proved to converge to a weakly efficient point. Additionally, we establish sublinear and linear convergence rates in the convex and strongly convex cases, respectively. To the best of our knowledge, global convergence rates have been analyzed for steepest descent algorithms with backtracking linesearch-based stepsizes, but not for any of the adaptive stepsizes proposed in the literature. Notably, although the global Lipschitz gradient condition is used in the convergence analysis, our algorithms themselves do not require calculating or estimating the global Lipschitz constants of the gradients. Together with the absence of backtracking linesearch procedures, this reduces the processing time significantly. The advantages of the new algorithms are further demonstrated by numerical experiments on various test instances, in comparison with recent methods.
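To make the setting concrete, the following is a minimal Python sketch of a multiobjective steepest descent iteration for two objectives. The common descent direction d(x) = argmin_d max_i ⟨∇f_i(x), d⟩ + ½‖d‖² is the standard construction from the multiobjective steepest descent literature, computed here via its dual subproblem. The test functions f1 and f2, the initial stepsize, and the adaptive stepsize update (a local Lipschitz estimate from consecutive iterates) are all illustrative assumptions; the abstract does not reproduce the paper's actual stepsize rule.

```python
import numpy as np

# Illustrative objectives (assumptions, not from the paper): both have
# globally Lipschitz gradients, and the Pareto set of (f1, f2) is the
# segment between (1, 1) and (-1, -1).
def f1(x): return float(np.sum((x - 1.0) ** 2))
def f2(x): return float(np.sum((x + 1.0) ** 2))
def g1(x): return 2.0 * (x - 1.0)   # gradient of f1
def g2(x): return 2.0 * (x + 1.0)   # gradient of f2

def steepest_direction(x):
    """Common descent direction d(x) = -(t*g1 + (1-t)*g2), where t solves
    the dual subproblem  min_{t in [0,1]} ||t*g1(x) + (1-t)*g2(x)||^2."""
    a, b = g1(x), g2(x)
    denom = float((a - b) @ (a - b))
    t = 0.5 if denom == 0.0 else float(np.clip(b @ (b - a) / denom, 0.0, 1.0))
    return -(t * a + (1.0 - t) * b)

x = np.array([2.0, -3.0])
step = 0.1                          # initial trial stepsize (assumed)
x_prev = None
for _ in range(200):
    d = steepest_direction(x)
    if np.linalg.norm(d) < 1e-10:   # d(x) = 0 characterizes Pareto criticality
        break
    if x_prev is not None:
        # Hypothetical linesearch-free adaptation: estimate the local
        # Lipschitz constant of the gradients from the last two iterates
        # and take its reciprocal as the stepsize. This mimics the flavor
        # of adaptive rules but is NOT the paper's actual update.
        num = max(np.linalg.norm(g1(x) - g1(x_prev)),
                  np.linalg.norm(g2(x) - g2(x_prev)))
        den = float(np.linalg.norm(x - x_prev))
        if num > 0.0:
            step = den / num
    x_prev = x
    x = x + step * d

print(x)  # approaches the Pareto set: points with x[0] == x[1] in [-1, 1]
```

On this example the local estimate recovers the true Lipschitz constant of the quadratic gradients, so the iterates reach the Pareto set in a few steps with no backtracking, which is the qualitative behavior the abstract describes.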
