MGProx: A nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization

We study the combination of proximal gradient descent with multigrid for solving a class of possibly nonsmooth strongly convex optimization problems. We propose a multigrid proximal gradient method, MGProx, which accelerates the proximal gradient method by exploiting the hierarchical structure of the optimization problem across multiple levels. MGProx applies a newly introduced adaptive restriction operator to simplify the Minkowski sum of subdifferentials of the nondifferentiable objective function across levels. We provide a theoretical characterization of MGProx. First, we show that the MGProx update operator exhibits a fixed-point property. Next, we show that the coarse correction is a descent direction for the fine-level variable of the original fine-level problem in the general nonsmooth case. Lastly, under some assumptions we establish the convergence rate of the algorithm. In numerical tests on the Elastic Obstacle Problem, an example of a nonsmooth convex optimization problem to which multigrid methods can be applied, we show that MGProx converges faster than competing methods.
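To make the base iteration concrete, below is a minimal sketch of the single-level proximal gradient method that MGProx accelerates, applied to a standard composite objective (a hypothetical lasso-style problem chosen for illustration, not the paper's elastic obstacle problem). The multigrid coarse correction and the adaptive restriction operator introduced in the paper are not reproduced here; all names and parameters are the sketch's own.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    """Proximal gradient descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    This is the plain fine-level iteration; MGProx additionally mixes in
    coarse-level corrections obtained via an adaptive restriction operator
    (see the paper for those details).
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # prox step on the nonsmooth part
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    x_hat = proximal_gradient(A, b, lam=0.1)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

Each iteration takes a gradient step on the smooth term and then applies the proximal operator of the nonsmooth term; MGProx's contribution is to augment such fine-level iterations with coarse-grid corrections built from the problem's hierarchy.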
