Majorization-Minimization (MM) is a class of efficient and effective optimization algorithms that rests on solid theoretical foundations. MM methods have proven highly capable of tackling challenging optimization problems arising in signal processing, image processing, inverse problems, and machine learning. When processing large amounts of data or large numbers of variables, as in 3D image processing, video processing, or spectroscopy, standard optimization algorithms become difficult to use due to memory limitations. To limit the dependence of an optimization algorithm on the problem dimension, block alternating algorithms have been developed in which, at each iteration, only a subset of the variables is updated. This paper provides new insights into the convergence guarantees of block alternating MM schemes in the challenging non-convex setting, including when subspace acceleration is incorporated to improve the convergence rate.
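As a brief sketch of the updates the abstract alludes to, the following summarizes the MM principle and its block and subspace variants; the notation ($f$, $h$, $x^k$, $j_k$, $D^k$) is illustrative and not taken from the paper itself:

```latex
% Plain MM: at iterate x^k, build a majorant h(. | x^k) of the objective f,
% tangent at x^k:
%   h(x | x^k) >= f(x) for all x,   and   h(x^k | x^k) = f(x^k),
% then minimize the majorant instead of f:
\[
  x^{k+1} \in \operatorname*{argmin}_{x} \; h(x \mid x^k).
\]
% Block MM: at iteration k, pick a block index j_k and update only that
% subset of variables, leaving the remaining blocks unchanged:
\[
  x^{k+1}_{j_k} \in \operatorname*{argmin}_{x_{j_k}}
    \; h\big((x_{j_k}, x^k_{\setminus j_k}) \mid x^k\big),
  \qquad
  x^{k+1}_{\setminus j_k} = x^k_{\setminus j_k}.
\]
% Subspace acceleration: restrict the search to a low-dimensional subspace
% spanned by the columns of a matrix D^k (e.g., the current gradient and
% the previous update direction), so only a small vector u^k is optimized:
\[
  x^{k+1} = x^k + D^k u^k,
  \qquad
  u^k \in \operatorname*{argmin}_{u} \; h(x^k + D^k u \mid x^k).
\]
```

Because the majorant touches $f$ at $x^k$ and lies above it everywhere, each such update guarantees $f(x^{k+1}) \le f(x^k)$, which is the monotone-descent property that the convergence analysis builds on.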
Convergence Analysis of Block Majorize-Minimize Subspace Approaches