A Local MM Subspace Method for Solving Constrained Variational Problems in Image Recovery

This article introduces a new Penalized Majorization-Minimization Subspace algorithm (P-MMS) for solving smooth, constrained optimization problems. In short, our approach consists of embedding a subspace algorithm in an inexact exterior penalty procedure. The subspace strategy, combined with a Majorization-Minimization step-size search, takes great advantage of the smoothness of the penalized cost function, while the penalty …
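
The penalty-plus-inner-solver structure can be illustrated compactly. Below is a minimal NumPy sketch, not the authors' P-MMS: it assumes inequality constraints $c(x) \leq 0$, a quadratic exterior penalty, and a simple halving test standing in for the Majorization-Minimization step-size search; `f`, `grad_f`, `c`, and `grad_c` are hypothetical user-supplied callables.

```python
import numpy as np

def exterior_penalty(f, grad_f, c, grad_c, x0, mu0=1.0, rho=10.0,
                     outer_iters=8, inner_iters=200, tol=1e-6):
    """Exterior-penalty sketch: inexactly minimize the smooth penalized cost
    F_mu(x) = f(x) + mu * sum_i max(0, c_i(x))^2, then increase mu."""
    x, mu = x0.astype(float), mu0
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            viol = np.maximum(c(x), 0.0)            # active constraint violations
            g = grad_f(x) + 2.0 * mu * grad_c(x).T @ viol
            if np.linalg.norm(g) < tol:
                break
            # Halving test in place of the MM step-size search:
            # shrink t until the penalized cost decreases sufficiently.
            t, Fx = 1.0, f(x) + mu * viol @ viol
            while True:
                x_new = x - t * g
                v_new = np.maximum(c(x_new), 0.0)
                if f(x_new) + mu * v_new @ v_new <= Fx - 0.5 * t * g @ g:
                    break
                t *= 0.5
            x = x_new
        mu *= rho                                   # tighten the penalty
    return x
```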

SABRINA: A Stochastic Subspace Majorization-Minimization Algorithm

A wide class of problems involves the minimization of a coercive and differentiable function $F$ on $\mathbb{R}^N$ whose gradient cannot be evaluated in an exact manner. In such a context, many existing convergence results from the standard gradient-based optimization literature cannot be directly applied, and robustness to errors in the gradient is not necessarily guaranteed. This work …
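
To make the setting concrete, here is a hedged sketch of one stochastic subspace MM iteration: each update minimizes a quadratic majorant of $F$ over a two-dimensional memory-gradient subspace spanned by the current (noisy) gradient estimate and the previous step. The oracle `grad_est` and the global curvature bound `L` are assumptions; SABRINA's actual construction differs in its details.

```python
import numpy as np

def stochastic_subspace_mm(grad_est, x0, L=10.0, n_iters=500, seed=None):
    """Sketch: minimize the majorant F(x) + g^T D s + (L/2) ||D s||^2
    over s, where D spans the noisy gradient and the previous update."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    d_prev = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad_est(x, rng)                        # inexact gradient oracle
        D = np.stack([-g, d_prev], axis=1)          # subspace basis (N x 2)
        H = L * (D.T @ D) + 1e-12 * np.eye(2)       # ridge handles rank deficiency
        s = np.linalg.solve(H, -D.T @ g)            # minimizer of the majorant
        d = D @ s
        x, d_prev = x + d, d
    return x
```

On the first iteration (with `d_prev = 0`) this reduces to a plain gradient step of size $1/L$; afterwards the memory direction lets the majorant minimization act as an inexpensive two-dimensional acceleration.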

A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer

We consider the problem of minimizing an objective function that is the sum of a convex function and a group sparsity-inducing regularizer. Problems that integrate such regularizers arise in modern machine learning applications, often for the purpose of obtaining models that are easier to interpret and that have higher predictive accuracy. We present a new …
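
The group sparsity-inducing regularizer $\lambda \sum_g \|x_g\|_2$ admits a closed-form proximal operator (block soft-thresholding), which a baseline proximal-gradient method uses as follows. This is a standard reference loop, not the paper's subspace acceleration, whose point is to restrict expensive smooth steps to the groups such a loop identifies as nonzero.

```python
import numpy as np

def prox_group_l2(x, groups, t_lam):
    """Block soft-thresholding: prox of t*lam * sum_g ||x_g||_2."""
    out = x.copy()
    for g in groups:                         # each g is an index array
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t_lam else (1.0 - t_lam / nrm) * x[g]
    return out

def prox_gradient(grad_f, x0, groups, lam, step, n_iters=300):
    """Baseline proximal-gradient loop for f(x) + lam * sum_g ||x_g||_2.
    A subspace acceleration would periodically fix the zero groups found
    here and run a faster smooth solve on the remaining coordinates."""
    x = x0.astype(float)
    for _ in range(n_iters):
        x = prox_group_l2(x - step * grad_f(x), groups, step * lam)
    return x
```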

A subspace-accelerated split Bregman method for sparse data recovery with joint l1-type regularizers

We propose a subspace-accelerated Bregman method for the linearly constrained minimization of functions of the form $f(u) + \tau_1 \|u\|_1 + \tau_2 \|Du\|_1$, where $f$ is a smooth convex function and $D$ represents a linear operator, e.g. a finite-difference operator, as in anisotropic Total Variation and fused-lasso regularizations. Problems of this type arise in a wide …
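
For reference, here is a hedged sketch of the underlying (unaccelerated) split Bregman iteration with the two splittings $d = u$ and $e = Du$, specialized to a quadratic data term $f(u) = \tfrac{1}{2}\|Au - b\|^2$ so that the $u$-subproblem has a closed form; the paper treats general smooth convex $f$ and adds the subspace acceleration on top.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman(A, b, D, tau1, tau2, mu=1.0, n_iters=200):
    """Split Bregman for 0.5||Au-b||^2 + tau1||u||_1 + tau2||Du||_1,
    with auxiliary variables d = u, e = Du and Bregman multipliers b1, b2."""
    n, p = A.shape[1], D.shape[0]
    u = np.zeros(n)
    d, b1 = np.zeros(n), np.zeros(n)
    e, b2 = np.zeros(p), np.zeros(p)
    M = A.T @ A + mu * np.eye(n) + mu * D.T @ D    # u-subproblem matrix
    Atb = A.T @ b
    for _ in range(n_iters):
        rhs = Atb + mu * (d - b1) + mu * D.T @ (e - b2)
        u = np.linalg.solve(M, rhs)                # closed-form u-update
        d = shrink(u + b1, tau1 / mu)              # prox for ||u||_1 split
        e = shrink(D @ u + b2, tau2 / mu)          # prox for ||Du||_1 split
        b1 += u - d                                # Bregman multiplier updates
        b2 += D @ u - e
    return u
```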

Accelerated line-search and trust-region methods

In numerical optimization, line-search and trust-region methods are two important classes of descent schemes, with well-understood global convergence properties. Here we consider “accelerated” versions of these methods, where the conventional iterate is allowed to be replaced by any point that produces at least as much decrease in the cost function as a fixed fraction of …
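
The acceptance rule is simple to state in code: a candidate produced by any accelerator (a subspace model, a heuristic extrapolation, etc.) is kept only if its decrease matches a fixed fraction of the conventional step's decrease; otherwise the method falls back to the conventional iterate, which preserves the global convergence theory. A minimal sketch, with the fraction `c` an assumed parameter:

```python
def accept_accelerated(f, x, x_conventional, x_candidate, c=0.5):
    """Keep the accelerated candidate only if it achieves at least a
    fraction c of the decrease produced by the conventional
    line-search or trust-region iterate."""
    fx = f(x)
    decrease_conv = fx - f(x_conventional)
    if fx - f(x_candidate) >= c * decrease_conv:
        return x_candidate
    return x_conventional
```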