Inexact FISTA-like Methods with Adaptive Backtracking

Accelerated proximal gradient methods have become a useful tool in large-scale convex optimization, especially for variational regularization with non-smooth priors. Most convergence analyses assume that the proximal and gradient steps can be computed exactly. However, in many practical applications these steps can only be computed inexactly, which may slow convergence or even cause the iterates to diverge.
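
As a toy illustration of the setting (not taken from the paper), the sketch below shows one exact proximal gradient step for an l1-regularized least-squares problem, where the proximal operator is the closed-form soft-thresholding map. For priors such as total variation the proximal operator has no closed form and must itself be approximated by an inner iterative solver, which is one source of the inexactness discussed here. All names and the problem instance are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (closed form, hence computable exactly)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(x, A, b, lam, step):
    """One exact proximal gradient step for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    grad = A.T @ (A @ x - b)                      # gradient of the smooth term
    return soft_threshold(x - step * grad, step * lam)
```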

To handle this issue, researchers have developed theoretical frameworks that guarantee convergence under inexact computations. For example, Bello-Cruz et al. introduced a relative error rule for inexact FISTA (Fast Iterative Shrinkage-Thresholding Algorithm); their technique applies whenever suitable stepsizes are known in advance. In the present paper, we study the convergence of inexact FISTA with line search. We develop an adaptive line search that requires no additional effort compared with monotone line search approaches. Computational performance is illustrated on an important class of problems with simulated tomographic data. Theory and experiments show that the new methods are well suited to the variational regularization of inverse problems.
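
For context, the following is a minimal sketch of a standard FISTA iteration with a monotone backtracking line search in the spirit of Beck and Teboulle; it is not the adaptive rule proposed in the paper and assumes exact proximal and gradient evaluations. The function names, signatures, and parameters are illustrative. For the l1 problem sketched above, `prox_g` would be the soft-thresholding operator and `grad_f(y)` would be `A.T @ (A @ y - b)`.

```python
import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, n_iter=100):
    """FISTA with monotone backtracking (Beck-Teboulle style); illustrative only.

    f, grad_f : value and gradient of the smooth part
    prox_g    : (v, t) -> prox of t*g at v (in practice this may be inexact)
    L0, eta   : initial Lipschitz estimate and backtracking growth factor
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    L = L0
    for _ in range(n_iter):
        # Backtracking: increase L until the quadratic upper bound holds at y.
        while True:
            g = grad_f(y)
            x_new = prox_g(y - g / L, 1.0 / L)
            diff = x_new - y
            if f(x_new) <= f(y) + g @ diff + 0.5 * L * (diff @ diff):
                break
            L *= eta
        # Nesterov momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```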
