A subspace-accelerated split Bregman method for sparse data recovery with joint l1-type regularizers

We propose a subspace-accelerated Bregman method for the linearly constrained minimization of functions of the form f(u) + tau_1 ||u||_1 + tau_2 ||Du||_1, where f is a smooth convex function and D represents a linear operator, e.g., a finite difference operator, as in anisotropic Total Variation and fused-lasso regularizations. Problems of this type arise in a wide … Read more
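For concreteness, the problem class described above can be written as follows, with the linear constraint denoted generically by Au = b (an assumption on its form made only for this sketch):

```latex
\min_{u} \; f(u) + \tau_1 \|u\|_1 + \tau_2 \|Du\|_1
\quad \text{subject to} \quad Au = b
```

Taking D to be a discrete gradient (finite-difference) operator recovers the anisotropic Total Variation penalty, and keeping both l1 terms gives a fused-lasso-type regularizer.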

A Distributed Quasi-Newton Algorithm for Primal and Dual Regularized Empirical Risk Minimization

We propose a communication- and computation-efficient distributed optimization algorithm using second-order information for solving empirical risk minimization (ERM) problems with a nonsmooth regularization term. Our algorithm is applicable to both the primal and the dual ERM problem. Current second-order and quasi-Newton methods for this problem either do not work well in the distributed setting or … Read more
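To fix ideas, here is a minimal sketch of the kind of regularized ERM objective such methods target (logistic loss plus an l1 regularizer, with the data partitioned by samples across nodes). The function names and loss choice are illustrative; this shows the problem being solved, not the proposed distributed quasi-Newton algorithm.

```python
import numpy as np

def local_logistic_loss_grad(w, X, y):
    """Smooth part of the ERM objective on one node's data shard
    (logistic loss with labels y in {-1, +1}); returns the local
    sum of losses and its gradient."""
    m = -y * (X @ w)
    loss = np.logaddexp(0.0, m).sum()          # sum_i log(1 + exp(-y_i x_i^T w))
    grad = X.T @ (-y / (1.0 + np.exp(-m)))     # sum of per-sample gradients
    return loss, grad

def primal_objective(w, shards, lam, n_total):
    """Regularized ERM: (1/n) * sum_i loss_i(w) + lam * ||w||_1.
    In a distributed run the per-shard sums would be aggregated with
    an allreduce; here we simply loop over the shards."""
    loss = sum(local_logistic_loss_grad(w, X, y)[0] for X, y in shards)
    return loss / n_total + lam * np.abs(w).sum()
```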

A robust method based on LOVO functions for solving least squares problems

The robust adjustment of nonlinear models to data is considered in this paper. When data come from real experiments, measurement errors may produce discrepant values (outliers), which should be ignored when fitting the models. This work presents a Low Order-Value Optimization (LOVO) version of the Levenberg-Marquardt algorithm, which is … Read more
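A minimal sketch of the LOVO idea, assuming the common formulation in which only the p smallest squared residuals enter the objective (the function and argument names are illustrative):

```python
import numpy as np

def lovo_value(residuals, p):
    """Low Order-Value objective: sum of the p smallest squared residuals,
    so the n - p largest residuals (likely outliers) are ignored."""
    r2 = np.sort(np.asarray(residuals) ** 2)
    return r2[:p].sum()

# A gross outlier in the last residual does not affect the value:
print(lovo_value([0.1, -0.2, 0.05, 50.0], p=3))  # 0.0525
```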

Data-compatibility of algorithms

The data-compatibility approach to constrained optimization, proposed here, seeks a point that is “close enough” to the solution set and whose target function value is “close enough” to the constrained minimum value. These notions can replace the analysis of asymptotic convergence to a solution point of infinite sequences generated by specific algorithms. We consider a … Read more
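One plausible way to formalize the two “close enough” requirements (an illustrative reading, not necessarily the paper's exact definition): for tolerances ε, δ ≥ 0, a point x is accepted if

```latex
\operatorname{dist}(x, C) \le \varepsilon
\qquad \text{and} \qquad
f(x) \le \min_{y \in C} f(y) + \delta
```

where C denotes the constraint (solution) set and f the target function.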

Dynamic string-averaging CQ-methods for the split feasibility problem with percentage violation constraints arising in radiation therapy treatment planning

In this paper we study a feasibility-seeking problem with percentage violation constraints. These are additional constraints that are appended to an existing family of constraints, which single out certain subsets of the existing constraints and declare that up to a specified fraction of the number of constraints in each subset is allowed to be … Read more
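A small sketch of what checking such percentage-violation constraints could look like; the subset structure, tolerance, and allowed fractions below are illustrative placeholders, not the paper's formulation:

```python
import numpy as np

def percentage_violation_ok(residuals, subsets, alphas, tol=0.0):
    """For each designated subset of constraints (written as g_i(x) <= 0,
    with residuals[i] = g_i(x)), at most a fraction alpha of the
    constraints in that subset may be violated."""
    for idx, alpha in zip(subsets, alphas):
        r = np.asarray(residuals)[idx]
        violated = np.count_nonzero(r > tol)
        if violated > alpha * len(r):
            return False
    return True

residuals = [-1.0, 0.2, -0.3, 0.5, -0.1]
subsets = [[0, 1, 2], [3, 4]]   # two subsets of the existing constraints
alphas = [0.4, 0.5]             # allowed violation fractions per subset
print(percentage_violation_ok(residuals, subsets, alphas))  # True
```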

A search direction inspired primal-dual method for saddle point problems

The primal-dual hybrid gradient algorithm (PDHG), which is indeed the Arrow-Hurwicz method, has been widely used in image processing. However, the convergence of PDHG has been established in the literature only under some restrictive conditions, and a proof is still missing for the case without extra constraints. In this paper, from a perspective of the variational … Read more
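For background, the classical Arrow-Hurwicz / PDHG iteration for the bilinear saddle-point problem min_x max_y g(x) + <Kx, y> - h*(y) can be sketched as below. The proximal maps, step sizes, and toy problem are placeholders; this is the textbook scheme the abstract refers to, not the new method proposed in the paper.

```python
import numpy as np

def arrow_hurwicz(K, prox_g, prox_h_conj, tau, sigma, x0, y0, iters=200):
    """Arrow-Hurwicz iteration (PDHG without the extrapolation step) for
    min_x max_y  g(x) + <K x, y> - h*(y)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = prox_g(x - tau * (K.T @ y), tau)          # primal prox step
        y = prox_h_conj(y + sigma * (K @ x), sigma)   # dual prox step
    return x, y

# Toy usage: g = (1/2)||x - b||^2, h* = indicator of the unit l-inf ball,
# so the primal problem is min_x (1/2)||x - b||^2 + ||x||_1.
b = np.array([1.0, -2.0, 0.5])
K = np.eye(3)
prox_g = lambda v, t: (v + t * b) / (1.0 + t)
prox_h_conj = lambda v, s: np.clip(v, -1.0, 1.0)
x, y = arrow_hurwicz(K, prox_g, prox_h_conj, 0.5, 0.5, np.zeros(3), np.zeros(3))
```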

Deriving Solution Value Bounds from the ADMM

This short paper describes a simple subgradient-based technique for deriving bounds on the optimal solution value when using the ADMM to solve convex optimization problems. The technique requires a bound on the magnitude of some optimal solution vector, but is otherwise completely general. Some computational examples using LASSO problems demonstrate that the technique can produce … Read more
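The kind of bound such a technique rests on can be sketched with the standard convexity argument (our illustration, not necessarily the paper's exact derivation): if g is a subgradient of the convex objective F at the current iterate x, and some optimal solution x* satisfies ||x*||_2 <= R, then

```latex
F(x^\star) \;\ge\; F(x) + g^{\top}(x^\star - x)
\;\ge\; F(x) - \|g\|_2 \,\|x^\star - x\|_2
\;\ge\; F(x) - \|g\|_2 \,\bigl(R + \|x\|_2\bigr)
```

so F(x) - ||g||_2 (R + ||x||_2), computable from any iterate at which a subgradient is available, is a valid lower bound on the optimal value.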

An inexact augmented Lagrangian method for nonsmooth optimization on Riemannian manifold

We consider a nonsmooth optimization problem on a Riemannian manifold, whose objective function is the sum of a differentiable component and a nonsmooth convex function. We propose a manifold inexact augmented Lagrangian method (MIALM) for this problem. The problem is reformulated into a separable form. By utilizing the Moreau envelope, we obtain a smoothing subproblem … Read more
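For reference, the Moreau envelope of the nonsmooth convex term h with parameter μ > 0 is the standard object

```latex
M_{\mu} h(x) \;=\; \min_{y}\; h(y) + \frac{1}{2\mu}\,\|y - x\|^2
```

which is continuously differentiable with gradient (x - prox_{μh}(x))/μ; replacing h by M_{μ}h therefore yields a smooth subproblem. How exactly the envelope enters the MIALM subproblem is only indicated in the truncated abstract above.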

Superiorization vs. Accelerated Convex Optimization: The Superiorized/Regularized Least-Squares Case

In this paper we study both superiorization and optimization approaches to the reconstruction of superiorized/regularized solutions of underdetermined systems of linear equations with nonnegativity bounds on the variables. Specifically, we study a (smoothed) total variation regularized least-squares problem with nonnegativity constraints. We consider two approaches: (a) a superiorization approach that, in contrast to … Read more
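A concrete instance of the regularized model in question, with one standard choice of TV smoothing (the exact smoothing used in the paper may differ):

```latex
\min_{x \ge 0} \; \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda\, \mathrm{TV}_{\epsilon}(x),
\qquad
\mathrm{TV}_{\epsilon}(x) = \sum_{i} \sqrt{\;\|(\nabla x)_i\|_2^2 + \epsilon^2\,}
```

where Ax = b is the underdetermined linear system and ε > 0 makes the TV term differentiable.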

Understanding Limitation of Two Symmetrized Orders by Worst-case Complexity

It was recently found that the standard version of multi-block cyclic ADMM diverges. Interestingly, Gaussian Back Substitution ADMM (GBS-ADMM) and symmetric Gauss-Seidel ADMM (sGS-ADMM) do not have the divergence issue. Therefore, it seems that symmetrization can improve the performance of the classical cyclic order. In another recent work, cyclic CD (Coordinate Descent) was shown to … Read more