A New Preconditioning Approach for an Interior Point-Proximal Method of Multipliers for Linear and Convex Quadratic Programming

In this paper, we address the efficient numerical solution of linear and quadratic programming problems, often of large scale. With this aim, we devise an infeasible interior point method, blended with the proximal method of multipliers, which in turn results in a primal-dual regularized interior point method. Application of this method gives rise to a …
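For orientation, a primal-dual regularized interior point method of this kind typically requires, at each iteration, the solution of a saddle-point linear system whose extra diagonal blocks come from the proximal terms. A sketch of its structure, with notation assumed here ($Q$ the Hessian, $A$ the constraint matrix, $\Theta$ the usual IPM diagonal scaling, $\rho, \delta > 0$ the primal and dual proximal parameters):

\[
\begin{bmatrix} -(Q + \Theta^{-1} + \rho I) & A^T \\ A & \delta I \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} \xi_d \\ \xi_p \end{bmatrix},
\]

where the $\rho I$ and $\delta I$ blocks, contributed by the proximal method of multipliers, keep the matrix uniformly invertible; preconditioning systems of this kind is the subject of the paper.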

A subspace-accelerated split Bregman method for sparse data recovery with joint l1-type regularizers

We propose a subspace-accelerated Bregman method for the linearly constrained minimization of functions of the form $f(u) + \tau_1\|u\|_1 + \tau_2\|Du\|_1$, where $f$ is a smooth convex function and $D$ represents a linear operator, e.g., a finite difference operator, as in anisotropic Total Variation and fused-lasso regularizations. Problems of this type arise in a wide …
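To make the objective concrete, here is a minimal sketch of a plain (non-accelerated) split Bregman iteration for this problem class, assuming $f(u) = \tfrac12\|Au - y\|_2^2$ and small dense matrices; the helper names and the penalty parameter lam are our own illustration, not the authors' subspace-accelerated method:

```python
import numpy as np

def shrink(x, t):
    # Soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(A, y, D, tau1, tau2, lam=1.0, iters=200):
    # Solves min_u 0.5*||A u - y||^2 + tau1*||u||_1 + tau2*||D u||_1
    # via splitting d1 ~ u, d2 ~ D u with Bregman/dual variables b1, b2.
    n = A.shape[1]
    d1, b1 = np.zeros(n), np.zeros(n)
    d2, b2 = np.zeros(D.shape[0]), np.zeros(D.shape[0])
    # Quadratic u-subproblem matrix (factor once in a serious implementation)
    M = A.T @ A + lam * (np.eye(n) + D.T @ D)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = A.T @ y + lam * (d1 - b1) + lam * D.T @ (d2 - b2)
        u = np.linalg.solve(M, rhs)
        d1 = shrink(u + b1, tau1 / lam)       # l1 block on u itself
        d2 = shrink(D @ u + b2, tau2 / lam)   # l1 block on D u (TV/fused lasso)
        b1 += u - d1
        b2 += D @ u - d2
    return u
```

With D taken as a first-order finite-difference matrix (e.g., np.diff(np.eye(n), axis=0)), this reproduces the anisotropic TV / fused-lasso setting mentioned above.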

Data-compatibility of algorithms

The data-compatibility approach to constrained optimization, proposed here, seeks a point that is “close enough” to the solution set and whose target function value is “close enough” to the constrained minimum value. These notions can replace the analysis of asymptotic convergence to a solution point of infinite sequences generated by specific algorithms. We consider a …
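In symbols (our paraphrase; $S$, $f$, $\varepsilon$, $\delta$ are our own notation): a point $x$ is data-compatible when

\[
\operatorname{dist}(x, S) \le \varepsilon
\quad\text{and}\quad
f(x) \le \min_{s \in S} f(s) + \delta
\]

for the solution set $S$ of the constrained problem and prescribed tolerances $\varepsilon, \delta > 0$.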

Dynamic string-averaging CQ-methods for the split feasibility problem with percentage violation constraints arising in radiation therapy treatment planning

In this paper we study a feasibility-seeking problem with percentage violation constraints. These are additional constraints that are appended to an existing family of constraints, which single out certain subsets of the existing constraints and declare that up to a specified fraction of the number of constraints in each subset is allowed to be …
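For context, the underlying split feasibility problem asks for a point $x \in C$ with $Ax \in Q$, and the classical CQ iteration on which such string-averaging variants build is

\[
x^{k+1} = P_C\!\left( x^k - \gamma\, A^T (I - P_Q) A x^k \right), \qquad 0 < \gamma < \tfrac{2}{\|A\|^2},
\]

with $P_C$, $P_Q$ the projections onto the two sets. (This is the standard CQ method; the paper's dynamic string-averaging versions and the percentage-violation handling go beyond it.)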

A search direction inspired primal-dual method for saddle point problems

The primal-dual hybrid gradient algorithm (PDHG), which coincides with the classical Arrow-Hurwicz method, has been widely used in image processing. However, the convergence of PDHG was established only under some restrictive conditions in the literature, and it is still missing for the case without extra constraints. In this paper, from the perspective of the variational …
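For reference (notation assumed here), for the saddle-point problem $\min_x \max_y\; g(x) + \langle Kx, y\rangle - f^*(y)$, the Arrow-Hurwicz / PDHG iteration alternates two proximal steps:

\[
x^{k+1} = \operatorname{prox}_{\tau g}\!\left(x^k - \tau K^T y^k\right), \qquad
y^{k+1} = \operatorname{prox}_{\sigma f^*}\!\left(y^k + \sigma K x^{k+1}\right),
\]

without the extrapolation step $\bar{x}^{k+1} = 2x^{k+1} - x^k$ that distinguishes the Chambolle-Pock variant, whose convergence theory is more complete.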

Deriving Solution Value Bounds from the ADMM

This short paper describes a simple subgradient-based technique for deriving bounds on the optimal solution value when using the ADMM to solve convex optimization problems. The technique requires a bound on the magnitude of some optimal solution vector, but is otherwise completely general. Some computational examples using LASSO problems demonstrate that the technique can produce …
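A one-line sketch of how such a bound can arise (our illustration, not necessarily the paper's exact derivation): if $g^k \in \partial F(x^k)$ at the current ADMM iterate $x^k$ and some optimal $x^\star$ satisfies $\|x^\star\| \le R$, then convexity gives

\[
F(x^\star) \;\ge\; F(x^k) + \langle g^k, x^\star - x^k\rangle \;\ge\; F(x^k) - \|g^k\|\,\big(R + \|x^k\|\big),
\]

a computable lower bound to pair with the upper bound $F(x^k)$.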

Superiorization vs. Accelerated Convex Optimization: The Superiorized/Regularized Least-Squares Case

In this paper we conduct a study of both superiorization and optimization approaches for the problem of reconstructing superiorized/regularized solutions to underdetermined systems of linear equations with nonnegativity bounds on the variables. Specifically, we study a (smoothed) total variation regularized least-squares problem with nonnegativity constraints. We consider two approaches: (a) a superiorization approach that, in contrast to …
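Concretely, the model problem is of the form ($\beta$ and the smoothing parameter $\varepsilon$ are our notation):

\[
\min_{x \ge 0} \;\tfrac12\|Ax - b\|_2^2 + \beta\, \mathrm{TV}_\varepsilon(x),
\]

where $\mathrm{TV}_\varepsilon$ is a smoothed total variation term that makes the regularizer differentiable.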

Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

We consider stochastic zero-order optimization problems, which arise in settings ranging from simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a common random number framework. We employ modified versions of a norm test and an inner product quasi-Newton test …
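A minimal sketch of the finite-difference-with-common-random-numbers idea, assuming a simulator with the (hypothetical) signature F(x, rng) that draws all of its noise from the supplied generator; the adaptive sampling and quasi-Newton machinery of the paper are not shown:

```python
import numpy as np

def crn_fd_gradient(F, x, h=1e-3, batch=32, seed=0):
    # Forward-difference estimate of grad E_xi[ F(x, xi) ].
    # Common random numbers: the same seeded stream is reused at x
    # and at x + h*e_i, so the simulation noise largely cancels.
    n = x.size
    g = np.zeros(n)
    seeds = np.random.default_rng(seed).integers(0, 2**31 - 1, size=batch)
    for s in seeds:
        fx = F(x, np.random.default_rng(s))      # base evaluation, stream s
        for i in range(n):
            xp = x.copy()
            xp[i] += h
            g[i] += (F(xp, np.random.default_rng(s)) - fx) / h  # same stream s
    return g / batch
```

For example, with F = lambda x, rng: float(x @ x) + 0.1 * rng.standard_normal(), the additive noise is identical at both evaluation points of each difference quotient, so it cancels exactly.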

Adaptive Gradient Descent without Descent

We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don’t increase the stepsize too fast and 2) don’t overstep the local curvature. No need for function values, no line search, no information about the function except for the gradients. By following these rules, you get a method adaptive …
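A minimal sketch of the two rules as a stepsize update (our paraphrase; the exact constants in the paper may differ, so treat these as illustrative):

```python
import numpy as np

def adaptive_gd(grad, x0, lam=1e-10, iters=1000):
    # Rule 1: the stepsize may grow by at most a factor sqrt(1 + theta),
    #         where theta tracks the ratio of successive stepsizes.
    # Rule 2: the stepsize must not exceed the local inverse-curvature
    #         estimate ||x_k - x_{k-1}|| / (2 ||g_k - g_{k-1}||).
    # Only gradients are used: no function values, no line search.
    x_prev, g_prev = x0, grad(x0)
    x = x0 - lam * g_prev          # tiny bootstrap step
    theta = np.inf                 # lets rule 2 set the first real step
    for _ in range(iters):
        g = grad(x)
        diff = np.linalg.norm(g - g_prev)
        if diff == 0.0:            # gradients stalled: (near-)stationary
            break
        curv = np.linalg.norm(x - x_prev) / (2.0 * diff)
        lam_new = min(np.sqrt(1.0 + theta) * lam, curv)
        x_prev, g_prev = x, g
        x = x - lam_new * g
        theta, lam = lam_new / lam, lam_new
    return x
```

Try it on a quadratic, e.g. grad = lambda x: Q @ x - b: rule 2 keeps the stepsize near the local inverse Lipschitz constant without any tuning.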

An Oblivious Ellipsoid Algorithm for Solving a System of (In)Feasible Linear Inequalities

The ellipsoid algorithm is a fundamental algorithm for computing a solution to a system of m linear inequalities in n variables (P) when its set of solutions has positive volume. However, when (P) is infeasible, the ellipsoid algorithm has no mechanism for proving that (P) is infeasible. This is in contrast to the other two …
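For reference, the classical central-cut update that any ellipsoid variant builds on is sketched below (standard textbook formulas, not the oblivious mechanism introduced in the paper); the ellipsoid is $\{x : (x-c)^T P^{-1}(x-c) \le 1\}$ and $a^T x \le b$ is an inequality violated at the center $c$:

```python
import numpy as np

def ellipsoid_step(c, P, a):
    # One central-cut update of the ellipsoid {x: (x-c)^T P^{-1} (x-c) <= 1}
    # against the half-space a^T x <= a^T c (requires n >= 2, P pos. definite).
    n = c.size
    Pa = P @ a
    aPa = a @ Pa                           # a^T P a > 0
    c_new = c - Pa / ((n + 1) * np.sqrt(aPa))
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / aPa)
    return c_new, P_new
```

Each such step shrinks the ellipsoid volume by a factor of roughly $e^{-1/(2(n+1))}$, which is what yields the polynomial iteration bound when (P) has positive volume.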