Convergence Results for Primal-Dual Algorithms in the Presence of Adjoint Mismatch

Most optimization problems arising in imaging science involve high-dimensional linear operators and their adjoints. In implementations of these operators, approximations may be introduced for practical reasons (e.g., memory limitations, computational cost, convergence speed), leading to an adjoint mismatch. This occurs, for instance, in the X-ray tomographic inverse problems found in Computed Tomography (CT), where the …
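A rough illustration of where such a mismatch enters a standard primal-dual (PDHG/Chambolle-Pock) iteration is sketched below; this is not the paper's algorithm or notation, and K, V, prox_f, and prox_g_conj are placeholder names for the forward operator, the surrogate adjoint, and the two proximal maps.

```python
import numpy as np

def pdhg_with_mismatch(K, V, prox_f, prox_g_conj, x0, y0, tau, sigma, n_iter=200):
    """PDHG iteration in which the true adjoint K^T is replaced by a
    surrogate operator V (the 'adjoint mismatch'). K and V are callables;
    prox_f(x, tau) and prox_g_conj(y, sigma) are the proximal maps of the
    primal term and of the conjugate of the data term."""
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    x_bar = x.copy()
    for _ in range(n_iter):
        # Dual ascent step uses the exact forward operator K.
        y = prox_g_conj(y + sigma * K(x_bar), sigma)
        # Primal descent step uses the mismatched adjoint V instead of K^T.
        x_new = prox_f(x - tau * V(y), tau)
        # Over-relaxation (extrapolation) on the primal variable.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x, y
```

In CT, for example, K might be a ray-driven projector while V is a pixel-driven backprojector; once V differs from K^T, the standard convergence guarantees no longer apply directly, which is the regime analyzed here.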

A family of accelerated inexact augmented Lagrangian methods with applications to image restoration

In this paper, we focus on a class of convex optimization problems subject to equality or inequality constraints and develop an Accelerated Inexact Augmented Lagrangian Method (AI-ALM). Different relative error criteria are designed to solve the subproblems of AI-ALM inexactly, and a widely used relaxation step is exploited to accelerate convergence. By a …
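A minimal sketch of the two ingredients named above, an inexactly solved subproblem governed by a relative error rule and a relaxed multiplier update, is given below for the model problem min f(x) s.t. Ax = b. It is an illustration under these assumptions, not the paper's AI-ALM; all names and tolerances are placeholders.

```python
import numpy as np

def inexact_alm(f_grad, A, b, x0, rho=1.0, alpha=1.5, eta=0.5,
                outer_iters=50, inner_iters=1000, step=1e-2):
    """Illustrative inexact augmented Lagrangian loop for
        min f(x)  subject to  A x = b.

    The subproblem min_x f(x) + <lam, Ax - b> + (rho/2)||Ax - b||^2 is
    solved only approximately: the inner gradient loop stops once the
    subproblem gradient is small *relative* to the feasibility residual.
    alpha in (0, 2) plays the role of a relaxation factor on the
    multiplier update.
    """
    x = np.asarray(x0, dtype=float).copy()
    lam = np.zeros(A.shape[0])
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            r = A @ x - b
            g = f_grad(x) + A.T @ (lam + rho * r)
            # Relative error criterion: subproblem accuracy tied to residual.
            if np.linalg.norm(g) <= eta * max(np.linalg.norm(r), 1e-12):
                break
            x = x - step * g
        # Relaxed multiplier (dual) update.
        lam = lam + alpha * rho * (A @ x - b)
    return x, lam
```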

An Asynchronous Proximal Bundle Method

We develop a fully asynchronous proximal bundle method for solving non-smooth, convex optimization problems. The algorithm can be used as a drop-in replacement for classic bundle methods: the objective need only be given by a first-order oracle returning function values and subgradients. The algorithm allows for an arbitrary number of master problem processes computing …
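For reference, a minimal serial proximal bundle skeleton is sketched below to show the oracle interface the abstract refers to; the asynchronous master/worker coordination that is the paper's contribution is not reproduced. The master problem is solved here via SciPy's SLSQP on the epigraph reformulation, purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_bundle(oracle, x0, t=1.0, m=0.1, max_iter=50, tol=1e-6):
    """Serial proximal bundle skeleton for min f(x), f convex nonsmooth.

    oracle(x) returns (f(x), g) with g a subgradient (NumPy array): the
    first-order oracle an asynchronous variant would query in parallel.
    """
    n = len(x0)
    x_hat = np.asarray(x0, dtype=float)        # stability center
    f_hat, g = oracle(x_hat)
    bundle = [(x_hat.copy(), f_hat, g)]        # cutting-plane model
    for _ in range(max_iter):
        # Master problem: min_x model(x) + (1/2t)||x - x_hat||^2,
        # written in epigraph form over z = (x, v) with v the model value.
        def obj(z):
            return z[n] + np.dot(z[:n] - x_hat, z[:n] - x_hat) / (2.0 * t)
        cons = [{'type': 'ineq',
                 'fun': (lambda z, zi=zi, fi=fi, gi=gi:
                         z[n] - fi - gi @ (z[:n] - zi))}
                for (zi, fi, gi) in bundle]
        res = minimize(obj, np.append(x_hat, f_hat),
                       constraints=cons, method='SLSQP')
        x_trial, v = res.x[:n], res.x[n]
        pred = f_hat - (v + np.dot(x_trial - x_hat, x_trial - x_hat) / (2.0 * t))
        if pred < tol:                          # predicted decrease is tiny
            break
        f_trial, g_trial = oracle(x_trial)
        bundle.append((x_trial.copy(), f_trial, g_trial))
        if f_hat - f_trial >= m * pred:         # serious step, move the center
            x_hat, f_hat = x_trial, f_trial
    return x_hat, f_hat
```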

New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that allows one to utilize stochastic methods for problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, …
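The sketch below shows one common way such a penalty reformulation is paired with a stochastic method: a quadratic penalty on the constraint violations, with the penalty gradient estimated from a random minibatch of constraints. This is a generic illustration under these assumptions, not the paper's specific methods or bounds.

```python
import numpy as np

def penalized_sgd(grad_f, A, b, x0, mu=10.0, lr=1e-3, n_iter=10000,
                  batch=32, rng=None):
    """Quadratic-penalty reformulation of
        min f(x)  subject to  A x <= b,
    i.e.  min f(x) + (mu/2) * sum_i max(0, a_i^T x - b_i)^2,
    with the penalty sum estimated from a random minibatch of constraints
    so that problems with very many constraints stay cheap per iteration.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    n_cons = A.shape[0]
    for _ in range(n_iter):
        idx = rng.choice(n_cons, size=min(batch, n_cons), replace=False)
        viol = np.maximum(A[idx] @ x - b[idx], 0.0)
        # Unbiased estimate of the penalty gradient (rescale by n_cons/batch).
        g_pen = (n_cons / len(idx)) * mu * (A[idx].T @ viol)
        x -= lr * (grad_f(x) + g_pen)
    return x
```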

A nested primal–dual FISTA-like scheme for composite convex optimization problems

We propose a nested primal–dual algorithm with extrapolation on the primal variable, suited for minimizing the sum of two convex functions, one of which is continuously differentiable. The proposed algorithm can be interpreted as an inexact inertial forward–backward algorithm equipped with a prescribed number of inner primal–dual iterations for the proximal evaluation and a “warm-start” …
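A minimal sketch of this nesting, assuming the concrete instance min_x h(x) + ||Kx||_1 with h smooth (not necessarily the paper's setting), is given below: a FISTA-like outer loop whose proximal step is approximated by a fixed number of inner dual iterations, warm-started across outer iterations.

```python
import numpy as np

def nested_fista(grad_h, L_h, K, x0, tau=None, inner=10, outer=200):
    """Illustrative FISTA-like scheme for min_x h(x) + ||K x||_1, h smooth
    with L_h-Lipschitz gradient, K a dense matrix.

    The prox of g(x) = ||Kx||_1 has no closed form, so it is approximated
    by a fixed number of inner dual projected-gradient iterations,
    warm-started from the previous outer iteration's dual variable.
    """
    tau = tau or 1.0 / L_h
    x = np.asarray(x0, dtype=float).copy()
    z, t = x.copy(), 1.0
    y = np.zeros(K.shape[0])                       # warm-started dual variable
    Knorm2 = np.linalg.norm(K, 2) ** 2
    for _ in range(outer):
        v = z - tau * grad_h(z)                    # forward (gradient) step
        for _ in range(inner):                     # inexact prox of tau*||K.||_1
            y = y + (1.0 / (tau * Knorm2)) * (K @ (v - tau * (K.T @ y)))
            y = np.clip(y, -1.0, 1.0)              # project onto ||y||_inf <= 1
        x_new = v - tau * (K.T @ y)                # approximate prox output
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # inertial extrapolation
        x, t = x_new, t_new
    return x
```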

Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization

For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable and the number of cutting planes invoked by traditional remedies is …

A New Insight on Augmented Lagrangian Method with Applications in Machine Learning

Motivated by the work [He and Yuan, Balanced augmented Lagrangian method for convex programming, arXiv:2108.08554v1, 2021], a novel augmented Lagrangian method with a relaxation step is proposed for solving a family of convex optimization problems subject to equality or inequality constraints. This new method is then extended to solve multi-block separable convex optimization problems, and …

Accelerated Stochastic Peaceman-Rachford Method for Empirical Risk Minimization

This work is devoted to studying an Accelerated Stochastic Peaceman-Rachford Splitting Method (AS-PRSM) for solving a family of structured empirical risk minimization problems. The objective function to be optimized is the sum of a possibly nonsmooth convex function and a finite sum of smooth convex component functions. The smooth subproblem in AS-PRSM is solved by a stochastic gradient method using variance reduction …
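The sketch below illustrates the general shape of such a scheme for min (1/n) sum_i f_i(x) + lam*||z||_1 s.t. x = z: a Peaceman-Rachford splitting (which, unlike ADMM, updates the multiplier twice per iteration) whose smooth subproblem is replaced by SVRG-style variance-reduced stochastic gradient steps. All names and parameter choices are assumptions, not the paper's AS-PRSM.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (used here as the prox of g)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def stochastic_prsm(grad_i, n, x0, lam=0.1, beta=1.0, alpha=0.9,
                    step=1e-2, epochs=30, batch=16, rng=None):
    """Illustrative stochastic Peaceman-Rachford splitting for
        min (1/n) sum_i f_i(x) + lam ||z||_1   s.t.  x = z.

    grad_i(i, x) returns grad f_i(x). alpha in (0, 1) relaxes the two
    multiplier updates; u is the scaled multiplier.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    z, u = x.copy(), np.zeros_like(x)
    for _ in range(epochs):
        # Snapshot full gradient for variance reduction (SVRG anchor).
        mu = np.mean([grad_i(i, x) for i in range(n)], axis=0)
        x_snap = x.copy()
        for _ in range(max(n // batch, 1)):
            idx = rng.choice(n, size=min(batch, n), replace=False)
            g = np.mean([grad_i(i, x) - grad_i(i, x_snap) for i in idx],
                        axis=0) + mu
            # One variance-reduced step on the x-subproblem of
            #   f(x) + (beta/2)||x - z + u||^2.
            x -= step * (g + beta * (x - z + u))
        u += alpha * (x - z)                       # first multiplier update
        z = soft_threshold(x + u, lam / beta)      # z-subproblem (prox of g)
        u += alpha * (x - z)                       # second multiplier update
    return x
```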

A Homogeneous Predictor-Corrector Algorithm for Stochastic Nonsymmetric Convex Conic Optimization With Discrete Support

We consider a stochastic convex optimization problem over nonsymmetric cones with discrete support. To the best of our knowledge, this class of optimization problems has not been studied before. Using a logarithmically homogeneous self-concordant barrier function, we present a homogeneous predictor-corrector interior-point algorithm for solving stochastic nonsymmetric conic optimization problems. We also derive an iteration bound for the proposed algorithm. …

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient

We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplication. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized …
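To make the "core operation is matrix-vector multiplication" point concrete, here is a bare-bones PDHG iteration for the LP saddle point min_{x>=0} max_y c^T x + y^T (b - Ax); this is only the textbook starting point, as PDLP itself adds adaptive step sizes, restarts, presolve, and other enhancements described in the paper.

```python
import numpy as np

def pdhg_lp(c, A, b, n_iter=5000):
    """Minimal PDHG sketch for the LP
        min c^T x   s.t.  A x = b,  x >= 0.
    The only operations involving A are products A @ v and A.T @ w, which
    is what lets this style of method scale to very large instances.
    """
    # Step sizes satisfying tau * sigma * ||A||^2 < 1 (a simple safe choice).
    nrm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / nrm
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # projected primal step
        y = y + sigma * (b - A @ (2.0 * x_new - x))       # extrapolated dual step
        x = x_new
    return x, y
```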