Convergence Results for Primal-Dual Algorithms in the Presence of Adjoint Mismatch

Most optimization problems arising in imaging science involve high-dimensional linear operators and their adjoints. In implementations of these operators, approximations may be introduced for various practical reasons (e.g., memory limitations, computational cost, convergence speed), leading to an adjoint mismatch. This occurs in the X-ray tomographic inverse problems found in Computed Tomography (CT), where the … Read more
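
The setting can be made concrete with a small sketch: a Chambolle-Pock-style primal-dual loop in which the dual step uses the true operator K while the primal step uses a surrogate Kt in place of the exact adjoint K.T. All names here (K, Kt, prox_f, prox_g_conj, tau, sigma) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def mismatched_primal_dual(K, Kt, prox_f, prox_g_conj, x0, tau, sigma, n_iter=100):
    """Primal-dual iterations with an adjoint mismatch: Kt approximates K.T."""
    x = x0.copy()
    x_bar = x0.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        y = prox_g_conj(y + sigma * (K @ x_bar), sigma)   # dual step uses the true K
        x_new = prox_f(x - tau * (Kt @ y), tau)           # primal step uses the surrogate Kt
        x_bar = 2 * x_new - x                             # extrapolation
        x = x_new
    return x
```

When Kt equals K.T this reduces to the standard matched iteration; the convergence question raised above is what survives when it does not.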

A family of accelerated inexact augmented Lagrangian methods with applications to image restoration

In this paper, we focus on a class of convex optimization problems subject to equality or inequality constraints and develop an Accelerated Inexact Augmented Lagrangian Method (AI-ALM). Different relative error criteria are designed to solve the subproblem of AI-ALM inexactly, and the widely used relaxation step is exploited to accelerate convergence. By a … Read more
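
As a rough illustration of the augmented Lagrangian structure for an equality-constrained problem min f(x) s.t. Ax = b, here is a minimal sketch; the inexact subproblem solve is abstracted into a user-supplied callable, and the relaxation factor alpha is an assumed stand-in for the relaxation step mentioned above.

```python
import numpy as np

def inexact_alm(solve_subproblem, A, b, x0, rho=1.0, alpha=1.5, n_iter=50):
    """Inexact augmented Lagrangian loop for min f(x) s.t. Ax = b.

    solve_subproblem(x, lam, rho) approximately minimizes the augmented
    Lagrangian f(x) + lam @ (A x - b) + (rho / 2) * ||A x - b||**2 in x,
    e.g. to a relative error tolerance.
    """
    x = x0.copy()
    lam = np.zeros(A.shape[0])
    for _ in range(n_iter):
        x = solve_subproblem(x, lam, rho)          # inexact inner solve
        lam = lam + alpha * rho * (A @ x - b)      # relaxed multiplier update, alpha in (0, 2)
    return x, lam
```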

Spectral Projected Subgradient Method for Nonsmooth Convex Optimization Problems

We consider constrained optimization problems with a nonsmooth objective function in the form of a mathematical expectation. The Sample Average Approximation (SAA) is used to estimate the objective function, and a variable sample size strategy is employed. The proposed algorithm combines an SAA subgradient with the spectral coefficient in order to provide a suitable direction which improves … Read more
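
A minimal sketch of how such a step might look, assuming a Barzilai-Borwein-type spectral coefficient and a user-supplied projection; subgrad_batch, project, the sample schedule, and the safeguards are hypothetical names and defaults, not the paper's rule.

```python
import numpy as np

def spectral_projected_subgradient(subgrad_batch, project, x0, sample_sizes, eta=1e-4):
    """SAA subgradient steps scaled by a spectral coefficient, then projected."""
    x = x0.copy()
    g_old = None
    s_old = None
    for n_k in sample_sizes:                 # variable (typically growing) sample size
        g = subgrad_batch(x, n_k)            # SAA subgradient from n_k samples
        if g_old is None:
            step = eta
        else:
            y = g - g_old
            bb = float(s_old @ s_old) / max(float(s_old @ y), 1e-12)
            step = min(max(bb, eta), 1.0 / eta)   # safeguarded spectral coefficient
        x_new = project(x - step * g)
        s_old, g_old, x = x_new - x, g, x_new
    return x
```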

Limits of eventual families of sets with application to algorithms for the common fixed point problem

We present an abstract framework for the asymptotic analysis of convergence based on the notion of eventual families of sets, which we define. A family of subsets of a given set is called here an “eventual family” if it is upper hereditary with respect to inclusion. We define accumulation points of eventual families in a Hausdorff … Read more
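
For concreteness, the quoted definition can be written out as follows (the symbols are ours, not necessarily the paper's): \[ \mathcal{F} \subseteq 2^{X} \text{ is an eventual family} \iff \bigl( A \in \mathcal{F} \ \text{and} \ A \subseteq B \subseteq X \bigr) \implies B \in \mathcal{F}. \]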

A line search based proximal stochastic gradient algorithm with dynamical variance reduction

Many optimization problems arising from machine learning applications can be cast as the minimization of the sum of two functions: the first typically represents the expected risk, which in practice is replaced by the empirical risk, while the second imposes a priori information on the solution. Since in general the first term … Read more
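
A minimal sketch of one such iteration, assuming an Armijo-type backtracking test evaluated on the sampled objective; f_batch, grad_batch, prox_h, and the constants are illustrative assumptions rather than the paper's actual line search rule.

```python
import numpy as np

def prox_sgd_linesearch(f_batch, grad_batch, prox_h, x0, batches,
                        alpha0=1.0, delta=0.5, beta=1e-4):
    """Proximal stochastic gradient steps with backtracking on each mini-batch."""
    x = x0.copy()
    for batch in batches:
        g = grad_batch(x, batch)                      # stochastic gradient of the smooth term
        alpha = alpha0
        while True:
            x_trial = prox_h(x - alpha * g, alpha)    # proximal step on the regularizer
            d = x_trial - x
            # Armijo-type sufficient decrease on the sampled objective
            if f_batch(x_trial, batch) <= f_batch(x, batch) + beta * float(g @ d) \
                    or alpha < 1e-10:
                break
            alpha *= delta
        x = x_trial
    return x
```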

Adaptive Third-Order Methods for Composite Convex Optimization

In this paper we propose third-order methods for composite convex optimization problems in which the smooth part is a three-times continuously differentiable function with Lipschitz continuous third-order derivatives. The methods are adaptive in the sense that they do not require knowledge of the Lipschitz constant. Trial points are computed by the inexact minimization of … Read more
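
The adaptivity can be pictured with the usual double-loop pattern: compute a trial point from the regularized model, and adjust the regularization parameter M (a proxy for the unknown Lipschitz constant) according to a sufficient-decrease test. The sketch below is generic, with model_min and model_val as assumed black boxes for the inexact model minimization; the constants are illustrative.

```python
def adaptive_regularized_method(f, model_min, model_val, x0, M0=1.0, n_iter=50):
    """Generic adaptive loop: M replaces the unknown Lipschitz constant.

    model_min(x, M) inexactly minimizes the regularized third-order Taylor
    model around x; model_val(x, d, M) evaluates that model at the step d.
    """
    x, M = x0, M0
    for _ in range(n_iter):
        d = model_min(x, M)
        pred = f(x) - model_val(x, d, M)              # decrease predicted by the model
        while f(x + d) > f(x) - 0.1 * pred and M < 1e12:
            M *= 2.0                                   # too optimistic: strengthen regularization
            d = model_min(x, M)
            pred = f(x) - model_val(x, d, M)
        x = x + d
        M = max(M / 2.0, 1e-8)                         # accepted: relax regularization
    return x
```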

New Penalized Stochastic Gradient Methods for Linearly Constrained Strongly Convex Optimization

For minimizing a strongly convex objective function subject to linear inequality constraints, we consider a penalty approach that allows one to utilize stochastic methods for problems with a large number of constraints and/or objective function terms. We provide upper bounds on the distance between the solutions to the original constrained problem and the penalty reformulations, … Read more
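
The appeal of the penalty route is that both the objective terms and the constraint rows can be sampled. A minimal sketch under a quadratic penalty min f(x) + (rho/2) * ||max(Ax - b, 0)||^2, with hypothetical names throughout:

```python
import numpy as np

def penalized_sgd(grad_f_i, A, b, x0, rho, n_terms, steps, rng=None):
    """One random objective term and one random constraint row per step."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    m = A.shape[0]
    for t in steps:
        i = rng.integers(n_terms)                       # sampled objective term
        j = rng.integers(m)                             # sampled constraint row
        viol = max(float(A[j] @ x - b[j]), 0.0)
        g = grad_f_i(x, i) + rho * m * viol * A[j]      # unbiased penalized-gradient estimate
        x = x - t * g
    return x
```

Scaling the sampled constraint gradient by m keeps the estimate unbiased, so the per-step cost is independent of both the number of terms and the number of constraints.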

An inexact ADMM with proximal-indefinite term and larger stepsize

In this paper, we propose an inexact Alternating Direction Method of Multipliers (ADMM) for solving the two-block separable convex optimization problem subject to linear equality constraints. The first subproblem is solved inexactly under a relative error criterion, while the other subproblem, a regularization problem, is solved inexactly by introducing an indefinite proximal term. Meanwhile, the … Read more
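
For orientation, a skeleton of the two-block iteration for min f(x) + g(z) s.t. Ax + Bz = c; the two inexact solves are abstracted into callables, and gamma stands in for the enlarged dual stepsize (names and defaults are assumptions, not the paper's).

```python
import numpy as np

def inexact_admm(solve_x, solve_z, A, B, c, x0, z0, beta=1.0, gamma=1.2, n_iter=100):
    """Two-block inexact ADMM skeleton.

    solve_x: inexact x-update under a relative error criterion.
    solve_z: z-update (the regularization subproblem) with an indefinite
             proximal term folded into the callable.
    """
    x, z = x0.copy(), z0.copy()
    lam = np.zeros(A.shape[0])
    for _ in range(n_iter):
        x = solve_x(z, lam, beta)
        z = solve_z(x, z, lam, beta)
        lam = lam - gamma * beta * (A @ x + B @ z - c)   # dual update, stepsize gamma * beta
    return x, z
```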

Affine invariant convergence rates of the conditional gradient method

We show that the conditional gradient method for the convex composite problem \[\min_x\{f(x) + \Psi(x)\}\] generates primal and dual iterates with a duality gap converging to zero provided a suitable growth property holds and the algorithm makes a judicious choice of stepsizes. The rate of convergence of the duality gap to zero ranges from sublinear … Read more
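
A bare-bones generalized conditional gradient loop for this composite problem, with the duality gap computed at every iteration; lmo (the linear minimization oracle) and the open-loop 2/(k+2) stepsize are illustrative defaults, not the judicious stepsize choice the abstract refers to.

```python
import numpy as np

def conditional_gradient(grad_f, Psi, lmo, x0, n_iter=200, tol=1e-8):
    """Generalized Frank-Wolfe for min f(x) + Psi(x).

    lmo(g) returns argmin_s { g @ s + Psi(s) }.
    """
    x = x0.copy()
    for k in range(n_iter):
        g = grad_f(x)
        s = lmo(g)
        gap = float(g @ (x - s)) + Psi(x) - Psi(s)   # Frank-Wolfe duality gap
        if gap <= tol:
            break
        x = x + (2.0 / (k + 2)) * (s - x)            # open-loop stepsize
    return x
```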

Unmatched Preconditioning of the Proximal Gradient Algorithm

This work addresses the resolution of penalized least-squares problems using the proximal gradient algorithm (PGA). It is known that PGA can be accelerated by preconditioning strategies. However, typical effective choices of preconditioners may correspond to intricate matrices that are not easily inverted, leading to increased complexity in the computation of the proximity step. … Read more
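
The idea can be summarized in a few lines for min 0.5 * ||Ax - y||^2 + g(x): rescale the gradient by a matrix B that only approximates the ideal preconditioner, while keeping the proximity operator in the standard metric so it stays cheap. B, prox_g, and the problem form are assumptions made for illustration.

```python
import numpy as np

def unmatched_precond_pga(A, y, B, prox_g, x0, n_iter=200):
    """Proximal gradient with an approximate ('unmatched') preconditioner B."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)        # gradient of the least-squares term
        x = prox_g(x - B @ grad)        # prox kept in the standard metric
    return x
```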