Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we consider a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. A basic double-proximal augmented Lagrangian method (DP-ALM) is developed, which enjoys a flexible dual stepsize and a proximal subproblem with a relatively small proximal parameter. By a novel prediction-correction reformulation of the proposed DP-ALM and by …
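For orientation, the display below is a generic proximal augmented Lagrangian iteration for the model problem \(\min_x \{ f(x) : Ax = b \}\); the proximal matrix \(P\), penalty \(\beta\), and dual stepsize \(\tau\) are placeholder symbols, and the exact DP-ALM prediction-correction updates are those given in the paper, not this sketch.

\[
\begin{aligned}
x^{k+1} &\in \arg\min_{x}\; f(x) - \langle \lambda^{k}, Ax - b\rangle + \tfrac{\beta}{2}\,\|Ax - b\|^{2} + \tfrac{1}{2}\,\|x - x^{k}\|_{P}^{2},\\
\lambda^{k+1} &= \lambda^{k} - \tau\beta\,(Ax^{k+1} - b),
\end{aligned}
\]

where \(P \succeq 0\) weights the proximal term (a smaller \(P\) keeps the subproblem closer to the exact ALM subproblem) and \(\tau > 0\) is the dual stepsize whose admissible range is the object of the improved convergence condition.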

Generalized asymmetric forward-backward-adjoint algorithms for convex-concave saddle-point problem

The convex-concave minimax problem, also known as the saddle-point problem, has been extensively studied from various aspects, including algorithm design, convergence conditions, and complexity. In this paper, we propose a generalized asymmetric forward-backward-adjoint algorithm (G-AFBA) to solve such problems by utilizing both proximal techniques and the extrapolation of primal-dual updates. Besides applying …
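As a point of reference, a standard proximal primal-dual iteration with extrapolation for the saddle-point problem \(\min_x \max_y\; f(x) + \langle Kx, y\rangle - g(y)\) reads as follows; the stepsizes \(\tau, \sigma\) and extrapolation parameter \(\theta\) are generic placeholders, not the asymmetric parameters analyzed for G-AFBA.

\[
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\tau f}\!\bigl(x^{k} - \tau K^{\top} y^{k}\bigr),\\
\bar{x}^{k+1} &= x^{k+1} + \theta\,(x^{k+1} - x^{k}),\\
y^{k+1} &= \operatorname{prox}_{\sigma g}\!\bigl(y^{k} + \sigma K \bar{x}^{k+1}\bigr),
\end{aligned}
\]

where \(\operatorname{prox}\) denotes the proximal operator. Per the abstract, G-AFBA builds on this type of scheme by combining proximal steps with extrapolation of the primal-dual updates in an asymmetric fashion.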