A proximal-perturbed Bregman ADMM for solving nonsmooth and nonconvex optimization problems

In this paper, we focus on a linearly constrained composite minimization problem whose objective function is possibly nonsmooth and nonconvex. Unlike the traditional construction of the augmented Lagrangian function, we introduce a proximal-perturbed augmented Lagrangian and then develop a new Bregman Alternating Direction Method of Multipliers (ADMM). Under mild assumptions, we show that the novel augmented …
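
For reference, the classical construction that the paper departs from is the standard augmented Lagrangian of a linearly constrained two-block composite problem (a generic template; the excerpt does not specify the paper's proximal-perturbed variant):

\[
\min_{x,y}\ f(x) + g(y)\ \ \text{s.t.}\ \ Ax + By = b, \qquad
\mathcal{L}_{\rho}(x,y,\lambda) = f(x) + g(y) + \langle \lambda,\, Ax + By - b\rangle + \tfrac{\rho}{2}\,\|Ax + By - b\|^{2}.
\]

Classical ADMM alternately minimizes $\mathcal{L}_{\rho}$ over $x$ and over $y$, followed by a dual ascent step on $\lambda$; a Bregman ADMM typically adds Bregman-distance proximal terms to these subproblems.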

Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms

Matrix factorization is a popular non-convex objective, for which alternating minimization schemes are mostly used. They usually suffer from the major drawback that the solution is biased towards one of the optimization variables. A remedy is offered by non-alternating schemes. However, due to the lack of global Lipschitz continuity of the gradient in matrix factorization problems, convergence cannot …
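
A concrete instance of the problem class (the abstract does not fix a specific formulation) makes the smoothness issue explicit:

\[
\min_{U \in \mathbb{R}^{m\times r},\ V \in \mathbb{R}^{n\times r}}\ \tfrac12\,\|A - U V^{\top}\|_{F}^{2}.
\]

This objective is a degree-four polynomial in $(U,V)$, so its gradient is only locally Lipschitz; no global Lipschitz constant exists, which is the obstruction to the standard convergence theory referred to above.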

Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization

Backtracking line-search is an old yet powerful strategy for finding a better step size to be used in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation …
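
As a point of reference, the classical (non-inertial) version of this principle can be sketched as follows; the function names, the $\ell_1$ proximal term, and the constants are illustrative placeholders, and the paper's convex-concave backtracking for the inertial case is more involved:

import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1, used here only as an example of a "simple" nonsmooth term
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_step_backtracking(f, grad_f, x, step=1.0, shrink=0.5, reg=0.1, max_tries=50):
    # One proximal gradient step: shrink the step size until the local quadratic
    # upper bound of f at x is valid at the trial point (classical backtracking).
    fx, g = f(x), grad_f(x)
    for _ in range(max_tries):
        x_new = soft_threshold(x - step * g, step * reg)
        d = x_new - x
        if f(x_new) <= fx + g @ d + (d @ d) / (2.0 * step):
            break
        step *= shrink
    return x_new, step

For instance, with f(x) = 1/2 ||Ax - b||^2 and grad_f(x) = A^T(Ax - b), repeating this step solves an $\ell_1$-regularized least-squares problem.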

A unified framework for Bregman proximal methods: subgradient, gradient, and accelerated gradient schemes

We provide a unified framework for analyzing the convergence of Bregman proximal first-order algorithms for convex minimization. Our framework hinges on properties of the convex conjugate and gives novel proofs of the convergence rates of the Bregman proximal subgradient, Bregman proximal gradient, and a new accelerated Bregman proximal gradient algorithm under fairly general and mild …
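
The common building block of the three schemes, stated here in its standard form (the framework's conjugate-based analysis is not reproduced in this excerpt), is the Bregman proximal step generated by a reference function $h$:

\[
D_{h}(x,y) = h(x) - h(y) - \langle \nabla h(y),\, x - y\rangle, \qquad
x_{k+1} = \operatorname*{arg\,min}_{x}\Big\{ \langle \nabla f(x_{k}),\, x\rangle + g(x) + \tfrac{1}{t_{k}}\, D_{h}(x, x_{k}) \Big\}.
\]

The subgradient variant replaces $\nabla f(x_{k})$ by a subgradient, and the accelerated variant adds an extrapolation sequence.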

Accelerated Bregman Proximal Gradient Methods for Relatively Smooth Convex Optimization

We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. The relatively smooth condition is much weaker than the standard assumption of uniform Lipschitz continuity of the gradients, thus significantly …
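
For concreteness, relative smoothness of $f$ with respect to a reference convex function $h$ is usually stated as

\[
f(x) \;\le\; f(y) + \langle \nabla f(y),\, x - y\rangle + L\, D_{h}(x,y) \quad \text{for all } x, y,
\]

equivalently, $Lh - f$ is convex; taking $h = \tfrac12\|\cdot\|^{2}$ recovers the usual $L$-Lipschitz-gradient condition.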

First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems

We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first order methods (FOM), and was recently circumvented for convex composite optimization by Bauschke, Bolte and Teboulle, …
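
A minimal sketch of a Bregman (mirror-type) gradient step for a quadratic inverse problem, assuming the commonly used kernel $h(x) = \tfrac14\|x\|^{4} + \tfrac12\|x\|^{2}$; the data A, b and the step size are illustrative placeholders, and the admissible step size in the theory is governed by the relative-smoothness constant established in the literature:

import numpy as np

def grad_f(x, A, b):
    # f(x) = 1/4 * sum_i ((a_i^T x)^2 - b_i)^2
    r = A @ x
    return A.T @ ((r**2 - b) * r)

def bregman_gradient_step(x, A, b, step):
    # Optimality condition of the Bregman step (no nonsmooth term):
    #   grad h(x_new) = grad h(x) - step * grad f(x),  with  grad h(x) = (||x||^2 + 1) x.
    p = (x @ x + 1.0) * x - step * grad_f(x, A, b)
    norm2 = p @ p
    if norm2 == 0.0:
        return p
    # x_new = c * p, where c > 0 is the unique root of  norm2 * c^3 + c - 1 = 0  (Newton's method).
    c = 1.0
    for _ in range(50):
        c -= (norm2 * c**3 + c - 1.0) / (3.0 * norm2 * c**2 + 1.0)
    return c * p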

Linear Convergence of Proximal Incremental Aggregated Gradient Methods under Quadratic Growth Condition

Under the strong convexity assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition, a strictly weaker condition than strong convexity, …
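
The quadratic growth condition referenced here requires, in its standard form, that the overall objective $F$ with solution set $\mathcal{X}^{*}$ satisfies, for some $\mu > 0$,

\[
F(x) - \min F \;\ge\; \frac{\mu}{2}\,\operatorname{dist}(x,\mathcal{X}^{*})^{2} \quad \text{for all } x.
\]

Strong convexity implies this with $\mathcal{X}^{*}$ a singleton, but not conversely: least squares with a rank-deficient matrix satisfies quadratic growth (with $\mu$ the smallest nonzero squared singular value) without being strongly convex.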

An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions

We propose a forward-backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. The sequence of iterates generated by the algorithm converges to a critical point of the objective function provided that an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is …
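
A generic template of such an inertial forward-backward iteration, with smooth part $f$, nonsmooth part $g$, step size $t$ and inertial parameter $\beta$ (the paper's scheme may differ in how the inertial term enters), is

\[
x_{k+1} \in \operatorname{prox}_{t g}\big( x_{k} - t\,\nabla f(x_{k}) + \beta\,(x_{k} - x_{k-1}) \big),
\]

where $\operatorname{prox}_{tg}$ denotes the (possibly set-valued, since $g$ may be nonconvex) proximal mapping of $tg$.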

An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems

We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. The convergence is obtained provided that an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is, for instance, fulfilled for semi-algebraic functions. …
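
For orientation, Tseng's classical forward-backward-forward step without inertia reads

\[
y_{k} = \operatorname{prox}_{t g}\big( x_{k} - t\,\nabla f(x_{k}) \big), \qquad
x_{k+1} = y_{k} + t\big( \nabla f(x_{k}) - \nabla f(y_{k}) \big);
\]

the inertial variant studied in the paper additionally incorporates a momentum term built from $x_{k} - x_{k-1}$ (its exact placement is not reproduced in this excerpt).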

Mosco stability of proximal mappings in reflexive Banach spaces

In this paper we establish criteria for the stability of the proximal mapping $\mathrm{Prox}_{\varphi}^{f} = (\partial\varphi + \partial f)^{-1}$ associated to the proper lower semicontinuous convex functions $\varphi$ and $f$ on a reflexive Banach space $X$. We prove that, under certain conditions, if the convex functions $\varphi_{n}$ converge in the sense of Mosco …
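
Recall that $\varphi_{n} \to \varphi$ in the sense of Mosco means that, for every $x \in X$,

\[
\begin{aligned}
&\text{(i) there exist } x_{n} \to x \text{ strongly with } \limsup_{n\to\infty} \varphi_{n}(x_{n}) \le \varphi(x); \\
&\text{(ii) } \liminf_{n\to\infty} \varphi_{n}(x_{n}) \ge \varphi(x) \text{ whenever } x_{n} \rightharpoonup x \text{ weakly.}
\end{aligned}
\]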