A new proximal gradient algorithm for solving mixed variational inequality problems with a novel explicit stepsize and applications

In this paper, we propose a new algorithm for solving monotone mixed variational inequality problems in real Hilbert spaces based on the proximal gradient method. Our new algorithm uses a novel explicit stepsize which is proved to be increasing to a positive limit. This property plays an important role in improving the speed of the algorithm. … Read more
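
As a rough illustration of the underlying iteration (not the paper's method or its stepsize rule), the sketch below runs a proximal gradient step for a mixed variational inequality MVI(F, g), i.e. find \(x^*\) with \(\langle F(x^*), y - x^*\rangle + g(y) - g(x^*) \ge 0\) for all \(y\), assuming a hypothetical affine operator \(F(x) = Ax + b\), \(g\) a scaled \(\ell_1\)-norm, and a fixed stepsize as a placeholder for the explicit rule studied in the paper.

```python
# Minimal proximal-gradient sketch for a mixed variational inequality MVI(F, g):
#   x_{k+1} = prox_{lam*g}(x_k - lam * F(x_k)),
# with a hypothetical affine operator F(x) = A x + b and g = gamma*||.||_1.
# Fixed stepsize only; NOT the paper's novel explicit stepsize.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_mvi(A, b, lam, gamma, x0, iters=500):
    x = x0.copy()
    for _ in range(iters):
        x = soft_threshold(x - lam * (A @ x + b), lam * gamma)
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
A = M.T @ M + 0.1 * np.eye(10)       # symmetric positive definite => F is cocoercive
b = rng.standard_normal(10)
lam = 1.0 / np.linalg.norm(A, 2)     # conservative fixed stepsize
print(prox_grad_mvi(A, b, lam, gamma=0.5, x0=np.zeros(10)))
```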

A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares

We develop a Levenberg-Marquardt method for minimizing the sum of a smooth nonlinear least-squares term \(f(x) = \frac{1}{2} \|F(x)\|_2^2\) and a nonsmooth term \(h\). Both \(f\) and \(h\) may be nonconvex. Steps are computed by minimizing the sum of a regularized linear least-squares model and a model of \(h\) using a first-order method such … Read more
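
For concreteness, here is a rough Python sketch of what such a step computation can look like, assuming \(h(x) = \mu\|x\|_1\): the regularized model \(m(s) = \frac{1}{2}\|F(x) + J(x)s\|^2 + \frac{\sigma}{2}\|s\|^2 + h(x+s)\) is approximately minimized by an inner proximal-gradient loop, and \(\sigma\) is adjusted by a generic acceptance test; none of the specific rules analyzed in the paper are reproduced here.

```python
# Rough Levenberg-Marquardt sketch for min_x 1/2||F(x)||^2 + mu*||x||_1.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lm_step(F, J, x, mu, sigma, inner_iters=100):
    """Approximately minimize 1/2||F(x)+J(x)s||^2 + sigma/2||s||^2 + mu*|x+s|_1 over s."""
    Fx, Jx = F(x), J(x)
    H = Jx.T @ Jx + sigma * np.eye(x.size)   # Hessian of the smooth model part
    g = Jx.T @ Fx
    L = np.linalg.norm(H, 2)                 # Lipschitz constant of its gradient
    s = np.zeros_like(x)
    for _ in range(inner_iters):             # proximal-gradient inner loop
        grad = H @ s + g
        s = soft_threshold(x + s - grad / L, mu / L) - x
    return s

def lm_solve(F, J, x, mu, sigma=1.0, outer_iters=50):
    obj = lambda z: 0.5 * np.dot(F(z), F(z)) + mu * np.abs(z).sum()
    for _ in range(outer_iters):
        s = lm_step(F, J, x, mu, sigma)
        if obj(x + s) < obj(x):              # crude accept/reject and sigma update
            x, sigma = x + s, max(sigma / 2, 1e-8)
        else:
            sigma *= 10.0
    return x

# Toy nonlinear least squares: fit y ~ exp(a*t) + b with an l1 penalty on (a, b).
t = np.linspace(0, 1, 30)
y = np.exp(0.7 * t) + 0.3
F = lambda x: np.exp(x[0] * t) + x[1] - y
J = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
print(lm_solve(F, J, x=np.zeros(2), mu=1e-3))
```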

Accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient

In this paper we develop accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient (LLCG), which is beyond the well-studied class of convex optimization with Lipschitz continuous gradient. In particular, we first consider unconstrained convex optimization with LLCG and propose accelerated proximal gradient (APG) methods for solving it. The proposed APG methods are … Read more

A Proximal Gradient Method for Multi-objective Optimization Problems Using Bregman Functions

In this paper, a globally convergent proximal gradient method is developed for convex multi-objective optimization problems using Bregman distances. The proposed method does not require any a priori chosen parameters or ordering information about the objective functions. At every iteration of the proposed method, a subproblem is solved to find a descent direction. This … Read more
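
To make the subproblem structure concrete, the toy sketch below computes the standard descent direction \(d = \arg\min_d \max_i \{\langle \nabla f_i(x), d\rangle + g_i(x+d) - g_i(x)\} + \frac{1}{t} D(x+d, x)\), here with the Euclidean distance \(D(u,v) = \frac{1}{2}\|u-v\|^2\) (the simplest Bregman distance) and a generic numerical solver; it is only an illustration, not the paper's scheme.

```python
# Toy descent-direction subproblem of a multi-objective proximal gradient method,
# solved numerically for two objectives f_i = smooth_i + g_i in R^2.
import numpy as np
from scipy.optimize import minimize

def descent_direction(x, grads, gs, t=1.0):
    gx = [g(x) for g in gs]
    def phi(d):
        models = [grads[i] @ d + gs[i](x + d) - gx[i] for i in range(len(gs))]
        return max(models) + 0.5 / t * np.dot(d, d)
    res = minimize(phi, np.zeros_like(x), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
    return res.x

# Two objectives: f_1(x) = 1/2||x - a||^2 + ||x||_1 and f_2(x) = 1/2||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
grads = [x - a, x - b]                        # gradients of the smooth parts at x
gs = [lambda z: np.abs(z).sum(), lambda z: 0.0]
print("descent direction:", descent_direction(x, grads, gs))
```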

A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general, in the sense that it combines the possibility of explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a … Read more

A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization

We develop a trust-region method for minimizing the sum of a smooth term f and a nonsmooth term h, both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of f+h in a trust region. The model coincides with f+h in value and subdifferential at the center. We establish global … Read more
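
A condensed sketch of this kind of iteration is given below, assuming \(h(x) = \mu\|x\|_1\), an \(\ell_\infty\) trust region, and a BFGS approximation standing in for the quasi-Newton Hessian; the inner solver, acceptance test, and radius updates are generic placeholders rather than the rules of the paper.

```python
# Sketch of a proximal quasi-Newton trust-region iteration for min f(x) + mu*||x||_1:
#   phi(s) = f(x) + <grad f(x), s> + 1/2 s'Bs + mu*|x + s|_1,   ||s||_inf <= Delta.
import numpy as np

def prox_l1_box(v, x, tau, Delta):
    """argmin_s tau*|x + s|_1 + 1/2||s - v||^2  subject to  |s_j| <= Delta for all j."""
    u = np.sign(x + v) * np.maximum(np.abs(x + v) - tau, 0.0)   # soft-threshold x + v
    return np.clip(u, x - Delta, x + Delta) - x

def solve_model(gradf, B, x, mu, Delta, iters=200):
    """Approximately minimize the trust-region model by proximal gradient in s."""
    L = np.linalg.norm(B, 2) + 1e-8
    s = np.zeros_like(x)
    for _ in range(iters):
        s = prox_l1_box(s - (gradf + B @ s) / L, x, mu / L, Delta)
    return s

def tr_prox_quasi_newton(f, grad, x, mu, Delta=1.0, iters=50):
    obj = lambda z: f(z) + mu * np.abs(z).sum()
    B = np.eye(x.size)
    for _ in range(iters):
        g = grad(x)
        s = solve_model(g, B, x, mu, Delta)
        pred = obj(x) - (f(x) + g @ s + 0.5 * s @ B @ s + mu * np.abs(x + s).sum())
        ared = obj(x) - obj(x + s)
        rho = ared / max(pred, 1e-16)
        if rho > 0.25:                                  # accept step, BFGS update
            y = grad(x + s) - g
            if y @ s > 1e-10:
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
            x = x + s
            Delta = min(2 * Delta, 10.0) if rho > 0.75 else Delta
        else:                                           # reject and shrink the radius
            Delta *= 0.5
    return x

# Toy problem: Rosenbrock + l1.
f = lambda z: 100 * (z[1] - z[0] ** 2) ** 2 + (1 - z[0]) ** 2
grad = lambda z: np.array([-400 * z[0] * (z[1] - z[0] ** 2) - 2 * (1 - z[0]),
                           200 * (z[1] - z[0] ** 2)])
print(tr_prox_quasi_newton(f, grad, x=np.array([-1.2, 1.0]), mu=0.01))
```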

A Penalty-free Infeasible Approach for a Class of Nonsmooth Optimization Problems over the Stiefel Manifold

Transforming optimization problems with orthogonality constraints into exact penalty models with convex compact constraints yields efficient infeasible approaches. For smooth and \(\ell_{2,1}\)-norm regularized cases, these infeasible approaches adopt simple, orthonormalization-free updating schemes and show high efficiency in some numerical experiments. However, to avoid orthonormalization while enforcing the feasibility of the final … Read more

On Inexact Accelerated Proximal Gradient Methods with Relative Error Rules

One of the most popular and important first-order methods, an accelerated variant of the classical proximal gradient method (PGM) that attains the optimal complexity, is the “Fast Iterative Shrinkage/Thresholding Algorithm” (FISTA). In this paper, two inexact versions of FISTA for minimizing the sum of two convex functions are studied. The proposed schemes inexactly solve their subproblems by using relative … Read more
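
As a point of reference for the notation, a minimal exact FISTA for the lasso problem \(\min_x \frac{1}{2}\|Ax-b\|^2 + \mu\|x\|_1\) is sketched below; the inexact variants studied in the paper replace the exact proximal step by an approximate one controlled by a relative error rule, which is not reproduced here.

```python
# Minimal exact FISTA for the lasso: min_x 1/2||Ax - b||^2 + mu*||x||_1.
import numpy as np

def fista(A, b, mu, iters=500):
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        v = y - grad / L
        x_new = np.sign(v) * np.maximum(np.abs(v) - mu / L, 0.0)   # exact prox of (mu/L)*||.||_1
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1)) + 0.01 * rng.standard_normal(50)
print(np.count_nonzero(fista(A, b, mu=0.5)))
```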

Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems

We study the linear convergence of first-order methods such as the proximal gradient method (PGM), the proximal alternating linearized minimization (PALM) algorithm, and the randomized block coordinate proximal gradient method (R-BCPGM) for minimizing the sum of a smooth convex function and a nonsmooth convex function from a variational analysis perspective. We introduce a new analytic … Read more

Accelerated Bregman Proximal Gradient Methods for Relatively Smooth Convex Optimization

We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. The relatively smooth condition is much weaker than the standard assumption of uniform Lipschitz continuity of the gradients, thus significantly … Read more
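
To illustrate what a basic (non-accelerated) Bregman proximal gradient step looks like in the relatively smooth setting, the sketch below uses the classical Poisson-type example: \(f(x) = \sum_i [\,b_i \log(b_i/(Ax)_i) + (Ax)_i - b_i\,]\) is smooth relative to the Burg entropy \(\varphi(x) = -\sum_j \log x_j\) with constant \(L = \sum_i b_i\), and the nondifferentiable term is taken to be zero; the accelerated scheme of the paper is not reproduced.

```python
# Bare-bones Bregman proximal gradient (NoLips-style) iteration for a Poisson-type
# objective over x > 0, relatively smooth w.r.t. the Burg entropy phi(x) = -sum_j log x_j.
import numpy as np

def bpg_poisson(A, b, iters=2000):
    L = b.sum()                           # relative smoothness constant w.r.t. Burg entropy
    lam = 1.0 / L
    x = np.ones(A.shape[1])
    for _ in range(iters):
        r = A @ x
        grad = A.T @ (1.0 - b / r)        # gradient of the Poisson objective
        x = 1.0 / (1.0 / x + lam * grad)  # Bregman step: grad phi(x+) = grad phi(x) - lam*grad f(x)
    return x

rng = np.random.default_rng(2)
A = rng.random((40, 15)) + 0.1            # positive entries keep the iterates in the domain
x_true = rng.random(15) + 0.5
b = A @ x_true
print(np.linalg.norm(A @ bpg_poisson(A, b) - b))
```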