Scaled Proximal Gradient Methods for Multiobjective Optimization: Improved Linear Convergence and Nesterov’s Acceleration

Over the past two decades, descent methods have received substantial attention within the multiobjective optimization field. Nonetheless, both theoretical analyses and empirical evidence reveal that existing first-order methods for multiobjective optimization converge slowly, even for well-conditioned problems, due to imbalances among the objectives. To address this limitation, we incorporate curvature information to scale each objective within … Read more
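As a point of reference for the kind of subproblem such methods solve at each iterate, the sketch below computes the classical (unscaled) common descent direction of Fliege and Svaiter for several smooth objectives; the scaled variant described in the abstract would replace the identity scaling with per-objective curvature information. All function names and the toy data are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def multiobjective_steepest_descent_direction(grads):
    """Common descent direction for smooth objectives f_1,...,f_m at a point x.

    Solves the dual of  min_d  max_i <g_i, d> + 0.5*||d||^2  over the simplex:
        min_lam  0.5 * || sum_i lam_i * g_i ||^2,   lam >= 0, sum(lam) = 1,
    and returns d = -sum_i lam_i * g_i  (the Fliege-Svaiter direction).
    """
    G = np.asarray(grads)              # shape (m, n): one gradient per objective
    m = G.shape[0]

    def dual(lam):
        v = lam @ G
        return 0.5 * float(v @ v)

    cons = ({"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0},)
    bounds = [(0.0, 1.0)] * m
    lam0 = np.full(m, 1.0 / m)
    res = minimize(dual, lam0, bounds=bounds, constraints=cons, method="SLSQP")
    return -(res.x @ G)

# toy example with two objectives whose gradients conflict
x = np.array([1.0, 2.0])
g1 = 2.0 * x                  # gradient of ||x||^2
g2 = np.array([1.0, -1.0])    # gradient of x1 - x2
d = multiobjective_steepest_descent_direction([g1, g2])
print(d, g1 @ d, g2 @ d)      # both directional derivatives should be <= 0
```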

Composite optimization models via proximal gradient method with a novel enhanced adaptive stepsize

We first consider convex composite optimization models in which the gradient of the differentiable term is only locally Lipschitz continuous. We study the classical proximal gradient method equipped with our novel enhanced adaptive stepsize selection. To establish convergence of the proposed algorithm, we derive a sufficient-decrease-type inequality associated with our new … Read more
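For orientation, here is a minimal sketch of a proximal gradient method with a plain backtracking stepsize, which relies only on a sufficient-decrease inequality and hence on local Lipschitz continuity of the gradient along the iterates; it is a generic baseline, not the enhanced adaptive stepsize rule proposed in the paper.

```python
import numpy as np

def prox_grad_backtracking(grad_f, f, prox_h, x0, t0=1.0, beta=0.5, max_iter=200):
    """Proximal gradient method for  min_x f(x) + h(x)  with f smooth.

    The backtracking rule enforces the standard sufficient-decrease inequality
        f(x+) <= f(x) + <grad f(x), x+ - x> + (1/(2t)) * ||x+ - x||^2,
    which only requires grad f to be Lipschitz continuous locally.
    """
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(max_iter):
        g = grad_f(x)
        while True:
            x_new = prox_h(x - t * g, t)
            diff = x_new - x
            if f(x_new) <= f(x) + g @ diff + (diff @ diff) / (2.0 * t) + 1e-12:
                break
            t *= beta
        x = x_new
    return x

# example: LASSO  min 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_h = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)
print(prox_grad_backtracking(grad_f, f, prox_h, np.zeros(5)))
```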

A new proximal gradient algorithm for solving mixed variational inequality problems with a novel explicit stepsize and applications

In this paper, we propose a new algorithm for solving monotone mixed variational inequality problems in real Hilbert spaces based on the proximal gradient method. Our new algorithm uses a novel explicit stepsize that is proven to increase to a positive value. This property plays an important role in improving the speed of the algorithm. … Read more
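A minimal sketch of the underlying forward-backward iteration for a mixed variational inequality is given below, with a fixed stepsize standing in for the paper's explicit increasing stepsize rule; the operator F, the prox of g, and the example data are all illustrative.

```python
import numpy as np

def prox_grad_mvi(F, prox_g, x0, lam=0.1, max_iter=500, tol=1e-8):
    """Basic proximal gradient iteration for a monotone mixed variational
    inequality: find x with <F(x), y - x> + g(y) - g(x) >= 0 for all y.

    Update: x_{k+1} = prox_{lam*g}(x_k - lam * F(x_k)).  A fixed stepsize lam
    is used here for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = prox_g(x - lam * F(x), lam)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# example: F(x) = M x + q with M positive semidefinite, g = indicator of x >= 0
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q
prox_g = lambda z, lam: np.maximum(z, 0.0)   # projection onto the nonnegative orthant
print(prox_grad_mvi(F, prox_g, np.zeros(2)))
```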

A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares

We develop a Levenberg-Marquardt method for minimizing the sum of a smooth nonlinear least-squares term \(f(x) = \frac{1}{2} \|F(x)\|_2^2\) and a nonsmooth term \(h\). Both \(f\) and \(h\) may be nonconvex. Steps are computed by minimizing the sum of a regularized linear least-squares model and a model of \(h\) using a first-order method such … Read more
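For context, the following sketch implements the classical smooth Levenberg-Marquardt iteration, in which each step solves a regularized linear least-squares model via the normal equations; the paper's method additionally carries a model of the nonsmooth term \(h\) and solves the subproblem with a first-order method, which is not reproduced here. The example problem is illustrative.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu=1.0, max_iter=50, tol=1e-10):
    """Classical Levenberg-Marquardt for  min_x 0.5*||F(x)||_2^2  (smooth case only).

    Each step solves the regularized linear least-squares model
        min_s 0.5*||F(x) + J(x) s||^2 + 0.5*mu*||s||^2
    via the normal equations, and adapts mu with a simple accept/reject test.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, Jx = F(x), J(x)
        s = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ r)
        if 0.5 * np.sum(F(x + s) ** 2) < 0.5 * np.sum(r ** 2):   # progress made?
            x, mu = x + s, max(mu * 0.5, 1e-12)                  # accept, move toward Gauss-Newton
        else:
            mu *= 2.0                                            # reject, move toward gradient descent
        if np.linalg.norm(Jx.T @ r) < tol:
            break
    return x

# example: fit y = exp(a*t) at a few sample points
t = np.linspace(0.0, 1.0, 5)
y = np.exp(0.7 * t)
F = lambda a: np.exp(a[0] * t) - y
J = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
print(levenberg_marquardt(F, J, np.array([0.0])))   # should approach 0.7
```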

Accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient

In this paper we develop accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient (LLCG), which is beyond the well-studied class of convex optimization with Lipschitz continuous gradient. In particular, we first consider unconstrained convex optimization with LLCG and propose accelerated proximal gradient (APG) methods for solving it. The proposed APG methods are … Read more
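The sketch below shows a textbook Nesterov-accelerated gradient method with backtracking, so that only local Lipschitz continuity of the gradient along the iterates is exploited; it illustrates the setting, not the specific APG methods proposed in the paper. The quadratic example is illustrative.

```python
import numpy as np

def accelerated_gradient_backtracking(f, grad_f, x0, t0=1.0, beta=0.5, max_iter=300):
    """Nesterov-accelerated gradient method for unconstrained convex f, with a
    backtracking stepsize instead of a global Lipschitz constant."""
    x = y = np.asarray(x0, dtype=float)
    t, theta_prev = t0, 1.0
    for _ in range(max_iter):
        g = grad_f(y)
        while True:                           # backtrack until the quadratic upper model holds at y
            x_new = y - t * g
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + (d @ d) / (2.0 * t) + 1e-12:
                break
            t *= beta
        theta = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta_prev ** 2))
        y = x_new + ((theta_prev - 1.0) / theta) * (x_new - x)   # extrapolation (momentum) step
        x, theta_prev = x_new, theta
    return x

# example: an ill-conditioned convex quadratic
Q = np.diag([1.0, 50.0])
f = lambda x: 0.5 * x @ Q @ x
grad_f = lambda x: Q @ x
print(accelerated_gradient_backtracking(f, grad_f, np.array([3.0, -2.0])))
```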

A Proximal Gradient Method for Multi-objective Optimization Problems Using Bregman Functions

In this paper, a globally convergent proximal gradient method is developed for convex multi-objective optimization problems using a Bregman distance. The proposed method does not require any a priori chosen parameters or ordering information on the objective functions. At every iteration of the proposed method, a subproblem is solved to find a descent direction. This … Read more
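As a building block, the following sketch shows a Bregman distance (the Kullback-Leibler divergence generated by negative entropy) and a single-objective Bregman proximal gradient step on the probability simplex; the paper embeds Bregman-type steps in a multiobjective scheme, which is not reproduced here.

```python
import numpy as np

def kl_bregman_distance(y, x):
    """Bregman distance D_h(y, x) = h(y) - h(x) - <grad h(x), y - x> generated by the
    negative entropy h(x) = sum_i x_i log x_i on the simplex (the KL divergence)."""
    return float(np.sum(y * np.log(y / x)))

def bregman_prox_grad_step(grad, x, t):
    """One Bregman proximal gradient (mirror descent) step on the probability simplex:
        x+ = argmin_y  <grad, y> + (1/t) * D_h(y, x),   y in the simplex,
    which has the closed-form multiplicative update below."""
    w = x * np.exp(-t * grad)
    return w / np.sum(w)

# example: one step for f(x) = <c, x> on the simplex
x = np.array([0.25, 0.25, 0.5])
c = np.array([1.0, 0.2, 0.5])
x_new = bregman_prox_grad_step(c, x, t=1.0)
print(x_new, kl_bregman_distance(x_new, x))
```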

A unified analysis of descent sequences in weakly convex optimization, including convergence rates for bundle methods

We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The framework is general, in the sense that it combines the possibility of explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a … Read more
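To make the notion of a descent sequence concrete, the sketch below runs a generic loop that accepts a candidate only when it achieves sufficient decrease proportional to the squared displacement, and feeds it either an explicit (gradient) step or an implicit (proximal-point) step; the condition and step maps are illustrative, not the paper's exact assumptions.

```python
import numpy as np

def descent_loop(step, f, x0, sigma=1e-4, max_iter=100):
    """Generic descent loop: accept a candidate produced by `step` only if it
    satisfies  f(x+) <= f(x) - sigma * ||x+ - x||^2.
    `step` may be explicit (a (sub)gradient step) or implicit (a proximal step)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = step(x)
        if f(x_new) > f(x) - sigma * np.sum((x_new - x) ** 2):
            break                      # no sufficient decrease: stop
        x = x_new
    return x

# example: explicit gradient step vs implicit proximal-point step for f(x) = 0.5*||x||^2
f = lambda x: 0.5 * np.sum(x ** 2)
explicit = lambda x: x - 0.3 * x                  # gradient step with stepsize 0.3
implicit = lambda x: x / (1.0 + 0.3)              # proximal-point step: argmin f(y) + ||y - x||^2 / (2*0.3)
print(descent_loop(explicit, f, np.array([2.0, -1.0])),
      descent_loop(implicit, f, np.array([2.0, -1.0])))
```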

A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization

We develop a trust-region method for minimizing the sum of a smooth term f and a nonsmooth term h, both of which can be nonconvex. Each iteration of our method minimizes a possibly nonconvex model of f+h in a trust region. The model coincides with f+h in value and subdifferential at the center. We establish global … Read more
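A rough sketch of the trust-region mechanics for a regularized problem f + h is given below, using B = I, an l1 regularizer, and a single proximal step as the model solver; the paper's method uses quasi-Newton Hessian approximations and handles nonconvex h, neither of which is reproduced here. All names and data are illustrative.

```python
import numpy as np

def prox_trust_region_l1(f, grad_f, x0, lam=0.1, delta=1.0, max_iter=50):
    """Sketch of a trust-region iteration for  min f(x) + lam*||x||_1  with f smooth.

    At x, the step s approximately minimizes the model
        m(s) = <grad f(x), s> + 0.5*<s, s> + lam*||x + s||_1,   ||s||_inf <= delta,
    and is accepted or rejected with the usual trust-region ratio test.
    """
    soft = lambda z, a: np.sign(z) * np.maximum(np.abs(z) - a, 0.0)
    h = lambda z: lam * np.sum(np.abs(z))
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        # prox step on the model with B = I; the infinity-norm trust region is
        # enforced by clipping, which is exact for this separable h.
        s = np.clip(soft(x - g, lam) - x, -delta, delta)
        model_decrease = -(g @ s + 0.5 * s @ s) + h(x) - h(x + s)
        actual_decrease = (f(x) + h(x)) - (f(x + s) + h(x + s))
        rho = actual_decrease / max(model_decrease, 1e-16)
        if rho > 0.25:            # accept the step
            x = x + s
            if rho > 0.75:
                delta *= 2.0      # model is trustworthy: enlarge the region
        else:
            delta *= 0.5          # reject: shrink the region
    return x

# example: f(x) = 0.5*||A x - b||^2 with l1 regularization
rng = np.random.default_rng(1)
A, b = rng.standard_normal((10, 4)), rng.standard_normal(10)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
print(prox_trust_region_l1(f, grad_f, np.zeros(4)))
```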

A Penalty-free Infeasible Approach for a Class of Nonsmooth Optimization Problems over the Stiefel Manifold

Transforming optimization problems with orthogonality constraints into exact penalty function models with convex compact constraints yields efficient infeasible approaches. For smooth and L21-norm regularized cases, these infeasible approaches adopt simple, orthonormalization-free updating schemes and show high efficiency in some numerical experiments. However, to avoid orthonormalization while enforcing the feasibility of the final … Read more
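For reference, the sketch below expresses the orthogonality constraint X^T X = I as a feasibility residual and shows a QR-based orthonormalization step, i.e. exactly the kind of explicit orthonormalization the infeasible approaches discussed here aim to avoid; the helper names and data are illustrative.

```python
import numpy as np

def stiefel_feasibility_violation(X):
    """Distance of X (n x p, n >= p) from the Stiefel manifold {X : X^T X = I_p},
    measured as the Frobenius norm of the constraint residual."""
    p = X.shape[1]
    return float(np.linalg.norm(X.T @ X - np.eye(p), "fro"))

def qr_orthonormalization(X):
    """Orthonormalize X via a thin QR factorization, with a sign fix so the factor
    is uniquely determined."""
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))

# example: perturb an orthonormal matrix, measure the violation, then orthonormalize
rng = np.random.default_rng(2)
X = qr_orthonormalization(rng.standard_normal((6, 3)))
Y = X + 0.05 * rng.standard_normal((6, 3))
print(stiefel_feasibility_violation(Y), stiefel_feasibility_violation(qr_orthonormalization(Y)))
```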

On Inexact Accelerated Proximal Gradient Methods with Relative Error Rules

One of the most popular and important first-order methods, an accelerated variant of the classical proximal gradient method (PGM) with optimal complexity, is the “Fast Iterative Shrinkage/Thresholding Algorithm” (FISTA). In this paper, two inexact versions of FISTA for minimizing the sum of two convex functions are studied. The proposed schemes inexactly solve their subproblems by using relative … Read more
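The sketch below is standard FISTA with an exact proximal operator, included to fix notation for the forward-backward and momentum steps; the inexact subproblem solutions and relative error rules studied in the paper are not reproduced. The example problem is illustrative.

```python
import numpy as np

def fista(grad_f, prox_h, x0, L, max_iter=300):
    """Standard FISTA for  min_x f(x) + h(x)  with f convex and L-smooth, h convex,
    using an exact proximal operator."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(max_iter):
        x_new = prox_h(y - grad_f(y) / L, 1.0 / L)              # forward-backward step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)           # momentum / extrapolation
        x, t = x_new, t_new
    return x

# example: l1-regularized least squares
rng = np.random.default_rng(3)
A, b, lam = rng.standard_normal((30, 8)), rng.standard_normal(30), 0.2
grad_f = lambda x: A.T @ (A @ x - b)
prox_h = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - lam * s, 0.0)
L = np.linalg.norm(A, 2) ** 2                                   # Lipschitz constant of grad f
print(fista(grad_f, prox_h, np.zeros(8), L))
```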