Some new accelerated and stochastic gradient descent algorithms based on locally Lipschitz gradient constants

In this paper, we revisit the recently proposed stepsize for the gradient descent scheme, called NGD, introduced by [Hoai et al., A novel stepsize for gradient descent method, Operations Research Letters (2024) 53, doi: 10.1016/j.orl.2024.107072]. We first combine the NGD stepsize with two well-known acceleration techniques, the Heavy-ball and Nesterov's methods. In … Read more
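
To make the combination concrete, here is a minimal sketch of a heavy-ball gradient step driven by a Barzilai–Borwein-type local estimate of the inverse Lipschitz constant; this stepsize rule is only an illustration and is not the NGD formula of Hoai et al., and the identifiers (`heavy_ball_adaptive`, `grad`) are ours.

```python
import numpy as np

def heavy_ball_adaptive(grad, x0, beta=0.5, lam0=1e-3, iters=200):
    """Heavy-ball gradient descent with an adaptive stepsize.

    lam is a rough Barzilai-Borwein-type estimate of 1 / L_k, where L_k is a
    local Lipschitz constant of the gradient; it stands in for, but is not,
    the NGD stepsize of Hoai et al.
    """
    x_prev, x = x0.copy(), x0.copy()
    g_prev, lam = grad(x0), lam0
    for _ in range(iters):
        g = grad(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dx > 0 and dg > 0:
            lam = dx / dg                          # local inverse-Lipschitz estimate
        x_new = x - lam * g + beta * (x - x_prev)  # gradient step + momentum term
        x_prev, x, g_prev = x, x_new, g
    return x

# Example on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = heavy_ball_adaptive(lambda x: A @ x - b, np.zeros(2))
```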

Composite optimization models via proximal gradient method with a novel enhanced adaptive stepsize

We first consider convex composite optimization models with a local Lipschitzness condition imposed on the gradient of the differentiable term. The classical proximal gradient method is studied with our novel enhanced adaptive stepsize selection. To obtain the convergence of the proposed algorithm, we establish a sufficient-decrease-type inequality associated with our new … Read more
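
For readers unfamiliar with the method, the sketch below runs the classical proximal gradient iteration on an $\ell_1$-regularized model; the adaptive stepsize used here is a generic local inverse-Lipschitz estimate standing in for the paper's enhanced rule, and all names (`proximal_gradient`, `grad_f`, `mu`) are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, mu, x0, lam0=1e-3, iters=300):
    """Proximal gradient for min_x f(x) + mu * ||x||_1.

    The adaptive stepsize lam is a plain local estimate built from the last
    two iterates; the enhanced rule proposed in the paper is not reproduced.
    """
    x_prev, x = x0.copy(), x0.copy()
    g_prev, lam = grad_f(x0), lam0
    for _ in range(iters):
        g = grad_f(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dx > 0 and dg > 0:
            lam = dx / dg
        x_prev, g_prev = x, g
        # Forward step on the smooth term f, backward (prox) step on mu*||.||_1.
        x = soft_threshold(x - lam * g, lam * mu)
    return x

# Example: a small lasso-type instance with f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b), mu=0.1, x0=np.zeros(5))
```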

A new proximal gradient algorithm for solving mixed variational inequality problems with a novel explicit stepsize and applications

In this paper, we propose a new algorithm for solving monotone mixed variational inequality problems in real Hilbert spaces based on the proximal gradient method. Our new algorithm uses a novel explicit stepsize, which is proven to increase toward a positive value. This property plays an important role in improving the speed of the algorithm. … Read more
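
As background, the mixed variational inequality and the prototypical proximal-gradient iteration used to attack it can be written as follows; the explicit stepsize rule for $\lambda_k$ is the paper's contribution and is not reproduced here.

\[
\text{find } x^* \in \mathcal{H} \ \text{such that} \quad \langle F(x^*), y - x^* \rangle + g(y) - g(x^*) \ge 0 \quad \forall y \in \mathcal{H},
\]
\[
x^{k+1} = \operatorname{prox}_{\lambda_k g}\!\left(x^k - \lambda_k F(x^k)\right),
\qquad
\operatorname{prox}_{\lambda g}(z) = \arg\min_{u \in \mathcal{H}} \left\{ g(u) + \tfrac{1}{2\lambda}\|u - z\|^2 \right\}.
\]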

DC programming approach for solving a class of bilevel partial facility interdiction problems

We propose a new approach based on DC programming for finding a solution of the partial facility interdiction problem, which belongs to the class of bilevel programming. This model was first considered in the work of Aksen et al. [1] with a heuristic algorithm named multi-start simplex search (MSS). However, because of the large number of … Read more
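
For context, a DC-programming approach typically rewrites the objective as a difference of two convex functions and applies the generic DCA iteration sketched below; the particular DC decomposition used for the interdiction model is not reproduced here.

\[
\min_x \; f(x) = g(x) - h(x), \quad g, h \ \text{convex};
\qquad
y^k \in \partial h(x^k),
\qquad
x^{k+1} \in \arg\min_x \left\{ g(x) - \langle y^k, x \rangle \right\}.
\]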

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We are concerned with convex and smooth objectives without the globally Lipschitz gradient condition. Our new method needs only a locally Lipschitz gradient, yet still attains the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By … Read more
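
For comparison, the classical result under a globally $L$-Lipschitz gradient states that gradient descent with stepsize $1/L$ on a smooth convex $f$ satisfies

\[
f(x^k) - f_* \le \frac{L \, \|x^0 - x^*\|^2}{2k},
\]

i.e., the same $O(\frac{1}{k})$ order that the proposed stepsize attains while requiring only a locally Lipschitz gradient.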