This paper presents a fully adaptive proximal extrapolated gradient method for monotone variational inequalities. The proposed method uses fully non-monotonic, adaptive step sizes that are computed from two previous iterates as an approximation of the local Lipschitz constant, without running a linesearch. It therefore has almost the same low computational cost as the classic proximal gradient algorithm: each iteration requires only one evaluation of the monotone mapping and one proximal operator. The method attains an ergodic O(1/N) convergence rate, and an R-linear rate when the mapping is strongly monotone. When the method is applied to unconstrained optimization and fixed-point problems, convergence of the iterates requires only that the step sizes be estimated from the local curvature of the mapping, with no constraint on how fast the step sizes may increase. Numerical experiments illustrate the gains in efficiency that result from the low computational cost and the fully non-monotonic, adaptive step sizes.
Fully adaptive proximal extrapolated gradient method for monotone variational inequalities, Xiaokai Chang, 2019
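The key idea in the abstract — one mapping evaluation and one prox per iteration, with a linesearch-free step size estimated from the two most recent iterates — can be sketched as follows. This is a minimal illustration, not the paper's exact update rule: the iteration shown is a forward-reflected-backward-style scheme, and the test problem, the step-growth factor `1.5`, and the safeguard constant `0.4` are all illustrative assumptions.

```python
import numpy as np

# A small monotone variational inequality: find x* in C with
#   <F(x*), x - x*> >= 0 for all x in C,
# where F(x) = M x - b is strongly monotone (the symmetric part of M is I)
# and C is the box [-2, 2]^2, so the prox step is a projection.
M = np.array([[1.0, 2.0], [-2.0, 1.0]])
b = np.array([1.0, 1.0])
F = lambda x: M @ x - b
proj = lambda x: np.clip(x, -2.0, 2.0)

def adaptive_one_call(F, proj, x0, iters=500, lam0=0.1):
    """One-call iteration with a linesearch-free step size estimated from
    the two most recent iterates.  A sketch of the general idea (one F
    evaluation and one prox per iteration, curvature-based steps that may
    grow); the constants and update here are illustrative, not the
    paper's exact method."""
    x_prev = x0.copy()
    x = x0.copy()
    F_prev = F(x_prev)          # F at the previous iterate
    lam = lam0
    for _ in range(iters):
        Fx = F(x)               # the single F evaluation this iteration
        # Local Lipschitz estimate from the two previous iterates:
        # L_k ~ ||F(x_k) - F(x_{k-1})|| / ||x_k - x_{k-1}||.
        dx = np.linalg.norm(x - x_prev)
        dF = np.linalg.norm(Fx - F_prev)
        if dF > 1e-12:
            lam_new = min(1.5 * lam, 0.4 * dx / dF)  # steps may also grow
        else:
            lam_new = 1.5 * lam
        # One projection (prox) per iteration; the "reflected" term
        # reuses F_prev instead of paying for a second F evaluation.
        x_next = proj(x - lam_new * Fx - lam * (Fx - F_prev))
        x_prev, F_prev = x, Fx
        lam = lam_new
        x = x_next
    return x

x = adaptive_one_call(F, proj, np.zeros(2))
# x approaches the VI solution M^{-1} b = [-0.2, 0.6]
```

Because the local Lipschitz estimate uses quantities already computed (the current and previous iterates and mapping values), the step-size rule adds only a few vector operations per iteration, which is what keeps the per-iteration cost at essentially the level of the classic proximal gradient method.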