Golden Ratio Algorithms for Variational Inequalities

The paper presents a fully explicit algorithm for monotone variational inequalities. The method uses variable stepsizes that are computed from two previous iterates as an approximation of the local Lipschitz constant, without running a linesearch. Thus, each iteration of the method requires only one evaluation of the monotone operator $F$ and one evaluation of the proximal mapping of $g$. The operator $F$ need not be Lipschitz continuous, which also makes the algorithm interesting for composite minimization, where the descent lemma cannot be used. The method exhibits an ergodic $O(1/k)$ convergence rate and an $R$-linear rate if $F$ and $g$ satisfy an error bound condition. We discuss possible applications of the method to fixed point problems. Furthermore, we show theoretically that the method still converges under a new, relaxed monotonicity condition and confirm numerically that it works robustly even for some highly nonmonotone/nonconvex problems.
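To make the update concrete, below is a minimal NumPy sketch of a golden-ratio-type iteration in the spirit described above: the stepsize is estimated from the two most recent iterates in place of a linesearch, and each pass performs exactly one evaluation of $F$ and one proximal step with respect to $g$. The function names (`adaptive_graal`, `prox_g`), the stepsize constants, and the toy problem are illustrative assumptions, not the exact scheme or parameters analysed in the paper.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio


def adaptive_graal(F, prox_g, x0, x1, lam0=1e-2, lam_max=1e6, growth=1.2, iters=1000):
    """Golden-ratio-type iteration for the VI defined by F and g (sketch).

    The stepsize is estimated from the two most recent iterates (a local
    inverse-Lipschitz proxy), so no linesearch is run; each iteration uses
    exactly one evaluation of F and one call to prox_g.
    """
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    xbar = x.copy()
    F_prev = F(x_prev)
    lam_prev = lam0

    for _ in range(iters):
        Fx = F(x)  # the single operator evaluation of this iteration
        num = np.dot(x - x_prev, x - x_prev)
        den = np.dot(Fx - F_prev, Fx - F_prev)
        # Local curvature estimate from the last two iterates; the constants
        # here are illustrative, not the exact rule proved in the paper.
        if den > 0:
            lam = min(growth * lam_prev, PHI * num / (4.0 * lam_prev * den), lam_max)
        else:
            lam = min(growth * lam_prev, lam_max)
        xbar = ((PHI - 1.0) * x + xbar) / PHI   # golden-ratio averaging step
        x_prev, F_prev, lam_prev = x, Fx, lam
        x = prox_g(xbar - lam * Fx, lam)        # the single prox step
    return x


# Toy usage (assumed example): a monotone affine operator with a
# nonnegativity constraint, whose prox is a projection onto the orthant.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])        # symmetric part is positive definite, so F is monotone
b = np.array([1.0, 1.0])
sol = adaptive_graal(lambda x: A @ x - b,
                     lambda z, lam: np.maximum(z, 0.0),
                     x0=np.zeros(2), x1=np.ones(2))
```

Because the only per-iteration work is one operator evaluation and one prox call, the cost per step matches a forward-backward iteration while avoiding any linesearch loop.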
