On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators

Recently, Nemirovski's analysis has shown that the extragradient method achieves an $O(1/t)$ convergence rate for variational inequalities with Lipschitz continuous monotone operators. For the same problems, over the last decades, we have developed a class of Fejér monotone projection and contraction methods. Until now, only convergence results have been available for these projection and contraction methods, though …
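For a concrete picture, the following is a minimal Python sketch of the classical extragradient iteration for a variational inequality VI(F, C); the monotone operator F, the projection oracle proj_C, and the Lipschitz constant L are assumed to be supplied by the caller, and this is the Korpelevich-type baseline rather than the projection and contraction methods developed in the paper.

import numpy as np

def extragradient(F, proj_C, x0, L, max_iter=1000):
    """Classical extragradient iteration: predictor y, then corrector using F(y)."""
    tau = 0.9 / L                          # any fixed step size tau < 1/L
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - tau * F(x))         # predictor: projected step along F(x)
        x = proj_C(x - tau * F(y))         # corrector: re-project using F at the predictor
    return x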

Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema

The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Lojasiewicz gradient inequality), almost sure point-convergence is demonstrated and relatively tight almost sure bounds on the convergence rate are derived. In sharp contrast to all existing results of this kind, the asymptotic results obtained here do not require …
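As an illustration only, here is a minimal Python sketch of the kind of stochastic gradient recursion whose asymptotics are analyzed; the noisy gradient oracle grad_f and the step-size schedule a_k = a0/(k+1) are assumptions made for this example, not the paper's setting.

import numpy as np

def stochastic_gradient(grad_f, x0, a0=0.1, n_iter=10000, seed=0):
    """Run x_{k+1} = x_k - a_k * (noisy gradient at x_k) with diminishing step sizes."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        a_k = a0 / (k + 1)                 # Robbins-Monro steps: sum a_k = inf, sum a_k^2 < inf
        x = x - a_k * grad_f(x, rng)       # stochastic (noisy) gradient step
    return x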

Approximate Primal Solutions and Rate Analysis in Dual Subgradient Methods

We study primal solutions obtained as a by-product of subgradient methods when solving the Lagrangian dual of a primal convex (possibly nonsmooth) constrained optimization problem. The existing literature on the use of subgradient methods for generating primal optimal solutions is limited to methods that produce such solutions only asymptotically (i.e., in the limit as the …
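A minimal Python sketch of one common dual subgradient scheme with running primal averages follows; the Lagrangian minimization oracle lagrangian_argmin, the constraint map g, and the constant step size are hypothetical names and choices used only for illustration.

import numpy as np

def dual_subgradient_with_averaging(lagrangian_argmin, g, mu0, step=0.01, n_iter=1000):
    """Projected dual subgradient steps; the averaged primal iterate serves as an approximate primal solution."""
    mu = np.asarray(mu0, dtype=float)
    x_avg = None
    for k in range(1, n_iter + 1):
        x_k = np.asarray(lagrangian_argmin(mu), dtype=float)           # minimizer of f(x) + mu . g(x) over X
        mu = np.maximum(mu + step * g(x_k), 0.0)                       # dual subgradient step, projected onto mu >= 0
        x_avg = x_k if x_avg is None else x_avg + (x_k - x_avg) / k    # running average of primal iterates
    return x_avg, mu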

A New Conjugate Gradient Algorithm Incorporating Adaptive Ellipsoid Preconditioning

The conjugate gradient (CG) algorithm is well known to have excellent theoretical properties for solving linear systems of equations $Ax = b$ where the $n\times n$ matrix $A$ is symmetric positive definite. However, for extremely ill-conditioned matrices the CG algorithm performs poorly in practice. In this paper, we discuss an adaptive preconditioning procedure which improves the …
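For context, a minimal Python sketch of standard preconditioned CG is given below; the preconditioner solve M_inv is a generic assumption, and the adaptive ellipsoid preconditioning discussed in the paper is not reproduced here.

import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradients for A x = b with symmetric positive definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                          # initial residual
    z = M_inv(r)                           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p          # new conjugate search direction
        rz = rz_new
    return x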

Convergence rate estimates for the gradient differential inclusion

Let f be a lower semicontinuous convex function on a Euclidean space, finite or infinite dimensional. The gradient differential inclusion is a generalized differential equation which requires that $-x'(t)$ lie in the subdifferential of f at $x(t)$. It is the continuous version of the gradient descent method for minimizing f in case f is differentiable, …
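As a rough illustration, the following Python sketch approximates a trajectory of the inclusion $-x'(t) \in \partial f(x(t))$ by backward-Euler (proximal) steps; the proximal oracle prox_f is an assumed input, and this discretization is one standard way to mimic the continuous flow, not a construction taken from the paper.

import numpy as np

def gradient_inclusion_trajectory(prox_f, x0, h=0.01, n_steps=1000):
    """Backward-Euler steps: (x_{k+1} - x_k)/h lies in minus the subdifferential of f at x_{k+1}."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(prox_f(xs[-1], h))       # prox_f(v, h) = argmin_x f(x) + ||x - v||^2 / (2 h)
    return xs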