Let f be a lower semi-continuous convex function on a Euclidean space, finite- or infinite-dimensional. The gradient differential inclusion is a generalized differential equation which requires that -x'(t) belong to the subdifferential of f at x(t). It is the continuous version of the gradient descent method for minimizing f when f is differentiable, and of the proximal point method in the general case. There is a remarkable similarity between the behavior of the inclusion and that of its discrete counterparts, as well as in the methods used to analyze both. As a simple consequence of our previous results on the proximal point method, we prove global convergence rate estimates for the optimality gap: f(x(t)) - min f = O(1/t) or o(1/t).
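To fix notation (an editorial gloss rather than text from the abstract; the starting point x_0 and the step sizes lambda_k are introduced here only for illustration), the inclusion and its discrete counterpart can be written as in the following LaTeX sketch:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% The gradient differential inclusion (subgradient flow) of f,
% started at an arbitrary point x_0:
\[
  -x'(t) \in \partial f\bigl(x(t)\bigr), \qquad x(0) = x_0 .
\]
% Its implicit (backward Euler) discretization with step sizes
% \lambda_k > 0 is the proximal point iteration:
\[
  x_{k+1} = \operatorname*{arg\,min}_{y}
    \Bigl\{\, f(y) + \tfrac{1}{2\lambda_k}\,\lVert y - x_k\rVert^{2} \Bigr\},
  \qquad\text{equivalently}\qquad
  \frac{x_k - x_{k+1}}{\lambda_k} \in \partial f(x_{k+1}).
\]
% When f is differentiable the subdifferential reduces to the gradient,
% and the flow is the continuous-time limit of gradient descent.
\end{document}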
Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD 21250. November, 2003.