Some Primal-Dual Theory for Subgradient Methods for Strongly Convex Optimization

We consider (stochastic) subgradient methods for strongly convex but potentially nonsmooth non-Lipschitz optimization. We provide new equivalent dual descriptions (in the style of dual averaging) for the classic subgradient method, the proximal subgradient method, and the switching subgradient method. These equivalences enable $O(1/T)$ convergence guarantees in terms of both their classic primal gap and a …
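For orientation, here is a minimal sketch of the classic projected subgradient method for a $\mu$-strongly convex objective, using the standard step size $\eta_t = 2/(\mu(t+2))$ and weighted averaging that gives the $O(1/T)$ primal-gap rate the abstract refers to. The test problem, `subgrad`, and `proj` are illustrative assumptions, not the paper's exact scheme or its dual description.

```python
# A minimal sketch (assumed problem data; not the paper's method) of the
# classic projected subgradient method for a mu-strongly convex objective.
import numpy as np

def subgradient_method(subgrad, proj, x0, mu, T):
    """Projected subgradient method for a mu-strongly convex f.

    subgrad(x): returns a subgradient of f at x
    proj(x):    projects x onto the feasible set
    """
    x = x0.copy()
    x_avg = np.zeros_like(x0)
    weight_sum = 0.0
    for t in range(T):
        g = subgrad(x)
        eta = 2.0 / (mu * (t + 2))   # classic strongly convex step size
        x = proj(x - eta * g)
        w = t + 1                    # linear weights: O(1/T) averaged gap
        weight_sum += w
        x_avg += (w / weight_sum) * (x - x_avg)
    return x_avg

# Example: minimize f(x) = 0.5*mu*||x||^2 + ||x||_1 over the unit ball.
mu = 1.0
subgrad = lambda x: mu * x + np.sign(x)
proj = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = subgradient_method(subgrad, proj, np.ones(5), mu, T=1000)
```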

A Riemannian smoothing steepest descent method for non-Lipschitz optimization on submanifolds

In this paper, we propose a Riemannian smoothing steepest descent method to minimize a nonconvex and non-Lipschitz function on submanifolds. Generalized subdifferentials on Riemannian manifolds and the notion of Riemannian gradient sub-consistency are defined and discussed. We prove that any accumulation point of the sequence generated by the Riemannian smoothing steepest descent method is a stationary …
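An illustrative sketch of the smoothing steepest-descent idea on the unit sphere: replace the nonsmooth term $|x_i|$ by the smooth surrogate $\sqrt{x_i^2+\mu^2}$, take Riemannian gradient steps, and shrink the smoothing parameter $\mu$. The objective, smoothing function, tolerances, and schedule below are assumptions for illustration, not the authors' exact algorithm.

```python
# A sketch (assumed objective and schedule) of smoothing steepest descent
# on the sphere S^{n-1}: smooth the non-Lipschitz term, descend, shrink mu.
import numpy as np

def riemannian_smoothing_descent(x0, b, mu0=1.0, shrink=0.5, outer=20,
                                 inner=100, step=0.1, lam=0.1):
    x = x0 / np.linalg.norm(x0)
    mu = mu0
    for _ in range(outer):
        for _ in range(inner):
            # Euclidean gradient of
            # f_mu(x) = 0.5*||x - b||^2 + lam * sum_i sqrt(x_i^2 + mu^2)
            egrad = (x - b) + lam * x / np.sqrt(x**2 + mu**2)
            # Riemannian gradient: project onto the tangent space at x
            rgrad = egrad - (egrad @ x) * x
            if np.linalg.norm(rgrad) < mu:  # inner tolerance tied to mu
                break
            # Retraction: step along the tangent direction, renormalize
            x = x - step * rgrad
            x /= np.linalg.norm(x)
        mu *= shrink                        # drive the smoothing to zero
    return x

b = np.array([1.0, -2.0, 0.5, 0.0])
x = riemannian_smoothing_descent(np.ones(4), b)
```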

Partially separable convexly-constrained optimization with non-Lipschitz singularities and its complexity

An adaptive regularization algorithm using high-order models is proposed for partially separable convexly constrained nonlinear optimization problems whose objective function contains non-Lipschitzian $\ell_q$-norm regularization terms for $q\in (0,1)$. It is shown that the algorithm using a $p$-th order Taylor model for $p$ odd needs in general at most $O(\epsilon^{-(p+1)/p})$ evaluations of the objective function and …
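A small illustration of the objects in this abstract: a partially separable objective, i.e. a sum of element functions each acting on a small subset of variables, plus an $\ell_q$ regularizer with $0<q<1$ (non-Lipschitz at zero), together with the stated evaluation bound $O(\epsilon^{-(p+1)/p})$ for odd $p$. The element functions and data are assumptions chosen for the example.

```python
# Assumed example data; illustrates the problem class, not the algorithm.
import numpy as np

def lq_regularized_objective(x, elements, lam=0.1, q=0.5):
    """f(x) = sum_i f_i(x[S_i]) + lam * sum_j |x_j|^q  (non-Lipschitz at 0)."""
    smooth = sum(f_i(x[idx]) for idx, f_i in elements)
    return smooth + lam * np.sum(np.abs(x) ** q)

# Two overlapping element functions, each depending on two variables.
elements = [
    (np.array([0, 1]), lambda z: (z[0] - 1.0) ** 2 + z[1] ** 2),
    (np.array([1, 2]), lambda z: np.cos(z[0]) + z[1] ** 4),
]
x = np.array([0.5, -0.3, 0.2])
print("f(x) =", lq_regularized_objective(x, elements))

# Worst-case evaluation counts from the abstract, for odd model degrees p:
eps = 1e-4
for p in (1, 3, 5):
    print(f"p={p}: O(eps^-({p + 1}/{p})) ≈ {eps ** (-(p + 1) / p):.3e}")
```

Note how the bound improves with the model degree: for $p=1$ it is $O(\epsilon^{-2})$, while for $p=3$ it is already $O(\epsilon^{-4/3})$.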