Global convergence of a second-order augmented Lagrangian method under an error bound condition

This work deals with the convergence of a second-order safeguarded augmented Lagrangian method from the literature to points satisfying the weak second-order necessary optimality conditions. To this end, we propose a new second-order sequential optimality condition that is, in a certain way, based on the iterates generated by the algorithm itself. This also allows us to … Read more
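For a concrete picture of the method family discussed here, the following is a minimal sketch of a safeguarded augmented Lagrangian iteration for equality-constrained problems, not the paper's algorithm: the paper's method is second-order (its subproblems are solved to approximate second-order stationarity), whereas this sketch uses a generic smooth solver for the subproblems, and the names `safeguarded_alm`, `lam_max`, and the penalty-update rule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def safeguarded_alm(f, c, x0, lam0, rho=10.0, lam_max=1e6,
                    tol=1e-8, max_outer=50, gamma=10.0, tau=0.5):
    """Sketch of a safeguarded augmented Lagrangian method for
    min f(x) s.t. c(x) = 0. The multiplier estimate is projected onto
    a bounded box before each subproblem (the 'safeguarding' device)."""
    x, lam = np.asarray(x0, dtype=float), np.asarray(lam0, dtype=float)
    prev_infeas = np.inf
    for _ in range(max_outer):
        lam_safe = np.clip(lam, -lam_max, lam_max)   # safeguarded multipliers

        def aug_lagrangian(z):
            cz = c(z)
            return f(z) + lam_safe @ cz + 0.5 * rho * (cz @ cz)

        x = minimize(aug_lagrangian, x, method="BFGS").x  # inexact subproblem solve
        cx = c(x)
        infeas = np.linalg.norm(cx, np.inf)
        lam = lam_safe + rho * cx                    # first-order multiplier update
        if infeas <= tol:
            break
        if infeas > tau * prev_infeas:               # insufficient feasibility progress:
            rho *= gamma                             # increase the penalty parameter
        prev_infeas = infeas
    return x, lam

# Illustrative use: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = 1.
x_star, lam_star = safeguarded_alm(
    f=lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2,
    c=lambda z: np.array([z[0] + z[1] - 1.0]),
    x0=np.zeros(2), lam0=np.zeros(1))
```

The safeguarding step (projecting the multiplier estimate onto a bounded box before each subproblem) keeps the multiplier sequence bounded, which is what distinguishes safeguarded variants from the classical method of multipliers.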

On the Computation of Restricted Normal Cones

Restricted normal cones are of interest, for instance, in the theory of local error bounds, where they have recently been used to characterize the existence of a constrained Lipschitzian error bound. In this paper, we establish relations between two concepts for restricted normals. The first of these concepts was introduced in the late … Read more
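For background, the two classical normal cone constructions from variational analysis are recalled below; the paper's restricted normal cones refine these by restricting the sequences admitted in the limits, and their precise definitions are in the full text. The notation ($\widehat N_\Omega$, $N_\Omega$) is standard but is an assumption about the paper's setting.

```latex
% Fréchet (regular) normal cone to a closed set \Omega at \bar{x} \in \Omega:
\[
  \widehat N_\Omega(\bar x)
  = \Bigl\{\, v : \limsup_{x \to \bar x,\ x \in \Omega}
      \frac{\langle v,\, x - \bar x \rangle}{\|x - \bar x\|} \le 0 \,\Bigr\}.
\]
% Limiting (Mordukhovich) normal cone: the outer limit of Fréchet normals
% along sequences in \Omega converging to \bar{x}:
\[
  N_\Omega(\bar x)
  = \bigl\{\, v : \exists\, x_k \in \Omega,\ x_k \to \bar x,\
      v_k \to v,\ v_k \in \widehat N_\Omega(x_k) \,\bigr\}.
\]
```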

RSG: Beating Subgradient Method without Smoothness and Strong Convexity

In this paper, we study the efficiency of a {\bf R}estarted {\bf S}ub{\bf G}radient (RSG) method that periodically restarts the standard subgradient method (SG). We show that, when applied to a broad class of convex optimization problems, the RSG method can find an $\epsilon$-optimal solution with lower complexity than the SG method. In particular, we first … Read more
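As an illustration of the restarting idea, here is a minimal Python sketch assuming a constant-step subgradient stage whose averaged output is restarted with a halved step size; the stage lengths and step sizes in the paper are derived from problem constants rather than the fixed schedule used here, and the names `rsg` and `sg_stage` are hypothetical.

```python
import numpy as np

def sg_stage(x0, subgrad, step, num_iters):
    """One stage of the standard subgradient method with a constant step size;
    returns the averaged iterate, the usual output for non-smooth problems."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(num_iters):
        x = x - step * subgrad(x)
        avg += x / num_iters
    return avg

def rsg(x0, subgrad, step0, num_stages=10, iters_per_stage=1000):
    """Restarted subgradient sketch: restart SG from the previous stage's
    averaged iterate while geometrically shrinking the step size."""
    x, step = np.asarray(x0, dtype=float), step0
    for _ in range(num_stages):
        x = sg_stage(x, subgrad, step, iters_per_stage)
        step /= 2.0   # halve the step size at each restart
    return x

# Illustrative use on f(x) = ||x - 1||_1, with subgradient sign(x - 1).
x_hat = rsg(np.zeros(5), lambda x: np.sign(x - 1.0), step0=1.0)
```

Roughly, restarting lets the step size track the shrinking distance to the solution set, which is the mechanism behind the claimed complexity improvement.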