Nonsmooth optimization problems arising in practice, whether in signal processing, statistical estimation, or modern machine learning, tend to exhibit beneficial smooth substructure: their domains stratify into ``active manifolds'' of smooth variation, which common proximal algorithms ``identify'' in finite time. Identification then entails a transition to smooth dynamics, and permits the use of second-order information for acceleration. While identification is clearly useful algorithmically, empirical evidence suggests that even those algorithms that do not identify the active manifold in finite time---notably the subgradient method---are nonetheless affected by it. This work seeks to explain this phenomenon, asking: how do active manifolds impact the subgradient method in nonsmooth optimization? To answer this question, our approach posits two algorithmically useful properties that link the behavior of the function on and off the active manifold. The first, which we call {\em aiming}, asserts that subgradients point towards the manifold. This property ensures that, though identification fails, the subgradient iterates steadily approach the manifold. The second property states that subgradients on and off the manifold are close in tangent directions up to a linear error. This property ensures that the nonsmooth dynamics of the subgradient method are well-approximated by their smooth shadow along the manifold, with a controlled error. We show that these properties, while not automatic, hold for a wide class of problems, including cone reducible/decomposable functions and generic semialgebraic problems. Moreover, we develop a thorough calculus, proving such properties are preserved under smooth deformations and spectral lifts. We then turn to algorithmic consequences.
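Schematically, and with notation we introduce here only for illustration (the precise statements in the body may differ), the two properties can be written as follows. For $x$ near the active manifold $\mathcal{M}$, with nearest-point projection $\bar x = P_{\mathcal{M}}(x)$, aiming asserts
\[
\langle v,\, x - \bar x \rangle \;\ge\; \mu \cdot \mathrm{dist}(x, \mathcal{M})
\qquad \text{for all } v \in \partial f(x),
\]
for some constant $\mu > 0$, while the second property asserts
\[
\big\| P_{T_{\mathcal{M}}(\bar x)}\big( v - \nabla_{\mathcal{M}} f(\bar x) \big) \big\| \;\le\; C \cdot \mathrm{dist}(x, \mathcal{M})
\qquad \text{for all } v \in \partial f(x),
\]
where $P_{T_{\mathcal{M}}(\bar x)}$ denotes the projection onto the tangent space of $\mathcal{M}$ at $\bar x$ and $\nabla_{\mathcal{M}} f$ is the gradient of the smooth restriction $f|_{\mathcal{M}}$.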
Here, the two pillars---aiming and subgradient approximation---fully expose the smooth substructure of the problem, implying that the shadow of the (stochastic) subgradient method along the active manifold is precisely an inexact Riemannian gradient method with an implicit retraction. This viewpoint leads to several consequences that parallel results in smooth optimization, despite the nonsmoothness of the problem: local rates of convergence, asymptotic normality, and saddle point avoidance. The asymptotic normality results appear to be new even in the most classical setting of stochastic nonlinear programming. The results culminate in the following observation: the perturbed subgradient method, on generic Clarke regular semialgebraic problems, converges only to local minimizers.
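As a toy numerical illustration of these dynamics (our own construction, not an example from the text), consider $f(x,y) = |x| + y^2$, whose active manifold is the line $\{x = 0\}$. The subgradient iterates never land exactly on the manifold, yet steadily approach it, while the shadow coordinate $y$ follows gradient descent on the smooth restriction $f(0,y) = y^2$:

```python
import numpy as np

# Toy example: subgradient method on f(x, y) = |x| + y**2.
# Active manifold: the line {x = 0}. The x-coordinate never
# identifies the manifold exactly (it oscillates across it),
# but its distance to the manifold shrinks like the step size,
# while y performs gradient descent on the smooth restriction.

def subgrad(z):
    x, y = z
    gx = np.sign(x)          # a Clarke subgradient of |x| (0 at x = 0)
    return np.array([gx, 2.0 * y])

z = np.array([0.7, 0.5])     # start off the manifold
for k in range(1, 2001):
    z = z - (1.0 / k) * subgrad(z)   # diminishing step size 1/k

print(abs(z[0]), abs(z[1]))  # both distances to the minimizer shrink
```

Here the distance $|x_k|$ to the manifold is eventually bounded by the current step size $1/k$, consistent with the aiming property, while the $y$-coordinate converges as in smooth gradient descent.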