In large-scale applications, deterministic and stochastic variants of Cauchy's steepest descent method are widely used to minimize objectives that are only piecewise smooth. In this paper we analyse a deterministic descent method based on Philip Wolfe's 1975 generalization of rescaled conjugate gradients to convex objectives. Without the convexity assumption, the new method exploits semismoothness to obtain pairs of directionally active generalized gradients, which ensures that it can only converge to Clarke stationary points. Numerical results illustrate the theoretical findings.
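As a point of reference only (this is not the method analysed in the paper), the following is a minimal sketch of classical subgradient descent on a hypothetical piecewise smooth convex objective with a kink at its minimizer; the objective, the subgradient selection, and the diminishing step sizes are all illustrative assumptions.

```python
import numpy as np

def f(x):
    # Hypothetical piecewise smooth test objective (not from the paper):
    # f(x) = max(x1^2 + x2, -x2), minimized at the origin, where both
    # pieces are active and 0 lies in the Clarke subdifferential.
    return max(x[0] ** 2 + x[1], -x[1])

def subgradient(x):
    # Return one element of the Clarke subdifferential of f at x,
    # namely the gradient of an active smooth piece.
    if x[0] ** 2 + x[1] >= -x[1]:
        return np.array([2.0 * x[0], 1.0])
    return np.array([0.0, -1.0])

x = np.array([1.0, 1.0])
for k in range(1, 500):
    g = subgradient(x)
    x = x - (1.0 / k) * g  # diminishing steps, standard for subgradient methods
print(x, f(x))
```

Plain subgradient steps of this kind tend to zig-zag across the kink, which is precisely the behaviour that motivates combining pairs of generalized gradients, as in Wolfe's conjugate subgradient approach.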