A unified framework of first-order algorithms based on adaptively preconditioned gradients is proposed for nonconvex unconstrained optimization; it includes popular methods such as full-matrix and diagonal AdaGrad and AdaNorm, as well as adaptive variants of Shampoo and Muon. The framework also allows heterogeneous geometries to be combined across different groups of variables while preserving a unified convergence analysis. A fully stochastic global rate-of-convergence analysis is conducted for all methods in the framework, with and without two types of momentum, under reasonable assumptions on the variance of the gradient oracle and without assuming bounded stochastic gradients or a sufficiently small stepsize.
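For illustration only, a schematic adaptively preconditioned update covering these special cases can be written as follows (the symbols $x_k$, $g_k$, $H_k$, and $\alpha$ are generic and not necessarily the paper's notation):
\[
x_{k+1} \;=\; x_k \;-\; \alpha\, H_k^{-1} g_k,
\]
where $g_k$ is a stochastic gradient and $H_k \succ 0$ is a preconditioner built from past gradients. Standard choices include the diagonal AdaGrad preconditioner $H_k = \operatorname{diag}\!\big(\sum_{j \le k} g_j \odot g_j\big)^{1/2}$, the full-matrix AdaGrad preconditioner $H_k = \big(\sum_{j \le k} g_j g_j^{\top}\big)^{1/2}$, and the scalar (norm-based) preconditioner $H_k = \big(\sum_{j \le k} \|g_j\|^2\big)^{1/2} I$.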