Two algorithms are proposed, analyzed, and tested for solving continuous optimization problems with nonlinear equality constraints. Each is an extension of a stochastic momentum-based method from the unconstrained setting to a stochastic Newton-SQP-type (sequential quadratic programming) algorithm for the equality-constrained setting. One is an extension of the heavy-ball method and the other is an extension of the Adam optimization method. Convergence guarantees are provided for the algorithms in the constrained setting that are on par with state-of-the-art guarantees for their unconstrained counterparts. A critical feature of each extension is that the momentum terms are implemented with projected gradient estimates rather than with the gradient estimates themselves. The significant practical effect of this choice is demonstrated in an extensive set of numerical experiments on informed supervised machine learning problems. These experiments also show the benefits of employing a constrained approach to supervised machine learning rather than a typical regularization-based approach.
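As a schematic illustration of the projected-momentum idea (with assumed notation, not drawn verbatim from the algorithm statements): letting $g_k$ denote a stochastic gradient estimate at the iterate $x_k$, $J_k$ the Jacobian of the constraint function at $x_k$ (assumed to have full row rank), and $P_k = I - J_k^\top (J_k J_k^\top)^{-1} J_k$ the orthogonal projector onto the null space of $J_k$, a heavy-ball-type momentum term driven by projected gradient estimates would take the form
\[
  m_k \;=\; \beta\, m_{k-1} \;+\; (1-\beta)\, P_k g_k, \qquad \beta \in [0,1),
\]
in contrast to the unconstrained update $m_k = \beta\, m_{k-1} + (1-\beta)\, g_k$ that accumulates the raw gradient estimates.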