A new algorithm for smooth constrained optimization is proposed that never computes the value of the problem’s objective function and that handles both equality and inequality constraints. The algorithm uses an adaptive switching strategy between a normal step, which reduces constraint infeasibility, and a tangential step, which improves dual optimality; the latter is inspired by the AdaGrad-norm method. Its worst-case iteration complexity is analyzed, showing that the norms of the gradients it generates converge to zero like $O(1/\sqrt{k+1})$ for problems with full-rank Jacobians. Numerical experiments show that the algorithm’s performance is remarkably
insensitive to noise in the objective function’s gradient.
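For context, the following is a minimal sketch of the unconstrained AdaGrad-norm update that inspires the tangential step; it is not the proposed algorithm (the normal step, the constraint handling, and the adaptive switching rule are omitted), and the function names and test problem are illustrative assumptions.

```python
# Sketch of the AdaGrad-norm mechanism only: step sizes are built from
# accumulated gradient norms, and no objective-function values are used.
import numpy as np

def adagrad_norm(grad, x0, b0=1e-2, iters=1000):
    """x_{k+1} = x_k - grad(x_k)/b_k with b_k^2 = b_{k-1}^2 + ||grad(x_k)||^2."""
    x = np.asarray(x0, dtype=float)
    b2 = b0 ** 2                      # running sum of squared gradient norms
    for _ in range(iters):
        g = grad(x)
        b2 += np.dot(g, g)            # accumulate squared gradient norm
        x = x - g / np.sqrt(b2)       # step size decays roughly like 1/sqrt(k)
    return x

if __name__ == "__main__":
    # Illustrative test problem: minimize ||x - 1||^2 using only its gradient.
    grad = lambda x: 2.0 * (x - 1.0)
    x_star = adagrad_norm(grad, x0=np.zeros(3))
    print(np.linalg.norm(grad(x_star)))   # gradient norm should be small
```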