An algorithm is proposed for solving optimization problems with
stochastic objective and deterministic equality and inequality
constraints. This algorithm is objective-function-free in the sense
that it uses only the objective's gradient and never evaluates the
objective itself.
It is based on an adaptive selection between function-decreasing and
constraint-improving iterations, the former using an Adagrad-type
stepsize. When applied to problems whose constraint Jacobian has full
rank, a combined primal-dual optimality measure is shown to decrease at
a rate of $\mathcal{O}(1/\sqrt{k})$, which matches the convergence rate
of first-order methods in the unconstrained case.
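For illustration, the function-decreasing iterations may be thought of as
using a standard Adagrad-type update of the generic form below (a sketch
only; the paper's precise scaling, and any coordinate-wise variant, may
differ), where $g_j$ denotes the stochastic gradient at iterate $x_j$,
and $\varsigma > 0$ and $\epsilon \ge 0$ are illustrative parameters not
taken from the paper:
\[
x_{k+1} = x_k - \alpha_k\, g_k,
\qquad
\alpha_k = \frac{\varsigma}{\sqrt{\epsilon + \sum_{j=0}^{k} \|g_j\|^2}}.
\]
The accumulated squared gradient norms in the denominator make the
stepsize adaptive without requiring any objective-function evaluations.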