Large-scale constrained convex optimization problems arise in several application domains. First-order methods are good candidates to tackle such problems due to their low per-iteration cost and memory requirements. The level-set framework extends the applicability of first-order methods to problems with complicated convex objectives and constraint sets. Current methods based on this framework either rely on the solution of challenging subproblems or do not guarantee a feasible solution, especially if the procedure is terminated before convergence. We develop a level-set method that finds an $\epsilon$-relative optimal and feasible solution to a constrained convex optimization problem with a fairly general objective function and set of constraints, maintains a feasible solution at each iteration, and only relies on calls to first-order oracles. We establish the iteration complexity of our approach, also accounting for the smoothness and strong convexity of the objective function and constraints when these properties hold. The dependence of our complexity on $\epsilon$ is similar to the analogous dependence in the unconstrained setting, which is not known to be true for level-set methods in the literature. Nevertheless, ensuring feasibility is not free. The iteration complexity of our method depends on a condition number, while existing level-set methods that do not guarantee feasibility can avoid such dependence. We numerically validate the usefulness of ensuring a feasible solution path by comparing our approach with an existing level-set method on a Neyman-Pearson classification problem.
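The classical level-set idea the abstract builds on can be illustrated on a toy one-dimensional instance: to solve $\min f(x)$ s.t. $g(x) \le 0$, one root-finds on the level $t$ of the value function $v(t) = \min_x \max(f(x) - t,\, g(x))$, since $v(t) \le 0$ exactly when some feasible $x$ attains objective value at most $t$. The sketch below is hypothetical and not the paper's method: the problem instance, function names, and bounds are made up for illustration, the inner subproblem is solved by ternary search rather than a first-order oracle for clarity, and the sketch does not maintain a feasible iterate path as the paper's method does.

```python
# Toy instance (illustrative only): min (x - 2)^2  s.t.  x - 1 <= 0,
# whose optimum is x* = 1 with optimal value f* = 1.
def f(x): return (x - 2.0) ** 2
def g(x): return x - 1.0

def h(t, x):
    """Level-set subproblem objective at level t (a convex max-function)."""
    return max(f(x) - t, g(x))

def inner_min(t, lo=-5.0, hi=5.0, iters=100):
    """Approximately minimize the convex 1-D function h(t, .) by ternary search.
    (In the large-scale setting this is where a first-order oracle is called.)"""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if h(t, m1) < h(t, m2):
            hi = m2
        else:
            lo = m1
    x = 0.5 * (lo + hi)
    return h(t, x)

def level_set_bisection(lo=0.0, hi=4.0, tol=1e-3):
    """Bisect on the level t: v(t) <= 0 means objective value t is achievable."""
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if inner_min(t) <= 0.0:
            hi = t   # a feasible point with f(x) <= t exists; lower the level
        else:
            lo = t   # level t is unattainable; raise it
    return hi        # approximates the optimal value f* = 1
```

On this instance the bisection converges to the optimal value $f^* = 1$ to within the tolerance; the paper's contribution concerns how to drive such a scheme with first-order oracles only, while keeping every iterate feasible.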
Technical report of the University of Iowa and the University of Illinois at Chicago, October 2017.