A First Order Method for Finding Minimal Norm-Like Solutions of Convex Optimization Problems

We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed convex set that is itself the optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, establish convergence of the sequence it generates, and derive a rate of convergence for the sequence of function values. A portfolio optimization example illustrates our results.
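To make the problem class concrete, here is a minimal sketch of its best known special case: the outer objective is omega(x) = ||x||^2 / 2 and the inner problem is least squares, min_x ||Ax - b||^2, so the sought point is the minimal norm least squares solution A^+ b. The sketch does not implement the paper's minimal norm gradient method; it exploits a fact specific to this quadratic pairing, namely that plain gradient descent on the inner problem started at the origin converges to the minimal norm solution, because the iterates never leave the row space of A. The matrix sizes and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))   # underdetermined: infinitely many least squares solutions
b = rng.standard_normal(5)

# Inner problem: minimize f(x) = ||Ax - b||^2 / 2 over R^10.
# Outer problem: among its minimizers, find the one minimizing ||x||^2 / 2.
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad f (largest singular value squared)
x = np.zeros(10)                   # starting at the origin keeps iterates in the row space of A
for _ in range(5000):
    x -= (1.0 / L) * (A.T @ (A @ x - b))   # gradient step on the inner objective

x_ref = np.linalg.pinv(A) @ b      # closed-form minimal norm least squares solution A^+ b
print(np.allclose(x, x_ref, atol=1e-8))    # True: iterates converge to the minimal norm solution
```

This shortcut is tied to the quadratic outer objective and quadratic inner problem; the minimal norm gradient method studied in the paper is designed for a general strongly convex outer function over the optimal set of a general convex inner problem.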