This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the iterates is progressively enforced by means of shift variables and an exact penalty approach. Global and $q$-superlinear convergence is obtained for a fixed penalty parameter; global convergence to the analytic center of the optimal set is ensured when the barrier parameter tends to zero, provided strict complementarity holds.
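To illustrate the general flavour of combining a quasi-Newton inner loop with an interior point (log-barrier) treatment of the inequality constraints, here is a minimal sketch in Python. It does not reproduce the report's shifted-penalty formulation, primal-dual updates, or convergence safeguards; it assumes a toy strongly convex quadratic objective with linear inequality constraints, a standard BFGS update of the inverse Hessian, and a geometrically decreasing barrier parameter. All names (`Q`, `q`, `A`, `b`, `bfgs_interior_point`, the tolerances) are illustrative choices, not quantities from the paper.

```python
import numpy as np

# Toy strongly convex problem (assumed for illustration):
#   min f(x) = 0.5 x'Qx + q'x   s.t.   c(x) = b - A x >= 0
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-2.0, -1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])            # x1 + x2 <= 1, x >= 0

def f(x):      return 0.5 * x @ Q @ x + q @ x
def grad_f(x): return Q @ x + q
def c(x):      return b - A @ x          # constraint values, kept strictly positive

def barrier(x, mu):
    return f(x) - mu * np.sum(np.log(c(x)))

def grad_barrier(x, mu):
    return grad_f(x) + mu * A.T @ (1.0 / c(x))

def line_search(x, d, mu, g):
    """Backtracking step that keeps c(x) > 0 and enforces an Armijo decrease."""
    t, phi0 = 1.0, barrier(x, mu)
    while np.any(c(x + t * d) <= 0) or barrier(x + t * d, mu) > phi0 + 1e-4 * t * g @ d:
        t *= 0.5
        if t < 1e-12:
            break
    return t

def bfgs_interior_point(x, mu=1.0, mu_factor=0.2, tol=1e-8):
    n = len(x)
    while mu > tol:
        H = np.eye(n)                      # BFGS approximation of the inverse Hessian
        for _ in range(100):               # inner quasi-Newton iterations for fixed mu
            g = grad_barrier(x, mu)
            if np.linalg.norm(g) < mu:     # inner stopping test tied to mu
                break
            d = -H @ g
            t = line_search(x, d, mu, g)
            s = t * d
            x_new = x + s
            y = grad_barrier(x_new, mu) - g
            if s @ y > 1e-12:              # standard BFGS (inverse) update
                rho = 1.0 / (s @ y)
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x = x_new
        mu *= mu_factor                    # drive the barrier parameter toward zero
    return x

x0 = np.array([0.2, 0.2])                  # strictly feasible starting point
print(bfgs_interior_point(x0))             # approaches the constrained minimizer (2/3, 1/3)
```

In this sketch the iterates are kept strictly feasible by the line search, whereas the algorithm of the paper additionally handles infeasible starting points through shift variables and an exact penalty term, which is precisely what the sketch omits.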
Citation
RR-4087, INRIA Rocquencourt, BP 105, 78153 Le Chesnay Cedex (France), December 2000