An incremental method for solving convex finite minmax problems

We introduce a new approach to minimizing a function defined as the pointwise maximum of finitely many convex real functions (referred to in what follows as the "component functions"), with the aim of working on the basis of "incomplete knowledge" of the objective function. Specifically, we propose a descent algorithm that does not necessarily require, at the current point, evaluation of the actual value of the objective function, i.e., of all the component functions, thus extending to minmax problems the philosophy of the incremental and online approaches widely adopted in the nonlinear least squares literature. Since the finite minmax problem is nonsmooth, we resort to the well-established machinery of bundle methods. We provide a global convergence analysis of our method and, in addition, study a subgradient aggregation scheme that yields a version of the method in which the problem of finding a tentative step is drastically simplified. The paper is completed by numerical results obtained on a set of standard test problems.
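For concreteness, the problem class described above can be stated as follows; the notation here is ours and is not taken from the paper:

\[
\min_{x \in \mathbb{R}^n} \; f(x), \qquad f(x) = \max_{i = 1, \dots, m} f_i(x),
\]

where each component function \(f_i : \mathbb{R}^n \to \mathbb{R}\), \(i = 1, \dots, m\), is convex. Evaluating \(f\) exactly at a point requires evaluating all \(m\) component functions; the incremental philosophy consists in working, at each iteration, with only part of this information.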

Citation

Mathematics of Operations Research, 31(1), 173-187, 2006.