Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms

I present a multilevel optimization approach (termed MG/Opt) for the solution of constrained optimization problems. The approach assumes that one has a hierarchy of models of an underlying optimization problem, ordered from fine to coarse, and that one is interested in finding solutions at the finest level of detail. In this hierarchy, calculations on coarser levels are less expensive, but also of lower fidelity, than calculations on finer levels. The intent of MG/Opt is to use calculations on coarser levels to accelerate the progress of the optimization on the finest level. Global convergence (i.e., convergence to a Karush-Kuhn-Tucker point from an arbitrary starting point) is ensured by requiring a single step of a convergent method on the finest level, plus a line search for incorporating the coarse-level corrections. The convergence results apply to a broad class of algorithms with minimal assumptions about the properties of the coarse models. I also analyze the descent properties of the algorithm, i.e., whether the coarse-level correction is guaranteed to improve the fine-level solution. Although additional assumptions are required to guarantee improvement, those assumptions are likely to be satisfied by a broad range of optimization problems.
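To make the structure of such a scheme concrete, the following is a minimal two-level sketch in Python of an MG/Opt-style iteration. It is an illustration under stated assumptions, not the report's algorithm: the functions f_h, g_h, f_H, g_H, restrict, and prolong are hypothetical placeholders supplied by the user, and the shifted coarse objective is one standard way to enforce first-order coherence between levels.

```python
import numpy as np
from scipy.optimize import minimize, line_search

def mgopt_two_level(x_h, f_h, g_h, f_H, g_H, restrict, prolong, n_smooth=1):
    """One two-level MG/Opt-style iteration (hypothetical sketch).

    f_h, g_h : fine-level objective and its gradient
    f_H, g_H : coarse-level objective and its gradient
    restrict : maps fine-level vectors to the coarse space
    prolong  : maps coarse-level vectors to the fine space
    """
    # Pre-smoothing: a single step of a convergent fine-level method
    # (here one iteration of L-BFGS-B stands in for any globally
    # convergent optimizer, as the abstract requires).
    res = minimize(f_h, x_h, jac=g_h, method='L-BFGS-B',
                   options={'maxiter': n_smooth})
    x_h = res.x

    # Restrict the current iterate to the coarse level.
    x_H0 = restrict(x_h)

    # First-order coherence: shift the coarse objective so that its
    # gradient at x_H0 matches the restricted fine-level gradient.
    v = g_H(x_H0) - restrict(g_h(x_h))
    f_shift = lambda y: f_H(y) - v @ y
    g_shift = lambda y: g_H(y) - v

    # (Approximately) solve the shifted coarse-level problem.
    res_H = minimize(f_shift, x_H0, jac=g_shift, method='L-BFGS-B')

    # Prolong the coarse-level correction back to the fine level.
    e_h = prolong(res_H.x - x_H0)

    # Line search along the prolonged correction; if no acceptable
    # step exists, the correction is simply rejected.
    alpha = line_search(f_h, g_h, x_h, e_h)[0]
    if alpha is not None:
        x_h = x_h + alpha * e_h
    return x_h
```

The line search over the prolonged correction is what safeguards global convergence in this kind of scheme: the fine-level step alone guarantees progress, and a coarse correction that fails to produce descent is discarded rather than accepted.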

Citation

Technical Report, Systems Engineering and Operations Research Dept., George Mason University, April 2009. A revised version of this paper is: Stephen G. Nash, "Properties of a Class of Multilevel Optimization Algorithms for Equality-Constrained Problems," Optimization Methods and Software, to appear (2013). http://www.tandfonline.com/doi/abs/10.1080/10556788.2012.759571
