An algorithmic framework is presented for optimising general convex functions by non-synchronised parallel processes. Each process greedily picks a suitable adaptive subset of the coordinates and runs a bundle method on the corresponding restricted problem, stopping whenever a descent step is encountered or the predicted decrease has been reduced sufficiently. No prior knowledge of the dependencies between the variables is assumed. Instead, dependency information is collected automatically by analysing the aggregate-subgradient properties required for ensuring convergence. Within this framework, three strategies are discussed that support different scenarios of structural independence: a single convex function, a sum of partially separable convex functions, and a setting tuned to problem decomposition by Lagrangian relaxation of packing-type constraints. The theoretical framework presented here generalises a practical method proposed by the authors for Lagrangian relaxation of large-scale packing problems and simplifies the analysis.
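
To illustrate the per-process logic described above, the following is a minimal Python sketch, not the authors' algorithm: a single process restricts a convex function to a coordinate subset, runs a few proximal bundle iterations on that restricted problem, and stops at the first descent step or once the predicted decrease falls below a tolerance. The test function, the subset S, the proximal parameter t, the descent parameter m, and all tolerances are illustrative assumptions.

```python
# Illustrative sketch only (assumptions: test function, subset S, parameters).
import numpy as np
from scipy.optimize import minimize

def f_and_subgrad(x, A, b):
    """Convex piecewise-linear test function f(x) = max_i (A[i] @ x + b[i])."""
    vals = A @ x + b
    i = int(np.argmax(vals))
    return vals[i], A[i].copy()          # a subgradient is the active row

def restricted_bundle_step(x, S, A, b, t=1.0, m=0.1, tol=1e-6, max_iters=20):
    """Proximal bundle iterations on coordinates S only; other coordinates stay fixed.

    Returns as soon as a descent step is accepted, or when the predicted
    decrease of the cutting-plane model drops below `tol`.
    """
    xc = x.copy()                                   # stability centre (full vector)
    f_c, g = f_and_subgrad(xc, A, b)
    bundle = [(xc[S].copy(), f_c, g[S].copy())]     # cuts: (point_S, value, subgrad_S)
    n_s = len(S)

    for _ in range(max_iters):
        # Master problem: min_{z, v}  v + ||z - xc_S||^2 / (2t)
        #                 s.t.        v >= f_j + g_j^T (z - y_j)  for every cut j
        def obj(w):
            z, v = w[:n_s], w[-1]
            return v + np.dot(z - xc[S], z - xc[S]) / (2.0 * t)

        cons = [{'type': 'ineq',
                 'fun': (lambda w, yj=yj, fj=fj, gj=gj:
                         w[-1] - fj - gj @ (w[:n_s] - yj))}
                for (yj, fj, gj) in bundle]

        w0 = np.concatenate([xc[S], [f_c]])          # feasible starting point
        res = minimize(obj, w0, constraints=cons, method='SLSQP')
        z, v = res.x[:n_s], res.x[-1]

        predicted = f_c - (v + np.dot(z - xc[S], z - xc[S]) / (2.0 * t))
        if predicted < tol:
            return xc                                # model promises too little: stop

        trial = xc.copy()
        trial[S] = z
        f_trial, g_trial = f_and_subgrad(trial, A, b)

        if f_trial <= f_c - m * predicted:
            return trial                             # descent step found: stop
        bundle.append((z.copy(), f_trial, g_trial[S].copy()))  # null step: enrich model

    return xc

# Toy usage: each "process" would call restricted_bundle_step on its own subset.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x = np.zeros(10)
S = np.array([0, 2, 5, 7])                           # one process's coordinate subset (assumed)
x_new = restricted_bundle_step(x, S, A, b)
print(f_and_subgrad(x, A, b)[0], "->", f_and_subgrad(x_new, A, b)[0])
```

In the framework of the paper, many such processes would run asynchronously on different adaptive subsets, with the dependency information between subsets inferred from aggregate-subgradient properties rather than supplied in advance.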
Citation
Preprint 2013-8, Fakultät für Mathematik, Technische Universität Chemnitz, D-09107, July 2013