We consider the problem of identifying the solution(s) to an optimization problem whose domain is a subset of the integer lattice, and whose objective and constraint functions can be observed only through a stochastic simulation. Such problems appear to be particularly prevalent (see www.simopt.org) in service systems with capacity or service-level constraints. We present cgR-SPLINE, a random-restarts algorithm that repeatedly executes a gradient-based simulation optimization (SO) routine on strategically relaxed sample-path problems to return a sequence of local solution estimators of increasing precision; these local solution estimators are compared probabilistically to update an incumbent solution sequence that estimates the global minimum. Four issues are salient. (i) Solutions with binding stochastic constraints render naive sample-average approximation inconsistent; consistency in cgR-SPLINE is guaranteed through sequential relaxation of the stochastic constraints. (ii) The light-tailed convergence characteristic of SO problems on unconstrained discrete spaces appears to be weakened here; the general convergence rate is shown to be sub-exponential. (iii) An exploration-exploitation characterization demonstrates that cgR-SPLINE achieves the fastest convergence rate when the number of restarts is proportional to the simulation budget per restart; this contrasts with the continuous context, where much less exploration has been prescribed. (iv) Certain heuristics for choosing constraint relaxations, reporting solutions, and stopping prematurely are important to ensure that cgR-SPLINE exhibits good finite-time performance while retaining its asymptotic properties. We demonstrate cgR-SPLINE using three examples, two of which are nontrivial.
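To fix ideas, the following is a minimal, self-contained sketch (in Python, on a toy one-dimensional problem) of the restart structure described above: each restart runs a local search on a sample-path problem whose stochastic constraint is relaxed, with the per-restart sample size growing and the relaxation shrinking, and the resulting local solution estimators update an incumbent. All names, schedules, the toy simulation, and the crude local search below are illustrative placeholders, not cgR-SPLINE's actual routines or parameter choices.

```python
"""Illustrative sketch only; not the paper's cgR-SPLINE implementation."""
import math
import random

random.seed(1)

D = range(1, 21)  # toy one-dimensional integer domain {1, ..., 20}

def simulate(x):
    """Toy simulation: one noisy observation of the objective and of the
    constraint function g(x) <= 0 (feasible roughly when x >= 10, so the
    constraint is binding at the true optimum)."""
    obj = (x - 7.3) ** 2 + random.gauss(0.0, 4.0)
    con = (10.0 - x) + random.gauss(0.0, 2.0)
    return obj, con

def sample_means(x, m):
    """Sample-average estimates of objective and constraint from m replications."""
    obs = [simulate(x) for _ in range(m)]
    return sum(o for o, _ in obs) / m, sum(c for _, c in obs) / m

def local_search(x0, m, eps):
    """Crude stand-in for a gradient-based local SO routine: descent over
    integer neighbors of the eps-relaxed sample-path problem."""
    x = x0
    improved = True
    while improved:
        improved = False
        fx, _ = sample_means(x, m)
        for y in (x - 1, x + 1):
            if y not in D:
                continue
            fy, gy = sample_means(y, m)
            if gy <= eps and fy < fx:  # accept only relaxed-feasible improvements
                x, fx, improved = y, fy, True
                break
    return x

def restart_solver(n_restarts):
    incumbent, best_val = None, math.inf
    for k in range(1, n_restarts + 1):
        m = 20 * k                    # increasing precision per restart (placeholder schedule)
        eps = 2.0 / math.sqrt(m)      # shrinking constraint relaxation (placeholder schedule)
        x_loc = local_search(random.choice(list(D)), m, eps)
        # Re-estimate the candidate with fresh replications before comparing
        # against the incumbent; a crude stand-in for the paper's
        # probabilistic comparison of local solution estimators.
        val, con = sample_means(x_loc, 5 * m)
        if con <= eps and val < best_val:
            incumbent, best_val = x_loc, val
    return incumbent, best_val

if __name__ == "__main__":
    print(restart_solver(n_restarts=10))
```

In this toy instance the unconstrained minimizer (near x = 7) is infeasible, so the returned incumbent should settle near the binding-constraint solution x = 10, which is the situation motivating the sequential relaxation of the stochastic constraints.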
Citation
Under review at Operations Research. Submission date: August 2014