We propose a sigmoidal approximation (SigVaR) of the value-at-risk (VaR) and we use this approximation to tackle nonlinear programming problems (NLPs) with chance constraints. We prove that the approximation is conservative and that its level of conservatism can be made arbitrarily small in the limit of its parameter values. The SigVaR approximation brings computational benefits over exact mixed-integer and difference-of-convex reformulations because its sample average approximation can be cast as a standard NLP. Unfortunately, as with any sigmoidal function, SigVaR becomes numerically unstable in this parameter limit. To ameliorate this issue, we propose a scheme that solves a sequence of approximations of increasing quality. We also establish conditions under which SigVaR is less conservative than the well-known conditional value-at-risk (CVaR) and Bernstein approximations, and we use this result to initialize the proposed scheme. We conduct small- and large-scale numerical studies to demonstrate the benefits and limitations of the proposed approximation.
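To illustrate the idea behind a conservative sigmoidal approximation of a chance constraint, the sketch below uses the elementary bound 2*sigmoid(mu*t) >= 1[t >= 0] for all t, so replacing the violation indicator with this smooth surrogate in a sample average approximation yields a conservative, everywhere-differentiable constraint that a standard NLP solver can handle. This is a minimal sketch under our own simplified one-parameter form, not the paper's exact SigVaR parameterization; the sample distribution and parameter values are illustrative assumptions.

```python
# Sketch: smooth conservative surrogate for the indicator 1[g >= 0] used in
# sample-average approximations of a chance constraint P(g(x, xi) >= 0) <= alpha.
# Simplified one-parameter form for illustration; not the paper's SigVaR.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def indicator_violation(g):
    # Exact (nonsmooth) violation indicator 1[g >= 0].
    return (g >= 0).astype(float)

def smooth_violation(g, mu):
    # 2*sigmoid(mu*g) >= 1[g >= 0] pointwise, so averaging it over samples
    # over-estimates the empirical violation probability (conservatism).
    return 2.0 * sigmoid(mu * g)

# Illustrative sampled constraint values g(x, xi_i) for a fixed x.
rng = np.random.default_rng(0)
g_samples = rng.normal(loc=-1.0, scale=1.0, size=1000)

for mu in (1.0, 5.0, 50.0):
    exact = indicator_violation(g_samples).mean()
    approx = smooth_violation(g_samples, mu).mean()
    print(f"mu={mu:5.1f}  exact={exact:.4f}  smooth={approx:.4f}")
```

Because the surrogate upper-bounds the indicator sample by sample, any x satisfying the smooth constraint also satisfies the empirical chance constraint; sharpening mu (as in the sequential scheme described above) tightens the surrogate at the cost of numerical conditioning.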
Submitted to SIAM Journal on Optimization