Subdifferentiation and Smoothing of Nonsmooth Integral Functionals

The subdifferential calculus for the expectation of nonsmooth random integrands involves many fundamental and challenging problems in stochastic optimization. It is known that for Clarke regular integrands, the Clarke subdifferential of the expectation equals the expectation of the Clarke subdifferentials of the integrands. In particular, this holds for convex integrands. However, little is known about the calculation of Clarke subgradients for the …
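
For reference, the interchange result the abstract appeals to can be written out explicitly; this restates the classical formula mentioned above, not the paper's new contribution:

```latex
% Interchange formula for a Clarke regular integrand f(.,\xi)
% with integrable subgradients (classical result):
\[
  \partial \, \mathbb{E}_{\xi}\big[ f(x,\xi) \big]
  \;=\;
  \mathbb{E}_{\xi}\big[ \partial_x f(x,\xi) \big],
\]
% where the right-hand side is the set-valued (Aumann) expectation of
% the Clarke subdifferentials; for convex integrands this recovers the
% classical interchange of subdifferentiation and integration.
```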

Determining optimal locations for charging stations of electric car-sharing systems under stochastic demand

In this article, we introduce and study a two-stage stochastic optimization problem suitable for solving strategic optimization problems of car-sharing systems that utilize electric cars. By combining the individual advantages of car-sharing and electric vehicles, such electric car-sharing systems may help to overcome future challenges related to pollution, congestion, or shortages of fossil fuels. A …
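
For readers unfamiliar with the setting, the standard two-stage template that such strategic models instantiate is sketched below; the symbols are illustrative, not the paper's notation:

```latex
% Generic two-stage stochastic program with recourse.
% x: first-stage (strategic) decisions, e.g. station locations;
% y: second-stage (operational) recourse after demand \xi is realized.
\[
  \min_{x \in X} \; c^{\top} x + \mathbb{E}_{\xi}\big[ Q(x,\xi) \big],
  \qquad
  Q(x,\xi) = \min_{y \ge 0} \big\{ q(\xi)^{\top} y \;:\;
      W(\xi)\, y = h(\xi) - T(\xi)\, x \big\}.
\]
```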

A randomized method for smooth convex minimization, motivated by probability maximization

We propose a randomized gradient method, or, viewed from the dual side, a randomized cutting-plane method. From the primal viewpoint, our method bears a resemblance to the stochastic approximation family. In contrast to stochastic approximation, however, the present method builds a model problem. …
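
To make the primal/dual picture concrete, here is a minimal sketch of a cutting-plane iteration with randomization injected at the query point. The function names, the box constraint, and the jitter rule are illustrative assumptions, not the authors' method:

```python
# Sketch: minimize a smooth convex f over a box by accumulating
# supporting cuts and minimizing the piecewise-linear model via an LP.
import numpy as np
from scipy.optimize import linprog

def randomized_cutting_plane(f, grad, dim, lo, hi, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, dim)
    cuts = []                                # model m(x) = max_i g_i.x + b_i
    for _ in range(iters):
        g = grad(x)
        cuts.append((g, f(x) - g @ x))       # supporting cut at x
        # LP: minimize t subject to g_i.x + b_i <= t and lo <= x <= hi
        A_ub = np.array([np.append(gi, -1.0) for gi, _ in cuts])
        b_ub = np.array([-bi for _, bi in cuts])
        c = np.append(np.zeros(dim), 1.0)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(lo, hi)] * dim + [(None, None)])
        x = res.x[:dim]
        # randomization (illustrative): jitter the next query point
        x = np.clip(x + 0.01 * rng.standard_normal(dim), lo, hi)
    return x
```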

A Stability Result for Linear Markov Decision Processes

In this paper, we propose a semi-metric for Markov processes that allows one to bound optimal values of linear Markov Decision Processes (MDPs). Similar to existing notions of distance for general stochastic processes, our distance is based on transportation metrics. Apart from the specialization to MDPs, our contribution is to make the distance problem-specific, i.e., …
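
The standard transportation distance that notions of this type build on is the Wasserstein-1 metric, recalled here for background:

```latex
% Wasserstein-1 distance between probability measures P and Q on a
% metric space (S, d), with \Pi(P,Q) the set of couplings of P and Q:
\[
  W_1(P,Q) \;=\; \inf_{\pi \in \Pi(P,Q)}
      \int_{S \times S} d(s,s') \, \mathrm{d}\pi(s,s').
\]
```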

Exploiting Negative Curvature in Deterministic and Stochastic Optimization

This paper addresses the question of whether it can be beneficial for an optimization algorithm to follow directions of negative curvature. Although some prior work has established convergence results for algorithms that integrate both descent and negative curvature directions, there has not yet been numerical evidence showing that such methods offer significant performance improvements. In …
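
As an illustrative sketch (not the paper's algorithm), the basic mechanism looks like this: test the Hessian for a negative eigenvalue and, when one exists, step along the corresponding eigenvector rather than the gradient:

```python
# Sketch: choose between steepest descent and a negative curvature
# direction based on the smallest Hessian eigenvalue.
import numpy as np

def search_direction(grad, hess, x, tol=1e-8):
    g, H = grad(x), hess(x)
    eigvals, eigvecs = np.linalg.eigh(H)     # ascending eigenvalues
    if eigvals[0] < -tol:                    # negative curvature present
        d = eigvecs[:, 0]
        if g @ d > 0:                        # orient d as a descent direction
            d = -d
        return d, "negative_curvature"
    return -g, "gradient_descent"
```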

Regularized Stochastic Dual Dynamic Programming for convex nonlinear optimization problems

We define a regularized variant of the Dual Dynamic Programming algorithm called REDDP (REgularized Dual Dynamic Programming) to solve nonlinear dynamic programming equations. We extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a regularization of the Stochastic Dual Dynamic Programming (SDDP) …
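
One common form such regularization takes in a DDP-type forward pass is a prox term around an incumbent; this is a sketch of the idea, and the exact SDDP-REG subproblem may differ:

```latex
% Regularized forward step: \mathfrak{Q}_{t+1}^{k} is the current
% polyhedral cut model of the cost-to-go function, \hat{x}_t an
% incumbent, and \rho_k > 0 a prox parameter:
\[
  x_t^{k} \in \arg\min_{x \in X_t(x_{t-1}^{k})}
  \Big\{ c_t^{\top} x + \mathfrak{Q}_{t+1}^{k}(x)
         + \tfrac{\rho_k}{2}\, \| x - \hat{x}_t \|^2 \Big\}.
\]
```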

On level regularization with normal solutions in decomposition methods for multistage stochastic programming problems

We consider well-known decomposition techniques for multistage stochastic programming and a new scheme based on normal solutions for stabilizing iterates during the solution process. The given algorithms combine ideas from finite perturbation of convex programs and level bundle methods to regularize the so-called forward step of these decomposition methods. Numerical experiments on a hydrothermal scheduling …
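
For background, the classical level-bundle step that schemes of this kind regularize is recalled below; this is the textbook form, not the paper's exact algorithm:

```latex
% Level-bundle step: project the stability center \hat{x}_k onto the
% level set of the cutting-plane model \check{f}_k, with the level
% \ell_k interpolating the current lower and upper bounds:
\[
  x_{k+1} \in \arg\min_{x \in X}
  \Big\{ \tfrac{1}{2}\,\| x - \hat{x}_k \|^2 \;:\;
         \check{f}_k(x) \le \ell_k \Big\},
  \qquad
  \ell_k = f_k^{\mathrm{low}} + \lambda\,\big( f_k^{\mathrm{up}} - f_k^{\mathrm{low}} \big).
\]
```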

Statistical inference and hypotheses testing of risk averse stochastic programs

We study statistical properties of the optimal value and optimal solutions of the Sample Average Approximation of risk averse stochastic problems. Central Limit Theorem type results are derived for the optimal value when the stochastic program is expressed in terms of a law invariant coherent risk measure having a discrete Kusuoka representation. The obtained results …
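
The classical risk-neutral SAA limit theorem that results of this kind extend reads as follows:

```latex
% With \hat{v}_N the SAA optimal value, v^* the true optimal value,
% S the true solution set, and Y(.) the Gaussian process arising from
% the CLT for the sample-average objective:
\[
  \sqrt{N}\,\big( \hat{v}_N - v^{*} \big)
  \;\xrightarrow{d}\;
  \inf_{x \in S} Y(x).
\]
```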

Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle ($\mathcal{SFO}$). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case …
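
A minimal stochastic L-BFGS-style sketch follows, to fix ideas; it is illustrative, not the paper's exact framework, and the curvature safeguard is one standard way to keep the update well defined under gradient noise:

```python
# Sketch: stochastic gradients drive the step; curvature pairs (s, y)
# are stored only when they pass a positivity safeguard, since s and y
# are formed from gradients evaluated on different samples.
import numpy as np
from collections import deque

def two_loop(g, pairs):
    """Standard L-BFGS two-loop recursion: approximates H_k @ g."""
    q, alphas = g.copy(), []
    for s, y in reversed(pairs):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)               # initial Hessian scaling
    for (s, y), a in zip(pairs, reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def stochastic_qn(sgrad, x0, steps=1000, lr=0.05, memory=10, eps=1e-8):
    x, pairs = x0.copy(), deque(maxlen=memory)
    x_prev, g_prev = x0.copy(), sgrad(x0)
    for _ in range(steps):
        g = sgrad(x)
        s, y = x - x_prev, g - g_prev
        if s @ y > eps * (s @ s):            # safeguard: keep H positive
            pairs.append((s, y))
        x_prev, g_prev = x.copy(), g
        x = x - lr * two_loop(g, list(pairs))
    return x
```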

Algorithms for stochastic optimization with expectation constraints

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with an expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with the expectation constraint on decision variables. We show that …
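
Written out in standard form, the problem class from the abstract is:

```latex
% X closed and convex; F and G are expectation functions of the
% decision x and the random vector \xi:
\[
  \min_{x \in X} \; \mathbb{E}_{\xi}\big[ F(x,\xi) \big]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\xi}\big[ G(x,\xi) \big] \le 0.
\]
```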