Adaptive Sampling Strategies for Stochastic Optimization

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used to compute gradient approximations. Unlike other variance reduction techniques, which require either additional storage or the regular computation of full gradients, the proposed method reduces variance by increasing the sample size as needed. The decision to increase …
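A minimal Python sketch of the adaptive-sample-size idea, assuming a finite-sum objective and a norm-type variance test; the names (`grad_i`, `theta`), the doubling rule, and the test itself are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def adaptive_sample_gradient(grad_i, x0, n_total, lr=0.1, theta=0.5,
                             batch0=8, iters=100, seed=0):
    """Gradient descent on a finite sum that grows the sample size when
    the sampled gradients are too noisy relative to their mean.

    grad_i(x, i): gradient of the i-th component function at x.
    theta: norm-test threshold (illustrative choice).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    batch = batch0
    for _ in range(iters):
        idx = rng.choice(n_total, size=batch, replace=False)
        grads = np.stack([grad_i(x, i) for i in idx])
        g = grads.mean(axis=0)
        # Estimated variance of the averaged gradient estimator.
        var = grads.var(axis=0, ddof=1).sum() / batch
        # Norm test: if noise dominates the signal, enlarge the sample
        # instead of storing past gradients or taking a full gradient.
        if var > theta * (g @ g) and batch < n_total:
            batch = min(2 * batch, n_total)
        x = x - lr * g
    return x
```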

From Estimation to Optimization via Shrinkage

We study a class of quadratic stochastic programs where the distribution of the random variables has unknown parameters. A traditional approach is to estimate the parameters using a maximum likelihood estimator (MLE) and to use this estimate as input to the optimization problem. For the unconstrained case, we show that an estimator that “shrinks” the MLE towards …
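To make the plug-in-versus-shrinkage comparison concrete, here is a small Python sketch using a James-Stein-style estimator for the toy problem $\min_x \mathbb{E}\|x-\xi\|^2$, whose solution is the mean of $\xi$; the shrinkage rule and target are standard textbook choices, not necessarily the paper's estimator:

```python
import numpy as np

def james_stein(xbar, n, sigma2=1.0, target=None):
    """Positive-part James-Stein shrinkage of a sample mean toward a
    target point (the origin by default)."""
    d = xbar.size
    target = np.zeros(d) if target is None else target
    dev = xbar - target
    shrink = max(0.0, 1.0 - (d - 2) * sigma2 / (n * (dev @ dev) + 1e-12))
    return target + shrink * dev

# Toy decision problem min_x E||x - xi||^2 with xi ~ N(mu, I): the
# optimizer is mu, so a better estimate of mu yields a better decision.
rng = np.random.default_rng(1)
mu = np.full(10, 0.3)
sample = rng.normal(mu, 1.0, size=(25, 10))
xbar = sample.mean(axis=0)
print("MLE decision error:   ", np.sum((xbar - mu) ** 2))
print("Shrunk decision error:", np.sum((james_stein(xbar, n=25) - mu) ** 2))
```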

Optimizing power generation in the presence of micro-grids

In this paper we consider energy management optimization problems in a future in which interaction with micro-grids must be accounted for. We model this interaction through a set of contracts between the generation companies owning centralized assets and the micro-grids. We formulate a general stylized model that can, in principle, account for …

The Adaptive Sampling Gradient Method: Optimizing Smooth Functions with an Inexact Oracle

Consider settings such as stochastic optimization where a smooth objective function $f$ is unknown but can be estimated with an \emph{inexact oracle} such as quasi-Monte Carlo (QMC) or numerical quadrature. The inexact oracle is assumed to yield function estimates whose error decays with increasing oracle effort. For solving such problems, we present the Adaptive …
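A sketch of the adaptive-effort idea under the assumed error model $\mathrm{err}(m) \approx 1/\sqrt{m}$ (as for Monte Carlo averaging with unit constant); the doubling rule and the constant `kappa` are our assumptions, not the method's published step conditions:

```python
import numpy as np

def adaptive_effort_gradient(oracle, x0, lr=0.2, kappa=0.5,
                             effort0=4, max_effort=10**6, iters=50):
    """Gradient method with an inexact oracle.

    oracle(x, m): gradient estimate at x using effort m, with error
    assumed to behave like 1/sqrt(m).  Effort is doubled until the
    assumed error bound is at most kappa * ||gradient estimate||.
    """
    x = np.asarray(x0, dtype=float)
    m = effort0
    for _ in range(iters):
        g = oracle(x, m)
        # Raise the oracle effort until the error model says the
        # estimate is accurate relative to its own norm.
        while m < max_effort and 1.0 / np.sqrt(m) > kappa * np.linalg.norm(g):
            m *= 2
            g = oracle(x, m)
        x = x - lr * g
    return x
```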

Subdifferentiation and Smoothing of Nonsmooth Integral Functionals

The subdifferential calculus for the expectation of nonsmooth random integrands involves many fundamental and challenging problems in stochastic optimization. It is known that for Clarke regular integrands, the Clarke subdifferential of the expectation equals the expectation of the Clarke subdifferentials; in particular, this holds for convex integrands. However, little is known about the calculation of Clarke subgradients for the …
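For reference, the interchange result mentioned above can be written as follows (notation ours):

```latex
% Interchange of Clarke subdifferentiation and expectation for
% Clarke regular (in particular, convex) integrands; the right-hand
% side is a set-valued (Aumann-type) expectation:
\[
  \partial \, \mathbb{E}_{\xi}\!\bigl[ f(x,\xi) \bigr]
  \;=\;
  \mathbb{E}_{\xi}\!\bigl[ \partial f(x,\xi) \bigr].
\]
% Without Clarke regularity, in general only the inclusion
% $\subseteq$ is guaranteed.
```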

Determining optimal locations for charging stations of electric car-sharing systems under stochastic demand

In this article, we introduce and study a two-stage stochastic optimization problem suitable for solving strategic optimization problems of car-sharing systems that use electric cars. By combining the individual advantages of car-sharing and electric vehicles, such electric car-sharing systems may help to overcome future challenges related to pollution, congestion, or shortages of fossil fuels. A …
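The generic shape of such a two-stage model, with first-stage siting/capacity decisions $x$ and second-stage recourse $y$ responding to realized demand $\xi$ (a template, not the article's exact formulation):

```latex
% Generic two-stage template: x = first-stage siting/capacity
% decisions, y = second-stage recourse serving realized demand xi.
\[
  \min_{x \in X} \; c^{\top} x + \mathbb{E}_{\xi}\bigl[ Q(x,\xi) \bigr],
  \qquad
  Q(x,\xi) \;=\; \min_{y \in Y(x,\xi)} \; q(\xi)^{\top} y.
\]
```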

A randomized method for smooth convex minimization, motivated by probability maximization

We propose a randomized gradient method, or, from a dual viewpoint, a randomized cutting-plane method. From the primal viewpoint, our method bears a resemblance to the stochastic approximation family, but in contrast to stochastic approximation, it builds a model problem.
Citation: Kecskemet College, Pallasz Athene University, Izsaki ut 10, 6000 Kecskemet, Hungary; …
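To ground the "builds a model problem" remark: the deterministic Kelley cutting-plane scheme below maintains a piecewise-linear model of the objective from the gradients seen so far and minimizes it at each step. The paper's method randomizes the cuts, which this sketch does not attempt; the box domain and iteration count are assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def kelley_cutting_plane(f, grad, lo, hi, iters=30):
    """Deterministic Kelley cutting-plane method on the box [lo, hi]:
    build the model max_k { f(x_k) + g_k^T (x - x_k) } and minimize
    it as an LP in the variables (x, t)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = lo.size
    x = (lo + hi) / 2.0
    cuts = []  # pairs (g_k, b_k) with b_k = f(x_k) - g_k^T x_k
    for _ in range(iters):
        g = grad(x)
        cuts.append((g, f(x) - g @ x))
        # min t  s.t.  g_k^T x - t <= -b_k  for every stored cut.
        A = np.array([np.append(gk, -1.0) for gk, _ in cuts])
        b = np.array([-bk for _, bk in cuts])
        res = linprog(c=np.append(np.zeros(d), 1.0), A_ub=A, b_ub=b,
                      bounds=[*zip(lo, hi), (None, None)])
        x = res.x[:d]
    return x

# Example: minimize ||x||^2 over the box [-1, 2]^2.
x_star = kelley_cutting_plane(lambda x: x @ x, lambda x: 2 * x,
                              lo=[-1, -1], hi=[2, 2])
```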

A Stability Result for Linear Markov Decision Processes

In this paper, we propose a semi-metric for Markov processes that makes it possible to bound the optimal values of linear Markov Decision Processes (MDPs). Like existing notions of distance for general stochastic processes, our distance is based on transportation metrics. Apart from the specialization to MDPs, our contribution is to make the distance problem-specific, i.e., …
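Schematically, such stability results take the following Lipschitz form, where $v^{*}$ denotes the optimal value and $d(\cdot,\cdot)$ the transportation-based semi-metric; the constant and the exact metric are as constructed in the paper, and this only shows the generic shape:

```latex
% Generic shape of the stability estimate for two transition laws
% P and \tilde{P}:
\[
  \bigl| v^{*}(P) - v^{*}(\tilde{P}) \bigr|
  \;\le\;
  L \, d\bigl( P, \tilde{P} \bigr).
\]
```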

Exploiting Negative Curvature in Deterministic and Stochastic Optimization

This paper addresses the question of whether it can be beneficial for an optimization algorithm to follow directions of negative curvature. Although some prior work has established convergence results for algorithms that integrate both descent and negative curvature directions, there has not yet been numerical evidence showing that such methods offer significant performance improvements. In …
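A minimal deterministic sketch of a two-direction scheme in Python: take gradient steps, but switch to a suitably signed eigenvector of the Hessian when a sufficiently negative eigenvalue appears. The step sizes and the curvature-scaled step are illustrative choices, not the paper's method:

```python
import numpy as np

def two_direction_descent(grad, hess, x0, alpha=0.1, beta=0.5,
                          tol=1e-8, iters=200):
    """Take gradient steps, but when the Hessian has a sufficiently
    negative eigenvalue, step along the corresponding eigenvector
    (signed so it is a descent direction), scaled by the curvature."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        lam, V = np.linalg.eigh(hess(x))   # eigenvalues in ascending order
        if lam[0] < -tol:
            d = V[:, 0]
            if g @ d > 0:                  # flip sign to descend
                d = -d
            x = x + beta * abs(lam[0]) * d
        elif np.linalg.norm(g) > tol:
            x = x - alpha * g
        else:
            break                          # approximate second-order point
    return x
```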

Regularized Stochastic Dual Dynamic Programming for convex nonlinear optimization problems

We define a regularized variant of the Dual Dynamic Programming algorithm, called REDDP (REgularized Dual Dynamic Programming), to solve nonlinear dynamic programming equations. We extend the algorithm to solve nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a regularization of the Stochastic Dual Dynamic Programming (SDDP) …
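The regularization can be pictured as adding a proximal term to each stage subproblem; a schematic with our own notation, where $\mathcal{Q}_{t+1}^{k}$ is the current cut model of the cost-to-go function and $\bar{x}_t^{k}$ a prox center from an earlier trial point (the actual prox term and penalty schedule are as defined in the paper):

```latex
% Schematic regularized stage subproblem (notation ours):
\[
  x_t^{k} \in \operatorname*{arg\,min}_{x_t \in X_t}
  \; c_t^{\top} x_t
  + \mathcal{Q}_{t+1}^{k}(x_t)
  + \rho_k \bigl\| x_t - \bar{x}_t^{k} \bigr\|^{2}.
\]
```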