A Quasi-Newton Algorithm for Optimal Discretization of Markov Processes

In stochastic programming and stochastic-dynamic programming, discretization of random model parameters is often unavoidable. We propose a quasi-Newton learning algorithm to discretize multi-dimensional, continuous-state, discrete-time Markov processes to scenario lattices by minimizing the Wasserstein distance between the unconditional distributions of the process and the lattice. Scenario lattices enable accurate discretization of the conditional distributions of Markov processes …
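As a rough, hypothetical illustration of the discretization objective (not the paper's quasi-Newton scheme), the sketch below fits a small set of lattice nodes to samples from a continuous distribution with a Lloyd-style fixed-point iteration; moving each node to the barycenter of its cell decreases the 2-Wasserstein distance between the sample cloud and the discrete measure. The sample generator, node count, and iteration budget are assumptions.

```python
# Minimal sketch: optimal quantization of samples by a finite support set,
# a simplified stand-in for the paper's quasi-Newton lattice construction.
import numpy as np

def quantize(samples, n_nodes, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # initialize nodes from the samples themselves
    nodes = samples[rng.choice(len(samples), n_nodes, replace=False)].copy()
    for _ in range(n_iter):
        # assign each sample to its nearest node (the W2 transport cells)
        d = np.linalg.norm(samples[:, None, :] - nodes[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        # move each node to the barycenter of its cell (fixed-point step)
        for k in range(n_nodes):
            if np.any(idx == k):
                nodes[k] = samples[idx == k].mean(axis=0)
    probs = np.bincount(idx, minlength=n_nodes) / len(samples)
    return nodes, probs

samples = np.random.default_rng(1).normal(size=(5000, 2))  # toy 2-D state samples
nodes, probs = quantize(samples, n_nodes=10)
```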

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates

In this paper, we study smooth stochastic multi-level composition optimization problems, where the objective function is a nested composition of $T$ functions. We assume access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze …
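For intuition about the moving-average device (a two-level, unconstrained stand-in, not either of the paper's algorithms), the sketch below tracks the inner function value with an exponential average so a chain-rule gradient can be assembled from noisy oracle calls. The toy functions, noise level, and step sizes are illustrative assumptions.

```python
# Minimal T = 2 sketch: minimize f(g(x)) with f(u) = 0.5*||u||^2 and
# g(x) = x**2, using only noisy evaluations and a moving-average tracker.
import numpy as np

rng = np.random.default_rng(0)

def inner_noisy(x):          # noisy evaluation of g(x) = x**2 (elementwise)
    return x**2 + 0.1 * rng.normal(size=x.shape)

def inner_jac(x):            # noisy Jacobian of g: diag(2x)
    return np.diag(2 * x) + 0.1 * rng.normal(size=(x.size, x.size))

def outer_grad(u):           # noisy gradient of f(u) = 0.5*||u||^2
    return u + 0.1 * rng.normal(size=u.shape)

x = np.ones(3)
u = inner_noisy(x)           # moving-average estimate of g(x)
beta, tau = 0.05, 0.1        # step size and averaging weight (illustrative)
for k in range(1000):
    grad = inner_jac(x).T @ outer_grad(u)     # chain-rule estimate at (x, u)
    x = x - beta * grad
    u = (1 - tau) * u + tau * inner_noisy(x)  # moving-average update
```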

Accuracy and fairness trade-offs in machine learning: A stochastic multi-objective approach

In the application of machine learning to real-life decision-making systems, e.g., credit scoring and criminal justice, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness. The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction …
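A minimal sketch of that penalization strategy, on assumed toy data: logistic loss plus lambda times a squared demographic-parity gap, minimized by gradient descent. The specific fairness measure, instance, and lambda are illustrative choices, not the paper's.

```python
# Penalized fair logistic regression: loss(w) + lam * gap(w)**2, where gap
# is the difference in mean predicted scores across a sensitive attribute.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
s = rng.random(n) < 0.5                   # binary sensitive attribute
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, lam):
    p = sigmoid(X @ w)
    g_loss = X.T @ (p - y) / n            # logistic-loss gradient
    gap = p[s].mean() - p[~s].mean()      # demographic-parity gap
    g_gap = X[s].T @ (p[s] * (1 - p[s])) / s.sum() \
          - X[~s].T @ (p[~s] * (1 - p[~s])) / (~s).sum()
    return g_loss + lam * 2 * gap * g_gap # gradient of loss + lam * gap**2

w = np.zeros(d)
for _ in range(2000):
    w -= 0.1 * grad(w, lam=5.0)
```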

Penalized stochastic gradient methods for stochastic convex optimization with expectation constraints

The stochastic gradient method and its variants are simple yet effective for minimizing an expectation function over a closed convex set. However, none of these methods is applicable to solving stochastic programs with expectation constraints, since the projection onto the feasible set is prohibitive. To deal with expectation-constrained stochastic convex optimization problems, we propose …
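As a hypothetical sketch of the penalization idea (not the paper's method), the snippet below replaces the expectation constraint with an exact penalty rho * max(0, g(x)) and takes subgradient steps built from single samples. The instance, rho, and step-size rule are assumptions.

```python
# Penalized stochastic subgradient for min E[F(x,xi)] s.t. E[G(x,xi)] <= 0,
# on a toy instance f(x) = 0.5*||x - 1||^2 with constraint sum(x) - 1 <= 0.
import numpy as np

rng = np.random.default_rng(0)

def sample_grads(x):
    xi = rng.normal(size=x.shape)
    gF = (x - 1.0) + 0.1 * xi             # stochastic gradient of f
    G = x.sum() - 1.0 + 0.1 * xi.sum()    # stochastic value of g
    gG = np.ones_like(x)                  # gradient of g
    return gF, G, gG

x, rho = np.zeros(3), 10.0
for k in range(1, 5001):
    gF, G, gG = sample_grads(x)
    sub = gF + (rho * gG if G > 0 else 0.0)  # subgradient of f + rho*max(0,g)
    x -= sub / np.sqrt(k)                    # diminishing step size
```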

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

We provide non-asymptotic convergence rates of the Polyak-Ruppert averaged stochastic gradient descent (SGD) to a normal random vector for a class of twice-differentiable test functions. A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, …
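For reference, a minimal sketch of the Polyak-Ruppert averaging being analyzed, on an assumed quadratic objective: run SGD with a slowly decaying step and maintain the running average of the iterates, whose rescaled fluctuations are approximately normal.

```python
# Polyak-Ruppert averaged SGD on 0.5*||x - 1||^2 with additive noise.
import numpy as np

rng = np.random.default_rng(0)
d = 4
x = np.zeros(d)
x_bar = np.zeros(d)
for k in range(1, 10001):
    g = x - 1.0 + rng.normal(size=d)      # noisy gradient of 0.5*||x - 1||^2
    x -= 0.5 * k**-0.75 * g               # step size k^{-3/4}, in (1/2, 1)
    x_bar += (x - x_bar) / k              # running Polyak-Ruppert average
# sqrt(k) * (x_bar - 1) is approximately normal for large k
```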

A Single Time-Scale Stochastic Approximation Method for Nested Stochastic Optimization

We study constrained nested stochastic optimization problems in which the objective function is a composition of two smooth functions whose exact values and derivatives are not available. We propose a single time-scale stochastic approximation algorithm, which we call the Nested Averaged Stochastic Approximation (NASA), to find an approximate stationary point of the problem. The algorithm …
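A heavily simplified, unconstrained sketch of the single time-scale idea (not the exact NASA recursion, which also handles the projection): the decision variable, an averaged gradient estimate, and an averaged inner-value estimate are all updated each iteration with step sizes of the same order. The toy composition and constants are assumptions.

```python
# Single time-scale updates for f(g(x)) with f(u) = 0.5*||u||^2, g(x) = x**2.
import numpy as np

rng = np.random.default_rng(0)
noise = lambda *shape: 0.1 * rng.normal(size=shape)

x = np.ones(3)
u = x**2 + noise(3)                       # averaged estimate of g(x)
z = np.zeros(3)                           # averaged gradient estimate
beta = 0.05                               # one step-size scale for everything
for k in range(2000):
    J = np.diag(2 * x) + noise(3, 3)      # noisy Jacobian of g
    grad_sample = J.T @ (u + noise(3))    # chain rule with noisy outer grad
    z += beta * (grad_sample - z)         # average the gradient estimate
    u += beta * (x**2 + noise(3) - u)     # average the inner-value estimate
    x -= beta * z                         # move along the averaged gradient
```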

A stochastic approximation method for approximating the efficient frontier of chance-constrained nonlinear programs

We propose a stochastic approximation method for approximating the efficient frontier of chance-constrained nonlinear programs. Our approach is based on a bi-objective viewpoint of chance-constrained programs that seeks solutions on the efficient frontier of optimal objective value versus risk of constraint violation. To this end, we construct a reformulated problem whose objective is to minimize …
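As a hypothetical illustration of tracing such a frontier: for each risk level alpha in a grid, smooth the constraint-violation indicator with a sigmoid and run penalized stochastic gradient steps. The smoothing, penalty, and instance below are assumptions, not the paper's reformulation.

```python
# Trace (alpha, x_alpha) pairs for min f(x) s.t. P[g(x,xi) > 0] <= alpha,
# using a sigmoid surrogate for the indicator and a squared-hinge penalty.
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def sgd_for_alpha(alpha, rho=50.0, tau=10.0, iters=4000):
    x = np.zeros(2)
    for k in range(1, iters + 1):
        xi = rng.normal(size=2)
        gval = x @ xi + x.sum() - 1.0            # g(x, xi), violated if > 0
        p = sig(tau * gval)                       # smoothed indicator
        gp = tau * p * (1 - p) * (xi + 1.0)       # grad of smoothed indicator
        # grad of f(x) = 0.5*||x - 2||^2 plus rho/2 * max(0, p - alpha)^2
        grad = (x - 2.0) + rho * max(p - alpha, 0.0) * gp
        x -= grad / np.sqrt(k)
    return x

frontier = [(a, sgd_for_alpha(a)) for a in (0.05, 0.1, 0.2)]
```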

A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization

In this work, we present a globalized stochastic semismooth Newton method for solving stochastic optimization problems involving smooth nonconvex and nonsmooth convex terms in the objective function. We assume that only noisy gradient and Hessian information of the smooth part of the objective function is available, by calling stochastic first- and second-order oracles. The …
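A minimal, non-globalized sketch of one semismooth Newton iteration in this setting, on an assumed l1-regularized least-squares instance (convex, for simplicity): apply Newton's method to the proximal-gradient residual F(x) = x - prox(x - lam * grad f(x)), with subsampled gradients and Hessians standing in for the stochastic oracles. The paper's globalization safeguards are omitted.

```python
# Stochastic semismooth Newton steps for min 0.5*||Ax - b||^2/n + mu*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
n, d, mu, lam = 200, 5, 0.1, 0.5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def oracles(x, batch=50):
    i = rng.choice(n, batch, replace=False)        # subsample rows
    r = A[i] @ x - b[i]
    g = A[i].T @ r / batch                         # stochastic gradient
    H = A[i].T @ A[i] / batch                      # stochastic Hessian
    return g, H

x = np.zeros(d)
for _ in range(20):
    g, H = oracles(x)
    z = x - lam * g
    prox = np.sign(z) * np.maximum(np.abs(z) - lam * mu, 0.0)  # soft threshold
    F = x - prox                                   # prox-gradient residual
    D = (np.abs(z) > lam * mu).astype(float)       # element of prox's Jacobian
    J = np.eye(d) - D[:, None] * (np.eye(d) - lam * H)  # generalized Jacobian
    x = x + np.linalg.solve(J, -F)                 # semismooth Newton step
```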

Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization

In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal ${\cal O}(1/\epsilon^4)$ rate of convergence in terms …
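For intuition, a two-stage stand-in for the nested scheme (far simpler than DSA itself, and without the conic constraints): for each sampled scenario, a few inner stochastic-approximation steps approximately solve the next-stage problem, and the result feeds a stochastic gradient for the first-stage variable. Instance, step counts, and step sizes are illustrative.

```python
# Two-stage sketch: min_x 0.5*||x||^2 + E[min_y 0.5*||y - (x + xi)||^2].
import numpy as np

rng = np.random.default_rng(0)

x = np.zeros(2)
for k in range(1, 2001):
    xi = rng.normal(size=2)
    # inner stage: approximately solve the second-stage problem by SA
    y = np.zeros(2)
    for j in range(1, 6):
        y -= (y - (x + xi) + 0.1 * rng.normal(size=2)) / j
    # outer stage: stochastic gradient of the sampled two-stage cost
    # (by Danskin, d/dx of the second-stage value at y* is x + xi - y*)
    g_outer = x + (x + xi - y)
    x -= g_outer / np.sqrt(k)
```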

Randomized Linear Programming Solves the Discounted Markov Decision Problem In Nearly-Linear (Sometimes Sublinear) Running Time

We propose a randomized linear programming algorithm for approximating the optimal policy of the discounted Markov decision problem. By leveraging the value-policy duality, the algorithm adaptively samples state transitions and makes exponentiated primal-dual updates. We show that it finds an $\epsilon$-optimal policy in nearly-linear running time in the worst case. For Markov decision processes that …
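A toy sketch of the primal-dual mechanics on an assumed 3-state MDP: sample a state-action pair from the current occupancy weights, sample one transition, take a gradient step on the value vector, and make an exponentiated (multiplicative-weights) update on the occupancy. Step sizes, clipping, and the instance are assumptions; the paper's adaptive sampling scheme and guarantees are not reproduced.

```python
# Stochastic primal-dual iteration on the LP formulation of a discounted MDP.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state dist.
R = rng.random((nS, nA))                        # rewards
q = np.full(nS, 1.0 / nS)                       # initial-state distribution

v = np.zeros(nS)                                # primal: value vector
mu = np.full((nS, nA), 1.0 / (nS * nA))         # dual: occupancy weights
for _ in range(20000):
    idx = rng.choice(nS * nA, p=mu.ravel())     # sample (s, a) from mu
    s, a = divmod(idx, nA)
    s2 = rng.choice(nS, p=P[s, a])              # sample one transition
    adv = R[s, a] + gamma * v[s2] - v[s]        # sampled Bellman residual
    gv = (1 - gamma) * q                        # stochastic grad of Lagrangian
    gv[s] -= 1.0
    gv[s2] += gamma
    v -= 0.05 * gv                              # primal (value) descent step
    mu[s, a] *= np.exp(np.clip(2.0 * adv, -5, 5))  # exponentiated dual ascent
    mu /= mu.sum()                              # keep mu a distribution

policy = mu.argmax(axis=1)                      # greedy policy from occupancy
```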