A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results. The algorithm differs from other interior-point methods for solving smooth (nonconvex) optimization problems in that its search directions are computed using stochastic gradient estimates. It is also unique …
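
As a rough sketch of the core idea (not the authors' exact method), the loop below takes stochastic-gradient steps on a log-barrier reformulation of min f(x) subject to l <= x <= u, with a fraction-to-the-boundary rule keeping iterates strictly interior; grad_est and all parameter names are illustrative.

    import numpy as np

    def sgd_interior_point(grad_est, l, u, x0, mu0=1.0, mu_decay=0.5,
                           alpha=1e-2, tau=0.995, inner=200, outer=10):
        # Minimize f(x) - mu * sum(log(x - l) + log(u - x)) with stochastic
        # gradient steps, shrinking the barrier parameter mu between rounds.
        x, mu = x0.astype(float).copy(), mu0
        for _ in range(outer):
            for _ in range(inner):
                g = grad_est(x) - mu / (x - l) + mu / (u - x)  # barrier gradient
                d = -alpha * g
                # fraction-to-the-boundary: keep the next iterate strictly interior
                t, neg, pos = 1.0, d < 0, d > 0
                if neg.any():
                    t = min(t, tau * np.min((l[neg] - x[neg]) / d[neg]))
                if pos.any():
                    t = min(t, tau * np.min((u[pos] - x[pos]) / d[pos]))
                x = x + t * d
            mu *= mu_decay
        return x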

Multi-model Partially Observable Markov Decision Processes

We propose a new multi-model partially observable Markov decision process (MPOMDP) model to address the issue of model ambiguity in partially observable Markov decision processes. Here, model ambiguity is defined as the case where there are multiple credible optimization models with the same structure but different model parameters. The proposed MPOMDP model aims to learn …
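
One way to make the ambiguity concrete: maintain a state belief for each candidate model plus a weight over models, updating both by Bayes' rule after each action/observation pair. The sketch below assumes each model exposes transition matrices T[a] and observation matrices O[a]; this layout and all names are illustrative, not the paper's formulation.

    import numpy as np

    def joint_belief_update(beliefs, weights, models, action, obs):
        # beliefs[k]: state distribution under model k; weights: model weights.
        # models[k]["T"][a] is an (S, S') transition matrix and
        # models[k]["O"][a] an (S', O) observation matrix (illustrative layout).
        new_beliefs, evid = [], np.empty(len(models))
        for k, (b, m) in enumerate(zip(beliefs, models)):
            pred = b @ m["T"][action]             # predict: sum_s b(s) T(s, s')
            post = pred * m["O"][action][:, obs]  # condition on the observation
            evid[k] = post.sum()                  # P(obs | model k, action)
            new_beliefs.append(post / evid[k] if evid[k] > 0 else pred)
        w = weights * evid                        # re-weight models by evidence
        return new_beliefs, w / w.sum()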

Sequential Quadratic Optimization for Stochastic Optimization with Deterministic Nonlinear Inequality and Equality Constraints

A sequential quadratic optimization algorithm for minimizing an objective function defined by an expectation subject to nonlinear inequality and equality constraints is proposed, analyzed, and tested. The setting of interest is one in which it is tractable to evaluate constraint function and derivative values in each iteration, but intractable to evaluate the objective function or …
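
For intuition, a single SQP step in the equality-constrained special case reduces to a KKT linear system in which only the objective gradient is replaced by a stochastic estimate; the sketch below (illustrative names, identity Hessian fallback) shows that step, while the paper's method additionally handles inequality constraints.

    import numpy as np

    def sqp_step(g_est, c, J, H=None):
        # Solve the KKT system [H J^T; J 0] [d; y] = [-g_est; -c] for the
        # primal direction d and multiplier estimate y.  g_est is a stochastic
        # gradient estimate; c, J are exact constraint values and Jacobian.
        n, m = len(g_est), len(c)
        H = np.eye(n) if H is None else H
        K = np.block([[H, J.T], [J, np.zeros((m, m))]])
        sol = np.linalg.solve(K, -np.concatenate([g_est, c]))
        return sol[:n], sol[n:]

The iterate would then move to x + alpha * d, with alpha chosen, e.g., by a merit-function-based or predetermined step-size rule.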

Data-Driven Stochastic Dual Dynamic Programming: Performance Guarantees and Regularization Schemes

We propose a data-driven scheme for multistage stochastic linear programming with Markovian random parameters by extending the stochastic dual dynamic programming (SDDP) algorithm. In our data-driven setting, only a finite number of historical trajectories are available. The proposed SDDP scheme evaluates the cost-to-go functions only at the observed sample points, where the conditional expectations are …
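
A minimal illustration of the cut-based machinery: the cost-to-go function is under-approximated by a maximum of affine cuts and, in a data-driven backward pass, needs to be evaluated only at the observed historical states. The sample points and cuts below are made up.

    import numpy as np

    def cost_to_go(x, cuts):
        # Lower approximation by a maximum of affine cuts alpha_k + beta_k @ x.
        return max(a + b @ x for a, b in cuts)

    # Evaluate the approximation only at observed historical states (made up).
    samples = [np.array([1.0, 0.5]), np.array([0.2, 2.0])]
    cuts = [(0.0, np.array([1.0, 0.0])), (0.5, np.array([0.0, 1.0]))]
    print({tuple(s): cost_to_go(s, cuts) for s in samples})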

A Simplified Convergence Theory for Byzantine Resilient Stochastic Gradient Descent

In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge. In this paper, we present a simplified …
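
A common family of defenses in this line of work replaces the plain average of worker gradients with a robust aggregator such as a coordinate-wise median or trimmed mean; the sketch below shows that idea (not necessarily the exact aggregation rule analyzed in the paper).

    import numpy as np

    def robust_aggregate(grads, method="median"):
        # grads: (num_workers, dim) array of reported gradients, some of
        # which may be Byzantine.  Aggregate robustly instead of averaging.
        G = np.asarray(grads)
        if method == "median":
            return np.median(G, axis=0)
        b = max(1, G.shape[0] // 10)   # trim the b extreme values per coordinate
        S = np.sort(G, axis=0)         # (assumes num_workers > 2 * b)
        return S[b:-b].mean(axis=0)

    # one robust SGD step at the server: x -= lr * robust_aggregate(worker_grads)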

Statistical performance of subgradient step-size update rules in Lagrangian relaxations of chance-constrained optimization models

Lagrangian relaxation schemes, coupled with a subgradient procedure, are frequently employed to solve chance-constrained optimization models. The subgradient procedure typically relies on a step-size update rule. Although there is extensive research on the properties of these step-size update rules, there is little consensus on which rules are best suited in practice. This is especially so …
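
For reference, three step-size rules that typically appear in such comparisons are a constant step, a diminishing a0/sqrt(t) schedule, and a Polyak-type rule that requires an estimate of the optimal value; a minimal sketch with illustrative names:

    import numpy as np

    def step_size(rule, t, g, f_val=None, f_best=None, a0=1.0):
        if rule == "constant":                 # fixed step
            return a0
        if rule == "diminishing":              # a0 / sqrt(t), t = 1, 2, ...
            return a0 / np.sqrt(t)
        if rule == "polyak":                   # needs an optimal-value estimate
            return (f_val - f_best) / max(float(g @ g), 1e-12)
        raise ValueError(rule)

    # inside a projected subgradient ascent loop on the Lagrangian dual:
    #     lam = np.maximum(0.0, lam + step_size("diminishing", t, g) * g)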

Efficient composite heuristics for integer bound constrained noisy optimization

This paper discusses a composite algorithm for bound constrained noisy derivative-free optimization problems with integer variables. This algorithm is an integer variant of the matrix adaptation evolution strategy. An integer derivative-free line search strategy along affine scaling matrix directions is used to generate candidate points. Each affine scaling matrix direction is a product of the …
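
As a toy illustration of an integer derivative-free line search along a given direction (the paper's strategy along affine scaling matrix directions is more elaborate), one can probe rounded points with doubling step sizes and keep the best feasible integer point:

    import numpy as np

    def integer_line_search(f, x, d, l, u, max_expand=6):
        # Probe rounded points x + round(alpha * d) with doubling alpha,
        # clipped to the integer box [l, u]; keep the best point found.
        best_x, best_f, alpha = x, f(x), 1.0
        for _ in range(max_expand):
            cand = np.clip(np.round(x + alpha * d).astype(int), l, u)
            if not np.array_equal(cand, best_x):
                fc = f(cand)
                if fc < best_f:
                    best_x, best_f = cand, fc
            alpha *= 2.0
        return best_x, best_f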

A Stochastic Optimization Approach to Energy-Efficient Underground Timetabling under Uncertain Dwell and Running Times

We consider a problem from the context of energy-efficient underground railway timetabling, in which an existing timetable draft is improved by slightly changing departure and running times. In practice, synchronization between accelerating and braking trains to utilize regenerative braking plays a major role in the energy efficiency of a timetable. Since deviations from a planned timetable …
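
As a crude proxy for the synchronization effect, the benefit of regenerative braking can be pictured as the total time during which braking and acceleration phases overlap; the interval-based sketch below is illustrative only and far simpler than the paper's model.

    def overlap(a, b):
        # length of the overlap of two time intervals a = (start, end), b
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    def synchronized_time(braking, accelerating):
        # total time in which some braking train overlaps some accelerating one
        return sum(overlap(br, ac) for br in braking for ac in accelerating)

    print(synchronized_time([(10, 15)], [(12, 20), (14, 16)]))  # 3 + 1 = 4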

An Adaptive Sampling Sequential Quadratic Programming Method for Equality Constrained Stochastic Optimization

This paper presents a methodology for using varying sample sizes in sequential quadratic programming (SQP) methods for solving equality constrained stochastic optimization problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the gradient in conjunction with inexact solutions to the SQP subproblems. Under reasonable …
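
One standard mechanism for dynamic sample selection (not necessarily the paper's exact test) is a norm-test rule: grow the batch until the variance of the sampled gradient is small relative to its norm. A minimal sketch with illustrative names:

    import numpy as np

    def adaptive_sample_gradient(grad_i, x, pool, batch, theta=0.9, rng=None):
        # Grow the batch until the variance of the mean gradient estimate is
        # at most (theta * ||g||)^2 -- a norm-test style adequacy condition.
        rng = rng or np.random.default_rng()
        while True:
            S = rng.choice(pool, size=batch, replace=False)
            G = np.array([grad_i(x, i) for i in S])
            g = G.mean(axis=0)
            var_of_mean = G.var(axis=0).sum() / batch
            if var_of_mean <= (theta * np.linalg.norm(g)) ** 2 or batch >= len(pool):
                return g, batch
            batch = min(2 * batch, len(pool))  # test failed: enlarge the sample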

First- and Second-Order High Probability Complexity Bounds for Trust-Region Methods with Noisy Oracles

In this paper, we present convergence guarantees for a modified trust-region method designed for minimizing objective functions whose value is computed with noise and for which gradient and Hessian estimates are inexact and possibly random. In order to account for the noise, the method utilizes a relaxed step acceptance criterion and a cautious trust-region radius …
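
To illustrate the flavor of such modifications, a relaxed acceptance test can add a noise allowance eps_f to the actual-over-predicted reduction ratio, and the radius update can be made cautious by shrinking relative to the step actually taken; the constants and names below are illustrative, not the paper's exact rules.

    def accept_step(f_old, f_new, pred_red, eps_f, eta=0.1):
        # Relaxed ratio test: allow for noise of size roughly eps_f in the
        # function values when comparing actual to predicted reduction.
        rho = (f_old - f_new + eps_f) / max(pred_red, eps_f)
        return rho >= eta

    def update_radius(delta, accepted, step_norm, inc=2.0, dec=0.5):
        # Cautious update: shrink relative to the step actually taken.
        return inc * delta if accepted else dec * min(delta, step_norm)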