Direct search based on probabilistic descent

Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets which in turn must have at least n+1 vectors in an n-dimensional … Read more
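The polling idea described above can be sketched in a few lines. This is a minimal deterministic illustration, not the paper's probabilistic-descent method: it polls the 2n coordinate directions (a positive spanning set with more than the required n+1 vectors) and halves the step size after an unsuccessful poll; all names and parameters here are illustrative.

```python
import numpy as np

def direct_search(f, x0, alpha=1.0, tol=1e-8, max_iter=500):
    """Minimal directional direct-search sketch (illustrative, not the
    paper's algorithm). Polls the positive spanning set {+e_i, -e_i}
    and contracts the step size alpha when no direction decreases f."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    # Positive spanning set D = {+e_i, -e_i}: 2n directions in R^n
    D = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if alpha < tol:
            break
        fx = f(x)
        for d in D:
            if f(x + alpha * d) < fx:   # simple decrease: accept the poll step
                x = x + alpha * d
                break
        else:
            alpha *= 0.5                # unsuccessful poll: shrink step size
    return x

# Usage: minimize a smooth quadratic centered at the all-ones vector
x_star = direct_search(lambda x: np.sum((x - 1.0) ** 2), np.zeros(3))
```

The probabilistic-descent variant the abstract refers to would replace the fixed set `D` with a few randomly drawn directions per iteration.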

A Proximal Stochastic Gradient Method with Progressive Variance Reduction

We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We assume the whole objective function is strongly convex. Such problems often arise in machine learning, known … Read more
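A variance-reduced proximal stochastic gradient iteration of the kind described above can be sketched for a lasso-type instance, where the smooth part is an average of least-squares terms and the simple proximal mapping is soft-thresholding. Function names, the step-size rule, and loop lengths below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal mapping of t * ||.||_1 (the 'simple' convex part)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam=0.01, eta=None, epochs=20, seed=0):
    """Illustrative sketch of a proximal stochastic gradient method with
    progressive variance reduction, applied to
        minimize (1/n) * sum_i 0.5*(a_i' x - b_i)^2 + lam*||x||_1.
    Parameter choices here are crude defaults, not the paper's tuning."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if eta is None:
        eta = 1.0 / (4.0 * np.max(np.sum(A * A, axis=1)))  # ~1/(4L)
    x_tilde = np.zeros(d)
    for _ in range(epochs):
        full_grad = A.T @ (A @ x_tilde - b) / n   # full gradient at snapshot
        x = x_tilde.copy()
        for _ in range(2 * n):                    # inner stochastic loop
            i = rng.integers(n)
            a = A[i]
            # variance-reduced gradient: grad_i(x) - grad_i(x~) + full_grad
            g = a * (a @ x - b[i]) - a * (a @ x_tilde - b[i]) + full_grad
            x = soft_threshold(x - eta * g, eta * lam)
        x_tilde = x                               # update the snapshot
    return x_tilde
```

The snapshot gradient drives the variance of the stochastic steps to zero as the iterates converge, which is what permits a constant step size.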

An Improved Stochastic Optimization Model for Water Supply Pumping Systems in Urban Networks

This study investigates a pump scheduling problem for the collection, transfer and storage of water in water supply systems in urban networks. The objective of this study is to determine a method to minimize the electricity costs associated with pumping operations. To address the dynamic and random nature of water demand, we propose a two-stage … Read more

Decomposition Algorithms for Two-Stage Chance-Constrained Programs

We study a class of chance-constrained two-stage stochastic optimization problems where second-stage feasible recourse decisions incur additional cost. In addition, we propose a new model, where “recovery” decisions are made for the infeasible scenarios to obtain feasible solutions to a relaxed second-stage problem. We develop decomposition algorithms with specialized optimality and feasibility cuts to solve … Read more

Strengthened Benders Cuts for Stochastic Integer Programs with Continuous Recourse

With stochastic integer programming as the motivating application, we investigate techniques to use integrality constraints to obtain improved cuts within a Benders decomposition algorithm. We compare the effect of using cuts in two ways: (i) cut-and-project, where integrality constraints are used to derive cuts in the extended variable space, and Benders cuts are then used … Read more

Chance-constrained problems and rare events: an importance sampling approach

We study chance-constrained problems in which the constraints involve the probability of a rare event. We discuss the relevance of such problems and show that the existing sampling-based algorithms cannot be applied directly in this case, since they require an impractical number of samples to yield reasonable solutions. Using a Sample Average Approximation (SAA) approach … Read more
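The sample-size difficulty mentioned above is easy to see on a toy rare event. The sketch below (my illustration, not the paper's construction) estimates a Gaussian tail probability by shifting the sampling density so the rare event becomes common, then reweighting by the likelihood ratio.

```python
import numpy as np

def rare_event_prob_is(tau, n_samples=100_000, seed=0):
    """Importance-sampling estimate of p = P(Z > tau) for Z ~ N(0, 1).
    Samples from the shifted density N(tau, 1), under which the event
    {y > tau} has probability 1/2, and corrects with the likelihood
    ratio phi(y) / phi(y - tau) = exp(-tau*y + tau^2/2)."""
    rng = np.random.default_rng(seed)
    y = rng.normal(loc=tau, size=n_samples)       # proposal centered at tau
    w = np.exp(-tau * y + 0.5 * tau ** 2)         # likelihood ratio weights
    return np.mean((y > tau) * w)

# For tau = 6, p is on the order of 1e-9, so naive Monte Carlo would need
# roughly 1e9 samples just to observe the event once; importance sampling
# gets a usable estimate from 1e5 samples.
p_hat = rare_event_prob_is(6.0)
```

In the SAA setting, the same reweighting idea lets the sampled constraint see enough "violating" scenarios to approximate the chance constraint reliably.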

Applying oracles of on-demand accuracy in two-stage stochastic programming – a computational study

Traditionally, two variants of the L-shaped method based on Benders’ decomposition principle are used to solve two-stage stochastic programming problems: the single-cut and the multi-cut version. The concept of an oracle with on-demand accuracy was originally proposed in the context of bundle methods for unconstrained convex optimization to provide approximate function data and subgradients. In … Read more

Decomposition Algorithm for Optimizing Multi-server Appointment Scheduling with Chance Constraints

We schedule appointments with random service durations on multiple servers with operating time limits. We minimize the costs of operating servers and serving appointments, subject to a joint chance constraint limiting the risk of server overtime. Using finite samples of the uncertainty, we formulate the problem as a mixed-integer linear program, and propose a two-stage … Read more

Optimization over the Pareto Outcome set associated with a Convex Bi-Objective Optimization Problem: Theoretical Results, Deterministic Algorithm and Application to the Stochastic case

Our paper consists of two main parts. In the first one, we deal with the deterministic problem of minimizing a real-valued function $f$ over the Pareto set associated with a deterministic convex bi-objective optimization problem (BOP), in the particular case where $f$ depends on the objectives of (BOP), i.e. we optimize over the Pareto … Read more

A Stochastic Quasi-Newton Method for Large-Scale Optimization

The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust … Read more
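One way to obtain stable curvature estimates in a stochastic setting, in the spirit of the abstract above, is to build quasi-Newton pairs from averaged iterates and subsampled Hessian-vector products rather than from noisy gradient differences. The sketch below illustrates this idea on a least-squares problem; all loop structure, parameter names, and defaults are my assumptions, not the paper's specification.

```python
import numpy as np

def two_loop(grad, S, Y):
    """Standard L-BFGS two-loop recursion on stored (s, y) pairs."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:
        s, y = S[-1], Y[-1]
        q *= (s @ y) / (y @ y)            # initial Hessian scaling
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def sqn_least_squares(A, b, batch=10, eta=0.05, L=10, M=5, iters=300, seed=0):
    """Illustrative stochastic quasi-Newton sketch for
    minimize (1/n)*sum_i 0.5*(a_i' x - b_i)^2.
    Every L steps, a curvature pair is formed from averaged iterates,
    with y computed as a subsampled Hessian-vector product, which avoids
    differencing noisy stochastic gradients."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    S_pairs, Y_pairs = [], []
    x_sum = np.zeros(d)
    x_bar_prev = None
    for t in range(1, iters + 1):
        idx = rng.integers(n, size=batch)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient
        step = two_loop(g, S_pairs, Y_pairs) if S_pairs else g
        x -= eta * step
        x_sum += x
        if t % L == 0:
            x_bar = x_sum / L
            x_sum = np.zeros(d)
            if x_bar_prev is not None:
                s = x_bar - x_bar_prev
                idx_H = rng.integers(n, size=5 * batch)
                # subsampled Hessian-vector product: (A_S' A_S / |S|) s
                y = A[idx_H].T @ (A[idx_H] @ s) / idx_H.size
                if s @ y > 1e-12:                      # keep pairs with curvature
                    S_pairs.append(s)
                    Y_pairs.append(y)
                    if len(S_pairs) > M:
                        S_pairs.pop(0)
                        Y_pairs.pop(0)
            x_bar_prev = x_bar
    return x
```

Averaging the iterates before differencing, and taking `y` as a Hessian-vector product on a fresh subsample, are the two devices that keep the curvature pairs from being dominated by gradient noise.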