Stronger cuts for Benders’ decomposition for stochastic Unit Commitment Problems based on interval variables

The Stochastic Unit Commitment (SUC) problem models the scheduling of power generation units under uncertainty, typically using a two-stage stochastic program with integer first-stage and continuous second-stage variables. We propose a new Benders decomposition approach that leverages an extended formulation based on interval variables, enabling decomposition by both unit and time interval under mild technical … Read more
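The interval-variable view the abstract alludes to can be illustrated with a toy conversion from a binary on/off commitment schedule to contiguous on-intervals, the kind of representation that allows reasoning per unit and per time interval. This is a hedged sketch of the general idea only, not the paper's extended formulation; the function name and interval convention are illustrative assumptions.

```python
# Convert a binary on/off schedule (one entry per period) into interval
# variables: one (start, end) pair per maximal contiguous on-run.
# Intervals are half-open [start, end). Illustrative sketch only.
def to_intervals(schedule):
    intervals, start = [], None
    for t, on in enumerate(schedule):
        if on and start is None:
            start = t                      # an on-run begins at period t
        elif not on and start is not None:
            intervals.append((start, t))   # the run ended just before t
            start = None
    if start is not None:                  # schedule ends while still on
        intervals.append((start, len(schedule)))
    return intervals
```

For example, `to_intervals([0, 1, 1, 0, 1])` yields `[(1, 3), (4, 5)]`: two on-intervals that can be treated as separate decomposition blocks.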

Optimal participation of energy communities in electricity markets under uncertainty. A multi-stage stochastic programming approach

We propose a multi-stage stochastic programming model for the optimal participation of energy communities in electricity markets. The multi-stage aspect captures the different times at which variable renewable generation and electricity prices are observed. This results in large-scale optimization problem instances containing large scenario trees with 34 stages, to which scenario reduction techniques are applied. … Read more

A Minimalist Bayesian Framework for Stochastic Optimization

The Bayesian paradigm offers principled tools for sequential decision-making under uncertainty, but its reliance on a probabilistic model for all parameters can hinder the incorporation of complex structural constraints. We introduce a minimalist Bayesian framework that places a prior only on the component of interest, such as the location of the optimum. Nuisance parameters are … Read more

A linesearch-based derivative-free method for noisy black-box problems

In this work we consider unconstrained optimization problems in which the objective function is known only through a zeroth-order stochastic oracle that returns an estimate of the true objective value. To solve these problems, we propose a derivative-free algorithm based on extrapolation techniques. Under reasonable assumptions, we prove convergence properties for the proposed algorithm. … Read more
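The flavor of a linesearch with extrapolation against a noisy zeroth-order oracle can be sketched as follows. The sufficient-decrease rule, expansion loop, and test function are simplified illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def noisy_oracle(x, rng, sigma=1e-3):
    # Zeroth-order stochastic oracle: noisy evaluation of f(x) = ||x||^2.
    return np.sum(x**2) + sigma * rng.standard_normal()

def df_linesearch_step(x, d, alpha0, rng, gamma=1e-6, delta=0.5):
    # Try a step of size alpha along direction d; if the (noisy) sufficient
    # decrease test passes, extrapolate by repeatedly enlarging the step.
    alpha = alpha0
    f_x = noisy_oracle(x, rng)
    if noisy_oracle(x + alpha * d, rng) > f_x - gamma * alpha**2:
        return x, alpha * delta                      # failure: shrink stepsize
    while noisy_oracle(x + (alpha / delta) * d, rng) <= f_x - gamma * (alpha / delta)**2:
        alpha /= delta                               # extrapolation: expand step
    return x + alpha * d, alpha

rng = np.random.default_rng(1)
x = np.array([1.0, -1.0])
x_new, alpha = df_linesearch_step(x, d=np.array([-1.0, 1.0]), alpha0=0.1, rng=rng)
```

The extrapolation loop is what distinguishes this scheme from a plain backtracking search: a successful trial step is pushed further as long as the noisy decrease condition keeps holding.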

On the Convergence and Complexity of Proximal Gradient and Accelerated Proximal Gradient Methods under Adaptive Gradient Estimation

In this paper, we propose a proximal gradient method and an accelerated proximal gradient method for solving composite optimization problems, where the objective function is the sum of a smooth and a convex, possibly nonsmooth, function. We consider settings where the smooth component is either a finite-sum function or an expectation of a stochastic function, … Read more
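The basic proximal gradient update underlying such methods can be sketched on a concrete composite instance: a smooth least-squares term plus an \(\ell_1\) regularizer, whose proximal operator is soft-thresholding. All problem data below are illustrative assumptions, and this deterministic sketch omits the adaptive gradient estimation that is the paper's subject.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via x <- prox(x - step*grad f(x)).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                     # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])         # synthetic sparse signal
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of grad f
x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / L)
```

In the stochastic settings the abstract describes, `grad` would be replaced by a finite-sum or sampled estimate whose accuracy is adapted over the iterations.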

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more

Retrospective Approximation Sequential Quadratic Programming for Stochastic Optimization with General Deterministic Nonlinear Constraints

In this paper, we propose a framework based on the Retrospective Approximation (RA) paradigm to solve optimization problems with a stochastic objective function and general nonlinear deterministic constraints. This framework sequentially constructs increasingly accurate approximations of the true problems which are solved to a specified accuracy via a deterministic solver, thereby decoupling the uncertainty from … Read more

On Multidimensional Disjunctive Inequalities for Chance-Constrained Stochastic Problems with Finite Support

We consider mixed-integer linear chance-constrained problems for which the random vector that parameterizes the feasible region has finite support. Our key objective is to improve branch-and-bound and branch-and-cut approaches by introducing new types of valid inequalities that strengthen the dual bounds and thereby the overall performance of such methods. We introduce so-called primal-dual as … Read more

Variance-reduced first-order methods for deterministically constrained stochastic nonconvex optimization with strong convergence guarantees

In this paper, we study a class of deterministically constrained stochastic optimization problems. Existing methods typically aim to find an \(\epsilon\)-stochastic stationary point, where the expected violations of both constraints and first-order stationarity are within a prescribed accuracy \(\epsilon\). However, in many practical applications, it is crucial that the constraints be nearly satisfied with certainty, … Read more

A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization

We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method utilizes two different stepsizes, one which exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates and a second which scales the entire step. We prove that this … Read more
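The two-stepsize idea can be sketched for the simple case of a linear equality constraint \(Ax = b\): the SQP step splits into a normal component that restores feasibility (deterministic) and a tangential component in the null space of \(A\) that depends on the stochastic gradient estimate; one stepsize scales only the noisy part and the other the entire step. This is a hedged illustration of the step decomposition, not the authors' algorithm, and all names and data are assumptions.

```python
import numpy as np

def two_stepsize_step(x, grad_est, A, b, alpha, beta):
    # Normal step: minimum-norm correction of the constraint violation Ax - b.
    v = A @ x - b
    d_normal = -np.linalg.pinv(A) @ v
    # Tangential step: stochastic gradient projected onto null(A), so it
    # carries all the variance of the gradient estimate.
    P = np.eye(len(x)) - np.linalg.pinv(A) @ A       # projector onto null(A)
    d_tangent = -P @ grad_est
    # beta scales only the noise-corrupted component; alpha the entire step.
    return x + alpha * (d_normal + beta * d_tangent)

rng = np.random.default_rng(0)
x0 = np.array([2.0, 0.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
grad_est = x0 + 0.1 * rng.standard_normal(2)         # noisy gradient of 0.5*||x||^2
x1 = two_stepsize_step(x0, grad_est, A, b, alpha=1.0, beta=0.5)
```

Because the tangential component stays in the null space of \(A\), a full normal step (`alpha=1.0`) restores feasibility exactly regardless of the gradient noise, which is the point of scaling the two components separately.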