A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

In this paper, the robust sequential quadratic programming method of Burke and Han (Math. Programming, 1989) for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality constraints. A stochastic line search scheme of Paquette and Scheinberg (SIOPT, 2020) is employed to globalize the steps. We show that in the case … Read more
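
As a rough illustration (not drawn from the abstract; the notation is assumed), the SQP subproblem at an iterate $x_k$ would use a sampled gradient estimate $g_k \approx \nabla f(x_k)$ in place of the exact gradient, with $H_k$ a Hessian approximation and $c_E$, $c_I$ the equality and inequality constraints; a robust variant additionally relaxes the linearized constraints so that the subproblem remains feasible:

\[
\min_{d \in \mathbb{R}^n} \; g_k^\top d + \tfrac{1}{2} d^\top H_k d
\quad \text{s.t.} \quad
c_E(x_k) + \nabla c_E(x_k)^\top d = 0, \qquad
c_I(x_k) + \nabla c_I(x_k)^\top d \le 0.
\]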

A Quadratically Convergent Sequential Programming Method for Second-Order Cone Programs Capable of Warm Starts

We propose a new method for linear second-order cone programs. It is based on the sequential quadratic programming framework for nonlinear programming. In contrast to interior point methods, it can capitalize on the warm-start capabilities of active-set quadratic programming subproblem solvers and achieve a local quadratic rate of convergence. In order to overcome the non-differentiability … Read more
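
For reference (standard material, not specific to this paper), a linear second-order cone program can be written over a Cartesian product of Lorentz cones:

\[
\min_{x} \; c^\top x \quad \text{s.t.} \quad A x = b, \;\; x \in \mathcal{K} = \mathcal{K}_1 \times \cdots \times \mathcal{K}_p,
\qquad
\mathcal{K}_j = \{ (x_0, \bar{x}) : x_0 \ge \| \bar{x} \|_2 \},
\]

where the cone constraint $x_0 - \| \bar{x} \|_2 \ge 0$ is nondifferentiable at $\bar{x} = 0$, which is the difficulty alluded to above.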

A filter sequential adaptive cubic regularisation algorithm for nonlinear constrained optimization

In this paper, we propose a filter sequential adaptive regularisation algorithm using cubics (ARC) for solving nonlinear equality constrained optimization problems. Similar to sequential quadratic programming methods, an ARC subproblem with linearized constraints is solved to obtain a trial step at each iteration. Composite step methods and reduced Hessian methods are employed to tackle the linearized … Read more
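
Schematically (generic symbols, not quoted from the paper), such a trial step $d_k$ at the iterate $x_k$ could come from an ARC subproblem with linearized equality constraints, where $g_k$ and $B_k$ denote the objective gradient and a Hessian approximation, $J_k$ the constraint Jacobian, and $\sigma_k$ the adaptive regularisation weight:

\[
\min_{d} \; g_k^\top d + \tfrac{1}{2} d^\top B_k d + \tfrac{\sigma_k}{3} \| d \|^3
\quad \text{s.t.} \quad c(x_k) + J_k d = 0.
\]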

Inexact Sequential Quadratic Optimization for Minimizing a Stochastic Objective Function Subject to Deterministic Nonlinear Equality Constraints

An algorithm is proposed, analyzed, and tested experimentally for solving stochastic optimization problems in which the decision variables are constrained to satisfy equations defined by deterministic, smooth, and nonlinear functions. It is assumed that constraint function and derivative values can be computed, but that only stochastic approximations are available for the objective function and its … Read more
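
As a sketch (with assumed notation), each iteration of a method in this setting forms a Newton-SQP (KKT) system with a stochastic gradient estimate $g_k$ and solves it inexactly, e.g. with an iterative linear solver, accepting the step once a residual test is satisfied:

\[
\begin{bmatrix} H_k & J_k^\top \\ J_k & 0 \end{bmatrix}
\begin{bmatrix} d_k \\ \delta_k \end{bmatrix}
= -
\begin{bmatrix} g_k + J_k^\top y_k \\ c_k \end{bmatrix},
\]

where $J_k = \nabla c(x_k)^\top$, $c_k = c(x_k)$, $H_k$ is a Hessian approximation, and $y_k$ is the current multiplier estimate.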

A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy that is known in the literature to be widely effective in practice, … Read more
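
In broad terms (a generic template, not the paper's exact formulation), a step decomposition splits the trial step into a normal component that reduces infeasibility and a tangential component that reduces an objective model without degrading that progress:

\[
d_k = v_k + u_k, \qquad
v_k \approx \arg\min_{\|v\| \le \Delta_k} \tfrac{1}{2}\| c_k + J_k v \|_2^2, \qquad
u_k \approx \arg\min_{J_k u = 0} \; g_k^\top (v_k + u) + \tfrac{1}{2} (v_k + u)^\top H_k (v_k + u),
\]

a construction that remains well defined even when the Jacobian $J_k$ is rank deficient.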

Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization

Sequential quadratic optimization algorithms are proposed for solving smooth nonlinear optimization problems with equality constraints. The main focus is an algorithm proposed for the case when the constraint functions are deterministic, and constraint function and derivative values can be computed explicitly, but the objective function is stochastic. It is assumed in this setting that it … Read more
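
For orientation (standard notation, assumed rather than quoted), steps in this setting are typically assessed with an exact penalty (merit) function whose parameter $\tau_k$ is adapted on the fly, with the objective gradient replaced by a stochastic estimate when measuring the predicted reduction:

\[
\phi(x; \tau) = \tau f(x) + \| c(x) \|_1, \qquad x_{k+1} = x_k + \alpha_k d_k,
\]

where $d_k$ solves an SQP subproblem built with the sampled gradient and $\alpha_k$ is chosen based on the model reduction in $\phi$.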

A simple Newton method for local nonsmooth optimization

Superlinear convergence has been an elusive goal for black-box nonsmooth optimization. Even in the convex case, the subgradient method is very slow, and while some cutting plane algorithms, including traditional bundle methods, are popular in practice, local convergence is still sluggish. Faster variants depend either on problem structure or on analyses that elide sequences of … Read more

Solving Chance-Constrained Problems via a Smooth Sample-Based Nonlinear Approximation

We introduce a new method for solving nonlinear continuous optimization problems with chance constraints. Our method is based on a reformulation of the probabilistic constraint as a quantile function. The quantile function is approximated via a differentiable sample average approximation. We provide theoretical statistical guarantees of the approximation, and illustrate empirically that the reformulation can … Read more
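
As one common route (assumed for illustration, not necessarily the paper's exact construction), a chance constraint can be stated through its quantile and then approximated with samples $\xi_1,\dots,\xi_N$ and a smooth surrogate $\varphi_\varepsilon$ of the indicator function:

\[
\mathbb{P}\big[ c(x, \xi) \le 0 \big] \ge 1 - \alpha
\;\Longleftrightarrow\;
Q_{1-\alpha}\big( c(x, \xi) \big) \le 0,
\qquad
\frac{1}{N} \sum_{i=1}^{N} \varphi_\varepsilon\big( c(x, \xi_i) \big) \ge 1 - \alpha,
\]

where $Q_{1-\alpha}$ denotes the $(1-\alpha)$-quantile and $\varphi_\varepsilon$ tends to the indicator $\mathbf{1}[\,\cdot \le 0\,]$ as $\varepsilon \to 0$.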

A Dynamic Penalty Parameter Updating Strategy for Matrix-Free Sequential Quadratic Optimization

This paper focuses on the design of sequential quadratic optimization (commonly known as SQP) methods for solving large-scale nonlinear optimization problems. The most computationally demanding aspect of such an approach is the computation of the search direction during each iteration, for which we consider the use of matrix-free methods. In particular, we develop a method … Read more
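
Schematically (a generic template with assumed notation and sign convention), the penalty parameter $\rho$ enters through a local model of an exact penalty function, and its update is driven by how much the computed, possibly inexact, matrix-free search direction $d$ reduces the constraint-violation part of the model:

\[
q_k(d; \rho) = \rho \big( g_k^\top d + \tfrac{1}{2} d^\top H_k d \big) + \| c_k + J_k d \|_1,
\]

with $\rho$ adjusted dynamically whenever the reduction in linearized infeasibility achieved by $d$ is deemed insufficient.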

A Sequential Algorithm for Solving Nonlinear Optimization Problems with Chance Constraints

An algorithm is presented for solving nonlinear optimization problems with chance constraints, i.e., those in which a constraint involving an uncertain parameter must be satisfied with at least a minimum probability. In particular, the algorithm is designed to solve cardinality-constrained nonlinear optimization problems that arise in sample average approximations of chance-constrained problems, as well as … Read more
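
To fix ideas (generic, assumed notation), the sample average approximation of a chance constraint $\mathbb{P}[c(x,\xi) \le 0] \ge 1-\alpha$ with samples $\xi_1,\dots,\xi_N$ leads to a cardinality constraint on the number of violated scenarios:

\[
\min_{x} \; f(x)
\quad \text{s.t.} \quad
\big| \{\, i \in \{1,\dots,N\} : c(x, \xi_i) > 0 \,\} \big| \le \lfloor \alpha N \rfloor,
\]

i.e., at most a fraction $\alpha$ of the sampled constraints may be violated.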