Finite-Sample Optimality and Constraint Satisfaction: Learning-Based Optimal Control in Dynamic Dispatch Networks

Dynamic dispatch networks in logistics and transportation require real-time, constraint-aware decision-making under stochastic demand. This paper bridges mathematical optimization, optimal control theory, and reinforcement learning by establishing non-asymptotic theoretical guarantees for learning-based optimal control in constrained stochastic dispatch systems. We formulate the problem as a constrained Markov decision process, enforce feasibility via a projection-based policy …
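The projection-based feasibility idea can be illustrated on a toy dispatch action set (the constraint set, function names, and numbers here are illustrative, not the paper's model): a raw action produced by a learned policy is projected onto the nonnegative dispatch vectors whose total does not exceed a capacity.

```python
import numpy as np

def project_dispatch(a, capacity):
    """Euclidean projection of a raw action onto
    {x : x >= 0, sum(x) <= capacity} (illustrative constraint set)."""
    x = np.maximum(a, 0.0)
    if x.sum() <= capacity:
        return x
    # Otherwise project onto the simplex {x >= 0, sum(x) = capacity}
    # via the standard sort-based algorithm.
    u = np.sort(a)[::-1]
    css = np.cumsum(u) - capacity
    idx = np.arange(1, len(a) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(a - theta, 0.0)

raw_action = np.array([2.0, 1.0, -0.5])   # e.g. unconstrained policy output
feasible = project_dispatch(raw_action, capacity=2.0)
```

Applying the projection after every policy evaluation guarantees that executed actions are feasible regardless of what the learned network outputs.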

Complexity of an inexact stochastic SQP algorithm for equality constrained optimization

In this paper, we consider nonlinear optimization problems with a stochastic objective function and deterministic equality constraints. We propose an inexact two-stepsize stochastic sequential quadratic programming (SQP) algorithm and analyze its worst-case complexity under mild assumptions. The method utilizes a step decomposition strategy and handles stochastic gradient estimates by assigning different stepsizes to different components …
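A minimal sketch of the step-decomposition idea with separate stepsizes (a generic normal/tangential split on a toy linearly constrained problem; the decomposition and stepsize choices are illustrative, not the authors' exact update):

```python
import numpy as np

def sqp_decomposed_step(x, grad, c, J, alpha=1.0, beta=0.5):
    """One normal/tangential SQP step (illustrative sketch).

    grad  : (stochastic) gradient estimate of the objective at x
    c, J  : constraint values and constraint Jacobian at x
    alpha : stepsize for the normal (feasibility) component
    beta  : stepsize for the tangential (optimality) component
    """
    # Normal component: minimum-norm step reducing constraint violation.
    v = -np.linalg.pinv(J) @ c
    # Tangential component: negative gradient projected onto null(J).
    P = np.eye(len(x)) - np.linalg.pinv(J) @ J
    u = -P @ grad
    return x + alpha * v + beta * u

# Toy problem: min 0.5*||x||^2  subject to  x0 + x1 = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = np.array([3.0, -1.0])
for _ in range(50):
    x = sqp_decomposed_step(x, grad=x, c=A @ x - b, J=A)
```

Because feasibility and optimality are driven by different step components, each can be given its own stepsize rule, which is the structural point exploited in the complexity analysis.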

A Multi-Secant Limited-Memory BFGS Method

We develop a multi-secant BFGS-like quasi-Newton updating scheme that adaptively selects the number of imposed secant conditions and naturally preserves positive definiteness of the approximated Hessian. A compact representation and a corresponding limited-memory formulation are also derived. Numerical stability is ensured via an unconventional damping technique that symmetrically handles coordinate and gradient differences. The practical relevance of the proposed method is demonstrated via …
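For contrast, the classical Powell-damped single-secant BFGS update, which multi-secant schemes generalize, can be sketched as follows (this is the textbook damping, not the paper's symmetric variant):

```python
import numpy as np

def damped_bfgs_update(B, s, y, delta=0.2):
    """Powell-damped BFGS update of a Hessian approximation B.

    If the curvature s^T y is too small, y is replaced by a convex
    combination of y and B @ s so that s^T y_bar >= delta * s^T B s,
    which keeps the updated matrix positive definite.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy < delta * sBs:
        theta = (1.0 - delta) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.3, 0.5])     # s^T y < 0: the undamped update would fail
B = damped_bfgs_update(B, s, y)
```

Without damping, the rank-two update with negative curvature s^T y destroys positive definiteness; the damped update keeps all eigenvalues positive.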

Global Optimization for Combinatorial Geometry Problems Revisited in the Era of LLMs

Recent progress in LLM-driven algorithm discovery, exemplified by DeepMind’s AlphaEvolve, has produced new best-known solutions for a range of hard geometric and combinatorial problems. This raises a natural question: to what extent can modern off-the-shelf global optimization solvers match such results when the problems are formulated directly as nonlinear optimization problems (NLPs)? We revisit a …

A Proximal-Gradient Method for Solving Regularized Optimization Problems with General Constraints

We propose, analyze, and test a proximal-gradient method for solving regularized optimization problems with general constraints. The method employs a decomposition strategy to compute trial steps and uses a merit function to determine step acceptance or rejection. Under various assumptions, we establish a worst-case iteration complexity result, prove that limit points are first-order KKT points, …
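The basic proximal-gradient iteration underlying such methods, shown here for the simplest unconstrained L1-regularized case (a sketch of the generic iteration, not the paper's constrained algorithm):

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_gradient(grad_f, x0, lam, alpha, iters=200):
    """Minimize f(x) + lam*||x||_1 via x <- prox(x - alpha * grad f(x))."""
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - alpha * grad_f(x), alpha * lam)
    return x

# Toy problem: min 0.5*(x - 3)^2 + |x|, whose minimizer is x = 2.
x_star = prox_gradient(lambda x: x - 3.0, np.zeros(1), lam=1.0, alpha=0.5)
```

The regularizer is handled exactly through its proximal operator while the smooth part is handled by a gradient step; the constrained method in the paper layers a step decomposition and a merit function on top of this basic scheme.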

On constraint qualifications for lower-level sets and an augmented Lagrangian method

In this paper we consider an augmented Lagrangian method with general lower-level constraints, that is, where some of the constraints are penalized while others are kept as subproblem constraints. Motivated by some recent results on optimization problems on manifolds, we present a general theory of global convergence when a feasible approximate KKT point is found …
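The classical augmented Lagrangian iteration (with no lower-level constraints, and with the subproblem solved in closed form for this one-dimensional toy) can be sketched as:

```python
def augmented_lagrangian(rho=10.0, iters=30):
    """Solve  min x^2  s.t.  x - 1 = 0  by the augmented Lagrangian method.

    The subproblem  min_x  x^2 + lam*(x-1) + (rho/2)*(x-1)^2  has the
    closed-form minimizer x = (rho - lam) / (2 + rho); the multiplier is
    updated by the first-order rule  lam <- lam + rho * c(x).
    """
    lam = 0.0
    for _ in range(iters):
        x = (rho - lam) / (2.0 + rho)    # exact subproblem solve
        lam = lam + rho * (x - 1.0)      # multiplier update on c(x) = x - 1
    return x, lam

x, lam = augmented_lagrangian()
```

The iterates converge to the KKT pair x = 1, lam = -2; in the paper's setting the subproblem itself retains the non-penalized (lower-level) constraints, which is where the constraint-qualification analysis enters.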

Progressively Sampled Equality-Constrained Optimization

An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems where the constraints are defined by an expectation or an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving a finite sample of constraint-function terms, over which …

Retrospective Approximation Sequential Quadratic Programming for Stochastic Optimization with General Deterministic Nonlinear Constraints

In this paper, we propose a framework based on the Retrospective Approximation (RA) paradigm to solve optimization problems with a stochastic objective function and general nonlinear deterministic constraints. This framework sequentially constructs increasingly accurate approximations of the true problem, which are solved to a specified accuracy via a deterministic solver, thereby decoupling the uncertainty from …
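The RA outer loop can be sketched on an unconstrained toy problem (the sample-size and tolerance schedules, and the choice of gradient descent as the deterministic inner solver, are illustrative assumptions):

```python
import random

def ra_minimize(sample, x0=0.0, outer=6):
    """Retrospective Approximation sketch for  min_x E[(x - Z)^2].

    Each outer iteration draws a larger sample, forms the sample-average
    problem, and solves it to a tighter tolerance with a deterministic
    inner solver (plain gradient descent here), warm-started at the
    previous solution.  All schedule choices are illustrative.
    """
    x, n, tol = x0, 8, 1e-1
    for _ in range(outer):
        data = [sample() for _ in range(n)]
        grad = lambda x: sum(2.0 * (x - z) for z in data) / len(data)
        while abs(grad(x)) > tol:        # deterministic inner solve
            x -= 0.25 * grad(x)
        n, tol = 2 * n, tol / 4.0        # grow sample, tighten tolerance
    return x

random.seed(0)
x_hat = ra_minimize(lambda: random.gauss(1.0, 0.5))
```

Because each sample-average problem is fixed during the inner solve, any deterministic solver can be plugged in unchanged; randomness enters only between outer iterations.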

Optimistic Noise-Aware Sequential Quadratic Programming for Equality Constrained Optimization with Rank-Deficient Jacobians

We propose and analyze a sequential quadratic programming algorithm for minimizing a noisy nonlinear smooth function subject to noisy nonlinear smooth equality constraints. The algorithm uses a step decomposition strategy and, as a result, is robust to potential rank-deficiency in the constraints, allows for two different step size strategies, and has an early stopping mechanism. …

On liftings that improve convergence properties of Newton’s Method for Boundary Value Optimization Problems

The representation of a function in a higher-dimensional space is often referred to as lifting. Liftings can be used to reduce complexity. We are interested in the question of how liftings affect the local convergence of Newton’s method. We propose algorithms to construct liftings that potentially reduce the number of iterations via analysis of local …
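The effect of lifting can be illustrated on a toy root-finding problem: solving x^9 = 1 directly versus introducing the intermediate variable y = x^3 and applying Newton's method to the lifted system (the example and variable names are illustrative; whether a given lifting actually improves convergence is precisely the question studied in the paper):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=100):
    """Plain Newton's method for F(x) = 0."""
    x = np.array(x0, dtype=float)
    for k in range(maxit):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, k
        x = x - np.linalg.solve(J(x), r)
    return x, maxit

# Direct formulation: x^9 - 1 = 0.
F1 = lambda x: np.array([x[0]**9 - 1.0])
J1 = lambda x: np.array([[9.0 * x[0]**8]])
root, it_direct = newton(F1, J1, [2.0])

# Lifted formulation: y = x^3, giving x^3 - y = 0 and y^3 - 1 = 0.
F2 = lambda z: np.array([z[0]**3 - z[1], z[1]**3 - 1.0])
J2 = lambda z: np.array([[3.0 * z[0]**2, -1.0],
                         [0.0, 3.0 * z[1]**2]])
lifted, it_lifted = newton(F2, J2, [2.0, 8.0])
```

Both formulations recover the root x = 1; the lifted system trades a larger (but structured) Jacobian for lower-degree residual equations, which is the trade-off the proposed lifting-construction algorithms analyze.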