Isotonic Optimization with Fixed Costs

This paper introduces a generalized isotonic optimization framework over an arborescence graph, where each node incurs state-dependent convex costs and a fixed cost upon strict increases. We begin with the special case in which the arborescence is a path and develop a dynamic programming (DP) algorithm with an initial complexity of $O(n^3)$, which we improve …
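For intuition on the path special case, a DP over a discretized grid of candidate levels can be sketched as below. This is an illustrative $O(nk)$ recursion under a discretization assumption (the function name, the level grid, and the inputs are hypothetical; it is not the paper's $O(n^3)$ algorithm or its improvement):

```python
def isotonic_path_fixed_cost(costs, fixed_cost):
    """Illustrative DP for the path case on a discretized grid of levels.

    costs[i][j]: convex stage cost of node i at the j-th candidate level
    (columns assumed sorted in increasing level order). Levels must be
    nondecreasing along the path, and a fixed cost is paid on every
    strict increase. Runs in O(n*k) for n nodes and k levels.
    """
    n, k = len(costs), len(costs[0])
    dp = list(costs[0])                    # best cost ending at each level
    for i in range(1, n):
        new = [0.0] * k
        best_lower = float("inf")          # min of dp over strictly smaller levels
        for j in range(k):
            # stay at the same level (no fixed cost) or jump up (pay it)
            new[j] = costs[i][j] + min(dp[j], best_lower + fixed_cost)
            best_lower = min(best_lower, dp[j])
        dp = new
    return min(dp)
```

The inner prefix-minimum makes each stage linear in the grid size, which is the kind of structure the improved complexity results exploit.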

Alternating Iteratively Reweighted \(\ell_1\) and Subspace Newton Algorithms for Nonconvex Sparse Optimization

This paper presents a novel hybrid algorithm for minimizing the sum of a continuously differentiable loss function and a nonsmooth, possibly nonconvex, sparsity‑promoting regularizer. The proposed method adaptively switches between solving a reweighted \(\ell_1\)-regularized subproblem and performing an inexact subspace Newton step. The reweighted \(\ell_1\)-subproblem admits an efficient closed-form solution via the soft-thresholding operator, thereby …
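The soft-thresholding operator mentioned above has a well-known closed form. The sketch below shows it together with one reweighted \(\ell_1\) proximal-gradient step; the weighting \(w_i = 1/(|x_i| + \epsilon)\) is a common reweighting choice used here for illustration, not necessarily the paper's:

```python
import numpy as np

def soft_threshold(x, tau):
    """Closed-form prox of tau*||.||_1: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def reweighted_l1_step(x, grad, step, lam, eps=1e-8):
    """One proximal-gradient step on a reweighted l1 model.

    Weights w_i = 1/(|x_i| + eps) penalize small entries more, promoting
    sparsity; the subproblem still solves in closed form componentwise.
    """
    w = 1.0 / (np.abs(x) + eps)
    return soft_threshold(x - step * grad, step * lam * w)
```

Because the prox is separable and closed-form, each reweighted subproblem costs only O(n), which is what makes alternating it with occasional Newton steps attractive.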

Nonlinear Model Predictive Control with an Infinite Horizon Approximation

Current nonlinear model predictive control (NMPC) strategies are formulated as finite predictive horizon nonlinear programs (NLPs), which maintain NMPC stability and recursive feasibility through the construction of terminal cost functions and/or terminal constraints. However, computing these terminal properties may pose formidable challenges with a fixed horizon, particularly in the context of nonlinear dynamic processes. Motivated …

Active-Set Identification in Noisy and Stochastic Optimization

Identifying active constraints from a point near an optimal solution is important both theoretically and practically in constrained continuous optimization, as it can help identify optimal Lagrange multipliers and essentially reduces an inequality-constrained problem to an equality-constrained one. Traditional active-set identification guarantees have been proved under assumptions of smoothness and constraint qualifications, and assume exact …

AS-BOX: Additional Sampling Method for Weighted Sum Problems with Box Constraints

A class of optimization problems characterized by a weighted finite-sum objective function subject to box constraints is considered. We propose a novel stochastic optimization method, named AS-BOX (Additional Sampling for BOX constraints), that combines projected gradient directions with adaptive variable sample size strategies and nonmonotone line search. The method dynamically adjusts the batch size based …
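The projection onto a box is a componentwise clip, which is what makes projected-gradient directions cheap in this setting. A minimal sketch of one such step (AS-BOX's adaptive sampling and nonmonotone line search are not shown):

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi] is a componentwise clip."""
    return np.minimum(np.maximum(x, lo), hi)

def projected_gradient_step(x, grad, step, lo, hi):
    """One projected-gradient step: descend, then clip back into the box."""
    return project_box(x - step * grad, lo, hi)
```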

Asynchronous Adaptive Gradient Tracking Methods for Distributed Stochastic Optimization Problems with Decision-dependent Distributions

This paper proposes a distributed asynchronous adaptive gradient tracking method, DASYAGT, to solve distributed stochastic optimization problems with decision-dependent distributions over directed graphs. DASYAGT uses local adaptive gradients to estimate the gradient of the objective function and introduces an auxiliary running-sum variable to handle asynchrony. We show that the iterates generated by DASYAGT …

Gradient Tracking Methods for Distributed Stochastic Optimization Problems with Decision-dependent Distributions

This paper seeks both the performatively stable solution and the optimal solution of distributed stochastic optimization problems with decision-dependent distributions, i.e., finite-sum stochastic optimization problems over a network in which the data distribution depends on the decision variables. For the performatively stable solution, we provide an algorithm, DSGTD-GD, which combines the distributed stochastic …
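For reference, the standard deterministic gradient-tracking recursion that methods in this family build on can be sketched on a toy quadratic problem (the mixing matrix, step size, and objectives below are illustrative; the paper's algorithms additionally handle stochastic gradients and decision-dependent distributions):

```python
import numpy as np

# Gradient-tracking template over a 3-agent network with a doubly
# stochastic mixing matrix; each agent holds a scalar decision variable.
m = 3
W = np.array([[.50, .25, .25],
              [.25, .50, .25],
              [.25, .25, .50]])          # doubly stochastic mixing weights
a = np.array([1.0, 2.0, 3.0])            # f_i(x) = 0.5 * (x - a_i)^2
grad = lambda x: x - a                   # per-agent gradients, elementwise

x = np.zeros(m)
y = grad(x)                              # trackers start at local gradients
for _ in range(200):
    x_new = W @ x - 0.1 * y              # consensus step + tracked gradient
    y = W @ y + grad(x_new) - grad(x)    # y tracks the average gradient
    x = x_new
# all agents approach the minimizer of the average objective: mean(a) = 2.0
```

The second update is the "tracking" step: because W is doubly stochastic, the average of y equals the average of the local gradients at every iteration, so each agent effectively descends along the global gradient despite only local communication.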

Greedy Algorithms with Imprecise Oracles for Submodular Knapsack Problems

We consider the problem of maximizing a monotone increasing, normalized, and submodular function defined on a set of weighted items under a knapsack constraint. A well-known greedy algorithm, analyzed by Wolsey (1982), achieves an approximation factor of \(0.357\) for this problem. This greedy algorithm starts with an empty solution set and iteratively generates a feasible …
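The greedy template can be sketched as below: repeatedly add the feasible item with the best marginal-gain-to-weight ratio, then compare the result against the best feasible singleton. This is an illustrative density-greedy variant with positive weights assumed; Wolsey's 1982 rule differs in its selection and stopping details:

```python
def greedy_knapsack(items, weights, budget, f):
    """Density greedy for monotone submodular f under a knapsack constraint.

    items: ordered list of elements; weights: dict of positive item weights;
    f: set function evaluated on subsets of items. The final comparison
    against the best single item guards against the greedy over-committing
    to cheap, low-value items early on.
    """
    S, used = set(), 0.0
    remaining = list(items)
    while True:
        best, best_ratio = None, 0.0
        for e in remaining:
            gain = f(S | {e}) - f(S)                 # marginal gain of e
            if used + weights[e] <= budget and gain / weights[e] > best_ratio:
                best, best_ratio = e, gain / weights[e]
        if best is None:
            break
        S.add(best)
        used += weights[best]
        remaining.remove(best)
    feasible = [e for e in items if weights[e] <= budget]
    if feasible:
        top = max(feasible, key=lambda e: f({e}))    # best single item
        if f({top}) > f(S):
            return {top}
    return S
```

On a small coverage instance where one heavy item covers more than two cheap ones combined, the singleton comparison is what rescues the solution; this is the standard fix for the ratio greedy's worst case.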

Active-set Newton-MR methods for nonconvex optimization problems with bound constraints

This paper presents active-set methods for minimizing nonconvex twice-continuously differentiable functions subject to bound constraints. Within the faces of the feasible set, we employ descent methods with Armijo line search, utilizing approximated Newton directions obtained through the Minimum Residual (MINRES) method. To escape the faces, we investigate the use of the Spectral Projected Gradient (SPG) …

Combining Simulation with Machine Learning and Optimization to Assess Green Hydrogen Production via Offshore Wind in the Dutch North Sea

As countries seek to decarbonize their energy systems, green hydrogen has emerged as a promising energy carrier. This paper studies the production of green hydrogen from offshore wind in the Dutch North Sea, with particular emphasis on understanding how system design decisions and uncertain parameters affect key performance indicators. The analysis builds on a detailed …