Nonlinear Model Predictive Control with an Infinite Horizon Approximation

Current nonlinear model predictive control (NMPC) strategies are formulated as finite predictive horizon nonlinear programs (NLPs), which maintain NMPC stability and recursive feasibility through the construction of terminal cost functions and/or terminal constraints. However, computing these terminal properties may pose formidable challenges with a fixed horizon, particularly in the context of nonlinear dynamic processes. Motivated …

Active-Set Identification in Noisy and Stochastic Optimization

Identifying active constraints from a point near an optimal solution is important both theoretically and practically in constrained continuous optimization, as it can help identify optimal Lagrange multipliers and essentially reduces an inequality-constrained problem to an equality-constrained one. Traditional active-set identification guarantees have been proved under assumptions of smoothness and constraint qualifications, and assume exact …

AS-BOX: Additional Sampling Method for Weighted Sum Problems with Box Constraints

A class of optimization problems characterized by a weighted finite-sum objective function subject to box constraints is considered. We propose a novel stochastic optimization method, named AS-BOX (Additional Sampling for BOX constraints), that combines projected gradient directions with adaptive variable sample size strategies and nonmonotone line search. The method dynamically adjusts the batch size based …
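The projected-gradient core that such a method builds on can be sketched in a few lines. Everything below is a minimal illustration on a hypothetical weighted-sum objective with a fixed step size; the function names are made up, and AS-BOX's adaptive sample sizes and nonmonotone line search are deliberately omitted:

```python
import numpy as np

def project_box(x, lo, hi):
    """Project x onto the box [lo, hi] componentwise."""
    return np.clip(x, lo, hi)

def projected_gradient_box(grad, x0, lo, hi, step=0.1, iters=200):
    """Plain projected-gradient iteration: x+ = P_box(x - step * grad(x))."""
    x = x0.copy()
    for _ in range(iters):
        x = project_box(x - step * grad(x), lo, hi)
    return x

# Toy weighted sum: f(x) = sum_i w_i * ||x - a_i||^2 restricted to [0, 1]^2
a = np.array([[2.0, -1.0], [0.5, 0.5]])
w = np.array([0.3, 0.7])
grad = lambda x: 2 * np.sum(w[:, None] * (x - a), axis=0)
x_star = projected_gradient_box(grad, np.zeros(2), 0.0, 1.0)
```

Since the weights sum to one, the unconstrained minimizer is the weighted mean of the points `a_i`; when that mean lies inside the box, the projected iteration converges to it.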

Active-set Newton-MR methods for nonconvex optimization problems with bound constraints

This paper presents active-set methods for minimizing nonconvex twice-continuously differentiable functions subject to bound constraints. Within the faces of the feasible set, we employ descent methods with Armijo line search, utilizing approximated Newton directions obtained through the Minimum Residual (MINRES) method. To escape the faces, we investigate the use of the Spectral Projected Gradient (SPG) …

A user manual for cuHALLaR: A GPU-accelerated low-rank semidefinite programming solver

We present a Julia-based interface to the precompiled HALLaR and cuHALLaR binaries for large-scale semidefinite programs (SDPs). Both solvers are established as fast and numerically stable, and accept problem data in SDPA-compatible formats as well as a new enhanced data format that exploits Hybrid Sparse Low-Rank (HSLR) structure. The interface allows users to load …

Polyconvex double well functions

We investigate polyconvexity of the double well function $f(X) := |X-X_1|^2|X-X_2|^2$ for given matrices $X_1, X_2 \in \mathbb{R}^{n \times n}$. Such functions are fundamental in the modeling of phase transitions in materials, but their non-convex nature presents challenges for the analysis of variational problems. We prove that $f$ is polyconvex if and only if the …
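As a quick numerical illustration of the definition (with an arbitrary, illustrative choice of wells and Frobenius norms for $|\cdot|$), the function vanishes exactly at its two wells $X_1$ and $X_2$ and is positive elsewhere:

```python
import numpy as np

def double_well(X, X1, X2):
    """f(X) = |X - X1|^2 * |X - X2|^2, with |.| the Frobenius norm."""
    return np.linalg.norm(X - X1, 'fro')**2 * np.linalg.norm(X - X2, 'fro')**2

# Illustrative wells: X1 = I, X2 = -I in R^{2x2}
X1, X2 = np.eye(2), -np.eye(2)
```

For these wells, `double_well(0, X1, X2)` equals $|{-I}|^2 \cdot |I|^2 = 2 \cdot 2 = 4$, while the function is zero at each well.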

On the boundedness of multipliers in augmented Lagrangian methods for mathematical programs with complementarity constraints

In this paper, we present a theoretical analysis of augmented Lagrangian (AL) methods applied to mathematical programs with complementarity constraints (MPCCs). Our focus is on a variant that reformulates the complementarity constraints using slack variables, where these constraints are handled directly in the subproblems rather than being penalized. We introduce specialized constraint qualifications (CQs) of …

A First-Order Algorithm for an Optimization Problem with Improved Convergence when the Problem is Convex

We propose a first-order algorithm, a modified version of FISTA, to solve an optimization problem whose objective is the sum of a possibly nonconvex function with Lipschitz continuous gradient and a convex, possibly nonsmooth function. The algorithm is shown to have an iteration complexity of \(\mathcal{O}(\epsilon^{-2})\) to find an …
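For reference, the baseline FISTA iteration that such a method modifies looks as follows on a toy composite problem. The test problem (an $\ell_1$-regularized least-squares instance), the fixed Lipschitz constant, and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(grad_f, prox_g, x0, L, iters=300):
    """Standard FISTA: proximal-gradient steps with Nesterov momentum, step 1/L."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy composite problem: min_x 0.5 * ||x - b||^2 + lam * ||x||_1
b = np.array([3.0, -0.2, 1.5])
lam = 0.5
x = fista(lambda y: y - b, lambda v, tau: soft_threshold(v, lam * tau),
          np.zeros(3), L=1.0)
```

For this separable instance the solution has the closed form `soft_threshold(b, lam)`, which makes it a convenient correctness check.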

rAdam: a restart Adam method to escape local minima for bound-constrained nonlinear optimization problems

This paper presents a restart version of the Adaptive Moment Estimation (Adam) method for bound-constrained nonlinear optimization problems. It aims to avoid getting trapped in local minima and to enable exploration toward the global optimum. The proposed method combines an adapted restart strategy with a barrier methodology to handle the bound constraints. Computational comparison with …
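The restart idea can be sketched as follows: run standard Adam, but periodically reset the moment estimates and the bias-correction counter. This is only an illustration under stated assumptions; the restart schedule and all names are hypothetical, and the bounds are enforced here by simple clipping rather than the paper's barrier methodology:

```python
import numpy as np

def adam_restart(grad, x0, lo, hi, lr=0.01, restart_every=200, iters=600,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam with periodic moment restarts; iterates clipped to the box [lo, hi]."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    t = 0
    for k in range(iters):
        if k % restart_every == 0:        # restart: wipe first/second moments
            m[:] = 0.0
            v[:] = 0.0
            t = 0
        t += 1
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g   # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g  # second-moment estimate
        mhat = m / (1 - beta1**t)         # bias correction
        vhat = v / (1 - beta2**t)
        x = np.clip(x - lr * mhat / (np.sqrt(vhat) + eps), lo, hi)
    return x

# Toy bound-constrained problem: min (x - 0.3)^2 on [0, 1]
x = adam_restart(lambda x: 2 * (x - 0.3), np.array([0.9]), 0.0, 1.0)
```

On this convex toy problem the restarts are harmless; their intended benefit, escaping poor local minima, only shows up on multimodal objectives.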

ASPEN: An Additional Sampling Penalty Method for Finite-Sum Optimization Problems with Nonlinear Equality Constraints

We propose a novel algorithm for solving non-convex, nonlinear equality-constrained finite-sum optimization problems. The proposed algorithm incorporates an additional sampling strategy for updating the sample size into the well-known framework of quadratic penalty methods. Thus, depending on the problem at hand, the resulting method may exhibit a sample size strategy ranging from a mini-batch on one …
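The quadratic-penalty-plus-sampling idea can be sketched on a toy problem: take mini-batch gradient steps on the penalty function $\phi(x) = f(x) + \tfrac{\rho}{2}\,c(x)^2$ while growing the batch size over time. The objective, the constraint, the growth schedule, and all names below are illustrative assumptions, not ASPEN itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite sum: f(x) = (1/N) * sum_i 0.5 * ||x - a_i||^2,
# with one equality constraint c(x) = x[0] + x[1] - 1 = 0.
a = rng.normal(size=(200, 2))

def penalty_with_sampling(a, rho=20.0, step=0.01, iters=400):
    """Mini-batch gradient descent on the quadratic penalty
    phi(x) = f(x) + (rho/2) * c(x)^2, growing the sample size over time."""
    x = np.zeros(2)
    batch = 8
    for k in range(iters):
        idx = rng.integers(0, len(a), size=min(batch, len(a)))
        g_f = x - a[idx].mean(axis=0)             # sampled gradient of f
        c = x[0] + x[1] - 1.0                     # constraint value
        g_pen = rho * c * np.array([1.0, 1.0])    # gradient of penalty term
        x = x - step * (g_f + g_pen)
        if (k + 1) % 80 == 0:
            batch *= 2                            # "additional sampling": grow batch
    return x

x = penalty_with_sampling(a)
```

With a fixed penalty parameter the iterates only satisfy the constraint approximately (to within $O(1/\rho)$); a full penalty method would drive $\rho \to \infty$ across outer iterations.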