A homotopy for the reliable estimation of model parameters in chromatography processes

Mathematical modeling, simulation, and optimization can significantly support the development and characterization of chromatography steps in the biopharmaceutical industry. Mechanistic models in particular are increasingly preferred, since, once carefully calibrated, they can be employed for reliable optimization. However, model calibration is a difficult task in this context due to high correlations between parameters, highly …
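A homotopy of this flavor can be sketched in a few lines: the idea below blends an easy, well-conditioned surrogate objective into the hard calibration objective and warm-starts each stage at the previous solution. The toy model, data, and blending schedule are illustrative assumptions on our part, not the paper's method.

```python
# Hypothetical sketch of homotopy continuation for parameter estimation:
# blend from an easy surrogate to the hard calibration objective,
# warm-starting each stage at the previous optimum. Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 10.0, 50)
theta_true = np.array([1.2, 0.5])

def model(theta, t):
    # Toy stand-in for a mechanistic chromatography model.
    return theta[0] * np.exp(-theta[1] * t)

data = model(theta_true, t_grid) + 0.01 * rng.standard_normal(t_grid.size)

def hard_loss(theta):
    return np.sum((model(theta, t_grid) - data) ** 2)

def easy_loss(theta):
    # Strongly convex surrogate centered at a rough initial guess.
    return np.sum((theta - np.array([1.0, 1.0])) ** 2)

theta = np.array([1.0, 1.0])
for lam in np.linspace(0.0, 1.0, 11):
    # Blended objective H(theta, lam) = (1 - lam) * easy + lam * hard.
    res = minimize(lambda th: (1 - lam) * easy_loss(th) + lam * hard_loss(th),
                   theta, method="BFGS")
    theta = res.x  # warm start for the next homotopy step

print("estimated parameters:", theta)
```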

Global convergence of an augmented Lagrangian method for nonlinear programming via Riemannian optimization

Considering a standard nonlinear programming problem, one may view the set defined by a subset of the equality constraints as an embedded Riemannian manifold. In this paper we investigate the differences between the Euclidean and the Riemannian approach for this problem. It is well known that the linear independence constraint qualifications for the two approaches are equivalent. However, when considering …
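As a minimal illustration of the Riemannian view (not the paper's algorithm), the sketch below minimizes a linear function over the unit sphere, the manifold arising from one equality constraint, while a remaining equality constraint is handled by an augmented Lagrangian term; all problem data and stepsizes are assumptions for illustration.

```python
# Minimize f(x) = c^T x on the unit sphere ||x|| = 1 (treated as a manifold),
# with an extra equality constraint a^T x = 0 handled by augmented Lagrangian.
import numpy as np

c = np.array([1.0, 2.0, -1.0])
a = np.array([0.0, 1.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
mu, lam = 10.0, 0.0  # penalty parameter and multiplier estimate

for outer in range(20):
    for inner in range(200):
        # Euclidean gradient of the augmented Lagrangian in the ambient space.
        g = c + (lam + mu * (a @ x)) * a
        # Riemannian gradient: project onto the tangent space of the sphere.
        rg = g - (g @ x) * x
        # Retraction: take a step, then renormalize back onto the sphere.
        x = x - 0.01 * rg
        x = x / np.linalg.norm(x)
    lam += mu * (a @ x)  # multiplier update

print("x =", x, " a^T x =", a @ x, " ||x|| =", np.linalg.norm(x))
```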

Second-Order Contingent Derivatives: Computation and Application

It is known that second-order (Studniarski) contingent derivatives can be used to compute tangents to the solution set of a generalized equation when standard (first-order) regularity conditions are absent, but relaxed (second-order) regularity conditions are fulfilled. This fact, roughly speaking, is only relevant in practice as long as the computation of second-order contingent derivatives itself …
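For orientation, one common formulation of the second-order (Studniarski) contingent derivative, stated from the general literature rather than quoted from the paper, reads:

```latex
% For a set-valued map F : X \rightrightarrows Y and a point
% (\bar x, \bar y) in the graph of F (one common formulation):
\[
  v \in D_S^2 F(\bar x, \bar y)(u)
  \quad\Longleftrightarrow\quad
  \exists\, t_k \downarrow 0,\ (u_k, v_k) \to (u, v) \ \text{with}\
  \bar y + t_k^2 v_k \in F(\bar x + t_k u_k)\ \text{for all } k.
\]
```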

A Unified Funnel Restoration SQP Algorithm

We consider nonlinearly constrained optimization problems and discuss a generic double-loop framework, consisting of four algorithmic ingredients, that unifies a broad range of nonlinear optimization solvers. This framework has been implemented in Uno, an open-source, Swiss-army-knife-like C++ framework for nonlinearly constrained nonconvex optimization. We illustrate the framework with …
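The funnel ingredient can be conveyed in a few lines. The sketch below is a generic funnel acceptance test in the spirit of funnel methods, not Uno's actual code; the function name, shrink factor, and margin are hypothetical.

```python
# Schematic funnel acceptance test for constrained optimization.
# Generic illustration of the funnel idea, not Uno's implementation.

def funnel_accept(theta_new, f_old, f_new, funnel_width,
                  shrink=0.75, margin=1e-4):
    """Accept a trial iterate if its constraint violation theta_new stays
    strictly inside the current funnel; shrink the funnel on steps that
    mainly reduce infeasibility."""
    if theta_new >= funnel_width:
        return False, funnel_width          # outside the funnel: reject
    if f_new <= f_old - margin * theta_new:
        return True, funnel_width           # f-type step: funnel unchanged
    # h-type step: accept but tighten the funnel around the new violation
    return True, max(shrink * funnel_width, theta_new)

# Toy usage: a sequence of trial points (violation, objective).
width, f = 1.0, 10.0
for theta_t, f_t in [(0.8, 9.5), (0.5, 9.6), (0.2, 9.0), (1.2, 8.0)]:
    ok, width = funnel_accept(theta_t, f, f_t, width)
    if ok:
        f = f_t
    print(f"accepted={ok}, funnel width={width:.3f}")
```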

Probabilistic Iterative Hard Thresholding for Sparse Learning

For statistical modeling wherein the data regime is unfavorable in terms of dimensionality relative to the sample size, finding hidden sparsity in the ground truth can be critical in formulating an accurate statistical model. The so-called “\(\ell_0\) norm”, which counts the number of non-zero components in a vector, is a strong and reliable mechanism for enforcing …
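As background to the probabilistic variant, the classical deterministic iterative hard thresholding iteration x ← H_s(x − η∇f(x)) is easy to state; the sketch below applies it to sparse least squares with illustrative problem sizes (this is the textbook method, not the paper's algorithm).

```python
# Deterministic iterative hard thresholding for sparse least squares:
# keep the s largest-magnitude entries after each gradient step.
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 100, 300, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)   # roughly normalized columns
x_true = np.zeros(p)
x_true[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

def hard_threshold(x, s):
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]          # indices of s largest entries
    out[keep] = x[keep]
    return out

x = np.zeros(p)
eta = 1.0
for _ in range(200):
    grad = A.T @ (A @ x - y)                   # gradient of 0.5*||Ax - y||^2
    x = hard_threshold(x - eta * grad, s)

print("recovery error:", np.linalg.norm(x - x_true))
```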

A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization

We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method utilizes two different stepsizes: one that exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates, and a second that scales the entire step. We prove that this …
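One plausible reading of the two-stepsize idea (an assumption on our part, not the paper's exact update) is to split the SQP step into a normal component driven by the deterministic constraints and a tangential component driven by the noisy gradient, scaling only the latter by a diminishing stepsize:

```python
# Schematic two-stepsize split for equality-constrained stochastic SQP.
# alpha scales only the noisy component; beta scales the whole step.
import numpy as np

rng = np.random.default_rng(2)
J = np.array([[1.0, 1.0]])        # Jacobian of c(x) = x1 + x2 - 1
b = np.array([2.0, 0.0])          # f(x) = 0.5 * ||x - b||^2 in expectation

def c(x):
    return np.array([x[0] + x[1] - 1.0])

x = np.array([0.0, 0.0])
for k in range(1, 2001):
    g = (x - b) + 0.1 * rng.standard_normal(2)         # stochastic gradient
    v = -J.T @ np.linalg.solve(J @ J.T, c(x))          # normal step (exact)
    P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)  # projector onto null(J)
    u = -P @ g                                         # tangential step (noisy)
    alpha, beta = 1.0 / np.sqrt(k), 0.5                # diminishing vs fixed
    x = x + beta * (v + alpha * u)

print("x =", x, " c(x) =", c(x))   # solution is (1.5, -0.5)
```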

Single-Loop Deterministic and Stochastic Interior-Point Algorithms for Nonlinearly Constrained Optimization

An interior-point algorithm framework is proposed, analyzed, and tested for solving nonlinearly constrained continuous optimization problems. The main setting of interest is when the objective and constraint functions may be nonlinear and/or nonconvex, and when constraint values and derivatives are tractable to compute, but objective function values and derivatives can only be estimated. The algorithm …
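A single-loop log-barrier sketch matching this setting, with exact constraint information but only an estimated objective gradient, might look as follows; the test problem, barrier schedule, and stepsizes are illustrative assumptions, not the paper's algorithm.

```python
# Single-loop stochastic log-barrier sketch: min (x+1)^2 subject to x >= 0.
# The constraint (barrier term) is exact; the objective gradient is noisy.
import numpy as np

rng = np.random.default_rng(3)
x = 1.0                                   # strictly feasible start for x >= 0

for k in range(1, 5001):
    mu = 1.0 / np.sqrt(k)                 # barrier parameter, driven to 0
    g_f = 2.0 * (x + 1.0) + 0.2 * rng.standard_normal()  # estimated gradient
    g = g_f - mu / x                      # exact barrier gradient for x >= 0
    x_new = x - (0.2 / np.sqrt(k)) * g    # diminishing stepsize
    if x_new > 0.0:                       # keep the iterate strictly feasible
        x = x_new

print("x =", x)   # the solution of min (x+1)^2 s.t. x >= 0 is x = 0
```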

Fast Unconstrained Optimization via Hessian Averaging and Adaptive Gradient Sampling Methods

We consider minimizing finite-sum and expectation objective functions via Hessian-averaging based subsampled Newton methods. These methods allow for gradient inexactness and have fixed per-iteration Hessian approximation costs. The recent work (Na et al. 2023) demonstrated that Hessian averaging can be utilized to achieve fast \(\mathcal{O}\left(\sqrt{\frac{\log k}{k}}\right)\) local superlinear convergence for strongly convex functions in high …
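The Hessian-averaging mechanism itself is compact. The sketch below applies a running average of subsampled Hessians to a least-squares finite sum; for simplicity it uses exact gradients, whereas the paper's interest is precisely in gradient inexactness and adaptive sampling.

```python
# Hessian-averaged subsampled Newton on f(x) = (1/N) sum_i 0.5*(a_i^T x - b_i)^2.
# Illustrative sketch; the paper's stepsize and sampling rules are more refined.
import numpy as np

rng = np.random.default_rng(4)
N, d = 2000, 10
A = rng.standard_normal((N, d))
x_true = rng.standard_normal(d)
b = A @ x_true

x = np.zeros(d)
H_bar = np.eye(d)
batch = 50
for k in range(1, 101):
    idx = rng.choice(N, batch, replace=False)
    H_k = A[idx].T @ A[idx] / batch          # subsampled Hessian
    H_bar = ((k - 1) * H_bar + H_k) / k      # running Hessian average
    g = A.T @ (A @ x - b) / N                # gradient (exact, for simplicity)
    x = x - np.linalg.solve(H_bar, g)        # Newton-type step

print("error:", np.linalg.norm(x - x_true))
```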

Refining asymptotic complexity bounds for nonconvex optimization methods, including why steepest descent is o(eps^{-2}) rather than O(eps^{-2})

We revisit the standard “telescoping sum” argument ubiquitous in the final steps of analyzing the evaluation complexity of algorithms for smooth nonconvex optimization, and obtain a refined formulation of the resulting bound as a function of the requested accuracy \(\epsilon\). While bounds obtained using the standard argument are typically of the form \(O(\epsilon^{-\alpha})\) for some positive …
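For steepest descent with stepsize 1/L on an L-smooth function, each iteration before termination decreases f by at least \(\|\nabla f(x_k)\|^2/(2L)\), and the standard argument plus the flavor of the refinement can be sketched as follows (our paraphrase, not the paper's exact statement):

```latex
% Standard telescoping argument: each pre-termination iteration satisfies
% f(x_k) - f(x_{k+1}) >= ||\nabla f(x_k)||^2/(2L) >= eps^2/(2L), so
\[
  f(x_0) - f_{\mathrm{low}}
  \ \ge\ \sum_{k=0}^{N(\epsilon)-1}\bigl(f(x_k)-f(x_{k+1})\bigr)
  \ \ge\ N(\epsilon)\,\frac{\epsilon^2}{2L}
  \quad\Longrightarrow\quad
  N(\epsilon) \le \frac{2L\,\bigl(f(x_0)-f_{\mathrm{low}}\bigr)}{\epsilon^2}
  = O(\epsilon^{-2}).
\]
% Refinement: summing over all iterations of a single run gives
\[
  \sum_{k=0}^{\infty} \|\nabla f(x_k)\|^2 \le 2L\,\bigl(f(x_0)-f_{\mathrm{low}}\bigr)
  < \infty
  \quad\Longrightarrow\quad
  \min_{j \le k} \|\nabla f(x_j)\|^2 = o(1/k),
\]
% since a summable nonnegative sequence has k * min_{j<=k} (tail terms) -> 0;
% inverting this relation yields N(\epsilon) = o(\epsilon^{-2}).
```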

The Rectangular Spiral or the n_1 × n_2 × ··· × n_k Points Problem

We present a generalization of Ripà’s square spiral solution for the n × n × ··· × n Points Upper Bound Problem (for k = 2 and n = 3 this is the classical nine dots puzzle, solvable with four straight segments). Additionally, we provide a non-trivial lower bound for the k-dimensional n_1 × n_2 × ··· × n_k Points Problem. In this way, we can build a range in which, with certainty, all the best possible …