Probabilistic Iterative Hard Thresholding for Sparse Learning

In statistical modeling where the dimensionality is large relative to the sample size, identifying hidden sparsity in the ground truth can be critical to formulating an accurate statistical model. The so-called “\(\ell_0\) norm”, which counts the number of non-zero components in a vector, is a strong and reliable mechanism for enforcing … Read more
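
Since the abstract is truncated here, a minimal sketch may help fix ideas: classical iterative hard thresholding alternates a gradient step with a projection that keeps only the \(s\) largest-magnitude entries. This is the plain deterministic scheme, not the probabilistic variant the paper develops; the least-squares objective, stepsize, and sparsity level below are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def iht_least_squares(A, b, s, step=None, iters=200):
    """Classical IHT for min (1/2)||Ax - b||^2 s.t. ||x||_0 <= s (illustrative)."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the least-squares objective
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of (1/2)||Ax - b||^2
        x = hard_threshold(x - step * grad, s)
    return x
```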

A Two Stepsize SQP Method for Nonlinear Equality Constrained Stochastic Optimization

We develop a Sequential Quadratic Optimization (SQP) algorithm for minimizing a stochastic objective function subject to deterministic equality constraints. The method uses two different stepsizes: one that exclusively scales the component of the step corrupted by the variance of the stochastic gradient estimates, and a second that scales the entire step. We prove that this … Read more
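
One plausible way to picture the two-stepsize mechanism (a schematic under assumptions, not the paper's precise update): write the SQP step as \(d_k = v_k + u_k\), where \(u_k\) is the component corrupted by stochastic gradient noise and \(v_k\) the remainder; a stepsize \(\gamma_k\) scales only the noisy component, and \(\beta_k\) scales the entire step:

\[
x_{k+1} = x_k + \beta_k \left( v_k + \gamma_k\, u_k \right).
\]

The decomposition and the roles of \(\beta_k\) and \(\gamma_k\) here are illustrative assumptions made only to visualize the description in the abstract.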

Single-Loop Deterministic and Stochastic Interior-Point Algorithms for Nonlinearly Constrained Optimization

An interior-point algorithm framework is proposed, analyzed, and tested for solving nonlinearly constrained continuous optimization problems. The main setting of interest is when the objective and constraint functions may be nonlinear and/or nonconvex, and when constraint values and derivatives are tractable to compute, but objective function values and derivatives can only be estimated. The algorithm … Read more
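
For context, a hedged sketch of the construction that interior-point frameworks of this kind typically build on: inequality constraints are handled through slack variables and the standard log-barrier subproblem

\[
\min_{x,\; s > 0} \; f(x) - \mu \sum_{i} \log s_i \quad \text{subject to} \quad c(x) + s = 0,
\]

with the barrier parameter \(\mu > 0\) driven to zero; when \(f\) can only be estimated, the objective gradients entering this subproblem are replaced by stochastic estimates. This is the textbook formulation, not necessarily the exact subproblem used by the proposed algorithm.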

Fast Unconstrained Optimization via Hessian Averaging and Adaptive Gradient Sampling Methods

We consider minimizing finite-sum and expectation objective functions via Hessian-averaging based subsampled Newton methods. These methods allow for gradient inexactness and have fixed per-iteration Hessian approximation costs. The recent work (Na et al. 2023) demonstrated that Hessian averaging can be utilized to achieve fast \(\mathcal{O}\left(\sqrt{\frac{\log k}{k}}\right)\) local superlinear convergence for strongly convex functions in high … Read more
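
A minimal sketch of the Hessian-averaging idea, under assumptions: the running average \(\bar H_k = \frac{1}{k+1}\sum_{i=0}^{k} H_i\) of subsampled Hessian estimates is maintained at fixed per-iteration cost and used in place of the exact Hessian in a Newton step. The sampling scheme, regularization, and unit stepsize below are illustrative choices, not the paper's.

```python
import numpy as np

def averaged_newton(grad_est, hess_sample, x0, iters=50, reg=1e-8, rng=None):
    """Subsampled Newton with Hessian averaging (illustrative sketch).

    grad_est(x)          -> (possibly inexact) gradient estimate at x
    hess_sample(x, rng)  -> subsampled Hessian estimate at x
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    H_bar = None
    for k in range(iters):
        H_k = hess_sample(x, rng)
        # Running average: H_bar <- (k * H_bar + H_k) / (k + 1)
        H_bar = H_k if H_bar is None else (k * H_bar + H_k) / (k + 1)
        g = grad_est(x)
        step = np.linalg.solve(H_bar + reg * np.eye(len(x)), g)
        x = x - step
    return x
```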

Refining asymptotic complexity bounds for nonconvex optimization methods, including why steepest descent is \(o(\epsilon^{-2})\) rather than \(O(\epsilon^{-2})\)

We revisit the standard “telescoping sum” argument ubiquitous in the final steps of analyzing the evaluation complexity of algorithms for smooth nonconvex optimization, and obtain a refined formulation of the resulting bound as a function of the requested accuracy \(\epsilon\). While bounds obtained using the standard argument are typically of the form \(O(\epsilon^{-\alpha})\) for some positive … Read more
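
For readers new to the argument being refined, the classical telescoping bound for steepest descent with stepsize \(1/L\) on an \(L\)-smooth function \(f\) bounded below by \(f_{\mathrm{low}}\) goes as follows: each iteration satisfies a sufficient-decrease inequality, and summing over the first \(N\) iterations telescopes,

\[
f(x_k) - f(x_{k+1}) \;\ge\; \frac{1}{2L}\,\|\nabla f(x_k)\|^2
\quad\Longrightarrow\quad
f(x_0) - f_{\mathrm{low}} \;\ge\; \sum_{k=0}^{N-1} \frac{1}{2L}\,\|\nabla f(x_k)\|^2 \;\ge\; \frac{N}{2L}\, \min_{0 \le k < N} \|\nabla f(x_k)\|^2.
\]

Hence some gradient norm falls below \(\epsilon\) after at most \(N = 2L\,(f(x_0) - f_{\mathrm{low}})\,\epsilon^{-2}\) iterations, the familiar \(O(\epsilon^{-2})\) bound that the paper sharpens to \(o(\epsilon^{-2})\).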

The Rectangular Spiral or the \(n_1 \times n_2 \times \cdots \times n_k\) Points Problem

We present a generalization of Ripà’s square spiral solution for the \(n \times n \times \cdots \times n\) Points Upper Bound Problem. Additionally, we provide a non-trivial lower bound for the \(k\)-dimensional \(n_1 \times n_2 \times \cdots \times n_k\) Points Problem. In this way, we can build a range in which, with certainty, all the best possible … Read more

Global Optimization of Non-Linear Systems of Equations by Simulating the Flight of a Projectile in the Conformational Space

A new heuristic optimization algorithm is presented, based on an analogy with the physical phenomenon of a projectile launched in a conformational space under the influence of a gravitational force. Its simplicity of implementation and the option to enhance it with local search methods make it well suited for the optimization of non-linear systems of equations. The … Read more
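
As a toy illustration of the analogy only (not the authors' algorithm): one can picture repeatedly launching a projectile from the current best point with random direction and speed, integrating a simple ballistic trajectory, and sampling the objective along the flight path. Every constant below is an assumption made for the sketch.

```python
import numpy as np

def projectile_search(f, x0, iters=500, g=9.81, dt=0.05, steps=40, rng=None):
    """Toy projectile-flight heuristic for minimizing f (illustrative only)."""
    rng = rng or np.random.default_rng(0)
    best_x = np.asarray(x0, dtype=float)
    best_f = f(best_x)
    n = best_x.size
    for _ in range(iters):
        v = rng.normal(size=n) * rng.uniform(0.5, 5.0)  # random launch velocity
        x = best_x.copy()
        for _ in range(steps):      # integrate a simple ballistic trajectory
            x = x + v * dt
            v[-1] -= g * dt         # "gravity" pulls along the last coordinate
            fx = f(x)
            if fx < best_f:         # keep the best point visited in flight
                best_x, best_f = x.copy(), fx
    return best_x, best_f
```

For a non-linear system \(F(x) = 0\), one would take \(f(x) = \|F(x)\|^2\), so that global minimizers with value zero are exactly the roots.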

Immunity to Increasing Condition Numbers of Linear Superiorization versus Linear Programming

Given a family of linear constraints and a linear objective function, one can consider whether to apply a Linear Programming (LP) algorithm or use a Linear Superiorization (LinSup) algorithm on these data. In the LP methodology one aims at finding a point that fulfills the constraints and has the minimal value of the objective function … Read more
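
To make the comparison concrete, here is a minimal superiorization-style loop under assumptions: a feasibility-seeking sweep of orthogonal projections onto the half-spaces \(a_i^\top x \le b_i\) is interleaved with bounded, shrinking perturbations along the negative objective gradient \(-c\). This is a generic sketch of the LinSup idea, not the specific algorithm studied in the paper.

```python
import numpy as np

def linsup(A, b, c, x0, sweeps=200, eta=1.0, decay=0.9):
    """Generic linear superiorization sketch: feasibility-seeking projections
    onto {x : A x <= b}, perturbed toward smaller objective c @ x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        x = x - eta * c / np.linalg.norm(c)   # bounded perturbation step
        eta *= decay                          # geometric decay => summable perturbations
        for a_i, b_i in zip(A, b):            # projection sweep over the half-spaces
            viol = a_i @ x - b_i
            if viol > 0:                      # project only if the constraint is violated
                x = x - viol * a_i / (a_i @ a_i)
    return x
```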

Modified Line Search Sequential Quadratic Methods for Equality-Constrained Optimization with Unified Global and Local Convergence Guarantees

In this paper, we propose a method that has foundations in the line search sequential quadratic programming paradigm for solving general nonlinear equality constrained optimization problems. The method employs a carefully designed modified line search strategy that utilizes second-order information of both the objective and constraint functions, as required, to mitigate the Maratos effect. Contrary … Read more
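
For background on the effect being mitigated: the classical remedy is a second-order correction, sketched below as context; the paper's modified line search is a different, line-search-based mechanism, so this display is not the proposed method. If the trial step \(d_k\) from the SQP subproblem is rejected because constraint curvature inflates \(\|c(x_k + d_k)\|\), one computes a minimum-norm correction \(s_k\) and retries \(x_k + d_k + s_k\):

\[
J(x_k)\, s_k = -\,c(x_k + d_k), \qquad s_k = -J(x_k)^\top \left( J(x_k) J(x_k)^\top \right)^{-1} c(x_k + d_k),
\]

where \(J\) denotes the Jacobian of the equality constraints \(c\).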

Unifying nonlinearly constrained nonconvex optimization

Derivative-based iterative methods for nonlinearly constrained nonconvex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework built from four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, … Read more
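
To illustrate what such an abstract framework might look like structurally, here is a hypothetical Python sketch; it is not Uno's actual C++ interface, and since the abstract is truncated before listing the four ingredients, only two generic ones (direction computation and globalization) are shown, both named by assumption.

```python
from abc import ABC, abstractmethod

class DirectionStrategy(ABC):
    """Hypothetical ingredient: how a trial direction is computed (e.g., SQP subproblem)."""
    @abstractmethod
    def compute_direction(self, iterate): ...

class GlobalizationMechanism(ABC):
    """Hypothetical ingredient: how global convergence is promoted (e.g., line search)."""
    @abstractmethod
    def accept(self, iterate, direction): ...

class Solver:
    """Generic workflow assembled from interchangeable ingredients."""
    def __init__(self, direction: DirectionStrategy, globalization: GlobalizationMechanism):
        self.direction = direction
        self.globalization = globalization

    def solve(self, iterate, max_iter=100, converged=lambda it: False):
        for _ in range(max_iter):
            if converged(iterate):
                break
            d = self.direction.compute_direction(iterate)
            iterate = self.globalization.accept(iterate, d)
        return iterate
```

The design point being illustrated is that the outer loop never changes; swapping concrete subclasses in and out yields different named methods from the same workflow.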