Fast Unconstrained Optimization via Hessian Averaging and Adaptive Gradient Sampling Methods

We consider minimizing finite-sum and expectation objective functions via Hessian-averaging-based subsampled Newton methods. These methods allow for gradient inexactness and have fixed per-iteration Hessian approximation costs. The recent work (Na et al. 2023) demonstrated that Hessian averaging can be used to achieve fast \(\mathcal{O}\left(\sqrt{\frac{\log k}{k}}\right)\) local superlinear convergence for strongly convex functions in high … Read more
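The core idea, uniformly averaging noisy subsampled Hessians so the estimate concentrates around the true Hessian while each iteration pays for only one new sample, admits a compact sketch. The following is a minimal illustration under assumed choices (uniform weights \(w_k = 1/(k+1)\), a damped Newton step, a synthetic quadratic test problem), not the authors' exact scheme.

```python
import numpy as np

def hessian_averaged_newton(grad, hess_sample, x0, iters=50, lam=1e-8):
    """Subsampled Newton with uniform Hessian averaging (illustrative sketch).

    grad(x)        -- (possibly inexact) gradient oracle
    hess_sample(x) -- noisy/subsampled Hessian estimate at x
    """
    x = x0.astype(float).copy()
    H_avg = hess_sample(x)                    # H_0: first Hessian sample
    for k in range(1, iters + 1):
        # Uniform averaging: H_k = (1 - w) H_{k-1} + w * (new sample), w = 1/(k+1).
        w = 1.0 / (k + 1)
        H_avg = (1.0 - w) * H_avg + w * hess_sample(x)
        # Newton step on the averaged Hessian; lam*I guards against near-singularity.
        d = np.linalg.solve(H_avg + lam * np.eye(x.size), -grad(x))
        x = x + d
    return x

# Toy strongly convex problem f(x) = 0.5 x^T A x with symmetric Hessian noise.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0, 100.0])
noise = lambda: (lambda N: 0.05 * (N + N.T))(rng.standard_normal((3, 3)))
x_star = hessian_averaged_newton(lambda x: A @ x, lambda x: A + noise(), np.ones(3))
print(np.linalg.norm(x_star))   # should be close to 0
```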

Refining asymptotic complexity bounds for nonconvex optimization methods, including why steepest descent is \(o(\epsilon^{-2})\) rather than \(O(\epsilon^{-2})\)

We revisit the standard “telescoping sum” argument ubiquitous in the final steps of analyzing the evaluation complexity of algorithms for smooth nonconvex optimization, and obtain a refined formulation of the resulting bound as a function of the requested accuracy \(\epsilon\). While bounds obtained using the standard argument are typically of the form \(O(\epsilon^{-\alpha})\) for some positive … Read more
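For orientation, here is the standard telescoping argument for steepest descent with stepsize \(1/L\) on an \(L\)-smooth function, which yields the familiar \(O(\epsilon^{-2})\) bound that the paper refines; this recap is textbook material, not the paper's sharpened result.

```latex
% Descent lemma with stepsize 1/L gives a per-iteration decrease:
%   f(x_{k+1}) \le f(x_k) - \tfrac{1}{2L}\,\|\nabla f(x_k)\|^2.
% Summing over k = 0, ..., K-1 telescopes the function values:
\[
  \frac{1}{2L}\sum_{k=0}^{K-1}\|\nabla f(x_k)\|^2
  \;\le\; f(x_0) - f(x_K) \;\le\; f(x_0) - f_{\mathrm{low}},
\]
\[
  \min_{0 \le k < K}\|\nabla f(x_k)\|
  \;\le\; \sqrt{\frac{2L\,\bigl(f(x_0) - f_{\mathrm{low}}\bigr)}{K}},
\]
% so K = O(\epsilon^{-2}) iterations suffice for \min_k \|\nabla f(x_k)\| \le \epsilon.
% The paper's refinement shows the true dependence is o(\epsilon^{-2}).
```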

The Rectangular Spiral or the \(n_1 \times n_2 \times \cdots \times n_k\) Points Problem

A generalization of Ripà’s square spiral solution for the \(n \times n \times \cdots \times n\) Points Upper Bound Problem. Additionally, we provide a non-trivial lower bound for the \(k\)-dimensional \(n_1 \times n_2 \times \cdots \times n_k\) Points Problem. In this way, we can build a range in which, with certainty, all the best possible … Read more

Global Optimization of Non-Linear Systems of Equations by Simulating the Flight of a Projectile in the Conformational Space

A new heuristic optimization algorithm is presented, based on an analogy with the physical phenomenon of a projectile launched in a conformational space under the influence of a gravitational force. Its simple implementation and the option to enhance it with local search methods make it well suited to the optimization of non-linear systems of equations. The … Read more
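The abstract does not spell out the dynamics, so the following is only a guessed illustration of how a projectile analogy might drive such a search: particles are launched with random velocities, a "gravity" term bends each trajectory toward the best point found so far, and the objective is the squared residual norm of the nonlinear system. All names and update rules here are hypothetical, not the authors' algorithm.

```python
import numpy as np

def projectile_search(residuals, lo, hi, launches=200, steps=50, g=0.2, seed=0):
    """Projectile-flight heuristic for F(x) = 0 via min ||F(x)||^2 (sketch)."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    f = lambda x: float(np.sum(np.asarray(residuals(x)) ** 2))
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(launches):
        x = rng.uniform(lo, hi)          # launch position
        v = rng.normal(0.0, 0.5, dim)    # launch velocity
        for _ in range(steps):
            v += g * (best_x - x)        # "gravity" toward the incumbent best
            x = np.clip(x + v, lo, hi)   # fly one step, stay in the box
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example system: x^2 + y^2 = 4 and x*y = 1, searched inside [-3, 3]^2.
res = lambda z: [z[0]**2 + z[1]**2 - 4.0, z[0]*z[1] - 1.0]
x, fval = projectile_search(res, np.full(2, -3.0), np.full(2, 3.0))
print(x, fval)   # fval should be near 0 at an (approximate) root
```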

Immunity to Increasing Condition Numbers of Linear Superiorization versus Linear Programming

Given a family of linear constraints and a linear objective function, one can consider whether to apply a Linear Programming (LP) algorithm or a Linear Superiorization (LinSup) algorithm to this data. In the LP methodology one aims at finding a point that fulfills the constraints and attains the minimal value of the objective function … Read more
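As a rough illustration of the LinSup side, a superiorization loop interlaces a feasibility-seeking projection method for \(Ax \le b\) with shrinking perturbations along \(-c\) that steer the iterates toward lower objective values. The sketch below assumes orthogonal projections onto violated half-spaces and geometrically decaying perturbation sizes; it is schematic, not the paper's exact implementation.

```python
import numpy as np

def linsup(A, b, c, x0, sweeps=200, alpha=0.99):
    """Linear superiorization sketch for c^T x over {x : A x <= b}.

    Alternates an objective-reducing perturbation along -c (with geometrically
    decaying sizes) with one sweep of orthogonal projections onto the violated
    half-spaces a_i^T x <= b_i.
    """
    x = np.asarray(x0, float).copy()
    beta = 1.0
    for _ in range(sweeps):
        x -= beta * c / np.linalg.norm(c)   # superiorization step
        beta *= alpha
        for a_i, b_i in zip(A, b):          # feasibility-seeking sweep
            viol = a_i @ x - b_i
            if viol > 0.0:
                x -= viol * a_i / (a_i @ a_i)
    return x

# Box example: minimize x1 + x2 over [0, 1]^2; LinSup drifts toward (0, 0).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])
print(linsup(A, b, c=np.array([1.0, 1.0]), x0=np.array([0.5, 0.5])))
```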

Modified Line Search Sequential Quadratic Methods for Equality-Constrained Optimization with Unified Global and Local Convergence Guarantees

In this paper, we propose a method with foundations in the line search sequential quadratic programming (SQP) paradigm for solving general nonlinear equality-constrained optimization problems. The method employs a carefully designed modified line search strategy that utilizes second-order information of both the objective and constraint functions, as required, to mitigate the Maratos effect. Contrary … Read more
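To fix ideas, a bare-bones line search SQP iteration for \(\min f(x)\) subject to \(c(x) = 0\) solves a KKT system for the step and backtracks on a merit function. The sketch below uses a plain \(\ell_1\) merit and a crude sufficient-decrease test; the paper's modified line search and its Maratos-effect mitigation are precisely what this naive version lacks.

```python
import numpy as np

def sqp_equality(f, g, H, c, J, x0, iters=30, rho=10.0):
    """Bare-bones line search SQP for min f(x) s.t. c(x) = 0 (illustrative).

    g = grad f, H = model Hessian, J = Jacobian of c.
    The step solves the KKT system; acceptance backtracks on an l1 merit.
    """
    x = np.asarray(x0, float)
    n = x.size
    merit = lambda y: f(y) + rho * np.sum(np.abs(c(y)))
    for _ in range(iters):
        Jx, cx = J(x), c(x)
        m = cx.size
        # KKT system:  [H  J^T] [d  ]   [-g]
        #              [J   0 ] [lam] = [-c]
        K = np.block([[H(x), Jx.T], [Jx, np.zeros((m, m))]])
        d = np.linalg.solve(K, np.concatenate([-g(x), -cx]))[:n]
        # Crude backtracking on the merit function (no Maratos safeguard).
        t = 1.0
        while merit(x + t * d) > merit(x) - 1e-4 * t * np.dot(d, d) and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x

# Example: min x1^2 + x2^2  s.t.  x1 + x2 = 2; the minimizer is (1, 1).
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: 2.0 * x
H = lambda x: 2.0 * np.eye(2)
c = lambda x: np.array([x[0] + x[1] - 2.0])
J = lambda x: np.array([[1.0, 1.0]])
print(sqp_equality(f, g, H, c, J, np.array([3.0, 0.0])))   # -> [1. 1.]
```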

Unifying nonlinearly constrained nonconvex optimization

Derivative-based iterative methods for nonlinearly constrained nonconvex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework built on four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, … Read more
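As a schematic of what such a unification looks like in code, the loop below separates direction computation, an acceptance test, a step-control mechanism, and a termination check into pluggable slots, then instantiates them as steepest descent with Armijo backtracking. The slot names are our own illustration and do not reproduce Uno's actual interface.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Ingredients:
    """Pluggable components of a generic derivative-based iterative method."""
    direction: Callable      # computes a (descent) direction at the iterate
    acceptance: Callable     # globalization strategy: accept/reject a trial point
    step_control: Callable   # globalization mechanism: shrink step on rejection
    terminate: Callable      # stationarity / stopping test

def generic_solver(x, ing: Ingredients, max_iter=500):
    for _ in range(max_iter):
        if ing.terminate(x):
            break
        d = ing.direction(x)
        t = 1.0
        while not ing.acceptance(x, x + t * d):
            t = ing.step_control(t)
        x = x + t * d
    return x

# Filling the four slots: steepest descent with an Armijo backtracking test.
f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
g = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
ing = Ingredients(
    direction=lambda x: -g(x),
    acceptance=lambda x, y: f(y) <= f(x) - 1e-4 * np.dot(g(x), x - y),
    step_control=lambda t: 0.5 * t,
    terminate=lambda x: np.linalg.norm(g(x)) < 1e-8,
)
print(generic_solver(np.zeros(2), ing))   # -> approximately [1., -2.]
```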

Nonlinear Derivative-free Constrained Optimization with a Mixed Penalty-Logarithmic Barrier Approach and Direct Search

In this work, we propose the joint use of a mixed penalty-logarithmic barrier approach and generating set search for addressing nonlinearly constrained derivative-free optimization problems. A merit function is considered, wherein the set of inequality constraints is divided into two groups: one treated with a logarithmic barrier approach, and the other, along with the equality constraints, … Read more
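To make the construction concrete, one plausible shape for such a merit function, with the inequalities \(g_i(x) \le 0\) split into a barrier-treated group \(\mathcal{B}\) and a penalty-treated group \(\mathcal{P}\) that joins the equalities \(h_j(x) = 0\), is sketched below; the paper's exact composition and parameter updates may differ.

```latex
% Hypothetical mixed penalty-logarithmic barrier merit function:
\[
  M_{\mu,\rho}(x) \;=\; f(x)
  \;-\; \mu \sum_{i \in \mathcal{B}} \log\bigl(-g_i(x)\bigr)
  \;+\; \rho \,\Bigl( \sum_{i \in \mathcal{P}} \max\{0,\, g_i(x)\}
        \;+\; \sum_{j} \lvert h_j(x) \rvert \Bigr),
\]
% with the barrier parameter mu -> 0 and the penalty parameter rho -> infinity
% driven by the outer direct-search (generating set search) loop.
```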

Local Convergence Analysis for Nonisolated Solutions to Derivative-Free Methods of Optimization

This paper provides a local convergence analysis for newly developed derivative-free methods in problems of smooth nonconvex optimization. We focus here on local convergence to local minimizers, which might be nonisolated and hence more challenging for convergence analysis. The main results provide efficient conditions for local convergence to arbitrary local minimizers under the fulfillment of … Read more

An exact method for a class of robust nonlinear optimization problems

We introduce a novel exact approach for addressing a broad spectrum of optimization problems with robust nonlinear constraints. These constraints are defined as sums of linear times concave (SLC) functions of the uncertain parameters. Our approach synergizes a cutting set method with reformulation-perspectification techniques and branch and bound. We further extend … Read more
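The cutting set component alternates a master problem over a finite scenario set with a pessimization step that searches for a violated scenario; in the paper, reformulation-perspectification and branch and bound handle that pessimization, whereas the sketch below treats it as a black-box oracle on a toy one-dimensional uncertainty set. Function names and the SciPy solver choice are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def cutting_set(obj, g, pessimize, x0, scenarios, tol=1e-6, rounds=20):
    """Cutting-set loop for min obj(x) s.t. g(x, u) <= 0 for all u in U (sketch)."""
    x = np.asarray(x0, float)
    scenarios = list(scenarios)
    for _ in range(rounds):
        # Master problem: enforce the constraint only at the sampled scenarios.
        cons = [{"type": "ineq", "fun": (lambda x, u=u: -g(x, u))}
                for u in scenarios]
        x = minimize(obj, x, constraints=cons).x
        # Pessimization oracle: worst-case scenario for the current iterate.
        u_worst, viol = pessimize(x)
        if viol <= tol:
            break                    # robustly feasible to tolerance
        scenarios.append(u_worst)    # add the violated scenario as a cut
    return x

# Toy: max x  s.t.  u*x <= 1 for all u in [1, 2]; robust optimum is x = 1/2.
obj = lambda x: -x[0]
g = lambda x, u: u * x[0] - 1.0
pess = lambda x: (2.0, g(x, 2.0)) if x[0] >= 0 else (1.0, g(x, 1.0))
print(cutting_set(obj, g, pess, [2.0], scenarios=[1.0]))   # -> [0.5]
```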