A general merit function-based globally convergent framework for nonlinear optimization

In this paper, we revisit the convergence theory of the inexact restoration paradigm for nonlinear optimization. We first identify the basic elements of a globally convergent method based on merit functions. Then, the inexact restoration method, which employs a two-phase iteration, is introduced as a special case. A specific implementation is presented that is … Read more
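
For orientation, a typical merit function in this setting, for a model problem min f(x) subject to h(x) = 0, blends objective value and infeasibility through a parameter θ ∈ (0, 1); the sketch below is the standard form used in inexact restoration analyses and is not necessarily the exact function adopted in the paper.

```latex
% Illustrative merit function for the model problem min f(x) s.t. h(x) = 0.
% theta in (0,1) trades off optimality (f) against infeasibility (||h||).
\Phi(x,\theta) \;=\; \theta\, f(x) \;+\; (1-\theta)\,\|h(x)\| .
```

Global convergence arguments then typically require sufficient decrease of Φ along iterations, with the restoration phase driving ‖h‖ down and the optimization phase driving f down.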

The Proximal Bundle Algorithm Under a Frank-Wolfe Perspective: An Improved Complexity Analysis

The proximal bundle algorithm (PBA) is a fundamental and computationally effective algorithm for solving optimization problems with nonsmooth components. We investigate its convergence rate, focusing on composite settings where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the … Read more
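
To fix ideas, a common composite instance of this setting and the bundle subproblem solved during null steps look roughly as follows; the notation (f, g, a_i, b_i, t_k, the stability center \bar x) is assumed here for illustration and is not taken from the paper.

```latex
% Assumed composite problem: f smooth, g piecewise linear.
\min_{x}\; F(x) = f(x) + g(x), \qquad
g(x) = \max_{1 \le i \le m}\, \big( a_i^{\top} x + b_i \big),
% Proximal bundle subproblem around the stability center \bar x:
x^{+} \in \arg\min_{x}\; \check F_k(x) + \tfrac{1}{2 t_k}\,\| x - \bar x \|^{2},
```

where \check F_k is the cutting-plane (bundle) model of F built from previously visited points. The dual of this subproblem optimizes over convex weights on the accumulated cuts, which is one natural place for a Frank-Wolfe interpretation of consecutive null steps.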

Sparse Polynomial Matrix Optimization

A polynomial matrix inequality is a statement that a symmetric polynomial matrix is positive semidefinite over a given constraint set. Polynomial matrix optimization concerns minimizing the smallest eigenvalue of a symmetric polynomial matrix subject to a tuple of polynomial matrix inequalities. This work explores the use of sparsity methods in reducing the complexity of sum-of-squares … Read more
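
In symbols (the notation below is assumed for illustration), the problem class can be written as

```latex
% Polynomial matrix optimization: F, G_1, ..., G_m are symmetric matrices
% whose entries are polynomials in x = (x_1, ..., x_n).
\min_{x \in \mathbb{R}^{n}}\ \lambda_{\min}\big( F(x) \big)
\qquad \text{s.t.} \qquad G_{j}(x) \succeq 0, \quad j = 1, \dots, m .
```

Sum-of-squares relaxations certify such matrix positivity conditions with semidefinite programs, and the size of those programs is what sparsity (in the variables or in the matrix entries) is meant to reduce.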

Decision-focused predictions via pessimistic bilevel optimization: complexity and algorithms

Dealing with uncertainty in optimization parameters is an important and longstanding challenge. Typically, uncertain parameters are first predicted as accurately as possible, and then a deterministic optimization problem is solved. However, the decisions produced by this so-called predict-then-optimize procedure can be highly sensitive to the uncertain parameters. In this work, we contribute to recent efforts in producing decision-focused predictions, i.e., … Read more
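
One concrete way to write the decision-focused idea as a pessimistic bilevel program is sketched below; the notation (true cost c, predicted cost \hat c, feasible set X) is assumed for illustration and is not claimed to be the paper's exact formulation.

```latex
% Illustrative pessimistic bilevel model of decision-focused prediction.
\min_{\hat c}\ \max_{x \in S(\hat c)}\ c^{\top} x
\qquad \text{where} \qquad
S(\hat c) = \arg\min_{x \in X}\ \hat c^{\top} x .
```

The outer minimization chooses the prediction, the inner argmin is the downstream decision it induces, and the pessimistic maximization guards against ties among optimal solutions of the predicted problem.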

Unboundedness in Bilevel Optimization

Bilevel optimization has garnered growing interest over the past decade. However, little attention has been paid to detecting and dealing with unboundedness in these problems, with most research assuming a bounded high-point relaxation. In this paper, we address unboundedness in bilevel and multilevel optimization by studying its computational complexity. We show that deciding whether an … Read more
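
For context, the high-point relaxation of a bilevel program keeps the constraints of both levels but drops the lower level's optimality requirement; a generic sketch (notation assumed for illustration) is

```latex
% Bilevel program and its high-point relaxation (HPR).
\text{Bilevel:}\quad \min_{x,\,y}\ F(x,y)\ \ \text{s.t.}\ \ G(x,y) \le 0,\ \
y \in \arg\min_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\},
\\[4pt]
\text{HPR:}\quad \min_{x,\,y}\ F(x,y)\ \ \text{s.t.}\ \ G(x,y) \le 0,\ \ g(x,y) \le 0 .
```

Assuming the HPR is bounded is the common simplification that this work dispenses with.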

An analytical lower bound for a class of minimizing quadratic integer optimization problems

Lower bounds on minimization problems are essential for the convergence of both branching-based and iterative solution methods. They are also required for evaluating the quality of feasible solutions by providing conservative optimality gaps. We provide an analytical lower bound for a class of quadratic optimization problems with binary decision variables. In contrast to … Read more
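
As a toy illustration of what a cheap closed-form bound can look like, the sketch below computes a simple termwise bound for min x'Qx + c'x over binary x; it is only meant to show the idea of an analytical bound and is not the bound derived in the paper.

```python
import numpy as np

def naive_lower_bound(Q, c):
    """Cheap analytical lower bound for min x'Qx + c'x over x in {0,1}^n.

    Since x_i * x_j lies in {0, 1} and x_i^2 = x_i, each off-diagonal term
    Q[i, j] * x_i * x_j is at least min(0, Q[i, j]), and each diagonal
    contribution (Q[i, i] + c[i]) * x_i is at least min(0, Q[i, i] + c[i]).
    Summing these termwise minima gives a valid (if loose) lower bound.
    """
    Q = np.asarray(Q, dtype=float)
    c = np.asarray(c, dtype=float)
    off_diag = Q - np.diag(np.diag(Q))
    return np.minimum(off_diag, 0.0).sum() + np.minimum(np.diag(Q) + c, 0.0).sum()

# Usage: compare the bound against brute-force enumeration on a small instance.
rng = np.random.default_rng(0)
n = 8
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2
c = rng.normal(size=n)
best = min(x @ Q @ x + c @ x
           for x in (np.array(t) for t in np.ndindex(*([2] * n))))
print(naive_lower_bound(Q, c), "<=", best)
```

The difference between a feasible solution's value and such a bound gives the conservative optimality gap mentioned above.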

Exploiting Negative Curvature in Conjunction with Adaptive Sampling: Theoretical Results and a Practical Algorithm

In this paper, we propose algorithms that exploit negative curvature for solving noisy nonlinear nonconvex unconstrained optimization problems. We consider both deterministic and stochastic inexact settings, and develop two-step algorithms that combine directions of negative curvature and descent directions to update the iterates. Under reasonable assumptions, we prove second-order convergence results and derive complexity guarantees … Read more
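
A bare-bones deterministic caricature of the two-step idea is sketched below; it uses exact derivatives, fixed stepsizes, and a dense eigendecomposition, so it illustrates only the "negative-curvature step followed by a descent step" structure, not the inexact/stochastic settings or the adaptive sampling analyzed in the paper.

```python
import numpy as np

def two_step_negative_curvature(grad, hess, x, alpha=0.1, beta=0.5,
                                curv_tol=1e-6, n_iter=100):
    """Generic two-step scheme: a negative-curvature step, then a gradient step."""
    for _ in range(n_iter):
        # Step 1: move along a direction of sufficiently negative curvature,
        # with length proportional to the magnitude of that curvature.
        eigvals, eigvecs = np.linalg.eigh(hess(x))
        if eigvals[0] < -curv_tol:
            d = eigvecs[:, 0]
            if grad(x) @ d > 0:      # orient d so it is a non-ascent direction
                d = -d
            x = x + beta * abs(eigvals[0]) * d
        # Step 2: ordinary gradient descent step.
        x = x - alpha * grad(x)
    return x

# Usage: escape the saddle point of f(x, y) = x^2 - y^2 + y^4 at the origin.
grad = lambda z: np.array([2 * z[0], -2 * z[1] + 4 * z[1] ** 3])
hess = lambda z: np.array([[2.0, 0.0], [0.0, -2.0 + 12 * z[1] ** 2]])
print(two_step_negative_curvature(grad, hess, np.array([0.5, 1e-3])))
```

Starting near the saddle, the curvature step pushes the second coordinate away from zero, and the gradient steps then converge to one of the two minimizers.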

New efficient accelerated and stochastic gradient descent algorithms based on locally Lipschitz gradient constants

In this paper, we revisit the recently proposed NGD stepsize for the gradient descent scheme [Hoai et al., A novel stepsize for gradient descent method, Operations Research Letters (2024) 53, doi:10.1016/j.orl.2024.107072]. We first investigate the NGD stepsize combined with two well-known acceleration techniques, the heavy-ball and Nesterov methods. In … Read more
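
To illustrate the general idea of stepsizes built from locally estimated Lipschitz constants, the sketch below estimates a local constant from two consecutive gradients and steps with its reciprocal; this generic rule is shown for intuition only and is not claimed to be the NGD stepsize of Hoai et al.

```python
import numpy as np

def gd_local_lipschitz(grad, x0, n_iter=200, L0=1.0, eps=1e-12):
    """Gradient descent with a stepsize from a locally estimated Lipschitz constant.

    L_k is estimated from the last two iterates as
    ||grad(x_k) - grad(x_{k-1})|| / ||x_k - x_{k-1}||, and the step is 1 / (2 L_k).
    """
    x_prev, g_prev = np.asarray(x0, dtype=float), grad(x0)
    x = x_prev - (1.0 / (2.0 * L0)) * g_prev        # first step uses the initial guess L0
    for _ in range(n_iter):
        g = grad(x)
        L = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), eps)
        L = max(L, eps)                             # guard against degenerate estimates
        x_prev, g_prev = x, g
        x = x - (1.0 / (2.0 * L)) * g
    return x

# Usage: a mildly ill-conditioned quadratic with its minimizer at the origin.
A = np.diag([1.0, 2.0, 4.0])
grad = lambda x: A @ x
print(gd_local_lipschitz(grad, np.ones(3)))
```

Replacing a single global Lipschitz constant by such local estimates allows larger steps in flat regions, which is the broad motivation behind stepsizes of this kind.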

Gradient Methods with Online Scaling

We introduce a framework to accelerate the convergence of gradient-based methods with online learning. The framework learns to scale the gradient at each iteration through an online learning algorithm and provably accelerates gradient-based methods asymptotically. In contrast with previous literature, where convergence is established based on worst-case analysis, our framework provides a strong convergence guarantee … Read more
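
One simple instance of learning a scaling online is a per-coordinate hypergradient update of a diagonal preconditioner, sketched below; the specific update, stepsizes, and function names here are assumptions for illustration, not the framework or the guarantees of the paper.

```python
import numpy as np

def gd_online_diagonal_scaling(grad, x0, n_iter=500, base_lr=1e-2, meta_lr=1e-2):
    """Gradient descent whose per-coordinate scaling d is learned online."""
    x = np.asarray(x0, dtype=float)
    d = np.ones_like(x)                  # diagonal scaling, adapted as we go
    g_prev = grad(x)
    x = x - base_lr * d * g_prev
    for _ in range(n_iter):
        g = grad(x)
        # Since x_k = x_{k-1} - base_lr * d * g_{k-1}, the derivative of f(x_k)
        # with respect to d_i is -base_lr * g_{k,i} * g_{k-1,i}; a gradient step
        # on d therefore grows d_i when successive gradient components agree.
        d = np.maximum(d + meta_lr * base_lr * g * g_prev, 1e-6)
        g_prev = g
        x = x - base_lr * d * g
    return x

# Usage: badly scaled quadratic; the learned scaling adapts to the curvature.
A = np.diag([1.0, 50.0])
grad = lambda x: A @ x
print(gd_online_diagonal_scaling(grad, np.array([1.0, 1.0])))
```

Loosely, the scaling plays the role of the decision of an online learner whose loss reflects the progress of the underlying gradient method.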

A Trust-Region Algorithm for Noisy Equality Constrained Optimization

This paper introduces a modified Byrd-Omojokun (BO) trust-region algorithm to address the challenges posed by noisy function and gradient evaluations. The original BO method was designed to solve equality-constrained problems, and it forms the backbone of some interior-point methods for general large-scale constrained optimization. A key strength of the BO method is … Read more
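
For readers unfamiliar with it, the Byrd-Omojokun decomposition splits each trust-region step into a normal step that reduces the linearized constraint violation and a tangential step that reduces a quadratic model of the objective; roughly (notation assumed, constants vary across implementations):

```latex
% Byrd-Omojokun step for min f(x) s.t. c(x) = 0, with
% g_k = \nabla f(x_k), A_k the Jacobian of c at x_k, H_k a Hessian approximation.
v_k \approx \arg\min_{v}\ \|c_k + A_k v\|^{2}
      \quad \text{s.t.}\quad \|v\| \le \zeta \Delta_k, \qquad \zeta \in (0,1),
\\[4pt]
d_k \approx \arg\min_{d}\ g_k^{\top} d + \tfrac12\, d^{\top} H_k d
      \quad \text{s.t.}\quad A_k d = A_k v_k, \quad \|d\| \le \Delta_k .
```

Noise in the evaluations of f, c, and their derivatives perturbs both subproblems and the step acceptance test, which is what a noise-aware modification has to control.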