On liftings that improve convergence properties of Newton’s Method for Boundary Value Optimization Problems

The representation of a function in a higher-dimensional space is often referred to as lifting. Liftings can be used to reduce complexity. We are interested in the question of how liftings affect the local convergence of Newton’s method. We propose algorithms to construct liftings that potentially reduce the number of iterations via analysis of local … Read more
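As a toy illustration of the lifting idea only, and not of the constructions proposed in the paper, the sketch below solves the scalar equation x^3 = 2 twice with plain Newton iterations: once directly, and once in a lifted formulation that introduces an auxiliary variable y = x^2 so that both residuals become quadratic. The tolerances and starting points are illustrative choices.

```python
import numpy as np

def newton(F, J, z0, tol=1e-10, max_iter=50):
    """Plain Newton iteration z <- z - J(z)^{-1} F(z); returns the iterate and iteration count."""
    z = np.atleast_1d(np.asarray(z0, dtype=float))
    for k in range(max_iter):
        step = np.linalg.solve(np.atleast_2d(J(z)), np.atleast_1d(F(z)))
        z = z - step
        if np.linalg.norm(step) < tol:
            return z, k + 1
    return z, max_iter

# Original scalar problem: f(x) = x^3 - 2 = 0.
f  = lambda x: np.array([x[0]**3 - 2.0])
df = lambda x: np.array([[3.0 * x[0]**2]])

# Lifted formulation with auxiliary variable y = x^2:
#   r1(x, y) = y - x^2 = 0,  r2(x, y) = x*y - 2 = 0.
F = lambda z: np.array([z[1] - z[0]**2, z[0] * z[1] - 2.0])
J = lambda z: np.array([[-2.0 * z[0], 1.0],
                        [z[1],        z[0]]])

x_plain, it_plain  = newton(f, df, [1.0])
z_lift,  it_lifted = newton(F, J, [1.0, 1.0])
print("iterations (plain, lifted):", it_plain, it_lifted, "root:", x_plain[0], z_lift[0])
```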

A Sound Local Regret Methodology for Online Nonconvex Composite Optimization

Online nonconvex optimization addresses dynamic and complex decision-making problems arising in real-world tasks where the optimizer’s objective evolves with the intricate and changing nature of the underlying system. This paper studies an online nonconvex composite optimization model with limited first-order access, encompassing a wide range of practical scenarios. We define local regret using a … Read more
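The paper’s precise regret notion is truncated above; a common choice in the online nonconvex literature (e.g. the sliding-window local regret of Hazan, Singh and Zhang, 2017) accumulates the squared norms of window-averaged gradients along the played iterates. A minimal sketch under that assumption, with synthetic smooth nonconvex losses:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, w, eta = 5, 200, 10, 0.1   # dimension, rounds, window length, stepsize (all illustrative)

# A stream of smooth nonconvex losses f_t(x) = <c_t, x> + sin(<a_t, x>).
losses = [(rng.normal(size=d) * 0.1, rng.normal(size=d)) for _ in range(T)]

def grad_f(t, x):
    c, a = losses[t]
    return c + np.cos(a @ x) * a

x, regret = np.zeros(d), 0.0
for t in range(T):
    # window-averaged gradient of the last w losses, all evaluated at the current iterate x_t
    lo = max(0, t - w + 1)
    g_avg = np.mean([grad_f(s, x) for s in range(lo, t + 1)], axis=0)
    regret += float(g_avg @ g_avg)      # local regret accumulates ||averaged gradient||^2
    x = x - eta * grad_f(t, x)          # plain online gradient step (first-order access only)
print("sliding-window local regret:", regret)
```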

An Interior-Point Algorithm for Continuous Nonlinearly Constrained Optimization with Noisy Function and Derivative Evaluations

An algorithm based on the interior-point methodology for solving continuous nonlinearly constrained optimization problems is proposed, analyzed, and tested. The distinguishing feature of the algorithm is that it presumes that only noisy values of the objective and constraint functions and their first-order derivatives are available. The algorithm is based on a combination of a previously … Read more
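A minimal sketch of the broader setting, not of the proposed algorithm: a log-barrier (interior-point) loop in which every objective, gradient and constraint evaluation is corrupted by additive noise. The toy problem, noise level and stepsizes are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1e-3   # magnitude of the (hypothetical) evaluation noise

# Toy problem: minimize f(x) = ||x - (2, 2)||^2  subject to  c(x) = 1 - x1 - x2 >= 0.
def noisy_grad_f(x):
    return 2.0 * (x - 2.0) + rng.normal(scale=sigma, size=2)

def noisy_c(x):
    return 1.0 - x[0] - x[1] + rng.normal(scale=sigma)

grad_c = np.array([-1.0, -1.0])           # constraint gradient (kept exact here for brevity)

x, mu = np.array([0.1, 0.1]), 1.0
for _ in range(3):                        # shrink the barrier parameter geometrically
    eta = 0.02 * mu                       # smaller steps as the barrier sharpens
    for _ in range(200):                  # noisy gradient steps on f(x) - mu*log(c(x))
        g = noisy_grad_f(x) - (mu / max(noisy_c(x), 1e-8)) * grad_c
        x = x - eta * g
    mu *= 0.2
print("approximate solution:", x)         # the exact constrained optimum is (0.5, 0.5)
```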

Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent

This paper investigates the convergence properties of the hypergradient descent method (HDM), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of HDM using the online learning framework of [Gao24] and apply this analysis to develop new state-of-the-art adaptive gradient methods with empirical and … Read more
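For reference, the classic hypergradient rule adapts the stepsize itself by a gradient step on the most recent progress, which reduces to alpha <- alpha + beta * <g_t, g_{t-1}>. The sketch below shows that old heuristic on a toy quadratic; it is not the specific variants analyzed in the paper, and the constants are illustrative.

```python
import numpy as np

def hypergradient_descent(grad, x0, alpha0=1e-3, beta=1e-3, iters=500):
    """Gradient descent whose stepsize is adapted by the classic hypergradient rule
    alpha <- alpha + beta * <g_t, g_{t-1}> (a minimal sketch of the heuristic)."""
    x, alpha = np.asarray(x0, dtype=float), alpha0
    g_prev = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        alpha += beta * float(g @ g_prev)   # hypergradient update of the stepsize
        x = x - alpha * g
        g_prev = g
    return x, alpha

# Toy quadratic f(x) = 0.5 * x^T diag(1, 2) x; a fixed stepsize of 1e-3 would barely move
# in 500 iterations, while the adapted stepsize grows automatically.
H = np.diag([1.0, 2.0])
x_final, alpha_final = hypergradient_descent(lambda x: H @ x, x0=[1.0, 1.0])
print("final ||x||:", np.linalg.norm(x_final), "adapted stepsize:", alpha_final)
```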

An Augmented Lagrangian Approach to Bi-Level Optimization via an Equilibrium Constrained Problem

Optimization problems involving equilibrium constraints capture diverse optimization settings such as bi-level optimization, min-max problems and games, and minimization subject to nonlinear constraints. This paper introduces an Augmented Lagrangian approach with Hessian-vector product approximation to address an equilibrium-constrained nonconvex nonsmooth optimization problem. The underlying model in particular captures various settings of bi-level optimization problems, … Read more
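A minimal sketch of the generic augmented Lagrangian template for an equilibrium constraint h(x, y) = 0 arising from lower-level stationarity, shown on a toy quadratic bi-level problem. The penalty parameter, stepsizes and the closed-form constraint Jacobian are illustrative assumptions; in general that Jacobian acts through Hessian-vector products of the lower-level objective.

```python
import numpy as np

# Toy bi-level problem written with an equilibrium (lower-level stationarity) constraint:
#   min_{x,y} f(x,y) = (x - 1)^2 + y^2   s.t.   h(x,y) := d/dy (y - x)^2 = 2*(y - x) = 0.
def f_grad(x, y):
    return np.array([2.0 * (x - 1.0), 2.0 * y])

def h(x, y):
    return 2.0 * (y - x)

h_jac = np.array([-2.0, 2.0])   # dh/d(x,y); for general lower levels this acts via Hessian-vector products

x, y, lam, rho = 0.0, 0.0, 0.0, 5.0
for _ in range(10):
    for _ in range(500):                      # approximately minimize the augmented Lagrangian in (x, y)
        g = f_grad(x, y) + (lam + rho * h(x, y)) * h_jac
        x, y = x - 0.01 * g[0], y - 0.01 * g[1]
    lam += rho * h(x, y)                      # multiplier update on the equilibrium residual
print(x, y, h(x, y))                          # expected solution: x = y = 0.5 with h = 0
```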

Convergence of Descent Optimization Algorithms under Polyak-Lojasiewicz-Kurdyka Conditions

This paper develops a comprehensive convergence analysis for generic classes of descent algorithms in nonsmooth and nonconvex optimization under several conditions of the Polyak-Lojasiewicz-Kurdyka (PLK) type. Among other results, we prove the finite termination of generic algorithms under the PLK conditions with lower exponents. Specifications are given to establish new convergence rates for inexact reduced … Read more
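For context, the classical Polyak-Lojasiewicz inequality is the exponent-1/2 member of this family; for an L-smooth function it already yields a linear rate for gradient descent with stepsize 1/L, by combining the descent lemma with the PL bound:

```latex
\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\bigl(f(x)-f^{*}\bigr)
  \quad\Longrightarrow\quad
  f(x_{k+1})-f^{*} \;\le\; f(x_{k})-f^{*}-\tfrac{1}{2L}\,\|\nabla f(x_{k})\|^{2}
  \;\le\; \Bigl(1-\tfrac{\mu}{L}\Bigr)\bigl(f(x_{k})-f^{*}\bigr).
\]
```

The finite-termination results mentioned in the abstract concern PLK conditions with lower exponents, which go beyond this classical geometric bound.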

prunAdag: an adaptive pruning-aware gradient method

A pruning-aware adaptive gradient method is proposed which classifies the variables into two sets before updating them using different strategies. This technique extends the “relevant/irrelevant” approach of Ding (2019) and Zimmer et al. (2022) and allows a posteriori sparsification of the solution of model parameter fitting problems. The new method is proved to be convergent … Read more
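A hypothetical sketch of the relevant/irrelevant idea, not the paper’s exact classification or update rule: variables whose magnitude falls below a threshold are treated as irrelevant and receive an additionally damped Adagrad-style step, so the final iterate can be thresholded to a sparse solution. All constants below are illustrative.

```python
import numpy as np

def pruning_aware_step(x, grad, accum, lr=0.1, tau=1e-2, damp=0.1, eps=1e-8):
    """One hypothetical pruning-aware Adagrad-style step: 'irrelevant' (small-magnitude)
    variables are driven toward zero with an extra multiplicative damping."""
    accum += grad**2                                   # standard Adagrad accumulator
    step = lr * grad / (np.sqrt(accum) + eps)
    relevant = np.abs(x) > tau                         # magnitude-based classification
    x_new = np.where(relevant, x - step, (1.0 - damp) * (x - step))
    return x_new, accum

# Usage on a toy least-squares problem whose ground truth has 3 nonzeros.
rng = np.random.default_rng(0)
A, x_true = rng.normal(size=(50, 20)), np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true

x, accum = np.zeros(20), np.zeros(20)
for _ in range(2000):
    x, accum = pruning_aware_step(x, A.T @ (A @ x - b) / len(b), accum)
print("nonzeros after thresholding:", np.sum(np.abs(x) > 1e-2))
```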

A necessary condition for the guarantee of the superiorization method

We study a method whose principal activity is convex feasibility-seeking and which makes secondary efforts toward objective function value reduction. This is the well-known superiorization method (SM), where the iterates of an asymptotically convergent iterative feasibility-seeking algorithm are perturbed by objective function nonascent steps. We investigate under what conditions a sequence generated by an SM … Read more
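A small sketch of the superiorization pattern under illustrative choices (two convex sets, objective f(x) = ||x||^2, geometrically decaying perturbation sizes): each feasibility-seeking sweep of projections is preceded by a summable nonascent perturbation of the objective.

```python
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):               # projection onto {x : a.x <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

# Feasible set: a ball intersected with a half-space; objective f(x) = ||x||^2.
a, b = np.array([1.0, 1.0]), 3.0
center, radius = np.array([2.0, 0.0]), 2.0

x, beta = np.array([5.0, 5.0]), 1.0
for _ in range(50):
    g = 2.0 * x                                       # gradient of f; -g is a nonascent direction
    if np.linalg.norm(g) > 0:
        x = x - beta * g / np.linalg.norm(g)          # objective-reducing perturbation
    beta *= 0.7                                       # summable perturbation sizes
    x = project_halfspace(project_ball(x, center, radius), a, b)   # feasibility-seeking sweep
print("final point:", x, "objective:", float(x @ x))  # the minimum-norm feasible point is the origin
```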

Newtonian Methods with Wolfe Linesearch in Nonsmooth Optimization and Machine Learning

This paper introduces and develops coderivative-based Newton methods with Wolfe linesearch conditions to solve various classes of problems in nonsmooth optimization and machine learning. We first propose a generalized regularized Newton method with Wolfe linesearch (GRNM-W) for unconstrained $C^{1,1}$ minimization problems (which are second-order nonsmooth) and establish global as well as local superlinear convergence of … Read more
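The coderivative-based generalized Hessians used in the paper do not fit a short snippet; the sketch below only shows the algorithmic skeleton they plug into, namely a regularized Newton direction combined with a weak-Wolfe linesearch, run on the smooth Rosenbrock function with illustrative constants.

```python
import numpy as np

def wolfe_linesearch(f, grad, x, d, c1=1e-4, c2=0.9, t=1.0, max_bisect=50):
    """Weak Wolfe conditions via standard bracketing/bisection."""
    lo, hi = 0.0, np.inf
    fx, gx = f(x), grad(x) @ d
    for _ in range(max_bisect):
        if f(x + t * d) > fx + c1 * t * gx:          # Armijo fails -> shrink
            hi = t
        elif grad(x + t * d) @ d < c2 * gx:          # curvature fails -> grow
            lo = t
        else:
            return t
        t = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return t

def regularized_newton(f, grad, hess, x0, reg=1e-6, tol=1e-8, max_iter=100):
    """Regularized Newton direction d = -(H + reg*I)^{-1} g with a Wolfe linesearch;
    a plain-vanilla sketch, whereas the paper uses coderivative-based generalized Hessians."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -np.linalg.solve(hess(x) + reg * np.eye(x.size), g)
        if g @ d >= 0:                               # safeguard if the regularized matrix is not PD enough
            d = -g
        x = x + wolfe_linesearch(f, grad, x, d) * d
    return x

# Example: the Rosenbrock function, whose minimizer is (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2), 200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]], [-400*x[0], 200.0]])
print(regularized_newton(f, grad, hess, [-1.2, 1.0]))
```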

A Decomposition Framework for Nonlinear Nonconvex Two-Stage Optimization

We propose a new decomposition framework for continuous nonlinear constrained two-stage optimization, where both first- and second-stage problems can be nonconvex. A smoothing technique based on an interior-point formulation renders the optimal solution of the second-stage problem differentiable with respect to the first-stage parameters. As a consequence, efficient off-the-shelf optimization packages can be utilized. We … Read more
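A minimal sketch of the smoothing idea on a one-dimensional toy problem, not the framework itself: the second-stage constraint y >= 0 is replaced by a log-barrier, which makes the second-stage value function differentiable in the first-stage variable, and its derivative follows from the envelope theorem. The barrier parameter, stepsizes and the closed-form inner solve are illustrative assumptions.

```python
import numpy as np

mu = 1e-3   # barrier parameter; smaller values give a tighter smooth approximation

def second_stage(x):
    """Barrier-smoothed second stage: min_{y>0} (y - x)^2 - mu*log(y).
    For this toy problem the inner solution is available in closed form."""
    y = 0.5 * (x + np.sqrt(x**2 + 2.0 * mu))
    value = (y - x)**2 - mu * np.log(y)
    dvalue_dx = -2.0 * (y - x)           # envelope theorem: differentiate at the fixed inner solution
    return value, dvalue_dx

# First stage: min_x (x + 1)^2 + Q_mu(x); the unsmoothed two-stage problem has solution x = -0.5.
x = 1.0
for _ in range(500):
    _, dQ = second_stage(x)
    x -= 0.05 * (2.0 * (x + 1.0) + dQ)   # gradient step on the smoothed two-stage objective
print("first-stage decision:", x)        # approaches -0.5 as mu -> 0
```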