GFORS: GPU-Accelerated First-Order Method with Randomized Sampling for Binary Integer Programs

We present GFORS, a GPU-accelerated framework for large binary integer programs. It couples a first-order (PDHG-style) routine that guides the search in the continuous relaxation with a randomized, feasibility-aware sampling module that generates batched binary candidates. Both components are designed to run end-to-end on GPUs with minimal CPU–GPU synchronization. The framework establishes near-stationary-point guarantees for …
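
Since only the opening of the abstract appears here, the following NumPy sketch is an illustration of the two phases it names rather than the authors' GPU implementation: a PDHG loop on the \([0,1]\)-box relaxation of a toy instance \(\min c^\top x\) s.t. \(Ax \le b\), \(x \in \{0,1\}^n\), followed by Bernoulli sampling of binary candidates. The instance, the fixed step sizes, and the sampling rule are all assumptions made for the example.

```python
import numpy as np

def gfors_sketch(c, A, b, iters=500, batch=256, tau=0.01, sigma=0.01, seed=0):
    """Toy two-phase sketch (assumed, not the paper's code): PDHG on the
    [0,1]-box relaxation of min c@x s.t. A@x <= b, then Bernoulli sampling
    of binary candidates around the relaxed point."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.full(n, 0.5)              # primal iterate in [0,1]^n
    y = np.zeros(m)                  # dual multipliers for A@x <= b
    for _ in range(iters):
        x_new = np.clip(x - tau * (c + A.T @ y), 0.0, 1.0)          # projected primal step
        y = np.maximum(0.0, y + sigma * (A @ (2 * x_new - x) - b))  # extrapolated dual step
        x = x_new
    # Phase 2: treat the relaxed solution as Bernoulli probabilities and
    # batch-sample binary candidates; keep the best feasible one.
    cands = (rng.random((batch, n)) < x).astype(float)
    feas = np.all(cands @ A.T <= b + 1e-9, axis=1)
    if not feas.any():
        return None, x
    best = cands[feas][np.argmin(cands[feas] @ c)]
    return best, x

# Tiny instance: minimize -x1 - x2 subject to x1 + x2 <= 1, x binary.
best, relaxed = gfors_sketch(np.array([-1.0, -1.0]),
                             np.array([[1.0, 1.0]]), np.array([1.0]))
print(best, relaxed)
```

The batched sampling and feasibility checks are embarrassingly parallel, which is presumably why the abstract emphasizes running both phases on-device with little CPU–GPU synchronization.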

A guided tour through the zoo of paired optimization problems

Many mathematical models are based on the coupling of two or more optimization problems. This paper surveys ways to couple two optimization problems and discusses how the solutions of the different models are interrelated. The considered pairs stem from the fields of standard and generalized Nash equilibrium problems, optimistic and pessimistic bilevel problems, saddle …

blockSQP 2: exploiting structure to improve performance

One approach to solving optimal control problems is Bock’s direct multiple shooting method. This method yields lifted nonlinear optimization problems (NLPs) with a specific block structure. Exploiting this structure via tailored optimization algorithms can be computationally beneficial. We propose such methods, primarily within the framework of filter line search sequential quadratic …

A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization

We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of \(\mathcal{O}\!\left(n\left(\frac{L}{\sigma}\epsilon\right)^{-2}\right)\) function evaluations for the method to reach an \(\left(\frac{L}{\sigma}\epsilon\right)\)-approximate stationary point, where \(n\) is the …
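
As a rough illustration of the main ingredients (and not the paper's algorithm), the sketch below combines a forward-difference gradient, which costs \(n\) extra function evaluations per iteration and is the source of the factor \(n\) in the stated bound, with a projected trust-region-style step for a convexly constrained problem. The acceptance rule and radius updates are simplified assumptions.

```python
import numpy as np

def fd_grad(f, x, h=1e-6):
    """Forward-difference gradient: n extra function evaluations per call."""
    fx, g = f(x), np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def projected_tr_step(f, x, proj, delta):
    """One simplified trust-region-style iteration: step along the negative
    finite-difference gradient, project onto the convex feasible set, and
    grow or shrink the radius based on actual decrease. (The paper's method
    is more refined; like it, this computes no stationarity measure.)"""
    step = proj(x - delta * fd_grad(f, x)) - x
    if np.linalg.norm(step) < 1e-10:
        return x, delta                       # no feasible progress available
    if f(x + step) < f(x):
        return x + step, 2.0 * delta          # accept and expand
    return x, 0.5 * delta                     # reject and contract

# Minimize (x - 2)^2 over the box [0, 1]; the minimizer is the boundary point 1.
f = lambda x: float((x[0] - 2.0) ** 2)
proj = lambda z: np.clip(z, 0.0, 1.0)
x, delta = np.array([0.2]), 1.0
for _ in range(50):
    x, delta = projected_tr_step(f, x, proj, delta)
print(x)  # -> [1.]
```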

An Inexact General Descent Method with Applications in Differential Equation-Constrained Optimization

In many applications, gradient evaluations are inherently approximate, motivating the development of optimization methods that remain reliable under inexact first-order information. A common strategy in this context is adaptive evaluation, whereby coarse gradients are used in early iterations and refined near a minimizer. This is particularly relevant in differential equation–constrained optimization (DECO), where discrete adjoint …
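
A minimal sketch of the adaptive-evaluation idea, under the assumption of a hypothetical oracle `inexact_grad(x, acc)` that returns a gradient with error at most `acc` (standing in for, e.g., a discrete adjoint solve on a mesh chosen to meet that accuracy):

```python
import numpy as np

def adaptive_descent(inexact_grad, x, tol=1e-6, eta=0.1, max_iter=500):
    """Sketch of adaptive evaluation: request gradient accuracy proportional
    to the current gradient norm, so coarse (cheap) gradients are used early
    and refined ones only near a minimizer."""
    acc = 1.0                                # start with a coarse gradient
    for _ in range(max_iter):
        g = inexact_grad(x, acc)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol and acc <= tol:
            break
        x = x - eta * g                      # plain descent step for simplicity
        acc = max(0.1 * gnorm, 1e-12)        # tighten accuracy as we converge
    return x

# Hypothetical oracle: exact gradient of f(x) = ||x||^2 plus an error of size
# acc, standing in for an adjoint solve at the requested accuracy.
rng = np.random.default_rng(0)
def noisy_grad(x, acc):
    u = rng.standard_normal(x.size)
    return 2.0 * x + acc * u / np.linalg.norm(u)

print(adaptive_descent(noisy_grad, np.ones(5)))  # -> approximately zero
```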

A Simple First-Order Algorithm for Full-Rank Equality Constrained Optimization

A very simple first-order algorithm is proposed for solving nonlinear optimization problems with deterministic nonlinear equality constraints. This algorithm adaptively selects steps in the plane tangent to the constraints or steps that reduce infeasibility, without using a merit function or a filter. The tangent steps are based on the AdaGrad method for unconstrained minimization. The …
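
The following is a loose sketch, not the paper's method: it alternates a Gauss-Newton-style infeasibility-reducing step with an AdaGrad-scaled gradient step projected onto the tangent plane of the constraints. The switching rule, step size, and pseudoinverse-based correction are assumptions for this toy example.

```python
import numpy as np

def simple_fo_equality(f_grad, c, c_jac, x, iters=300, eta=0.5, feas_tol=1e-3):
    """Loose sketch for min f(x) s.t. c(x) = 0 with full-rank Jacobian J:
    a least-squares correction when infeasibility is large, otherwise an
    AdaGrad-scaled gradient step projected onto the tangent plane
    {d : J d = 0}. No merit function or filter is used."""
    G = np.zeros_like(x)                      # AdaGrad accumulator
    for _ in range(iters):
        J, cx = c_jac(x), c(x)
        if np.linalg.norm(cx) > feas_tol:
            x = x - np.linalg.pinv(J) @ cx    # infeasibility-reducing step
            continue
        g = f_grad(x)
        G += g * g
        d = g / (np.sqrt(G) + 1e-12)          # AdaGrad per-coordinate scaling
        d = d - J.T @ np.linalg.solve(J @ J.T, J @ d)  # tangent projection
        x = x - eta * d
    return x

# Minimize x1^2 + x2^2 subject to x1 + x2 = 1; the solution is (0.5, 0.5).
f_grad = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac = lambda x: np.array([[1.0, 1.0]])
print(simple_fo_equality(f_grad, c, c_jac, np.array([2.0, -3.0])))
```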

Optimization in Theory and Practice

Algorithms for continuous optimization problems have a rich history of design and innovation over the past several decades, in which mathematical analysis of their convergence and complexity properties plays a central role. Besides their theoretical properties, optimization algorithms are also interesting for their practical usefulness as computational tools for solving real-world problems. There are often …

Adaptive Conditional Gradient Descent

Selecting an effective step-size is a fundamental challenge in first-order optimization, especially for problems with non-Euclidean geometries. This paper presents a novel adaptive step-size strategy for optimization algorithms that rely on linear minimization oracles, as used in the Conditional Gradient or non-Euclidean Normalized Steepest Descent algorithms. Using a simple heuristic to estimate a local Lipschitz …
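
A sketch of one common way to realize such a step size (assumed here; the paper's exact rule may differ): maintain a local Lipschitz estimate \(L\), decrease it optimistically each iteration, and double it until a quadratic upper model holds along the Frank-Wolfe direction.

```python
import numpy as np

def adaptive_fw(f, grad, lmo, x, L=1.0, iters=100):
    """Conditional gradient with an adaptive step size from a local Lipschitz
    estimate L: halve L optimistically, then double it until the quadratic
    upper model holds along the Frank-Wolfe direction (a common backtracking
    scheme, used here as a stand-in for the paper's rule)."""
    for _ in range(iters):
        g = grad(x)
        d = lmo(g) - x                     # direction toward the LMO vertex
        gap = -g @ d                       # Frank-Wolfe gap, nonnegative
        if gap <= 1e-12:
            break
        L = max(L / 2.0, 1e-12)            # optimistic decrease
        while True:
            gamma = min(1.0, gap / (L * (d @ d)))
            # Accept gamma once f(x + gamma d) sits below the quadratic model.
            if f(x + gamma * d) <= (f(x) - gamma * gap
                                    + 0.5 * L * gamma**2 * (d @ d) + 1e-12):
                break
            L *= 2.0                       # estimate too small: backtrack
        x = x + gamma * d
    return x

# Project p onto the probability simplex by minimizing ||x - p||^2; the LMO
# over the simplex returns the vertex with the smallest gradient entry.
p = np.array([0.2, 0.5, 0.3])
def lmo(g):
    e = np.zeros_like(g)
    e[np.argmin(g)] = 1.0
    return e
x = adaptive_fw(lambda x: float((x - p) @ (x - p)),
                lambda x: 2.0 * (x - p), lmo, np.array([1.0, 0.0, 0.0]))
print(x)  # -> approximately p, since p lies in the simplex
```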