Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems

We present an effective new method for solving trust-region and norm-regularization problems that arise as subproblems in many optimization applications. We show that, as a function of their controlling parameters (trust-region radius or regularization weight), the solutions to such subproblems lie on a manifold of approximately very low rank. Based on this, we build a … Read more
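For orientation (these are standard textbook formulations, not quoted from the paper), the two subproblem families in question are the trust-region subproblem and its norm-regularized counterpart,
\[
\min_{\|s\| \le \Delta} \; g^{\top}s + \tfrac{1}{2}\, s^{\top} H s
\qquad \text{and} \qquad
\min_{s} \; g^{\top}s + \tfrac{1}{2}\, s^{\top} H s + \tfrac{\sigma}{p}\,\|s\|^{p}, \quad p > 2,
\]
where the trust-region radius \(\Delta\) and the regularization weight \(\sigma\) are exactly the controlling parameters referred to above.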

Preconditioned subgradient method for composite optimization: overparameterization and fast convergence

Composite optimization problems involve minimizing the composition of a smooth map with a convex function. Such objectives arise in numerous data science and signal processing applications, including phase retrieval, blind deconvolution, and collaborative filtering. The subgradient method achieves local linear convergence when the composite loss is well-conditioned. However, if the smooth map is, in a … Read more
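As background (generic notation; the symbols \(h\) and \(c\) are ours, not necessarily the paper's), such a composite problem reads
\[
\min_{x \in \mathbb{R}^d} \; f(x) = h\bigl(c(x)\bigr), \qquad c \text{ smooth}, \quad h \text{ convex},
\]
and the subgradient method takes steps \(x_{k+1} = x_k - \alpha_k v_k\) with \(v_k \in \partial f(x_k)\). Phase retrieval is a typical instance, with \(c_i(x) = (a_i^{\top} x)^2 - b_i\) and \(h(z) = \frac{1}{m}\|z\|_1\).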

Lyapunov-based Analysis on First Order Method for Composite Strong-Weak Convex Functions

Nesterov’s accelerated gradient (NAG) method improves upon the classical gradient descent algorithm, raising the convergence rate from $\mathcal{O}\left(\frac{1}{t}\right)$ to $\mathcal{O}\left(\frac{1}{t^2}\right)$ in convex optimization. This study examines the proximal gradient framework for additively separable composite functions with smooth and non-smooth components. We demonstrate that Nesterov’s accelerated proximal gradient (NAPG$_\alpha$) method attains a convergence rate of … Read more
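For reference, a generic accelerated proximal gradient scheme for $F(x) = f(x) + g(x)$, with $f$ smooth and $g$ non-smooth but proximable, iterates as follows (this is the standard FISTA-type template, given as background rather than the exact NAPG$_\alpha$ update):
\[
x_{k} = \operatorname{prox}_{\alpha g}\!\bigl(y_{k} - \alpha \nabla f(y_{k})\bigr),
\qquad
y_{k+1} = x_{k} + \beta_{k}\,(x_{k} - x_{k-1}),
\]
where $\alpha$ is the step size and $\beta_k \in [0,1)$ is the momentum weight; for convex $F$ this template recovers the $\mathcal{O}\left(\frac{1}{t^2}\right)$ rate mentioned above.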

An Exact Penalty Method for Stochastic Equality-Constrained Optimization

In this paper, we study a penalty method for stochastic equality-constrained optimization, where both the objective and constraints are expressed in general expectation form. We introduce a novel adaptive strategy for updating the penalty parameter, guided by iteration progress to balance reductions in the penalty function against reductions in constraint violation, while each penalty subproblem … Read more
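For context (a standard construction, not quoted from the paper): with the problem written as $\min_x f(x)$ subject to $c(x) = 0$, where $f(x) = \mathbb{E}[F(x,\xi)]$ and $c(x) = \mathbb{E}[C(x,\xi)]$, an exact penalty function typically takes the non-smooth form
\[
\phi(x;\rho) = f(x) + \rho\,\|c(x)\|,
\]
and $\rho > 0$ is the penalty parameter that the adaptive strategy above is designed to update.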

GFORS: GPU-Accelerated First-Order Method with Randomized Sampling for Binary Integer Programs

We present GFORS, a GPU-accelerated framework for large binary integer programs. It couples a first-order (PDHG-style) routine that guides the search in the continuous relaxation with a randomized, feasibility-aware sampling module that generates batched binary candidates. Both components are designed to run end-to-end on GPUs with minimal CPU–GPU synchronization. The framework establishes near-stationary-point guarantees for … Read more
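Purely as an illustration of the sampling idea (a generic NumPy sketch of batched randomized rounding with a placeholder feasibility filter, not the GFORS module itself):

    import numpy as np

    def sample_binary_candidates(x_relaxed, num_samples, rng=None):
        """Draw batched binary candidates from a fractional relaxation solution.

        Each coordinate is rounded to 1 independently with probability equal to
        its relaxed value, producing num_samples candidate vectors at once.
        """
        rng = np.random.default_rng() if rng is None else rng
        p = np.clip(x_relaxed, 0.0, 1.0)          # fractional solution in [0, 1]^n
        u = rng.random((num_samples, p.size))     # one uniform draw per entry
        return (u < p).astype(np.int8)            # batched Bernoulli rounding

    def keep_feasible(candidates, A, b):
        """Keep only candidates satisfying A x <= b (simple feasibility filter)."""
        ok = (candidates @ A.T <= b).all(axis=1)
        return candidates[ok]

On a GPU the same pattern carries over essentially unchanged to array libraries such as CuPy or PyTorch, which is what makes batched candidate generation cheap to keep on-device.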

A guided tour through the zoo of paired optimization problems

Many mathematical models are based on the coupling of two or more optimization problems. This paper surveys ways to couple two optimization problems and discusses how the solutions of the different models are interrelated. The considered pairs stem from the fields of standard and generalized Nash equilibrium problems, optimistic and pessimistic bilevel problems, saddle … Read more
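As a simple illustration of such a pairing (our example, not drawn from the paper), a two-player Nash equilibrium problem couples
\[
\min_{x_1} \; f_1(x_1, x_2) \qquad \text{and} \qquad \min_{x_2} \; f_2(x_1, x_2),
\]
each problem parameterized by the other player's decision variable, while a bilevel problem instead nests one of the two problems inside the constraints of the other.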

blockSQP 2: exploiting structure to improve performance

One approach to solving optimal control problems is Bock’s direct multiple shooting method. This method yields lifted nonlinear optimization problems (NLPs) with a specific block structure. Exploiting this structure via tailored optimization algorithms can be computationally beneficial. We propose such methods, primarily within the framework of filter line search sequential quadratic … Read more
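In outline (generic multiple-shooting notation, not the paper's exact formulation), the lifted NLP carries one variable block per shooting interval, tied together by matching constraints:
\[
\min_{s_0,\dots,s_N,\; q_0,\dots,q_{N-1}} \; \sum_{i=0}^{N-1} \ell_i(s_i, q_i) + E(s_N)
\quad \text{s.t.} \quad s_{i+1} = \Phi_i(s_i, q_i), \quad i = 0,\dots,N-1,
\]
which gives the Lagrangian Hessian and the constraint Jacobian the block-banded sparsity that tailored SQP methods can exploit.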

A Finite-Difference Trust-Region Method for Convexly Constrained Smooth Optimization

We propose a derivative-free trust-region method based on finite-difference gradient approximations for smooth optimization problems with convex constraints. The proposed method does not require computing an approximate stationarity measure. For nonconvex problems, we establish a worst-case complexity bound of \(\mathcal{O}\!\left(n\left(\frac{L}{\sigma}\epsilon\right)^{-2}\right)\) function evaluations for the method to reach an \(\left(\frac{L}{\sigma}\epsilon\right)\)-approximate stationary point, where \(n\) is the … Read more
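The factor \(n\) in the bound reflects the cost of the finite-difference gradient estimate, e.g. the forward-difference formula (standard, shown here only for context):
\[
[\hat{g}(x)]_i = \frac{f(x + h e_i) - f(x)}{h}, \qquad i = 1,\dots,n,
\]
so each gradient approximation consumes roughly \(n\) additional function evaluations beyond the base point, one per coordinate direction.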

An Inexact General Descent Method with Applications in Differential Equation-Constrained Optimization

In many applications, gradient evaluations are inherently approximate, motivating the development of optimization methods that remain reliable under inexact first-order information. A common strategy in this context is adaptive evaluation, whereby coarse gradients are used in early iterations and refined near a minimizer. This is particularly relevant in differential equation–constrained optimization (DECO), where discrete adjoint … Read more
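A minimal sketch of the adaptive-evaluation idea described above, under our own simplifying assumptions: the hypothetical oracle approx_gradient(x, tol) stands in for an inexact gradient (e.g. from a coarsely discretized adjoint solve) whose error is at most tol, and the tolerance is tightened only as the iterates approach stationarity.

    import numpy as np

    def inexact_descent(x0, approx_gradient, step=1.0, tol0=1e-1, tol_min=1e-8, max_iter=100):
        """Gradient descent that tightens the gradient accuracy adaptively.

        approx_gradient(x, tol) is assumed to return a gradient estimate whose
        error is at most tol (for instance from a coarse vs. refined adjoint solve).
        """
        x, tol = np.asarray(x0, dtype=float), tol0
        for _ in range(max_iter):
            g = approx_gradient(x, tol)
            x = x - step * g
            # Refine the gradient only once its norm is comparable to the current
            # accuracy, so cheap coarse gradients are used far from a minimizer.
            if np.linalg.norm(g) <= 10.0 * tol:
                tol = max(0.1 * tol, tol_min)
        return x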