On Solving Chance-Constrained Models with Gaussian Mixture Distribution

We study linear chance-constrained problems where the coefficients follow a Gaussian mixture distribution. We provide mixed-binary quadratic programs that give inner and outer approximations of the chance constraint based on piecewise linear approximations of the standard normal cumulative distribution function. We show that $O\left(\sqrt{\ln(1/\tau)/\tau}\right)$ pieces are sufficient to attain $\tau$-accuracy in the chance constraint. … Read more
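As an illustration of the flavor of this construction (a minimal sketch only: the uniform breakpoint grid and the truncation range $[-4, 4]$ below are naive assumptions, not the paper's scheme), a piecewise linear interpolant of the standard normal CDF $\Phi$ can be built and its accuracy checked as follows:

```python
# Illustrative sketch: a uniform piecewise-linear approximation of the
# standard normal CDF. Breakpoint placement here is a naive assumption,
# not the construction from the paper.
import math
import numpy as np

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def piecewise_linear_cdf(num_pieces, lo=-4.0, hi=4.0):
    """Breakpoints and CDF values defining a PWL interpolant of Phi."""
    xs = np.linspace(lo, hi, num_pieces + 1)
    ys = np.array([phi(x) for x in xs])
    return xs, ys

def max_error(num_pieces):
    """Max deviation between Phi and its PWL interpolant on a fine grid."""
    xs, ys = piecewise_linear_cdf(num_pieces)
    grid = np.linspace(xs[0], xs[-1], 20001)
    approx = np.interp(grid, xs, ys)
    exact = np.array([phi(x) for x in grid])
    return float(np.abs(approx - exact).max())

for n in (8, 16, 32, 64):
    print(f"{n:3d} pieces -> max error {max_error(n):.2e}")
```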

On the Convergence of Constrained Gradient Method

The constrained gradient method (CGM) has recently been proposed to solve convex optimization and monotone variational inequality (VI) problems with general functional constraints. While existing literature has established convergence results for CGM, the assumptions employed therein are quite restrictive and, in some cases, mutually inconsistent, leading to gaps in the underlying analysis. This … Read more
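For readers unfamiliar with the setting, a classical point of reference is the switching-subgradient scheme for $\min f(x)$ subject to a functional constraint $g(x) \le 0$; the sketch below shows only that generic idea, not the CGM variant analyzed in the paper:

```python
# Generic switching-(sub)gradient sketch for a functional constraint.
# Shown as background only; this is NOT necessarily the CGM of the paper.
import numpy as np

def switching_subgradient(grad_f, g, grad_g, x0, steps=2000, tol=1e-6):
    x = x0.astype(float)
    for t in range(1, steps + 1):
        lr = 1.0 / np.sqrt(t)                       # diminishing step size
        d = grad_g(x) if g(x) > tol else grad_f(x)  # switch on feasibility
        x = x - lr * d / max(np.linalg.norm(d), 1e-12)
    return x

# Example: minimize ||x - c||^2 subject to the halfspace a'x - b <= 0.
c = np.array([2.0, 1.0]); a = np.array([1.0, 1.0]); b = 1.0
x_star = switching_subgradient(
    grad_f=lambda x: 2 * (x - c),
    g=lambda x: a @ x - b,
    grad_g=lambda x: a,
    x0=np.zeros(2),
)
print("x* ~", x_star, " constraint value:", a @ x_star - b)
```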

Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems

We present an effective new method for solving trust-region and norm-regularization problems that arise as subproblems in many optimization applications. We show that the solutions of such subproblems, viewed as functions of their controlling parameter (the trust-region radius or the regularization weight), lie approximately on a manifold of very low rank. Based on this, we build a … Read more
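To make the dependence on the controlling parameter concrete, here is a dense reference solver for the trust-region subproblem $\min \tfrac{1}{2}x^T H x + g^T x$ s.t. $\|x\| \le \Delta$ via bisection on the secular equation (the hard case is ignored for brevity). This is emphatically not the extended-Krylov method of the paper, only a small illustration of how the solution varies with $\Delta$:

```python
# Dense reference solver for  min 1/2 x'Hx + g'x  s.t.  ||x|| <= Delta,
# via the secular equation ||(H + lam I)^{-1} g|| = Delta.
# Illustration only; not the extended-Krylov method of the paper.
import numpy as np

def tr_subproblem(H, g, delta, iters=100):
    evals = np.linalg.eigvalsh(H)
    lam_lo = max(0.0, -evals[0]) + 1e-12   # keep H + lam*I positive definite
    x = np.linalg.solve(H + lam_lo * np.eye(len(g)), -g)
    if np.linalg.norm(x) <= delta:         # interior solution suffices
        return x
    lam_hi = lam_lo + 1.0
    while np.linalg.norm(np.linalg.solve(H + lam_hi * np.eye(len(g)), -g)) > delta:
        lam_hi *= 2.0
    for _ in range(iters):                 # bisection on the secular equation
        lam = 0.5 * (lam_lo + lam_hi)
        x = np.linalg.solve(H + lam * np.eye(len(g)), -g)
        if np.linalg.norm(x) > delta:
            lam_lo = lam
        else:
            lam_hi = lam
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); H = A + A.T; g = rng.standard_normal(5)
for delta in (0.1, 0.5, 1.0, 2.0):
    x = tr_subproblem(H, g, delta)
    print(f"Delta={delta:4.1f}  ||x||={np.linalg.norm(x):.4f}")
```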

Preconditioned subgradient method for composite optimization: overparameterization and fast convergence

Composite optimization problems involve minimizing the composition of a convex function with a smooth map. Such objectives arise in numerous data science and signal processing applications, including phase retrieval, blind deconvolution, and collaborative filtering. The subgradient method achieves local linear convergence when the composite loss is well-conditioned. However, if the smooth map is, in a … Read more
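As a concrete instance of this setting, the sketch below runs the plain (unpreconditioned) Polyak subgradient method on the robust phase-retrieval loss $\tfrac{1}{m}\sum_i |(a_i^T x)^2 - b_i|$; the paper's preconditioned variant is not reproduced here, and the warm start is an assumption made so the baseline converges locally:

```python
# Plain Polyak subgradient method on the robust phase-retrieval loss
#   f(x) = (1/m) * sum_i |(a_i'x)^2 - b_i|.
# Baseline sketch only; the paper's *preconditioned* method is not shown.
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 200
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = (A @ x_true) ** 2                      # noiseless measurements, min value 0

def loss(x):
    return np.abs((A @ x) ** 2 - b).mean()

def subgrad(x):
    r = (A @ x) ** 2 - b
    return (2.0 / m) * (A.T @ (np.sign(r) * (A @ x)))

x = x_true + 0.1 * rng.standard_normal(n)  # warm start near a solution
for t in range(200):
    v = subgrad(x)
    step = loss(x) / max(v @ v, 1e-16)     # Polyak step (optimal value is 0)
    x = x - step * v
print("final loss:", loss(x))
```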

Lyapunov-based Analysis on First Order Method for Composite Strong-Weak Convex Functions

Nesterov’s accelerated gradient (NAG) method improves on the classical gradient descent algorithm by accelerating the convergence rate from $\mathcal{O}\left(\frac{1}{t}\right)$ to $\mathcal{O}\left(\frac{1}{t^2}\right)$ in convex optimization. This study examines the proximal gradient framework for additively separable composite functions with smooth and non-smooth components. We demonstrate that Nesterov’s accelerated proximal gradient (NAPG$_\alpha$) method attains a convergence rate of … Read more
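For reference, the standard accelerated proximal gradient (FISTA) iteration for a smooth-plus-nonsmooth composite such as the lasso looks as follows; this is the textbook method, not the paper's NAPG$_\alpha$ variant or its strong-weak convex analysis:

```python
# Textbook FISTA sketch for the lasso
#   min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1,
# illustrating the smooth + non-smooth composite setting discussed above.
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def fista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 100))
x0 = np.zeros(100); x0[:5] = rng.standard_normal(5)    # sparse ground truth
b = A @ x0
x_hat = fista(A, b, lam=0.1)
print("nonzeros recovered:", int((np.abs(x_hat) > 1e-3).sum()))
```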

An Exact Penalty Method for Stochastic Equality-Constrained Optimization

In this paper, we study a penalty method for stochastic equality-constrained optimization, where both the objective and the constraints are expressed in general expectation form. We introduce a novel adaptive strategy for updating the penalty parameter, guided by iteration progress so as to balance decrease of the penalty function against reduction of the constraint violation, while each penalty subproblem … Read more
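A schematic of the general idea (with an invented penalty-update heuristic; the paper's exact penalty function and its adaptive rule are not reproduced here) might look like:

```python
# Schematic stochastic quadratic-penalty loop; the penalty-update heuristic
# below is an assumption for illustration, NOT the paper's adaptive rule.
import numpy as np

rng = np.random.default_rng(3)

def stoch_grad_f(x):
    # noisy gradient of f(x) = 0.5 * ||x - (3, 0)||^2
    return (x - np.array([3.0, 0.0])) + 0.1 * rng.standard_normal(2)

def c_fun(x):                # equality constraint c(x) = x1 + x2 - 1 = 0
    return x[0] + x[1] - 1.0

def c_jac(x):
    return np.array([1.0, 1.0])

x = np.zeros(2); rho = 1.0; prev_viol = np.inf
for k in range(1, 5001):
    g = stoch_grad_f(x) + rho * c_fun(x) * c_jac(x)    # penalty (sub)gradient
    x -= (0.5 / np.sqrt(k)) * g
    if k % 500 == 0:                                   # adaptive penalty update
        viol = abs(c_fun(x))
        if viol > 0.5 * prev_viol:                     # insufficient progress
            rho *= 2.0
        prev_viol = viol
print("x =", x, " violation =", abs(c_fun(x)), " rho =", rho)
```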

GFORS: GPU-Accelerated First-Order Method with Randomized Sampling for Binary Integer Programs

We present GFORS, a GPU-accelerated framework for large binary integer programs. It couples a first-order (PDHG-style) routine that guides the search in the continuous relaxation with a randomized, feasibility-aware sampling module that generates batched binary candidates. Both components are designed to run end-to-end on GPUs with minimal CPU–GPU synchronization. The framework establishes near-stationary-point guarantees for … Read more
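A CPU/NumPy stand-in for the two components might look as follows (a sketch under loose assumptions: plain Bernoulli rounding replaces GFORS's feasibility-aware sampling module, and no GPU execution is shown):

```python
# CPU/NumPy stand-in: PDHG-style primal-dual loop on the relaxation of
#   min c'x  s.t.  Ax <= b,  x in {0,1}^n,
# followed by randomized rounding. GFORS's actual sampling is more elaborate.
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 15
A = rng.uniform(0, 1, (m, n)); b = 0.25 * A.sum(axis=1); c = -rng.uniform(0, 1, n)

# PDHG-style iterations on the box-constrained LP relaxation.
tau = sigma = 0.9 / np.linalg.norm(A, 2)
x = np.full(n, 0.5); y = np.zeros(m)
for _ in range(2000):
    x_new = np.clip(x - tau * (c + A.T @ y), 0.0, 1.0)          # primal step
    y = np.maximum(0.0, y + sigma * (A @ (2 * x_new - x) - b))  # dual step
    x = x_new

# Randomized sampling: batched Bernoulli rounding guided by the relaxation.
best_x = np.zeros(n); best_val = 0.0       # the zero vector is feasible here
samples = (rng.uniform(size=(512, n)) < x).astype(float)
for s in samples:
    if np.all(A @ s <= b + 1e-9):          # keep only feasible candidates
        val = c @ s
        if val < best_val:
            best_val, best_x = val, s
print("relaxation value:", c @ x, " best sampled value:", best_val)
```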

A guided tour through the zoo of paired optimization problems

Many mathematical models are based on the coupling of two or more optimization problems. This paper surveys ways of coupling two optimization problems and discusses how the solutions of the different models are interrelated. The considered pairs stem from the fields of standard and generalized Nash equilibrium problems, optimistic and pessimistic bilevel problems, saddle … Read more
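As one concrete pairing, added here purely for illustration, the same two parametric problems $\min_x f_1(x, y)$ and $\min_y f_2(x, y)$ can be coupled simultaneously, as a Nash equilibrium problem,
\[
\bar{x} \in \operatorname*{arg\,min}_x f_1(x, \bar{y}), \qquad \bar{y} \in \operatorname*{arg\,min}_y f_2(\bar{x}, y),
\]
or hierarchically, as an optimistic bilevel problem,
\[
\min_{x, y} f_1(x, y) \quad \text{s.t.} \quad y \in \operatorname*{arg\,min}_{y'} f_2(x, y').
\]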

blockSQP 2: exploiting structure to improve performance

One approach to solving optimal control problems is Bock’s direct multiple shooting method. This method yields lifted nonlinear optimization problems (NLPs) with a specific block structure. Exploiting this structure via tailored optimization algorithms can be computationally beneficial. We propose such methods, primarily within the framework of filter line search sequential quadratic … Read more
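To see where the block structure comes from (an illustration only, not blockSQP's code), each shooting node's state $s_k$ becomes a decision variable and the continuity constraints $s_{k+1} - F(s_k, q_k) = 0$ couple only neighboring nodes, so the constraint Jacobian is block-banded:

```python
# Sketch of the lifted variable layout in direct multiple shooting.
# Illustration only; this is not blockSQP's implementation.
import numpy as np

N, nx, nq = 4, 2, 1                      # intervals, state dim, control dim

def F(s, q):
    """Toy one-step integrator of the dynamics over one shooting interval."""
    return s + 0.1 * np.array([s[1], -s[0] + q[0]])

def continuity_residual(w):
    """Stack s_{k+1} - F(s_k, q_k); w = (s_0, q_0, ..., s_{N-1}, q_{N-1}, s_N)."""
    res = []
    for k in range(N):
        off = k * (nx + nq)
        s_k, q_k = w[off:off + nx], w[off + nx:off + nx + nq]
        s_next = w[off + nx + nq:off + 2 * nx + nq]
        res.append(s_next - F(s_k, q_k))
    return np.concatenate(res)

w = np.zeros(N * (nx + nq) + nx)
print("residual shape:", continuity_residual(w).shape)   # (N * nx,)
```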