Universal nonmonotone line search method for nonconvex multiobjective optimization problems with convex constraints

In this work we propose a general nonmonotone line-search method for nonconvex multiobjective optimization problems with convex constraints. At the \(k\)th iteration, the degree of nonmonotonicity is controlled by a vector \(\nu_{k}\) with nonnegative components. Different choices for \(\nu_{k}\) lead to different nonmonotone step-size rules. Assuming that the sequence \(\left\{\nu_{k}\right\}_{k\geq 0}\) is summable, and that … Read more
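As a rough illustration of the kind of condition involved (an assumed form, not necessarily the exact rule in the paper), a nonmonotone Armijo-type test for objectives \(f_1,\dots,f_m\) along a descent direction \(d_k\) relaxes each sufficient-decrease inequality by the corresponding component of \(\nu_k\):

```latex
% Illustrative nonmonotone Armijo-type acceptance test, with fixed sigma in (0,1):
\[
  f_j(x_k + t_k d_k) \;\le\; f_j(x_k) + \nu_k^{(j)} + \sigma\, t_k\, \nabla f_j(x_k)^{\top} d_k,
  \qquad j = 1,\dots,m.
\]
% Taking nu_k = 0 recovers the usual monotone Armijo rule; summability of {nu_k}
% keeps the total relaxation accumulated over all iterations finite.
```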

Accelerating Benders decomposition for solving a sequence of sample average approximation replications

Sample average approximation (SAA) is a technique for obtaining approximate solutions to stochastic programs: it uses the average over a random sample to approximate the expected value being optimized. Since the outcome of solving an SAA problem is random, statistical estimates of the optimal value of the true problem can be obtained by solving … Read more
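To make the replication idea concrete, here is a minimal sketch on an assumed newsvendor-style cost (the data, grid-search solver, and cost function are placeholders, not the Benders setting of the paper): each replication minimizes a sample average, and the spread of the replication optima yields a statistical estimate of the true optimal value.

```python
import numpy as np

# Assumed toy problem: minimize E[F(x, xi)] with F(x, xi) = c*x - p*min(x, xi)
# and random demand xi; SAA replaces the expectation by a sample average.
def solve_saa(sample, c=1.0, p=2.0):
    """Solve one SAA replication by grid search over the order quantity x."""
    grid = np.linspace(0.0, sample.max(), 400)
    vals = np.array([np.mean(c * x - p * np.minimum(x, sample)) for x in grid])
    i = int(vals.argmin())
    return grid[i], vals[i]        # (approximate minimizer, SAA optimal value)

rng = np.random.default_rng(0)
M, N = 20, 500                     # number of replications, sample size per replication
v_hats = [solve_saa(rng.exponential(scale=10.0, size=N))[1] for _ in range(M)]

# Averaging the replication optima gives the usual statistical lower-bound
# estimate (for minimization), with a standard-error-based confidence interval.
mean, se = np.mean(v_hats), np.std(v_hats, ddof=1) / np.sqrt(M)
print(f"lower-bound estimate: {mean:.3f} +/- {1.96 * se:.3f}")
```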

Sparse Principal Component Analysis with Non-Oblivious Adversarial Perturbations

Sparse Principal Component Analysis (sparse PCA) is a fundamental dimension-reduction tool that enhances interpretability in various high-dimensional settings. An important variant of sparse PCA studies the scenario when samples are adversarially perturbed. Notably, most existing statistical studies on this variant focus on recovering the ground truth and verifying the robustness of classical algorithms when the … Read more
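For reference, the standard sparse PCA formulation behind this line of work seeks a sparse leading direction of an empirical covariance matrix \(\hat{\Sigma}\) (the specific perturbation model studied in the paper is not reproduced here):

```latex
\[
  \max_{x \in \mathbb{R}^{p}} \; x^{\top} \hat{\Sigma}\, x
  \quad \text{s.t.} \quad \|x\|_{2} = 1, \;\; \|x\|_{0} \le k,
\]
% where ||x||_0 counts the nonzero entries of x, enforcing interpretable, sparse loadings.
```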

Gradient Methods with Online Scaling

We introduce a framework to accelerate the convergence of gradient-based methods with online learning. The framework learns to scale the gradient at each iteration through an online learning algorithm and provably accelerates gradient-based methods asymptotically. In contrast with previous literature, where convergence is established based on worst-case analysis, our framework provides a strong convergence guarantee … Read more
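As a generic illustration of the idea (a hypergradient-style stand-in, not the online-learning scheme or guarantees from the paper), one can adapt a diagonal scaling of the gradient online from the gradients observed so far:

```python
import numpy as np

# Illustrative only: gradient descent whose per-coordinate scaling is adapted
# online from observed gradients; this is a generic stand-in for "learning to
# scale the gradient", not the paper's algorithm.
def scaled_gd(grad, x0, steps=400, eta=0.05, beta=0.02):
    x = x0.astype(float).copy()
    s = np.ones_like(x)                # diagonal scaling, adjusted online
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        # grow the scaling where consecutive gradients agree, shrink it otherwise
        s *= np.exp(beta * np.sign(g * g_prev))
        x -= eta * s * g
        g_prev = g
    return x

# Ill-conditioned quadratic f(x) = 0.5 * sum(d_i * x_i^2), minimizer at the origin.
d = np.array([10.0, 1.0, 0.1])
print(scaled_gd(lambda x: d * x, x0=np.ones(3)))
```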

Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization

The exponential growth in data availability in recent years has led to new formulations of data-driven optimization problems. One such formulation is that of stochastic optimization problems with contextual information, where the goal is to optimize the expected value of a certain function given some contextual information (also called features) that accompany the main data … Read more
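In the notation commonly used for this setting (illustrative symbols, not taken from the paper), one observes features \(X = x\) and solves the conditional stochastic program

```latex
\[
  \min_{z \in Z} \; \mathbb{E}\bigl[\, c(z, Y) \mid X = x \,\bigr],
\]
```

where \(Y\) is the uncertain quantity, \(c\) the cost function, and \(Z\) the feasible set.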

A Trust-Region Algorithm for Noisy Equality Constrained Optimization

This paper introduces a modified Byrd-Omojokun (BO) trust-region algorithm to address the challenges posed by noisy function and gradient evaluations. The original BO method was designed to solve equality-constrained problems, and it forms the backbone of some interior-point methods for general large-scale constrained optimization. A key strength of the BO method is … Read more
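In broad strokes, the Byrd-Omojokun approach splits each trust-region step into a normal step that reduces a model of the constraint violation and a tangential step that reduces a model of the objective in the null space of the constraint Jacobian; the sketch below uses generic notation and omits the paper's noise-handling modifications.

```latex
% c_k = c(x_k), J_k = constraint Jacobian, H_k = Hessian model, Delta_k = trust-region
% radius, zeta in (0,1) the fraction of the radius reserved for the normal step.
\[
\begin{aligned}
  v_k &\approx \operatorname*{arg\,min}_{\|v\| \le \zeta \Delta_k}
        \ \tfrac12 \|c_k + J_k v\|^{2}
  && \text{(normal step)}\\[2pt]
  u_k &\approx \operatorname*{arg\,min}_{J_k u = 0,\ \|v_k + u\| \le \Delta_k}
        \ \nabla f_k^{\top}(v_k + u) + \tfrac12 (v_k + u)^{\top} H_k (v_k + u)
  && \text{(tangential step)}
\end{aligned}
\]
% The trial step d_k = v_k + u_k is accepted or rejected through a merit-function test.
```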

Single-Scenario Facet Preservation for Stochastic Mixed-Integer Programs

We consider improving the polyhedral representation of the extensive form of a stochastic mixed-integer program (SMIP). Given a facet for a single-scenario version of an SMIP, our main result provides necessary and sufficient conditions under which this inequality remains facet-defining for the extensive form. We then present several implications, which show that common recourse structures … Read more
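In generic two-stage notation (assumed here only for illustration), the extensive form and the single-scenario version referred to above are

```latex
\[
  \min_{x,\,y_1,\dots,y_S}\ c^{\top}x + \sum_{s=1}^{S} p_s\, q_s^{\top} y_s
  \ \ \text{s.t.}\ \ (x, y_s) \in X_s,\ s = 1,\dots,S,
  \qquad\text{vs.}\qquad
  \min_{x,\,y_s}\ c^{\top}x + q_s^{\top} y_s
  \ \ \text{s.t.}\ \ (x, y_s) \in X_s,
\]
```

and the question is when an inequality that is facet-defining for \(\operatorname{conv}(X_s)\) remains facet-defining for the convex hull of the extensive-form feasible region.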

Convergence of subgradient extragradient methods with novel stepsizes for equilibrium problems in Hilbert spaces

In this paper, by combining the inertial technique and subgradient extragradient method with a new strategy of stepsize selection, we propose a novel extragradient method to solve pseudomonotone equilibrium problems in real Hilbert spaces. Our method is designed such that the stepsize sequence is increasing after a finite number of iterations. This distinguishes our method … Read more
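For orientation (generic notation, not the paper's inertial or stepsize rule), extragradient-type methods for the equilibrium problem of finding \(x^{*} \in C\) with \(f(x^{*}, y) \ge 0\) for all \(y \in C\) perform two proximal-type steps per iteration:

```latex
\[
\begin{aligned}
  y_k     &= \operatorname*{arg\,min}_{y \in C}\ \Bigl\{ \lambda_k f(x_k, y) + \tfrac12 \|y - x_k\|^{2} \Bigr\},\\[2pt]
  x_{k+1} &= \operatorname*{arg\,min}_{y \in T_k}\ \Bigl\{ \lambda_k f(y_k, y) + \tfrac12 \|y - x_k\|^{2} \Bigr\},
\end{aligned}
\]
```

where, in the subgradient extragradient variant, \(T_k\) is a half-space containing \(C\) built from a subgradient of \(f(x_k, \cdot)\) at \(y_k\), so the second step avoids a projection onto \(C\).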

On the ReLU Lagrangian Cuts for Stochastic Mixed Integer Programming

We study stochastic mixed integer programs with both first-stage and recourse decisions involving mixed integer variables. A new family of Lagrangian cuts, termed “ReLU Lagrangian cuts,” is introduced by reformulating the nonanticipativity constraints using ReLU functions. These cuts can be integrated into scenario decomposition methods. We show that including ReLU Lagrangian cuts is sufficient to … Read more
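As an illustrative guess at what such a reformulation can look like (the paper's exact construction is not reproduced here), a nonanticipativity constraint \(x_s = z\) between a scenario copy \(x_s\) and the consensus variable \(z\) can be rewritten with ReLU terms, since \(|a| = \mathrm{ReLU}(a) + \mathrm{ReLU}(-a)\):

```latex
\[
  x_s = z
  \quad\Longleftrightarrow\quad
  \mathrm{ReLU}(x_s - z) + \mathrm{ReLU}(z - x_s) = 0 \quad\text{(componentwise)}.
\]
```

Dualizing constraints expressed in such a nonlinear form would yield cuts structurally different from the classical Lagrangian cuts obtained from the linear constraints \(x_s - z = 0\).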