Inverse Optimization via Learning Feasible Regions

We study inverse optimization (IO), where the goal is to use a parametric optimization program as the hypothesis class to infer relationships between input-decision pairs. Most of the literature focuses on learning only the objective function, as learning the constraint function (i.e., feasible regions) leads to nonconvex training programs. Motivated by this, we focus on …

cuHALLaR: A GPU accelerated low-rank augmented Lagrangian method for large-scale semidefinite programming

This paper introduces cuHALLaR, a GPU-accelerated implementation of the HALLaR method proposed by Monteiro et al. (2024) for solving large-scale semidefinite programming (SDP) problems. We demonstrate how our Julia-based implementation efficiently uses GPU parallelism through the optimization of simple but key operations, including linear maps, adjoints, and gradient evaluations. Extensive numerical experiments across three problem classes—maximum …
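The three operations the abstract highlights can be illustrated on a toy low-rank SDP. The sketch below is a hypothetical miniature in NumPy, not the cuHALLaR implementation: it factors X = YYᵀ and evaluates the linear map, its adjoint, and the augmented Lagrangian gradient for a Max-Cut-style constraint diag(X) = 1 (all names and the instance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Max-Cut-style SDP: min <C, X> s.t. diag(X) = 1, X PSD,
# with the low-rank substitution X = Y Y^T (Y is n x r).
n, r = 10, 3
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                     # symmetric cost matrix

def A(X):                             # linear map: extract the diagonal
    return np.diag(X)

def A_adj(y):                         # adjoint: embed a vector as a diagonal
    return np.diag(y)

def al_val(Y, p, beta):
    # Augmented Lagrangian value at the low-rank factor Y
    X = Y @ Y.T
    resid = A(X) - np.ones(n)
    return np.sum(C * X) + p @ resid + 0.5 * beta * resid @ resid

def al_grad(Y, p, beta):
    # Gradient in Y: 2 (C + A*(p + beta * (A(YY^T) - b))) Y, with b = 1
    X = Y @ Y.T
    resid = A(X) - np.ones(n)
    return 2 * (C + A_adj(p + beta * resid)) @ Y
```

These dense matrix-matrix products are exactly the kind of operation that maps naturally onto GPU parallelism.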

Subgradient Regularization: A Descent-Oriented Subgradient Method for Nonsmooth Optimization

In nonsmooth optimization, a negative subgradient is not necessarily a descent direction, making the design of convergent descent methods based on zeroth-order and first-order information a challenging task. The well-studied bundle methods and gradient sampling algorithms construct descent directions by aggregating subgradients at nearby points in seemingly different ways, and are often complicated or lack …
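The non-descent phenomenon is easy to reproduce with a standard textbook example (this is an illustration, not the paper's method): f(x, y) = |x| + 2|y| is convex, yet stepping along one perfectly valid negative subgradient at (1, 0) increases f for every positive stepsize:

```python
import numpy as np

def f(x):
    # Convex but nonsmooth: f(x, y) = |x| + 2|y|
    return abs(x[0]) + 2 * abs(x[1])

x = np.array([1.0, 0.0])
# At (1, 0) the subdifferential is {(1, s) : s in [-2, 2]},
# so g = (1, 2) is a valid subgradient.
g = np.array([1.0, 2.0])

x_new = x - 0.1 * g        # small step along the negative subgradient
# f(x) = 1.0 but f(x_new) = 0.9 + 0.4 = 1.3: the step increases f,
# and it does so for any positive stepsize along this direction.
```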

An Adaptive and Parameter-Free Nesterov’s Accelerated Gradient Method for Convex Optimization

We propose AdaNAG, an adaptive accelerated gradient method based on Nesterov’s accelerated gradient method. AdaNAG is line-search-free, parameter-free, and achieves the accelerated convergence rates \( f(x_k) - f_\star = \mathcal{O}\left(1/k^2\right) \) and \( \min_{i\in\left\{1,\dots, k\right\}} \|\nabla f(x_i)\|^2 = \mathcal{O}\left(1/k^3\right) \) for an \( L \)-smooth convex function \( f \). We provide a Lyapunov analysis for …
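For contrast, the classical non-adaptive Nesterov scheme that AdaNAG builds on fits in a few lines. Unlike AdaNAG, this baseline needs the smoothness constant L up front, so it is not parameter-free (a sketch of the textbook method, not of AdaNAG):

```python
import numpy as np

def nag(grad, x0, L, iters):
    # Classical Nesterov accelerated gradient with the standard t_k momentum
    # sequence; achieves the O(1/k^2) rate for L-smooth convex f.
    x = np.asarray(x0, dtype=float)
    y, t = x.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                       # gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Smooth convex test problem: f(x) = 0.5 ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.eigvalsh(A.T @ A).max()    # smoothness constant of f
x_hat = nag(grad, np.zeros(2), L, 500)   # converges to the least-squares solution
```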

Convergence of Mean-Field Langevin Stochastic Descent-Ascent for Distributional Minimax Optimization

We study convergence properties of the discrete-time Mean-Field Langevin Stochastic Descent-Ascent (MFL-SDA) algorithm for solving distributional minimax optimization. These problems arise in various applications, such as zero-sum games, generative adversarial networks and distributionally robust learning. Despite the significance of MFL-SDA in these contexts, the discrete-time convergence rate remains underexplored. To address this gap, we establish …
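Stripped of the mean-field and Langevin ingredients, the discrete-time template underlying MFL-SDA is simultaneous gradient descent on x and ascent on y. Below is a minimal finite-dimensional sketch on a strongly convex-concave saddle; the paper's algorithm additionally evolves distributions (particle systems) and injects Gaussian Langevin noise, which this sketch omits:

```python
# Saddle function: f(x, y) = 0.5 x^2 + x y - 0.5 y^2,
# strongly convex in x and strongly concave in y, saddle point at (0, 0).
def gda(x, y, step, iters):
    for _ in range(iters):
        gx = x + y                              # df/dx
        gy = x - y                              # df/dy
        x, y = x - step * gx, y + step * gy     # simultaneous descent-ascent
    return x, y

x, y = gda(2.0, -1.0, 0.1, 300)   # converges to the saddle point (0, 0)
```

On this strongly convex-concave problem the simultaneous iteration contracts linearly; for merely convex-concave saddles it can cycle, which is one reason the Langevin and mean-field machinery matters.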

A stochastic gradient method for trilevel optimization

With the success that the field of bilevel optimization has seen in recent years, similar methodologies have started being applied to solving more difficult applications that arise in trilevel optimization. At the forefront of these applications are new machine learning formulations that have been proposed in the trilevel context and, as a result, efficient and …

An inexact alternating projection method with application to matrix completion

We develop and analyze an inexact regularized alternating projection method for nonconvex feasibility problems. Such a method employs inexact projections onto one of the two sets, according to a set of well-defined conditions. We prove the global convergence of the algorithm, provided that a certain merit function satisfies the Kurdyka-Lojasiewicz property on its domain. The …
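The exact (non-regularized, non-inexact) variant of alternating projection for matrix completion alternates a truncated SVD, i.e. the projection onto the nonconvex rank-≤ r set, with re-imposing the observed entries. A minimal sketch on synthetic data follows; the paper's method instead allows inexact projections under verifiable conditions, which this toy version does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-2 ground truth and a ~60% random observation mask
r, m, n = 2, 20, 20
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.6

def proj_rank(X, r):
    # Projection onto the nonconvex set {rank(X) <= r}: truncated SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def proj_data(X):
    # Projection onto the affine set of matrices matching the observed entries
    Y = X.copy()
    Y[mask] = M[mask]
    return Y

X = proj_data(np.zeros((m, n)))
for _ in range(500):
    X = proj_data(proj_rank(X, r))

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With enough observed entries relative to the rank, the iterates typically recover the ground truth; the truncated SVD is the expensive step, which motivates the inexact projections studied in the paper.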

Steepest descent method using novel adaptive stepsizes for unconstrained nonlinear multiobjective programming

We propose new adaptive strategies to compute stepsizes for the steepest descent method to solve unconstrained nonlinear multiobjective optimization problems without employing any linesearch procedure. The resulting algorithms can be applied to a wide class of nonconvex unconstrained multi-criteria optimization problems satisfying a global Lipschitz continuity condition imposed on the gradients of all objectives. In …
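For two objectives, the multiobjective steepest descent direction has a closed form: the negative of the min-norm point in the convex hull of the two gradients. The sketch below uses a fixed Lipschitz-based stepsize; the paper's adaptive stepsize rules are not reproduced here, and the helper names are illustrative:

```python
import numpy as np

def steepest_direction(g1, g2):
    # Min-norm point in conv{g1, g2}; closed form for m = 2 objectives.
    # Its negative is a common descent direction for both objectives,
    # and it is zero exactly at Pareto-critical points.
    d = g1 - g2
    denom = d @ d
    t = 0.5 if denom == 0 else float(np.clip(-(g2 @ d) / denom, 0.0, 1.0))
    return t * g1 + (1 - t) * g2

# Two objectives f1 = 0.5||x - a||^2, f2 = 0.5||x - b||^2 (both 1-smooth)
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
for _ in range(100):
    v = steepest_direction(x - a, x - b)   # gradients are x - a and x - b
    x = x - v                              # fixed stepsize 1/L with L = 1
```

The iterate lands on the segment between a and b, which is exactly the Pareto set of this pair of objectives.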

A Symmetric Primal-Dual method with two extrapolation steps for Composite Convex Optimization

Symmetry is a recurring feature in algorithms for monotone operator theory and convex optimization, particularly in problems involving the sum of two operators, as exemplified by the Peaceman–Rachford splitting scheme. However, in more general settings—such as composite optimization problems with three convex functions or structured convex-concave saddle-point formulations—existing algorithms often exhibit inherent asymmetry. In particular, …

A Surplus-Maximizing Two-Sided Multi-Period Non-Convex ISO Auction Market

Since the inception of ISOs, Locational Marginal Prices (LMPs) alone were not market clearing or incentive compatible because an auction winner who offered its avoidable costs could lose money at the LMPs. ISOs used make-whole payments to ensure that market participants did not lose money. Make-whole payments were not public, creating transparency issues. Over time, …