Alternating Methods for Large-Scale AC Optimal Power Flow with Unit Commitment

Security-constrained unit commitment with alternating current optimal power flow (SCUC-ACOPF) is a central problem in power grid operations that optimizes commitment and dispatch of generators under a physically accurate power transmission model while encouraging robustness against component failures.  SCUC-ACOPF requires solving large-scale problems that involve multiple time periods and networks with thousands of buses within … Read more
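
For intuition only, here is a toy sketch of the generic alternating template such methods build on: fix the binary commitments and solve a continuous dispatch, then update the commitments from the resulting dispatch. The unit data, the greedy commitment rule, and the `dispatch` helper below are all illustrative and not the paper's algorithm.

```python
# Toy alternating scheme for unit commitment + dispatch (NOT the paper's
# method): alternate (1) continuous economic dispatch with commitments fixed
# and (2) a greedy commitment update.  All data are illustrative.
import numpy as np
from scipy.optimize import minimize

c = np.array([0.10, 0.05, 0.20])    # quadratic generation costs
f = np.array([5.0, 8.0, 3.0])       # fixed (no-load) costs when committed
pmax = np.array([50.0, 80.0, 40.0]) # generator capacities
demand = 100.0

def dispatch(u):
    """Continuous dispatch with commitment u fixed: min sum_i c_i p_i^2."""
    cons = ({'type': 'eq', 'fun': lambda p: p.sum() - demand},)
    bnds = [(0.0, u_i * pm) for u_i, pm in zip(u, pmax)]
    p0 = u * demand / max(u.sum(), 1)
    res = minimize(lambda p: (c * p**2).sum(), p0, bounds=bnds, constraints=cons)
    return res.x

u = np.ones(3)                      # start with everything committed
for _ in range(10):
    p = dispatch(u)
    # Greedy commitment update: decommit units dispatched near zero,
    # provided the remaining capacity still covers demand.
    u_new = (p > 1e-3).astype(float)
    if (u_new * pmax).sum() < demand or np.array_equal(u_new, u):
        break
    u = u_new

total_cost = (c * p**2).sum() + (f * u).sum()
print(u, p.round(2), round(total_cost, 2))
```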

Fast Stochastic Second-Order Adagrad for Nonconvex Bound-Constrained Optimization

ADAGB2, a generalization of the Adagrad algorithm for stochastic optimization, is introduced; it is also applicable to bound-constrained problems and capable of using second-order information when available. It is shown that, given \(\delta \in (0,1)\) and \(\epsilon \in (0,1]\), the ADAGB2 algorithm needs at most \(\mathcal{O}(\epsilon^{-2})\) iterations to ensure an \(\epsilon\)-approximate first-order critical point of … Read more
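
ADAGB2 itself is not reproduced here; the sketch below shows only the first-order special case it generalizes, namely diagonal Adagrad with projection onto the bounds, on a hypothetical minibatch least-squares problem.

```python
# Minimal sketch: diagonal Adagrad with projection onto box constraints
# (the problem class ADAGB2 targets; this is NOT the ADAGB2 algorithm,
# which can also exploit second-order information).
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
lo, hi = -0.5 * np.ones(5), 0.5 * np.ones(5)   # bound constraints

def stoch_grad(x, batch=10):
    """Stochastic gradient of the least-squares loss on a minibatch."""
    idx = rng.integers(0, 100, size=batch)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

x = np.zeros(5)
G = np.zeros(5)                                # accumulated squared gradients
eta, eps = 0.5, 1e-8
for _ in range(2000):
    g = stoch_grad(x)
    G += g**2
    # Adagrad step with per-coordinate scaling, then projection onto bounds.
    x = np.clip(x - eta * g / np.sqrt(G + eps), lo, hi)
print(x)
```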

PDCS: A Primal-Dual Large-Scale Conic Programming Solver with GPU Enhancements

In this paper, we introduce the Primal-Dual Conic Programming Solver (PDCS), a large-scale conic programming solver with GPU enhancements. Problems that PDCS currently supports include linear programs, second-order cone programs, convex quadratic programs, and exponential cone programs. PDCS achieves scalability to large-scale problems by leveraging sparse matrix-vector multiplication as its core computational operation, which is … Read more
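
PDCS's exact method is not shown here, but the reason sparse matrix-vector products dominate the cost can be seen from the generic PDHG template for a linear program, sketched below on synthetic data; the step-size rule (a conservative choice from the Frobenius norm of A) is an assumption for illustration.

```python
# PDHG-style primal-dual iteration for the LP
#   min c^T x  s.t.  A x = b, x >= 0,
# showing that each iteration costs two sparse matvecs
# (this mirrors the generic PDHG template, not PDCS's exact method).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import norm as spnorm

rng = np.random.default_rng(1)
A = sp.random(40, 80, density=0.05, random_state=1, format='csr')
x_feas = rng.uniform(1, 2, 80)
b = A @ x_feas                          # guarantees feasibility
c = rng.uniform(0, 1, 80)

tau = sigma = 0.9 / spnorm(A)           # step sizes from ||A||_F >= ||A||_2
x, y = np.zeros(80), np.zeros(40)
for _ in range(5000):
    x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # primal: one SpMV
    y = y + sigma * (b - A @ (2 * x_new - x))          # dual: one SpMV
    x = x_new
print(np.linalg.norm(A @ x - b), c @ x)  # feasibility residual, objective
```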

A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions

Optimization problems with norm-bounding constraints appear in various applications, from portfolio optimization to machine learning, feature selection, and beyond. A widely used variant of these problems relaxes the norm-bounding constraint through Lagrangian relaxation and moves it to the objective function as a form of penalty or regularization term. A challenging class of these models uses … Read more
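
Schematically, the relaxation described above moves the norm bound \(\tau\) into the objective with a multiplier \(\lambda \ge 0\):

```latex
% The norm-bounding constraint and its Lagrangian relaxation,
% with the constraint moved to the objective as a penalty term:
\min_{x} \; f(x) \quad \text{s.t.} \quad \|x\| \le \tau
\qquad \longrightarrow \qquad
\min_{x} \; f(x) + \lambda \, \|x\|
```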

A Fast Newton Method Under Local Lipschitz Smoothness

A new, fast second-order method is proposed that achieves the optimal \(\mathcal{O}\left(|\log(\epsilon)|\,\epsilon^{-3/2}\right)\) complexity to obtain first-order \(\epsilon\)-stationary points. Crucially, this is established without assuming the standard global Lipschitz continuity of the Hessian, using only an appropriate local smoothness requirement. The algorithm exploits Hessian information to compute a Newton step and a negative curvature step when … Read more
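
For illustration, a minimal sketch of the two search directions mentioned above: a Newton step when the Hessian is (numerically) positive definite, and a negative-curvature direction otherwise. Step-size control and the local-smoothness safeguards of the actual method are omitted.

```python
# Sketch of the two directions: Newton step vs. negative-curvature step
# (illustrative only; no line search or regularization).
import numpy as np

def step_direction(grad, hess, tol=1e-8):
    """Newton step if the Hessian is positive definite,
    otherwise a descent-aligned negative-curvature direction."""
    w, V = np.linalg.eigh(hess)                # eigenvalues in ascending order
    if w[0] > tol:
        return -np.linalg.solve(hess, grad)    # Newton step
    d = V[:, 0]                                # eigvec of most negative eigval
    return -np.sign(grad @ d) * d              # align with descent

# Toy nonconvex example: f(x, y) = x^4/4 - x^2/2 + y^2
x = np.array([0.1, 1.0])
for _ in range(20):
    g = np.array([x[0]**3 - x[0], 2 * x[1]])
    H = np.array([[3 * x[0]**2 - 1, 0.0], [0.0, 2.0]])
    x = x + 0.5 * step_direction(g, H)
print(x)   # approaches the stationary point near (1, 0)
```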

An Adaptive Stochastic Dual Progressive Hedging Algorithm for Stochastic Programming

The Progressive Hedging (PH) algorithm is one of the cornerstones of large-scale stochastic programming. However, its traditional development requires solving all scenario subproblems at every iteration and assumes a probability distribution with finitely many outcomes. This paper introduces a stochastic dual PH algorithm (SDPH) to overcome these challenges. We introduce an adaptive sampling process and … Read more
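
As a toy illustration of the sampling idea (not the SDPH algorithm itself), the sketch below runs progressive hedging on quadratic scenario subproblems with closed-form solutions, but only solves a sampled subset of subproblems per iteration.

```python
# Toy progressive hedging with sampled scenario subproblems per iteration
# (a stand-in for SDPH; subproblems are quadratic with closed forms).
import numpy as np

rng = np.random.default_rng(2)
xi = rng.normal(size=200)          # scenario data, f_s(x) = (x - xi_s)^2
rho, batch = 1.0, 20
x = np.zeros(200)                  # per-scenario first-stage copies
w = np.zeros(200)                  # per-scenario dual weights
xbar = 0.0

for _ in range(500):
    S = rng.choice(200, size=batch, replace=False)   # sampled scenarios
    # Closed-form augmented subproblem, solved only on the sample:
    #   min_x (x - xi_s)^2 + w_s x + (rho/2)(x - xbar)^2
    x[S] = (2 * xi[S] - w[S] + rho * xbar) / (2 + rho)
    xbar = x.mean()                                  # consensus value
    w[S] += rho * (x[S] - xbar)                      # dual update on sample
print(xbar, xi.mean())             # xbar should approach E[xi]
```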

Quadratic Convex Reformulations for Multiobjective Binary Quadratic Programming

Multiobjective binary quadratic programming refers to optimization problems involving multiple quadratic (potentially nonconvex) objective functions and a feasible set that includes binary constraints on the variables. In this paper, we extend the well-established Quadratic Convex Reformulation technique, originally developed for single-objective binary quadratic programs, to the multiobjective setting. We propose a branch-and-bound algorithm … Read more
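
For reference, the single-objective identity being extended is the classical QCR trick: because \(x_i^2 = x_i\) for binary variables, the quadratic form can be perturbed on its diagonal without changing the objective on feasible points, and the perturbation can be chosen (typically via an SDP) so the continuous relaxation becomes convex.

```latex
% Classical single-objective QCR: for x in {0,1}^n, x_i^2 = x_i, so for
% any u in R^n the objective is unchanged on binary points:
x^\top Q x + c^\top x \;=\; x^\top \bigl(Q + \mathrm{Diag}(u)\bigr) x + (c - u)^\top x .
% Choosing u so that Q + Diag(u) is positive semidefinite yields an
% equivalent problem whose continuous relaxation is convex.
```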

A double-accelerated proximal augmented Lagrangian method with applications in signal reconstruction

The Augmented Lagrangian Method (ALM), first proposed in 1969, remains a vital framework in large-scale constrained optimization. This paper addresses a linearly constrained composite convex minimization problem and presents a general proximal ALM that incorporates both Nesterov acceleration and relaxed acceleration while allowing indefinite proximal terms. Under mild assumptions (potentially without requiring prior knowledge of … Read more
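
As context, the bare (unaccelerated) proximal ALM template the paper builds on looks as follows on a toy equality-constrained least-squares problem; the Nesterov and relaxation accelerations and the indefinite proximal terms of the actual method are omitted.

```python
# Bare-bones proximal ALM on the toy problem
#   min (1/2)||x - d||^2  s.t.  A x = b
# (no acceleration, standard positive-definite proximal term).
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 20))
b = rng.normal(size=5)
d = rng.normal(size=20)

beta, tau = 1.0, 1.0               # penalty and proximal parameters
x, y = np.zeros(20), np.zeros(5)
M = (1 + 1 / tau) * np.eye(20) + beta * A.T @ A   # subproblem system matrix
for _ in range(300):
    # Proximal primal step: minimize the augmented Lagrangian plus
    # (1/(2*tau))||x - x_k||^2, which is a linear system here.
    rhs = d - A.T @ y + beta * A.T @ b + x / tau
    x = np.linalg.solve(M, rhs)
    y = y + beta * (A @ x - b)     # dual (multiplier) ascent
print(np.linalg.norm(A @ x - b))   # constraint residual -> 0 at convergence
```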

Negative Stepsizes Make Gradient-Descent-Ascent Converge

Efficient computation of min-max problems is a central question in optimization, learning, games, and controls. Arguably the most natural algorithm is gradient-descent-ascent (GDA). However, since the 1970s, conventional wisdom has argued that GDA fails to converge even on simple problems. This failure spurred an extensive literature on modifying GDA with additional building blocks such as … Read more
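
The failure the abstract refers to is easy to reproduce: on the bilinear saddle \(f(x, y) = xy\), simultaneous GDA with any fixed positive stepsize spirals away from the saddle point at the origin, since each step scales the distance by \(\sqrt{1 + \eta^2}\). The paper's negative-stepsize schedule is not reproduced here.

```python
# Classic failure mode: simultaneous GDA on f(x, y) = x*y diverges
# for any fixed positive stepsize eta.
import numpy as np

x, y, eta = 1.0, 1.0, 0.1
for _ in range(100):
    gx, gy = y, x                   # grad_x f = y, grad_y f = x
    x, y = x - eta * gx, y + eta * gy
print(np.hypot(x, y))               # distance to the saddle (0, 0) grows
```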

Paving the Way for More Accessible Cancer Care in Low-Income Countries with Optimization

Cancers are a growing cause of morbidity and mortality in low-income countries. Geographic access plays a key role in both timely diagnosis and successful treatment. In areas lacking well-developed road networks, seasonal weather events can lengthen already long travel times to access care. Expanding facilities to offer cancer care is expensive and requires staffing by … Read more