Decision Rule Approaches for Pessimistic Bilevel Linear Programs under Moment Ambiguity with Facility Location Applications

We study a pessimistic stochastic bilevel program in the context of sequential two-player games, where the leader makes a binary here-and-now decision and the follower responds with a continuous wait-and-see decision after observing the leader’s action and the revelation of the uncertainty. Only the mean, covariance, and support of the uncertainty are known. We formulate the problem as … Read more

Mathematical models and decomposition methods for the two-bar charts packing problem

We consider the two-bar charts packing problem (2-BCPP), a recent combinatorial optimization problem whose aim is to pack a set of one-dimensional items into the minimum number of bins. As opposed to the well-known bin packing problem, pairs of items are grouped to form bar charts, and a solution is only feasible if the first and … Read more
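
The abstract is cut off before the feasibility rule is fully stated, so the following is only a minimal brute-force sketch that assumes the convention that the two bars of a chart must be placed in two adjacent unit-capacity bins; the instance data and the exhaustive enumeration are illustrative and this is not one of the paper's models or decomposition methods.

```python
# Minimal brute-force sketch of a two-bar charts packing instance (not the
# paper's model). Assumption: each chart c = (a, b) puts bar a in some bin p
# and bar b in bin p + 1; every bin has capacity 1; minimize the bins used.
from itertools import product

def pack_two_bar_charts(charts, max_bins):
    """Exhaustively try all starting bins for each chart; return the best packing found."""
    best = None
    for starts in product(range(max_bins - 1), repeat=len(charts)):
        load = [0.0] * max_bins
        for (a, b), p in zip(charts, starts):
            load[p] += a          # first bar in bin p
            load[p + 1] += b      # second bar in the adjacent bin p + 1
        if all(l <= 1.0 + 1e-9 for l in load):       # capacity check
            used = sum(1 for l in load if l > 0)      # number of nonempty bins
            if best is None or used < best[0]:
                best = (used, starts)
    return best

charts = [(0.6, 0.3), (0.4, 0.5), (0.3, 0.6)]
print(pack_two_bar_charts(charts, max_bins=5))
```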

Preconditioned Gradient Descent for Overparameterized Nonconvex Burer–Monteiro Factorization with Global Optimality Certification

We consider using gradient descent to minimize the nonconvex function $f(X)=\phi(XX^{T})$ over an $n\times r$ factor matrix $X$, in which $\phi$ is an underlying smooth convex cost function defined over $n\times n$ matrices. While only a second-order stationary point $X$ can be provably found in reasonable time, if $X$ is additionally \emph{rank deficient}, then its … Read more
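
As orientation for this setting, the short NumPy sketch below runs gradient descent on $f(X)=\phi(XX^{T})$ with an overparameterized factor $X$ and a right preconditioner of the form $(X^{T}X+\epsilon I)^{-1}$, a common device for counteracting the ill-conditioning introduced by rank deficiency. The quadratic choice of $\phi$, the stepsize, and the residual-based damping are illustrative assumptions, not the paper's algorithm or its global optimality certificate.

```python
# Sketch: gradient descent on f(X) = phi(X X^T) with an overparameterized
# factor X (r > rank of the target) and a right preconditioner (X^T X + eps I)^-1.
# The quadratic phi, the stepsize, and the residual-based damping are
# illustrative assumptions, not the paper's method or certificate.
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r = 30, 2, 5                      # overparameterized: r > r_true
U = rng.standard_normal((n, r_true))
M_star = U @ U.T                             # rank-deficient target matrix

def grad_f(X):
    # phi(M) = 0.5 * ||M - M_star||_F^2  =>  grad f(X) = 2 (X X^T - M_star) X
    return 2.0 * (X @ X.T - M_star) @ X

X = rng.standard_normal((n, r))
eta = 0.1
for k in range(300):
    eps = np.linalg.norm(X @ X.T - M_star)         # damping shrinks with the residual
    P = np.linalg.inv(X.T @ X + eps * np.eye(r))   # right preconditioner
    X = X - eta * grad_f(X) @ P
print("final residual ||X X^T - M_star||_F =", np.linalg.norm(X @ X.T - M_star))
```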

Convergence of Trajectory Following Dynamic Programming algorithms for multistage stochastic problems without finite support assumptions

We introduce a class of algorithms, called Trajectory Following Dynamic Programming (TFDP) algorithms, that iteratively refine approximations of the cost-to-go functions of multistage stochastic problems with independent random variables. This framework encompasses most variants of the Stochastic Dual Dynamic Programming algorithm. Leveraging a Lipschitz assumption on the expected cost-to-go functions, we provide a new convergence and … Read more
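
For readers new to this family of methods, the toy NumPy sketch below shows the mechanism that TFDP shares with SDDP-type algorithms: a convex expected cost-to-go function is approximated from below by cutting planes generated at points visited along a trajectory. The newsvendor-style example with uniform demand and the grid search over first-stage decisions are simplifications for illustration, not the TFDP framework itself.

```python
# Toy illustration of the cut-refinement idea behind SDDP-type methods:
# approximate a convex expected cost-to-go V from below with cutting planes
# generated at visited trajectory points. Not the TFDP framework itself.
import numpy as np

price, penalty, D = 1.0, 4.0, 10.0      # order cost, shortage cost, demand ~ U(0, D)

def V(x):          # exact expected recourse cost E[penalty * max(d - x, 0)], d ~ U(0, D)
    x = np.clip(x, 0.0, D)
    return penalty * (D - x) ** 2 / (2.0 * D)

def V_subgrad(x):  # a subgradient of V at x
    return -penalty * (D - np.clip(x, 0.0, D)) / D

grid = np.linspace(0.0, D, 2001)        # candidate first-stage decisions
cuts = [(0.0, 0.0)]                     # list of (intercept a, slope b): V(x) >= a + b*x

for it in range(10):
    theta = np.max([a + b * grid for a, b in cuts], axis=0)   # current lower model
    x_k = grid[np.argmin(price * grid + theta)]               # follow the "trajectory"
    b = V_subgrad(x_k)                                        # new cut at x_k
    cuts.append((V(x_k) - b * x_k, b))
    lb = np.min(price * grid + np.max([a + b2 * grid for a, b2 in cuts], axis=0))
    ub = price * x_k + V(x_k)
    print(f"iter {it}: x = {x_k:.3f}, lower bound = {lb:.4f}, upper bound = {ub:.4f}")
```

On this instance the lower and upper bounds close on the true optimal value of 8.75 within a handful of cuts, which is exactly the behavior that convergence analyses of this kind formalize.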

Lexicographic Branch-and-Bound Column Search

We present an exact generic method for solving the pricing problem in a column generation approach, which we call branch-and-bound column search. It searches the space of all feasible columns via a branch-and-bound tree search and returns all columns with a reduced-cost value below a certain threshold. The approach is based on an idea from Krumke … Read more
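
To make the idea concrete, here is a small self-contained sketch of such a pricing search for a bin-packing-style master problem: columns are item subsets that fit a capacity, the tree branches on including or excluding each item, a simple bound on the best attainable reduced cost prunes subtrees, and every column whose reduced cost falls below the threshold is returned. The instance, the bound, and the threshold are assumptions made for illustration; the lexicographic refinement of the paper is not shown.

```python
# Sketch of a branch-and-bound column search for a bin-packing-style pricing
# problem: enumerate feasible columns (item subsets fitting the capacity),
# prune with a reduced-cost bound, and return every column whose reduced cost
# is below a threshold. Illustrative only, not the lexicographic method itself.

def column_search(weights, duals, capacity, threshold=-1e-9):
    """Return all feasible columns with reduced cost strictly below the threshold."""
    n = len(weights)
    found = []

    def recurse(i, column, load, red_cost):
        if i == n:                                   # leaf: a complete column
            if red_cost < threshold:
                found.append((red_cost, tuple(column)))
            return
        # Bound: even adding every remaining item with a positive dual cannot
        # bring the reduced cost below the threshold, so prune this subtree.
        if red_cost - sum(d for d in duals[i:] if d > 0) >= threshold:
            return
        if load + weights[i] <= capacity:            # branch: include item i
            recurse(i + 1, column + [i], load + weights[i], red_cost - duals[i])
        recurse(i + 1, column, load, red_cost)       # branch: exclude item i

    recurse(0, [], 0.0, 1.0)                         # a new bin has cost 1
    return sorted(found)

weights = [0.5, 0.4, 0.3, 0.2]
duals = [0.6, 0.5, 0.35, 0.1]        # duals of the set-covering constraints
for rc, col in column_search(weights, duals, capacity=1.0):
    print(f"column {col}: reduced cost {rc:+.2f}")
```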

The Analytics of Robust Satisficing: Predict, Optimize, Satisfice, then Fortify

We introduce a novel approach to prescriptive analytics that leverages robust satisficing techniques to determine optimal decisions in situations of risk ambiguity and prediction uncertainty. Our decision model relies on a reward function that incorporates uncertain parameters, which can be partially predicted using available side information. However, the accuracy of the linear prediction model depends … Read more

Accelerated first-order methods for a class of semidefinite programs

This paper introduces a new storage-optimal first-order method (FOM), CertSDP, for solving a special class of semidefinite programs (SDPs) to high accuracy. The class of SDPs that we consider, the exact QMP-like SDPs, is characterized by low-rank solutions, a priori knowledge of the restriction of the SDP solution to a small subspace, and standard … Read more

Primal-dual extrapolation methods for monotone inclusions under local Lipschitz continuity with applications to variational inequality, conic constrained saddle point, and convex conic optimization problems

In this paper we consider a class of structured monotone inclusion (MI) problems that consist of finding a zero in the sum of two monotone operators, in which one is maximal monotone while the other is locally Lipschitz continuous. In particular, we first propose a primal-dual extrapolation (PDE) method for solving a structured strongly MI problem … Read more
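
As background for inclusions of the form $0\in A(x)+B(x)$ with $A$ maximal monotone and $B$ locally Lipschitz continuous, the sketch below implements a classical related scheme, Tseng's forward-backward-forward splitting with a backtracking stepsize, on a toy problem in which $A$ is the normal cone of a box (so its resolvent is a projection) and $B$ is a monotone map that is only locally Lipschitz. It is meant purely as orientation and is not the primal-dual extrapolation method proposed here.

```python
# Sketch: Tseng's forward-backward-forward splitting with backtracking for a
# monotone inclusion 0 in A(x) + B(x), where A is the normal cone of a box
# (resolvent = projection) and B is monotone and only locally Lipschitz.
# A classical baseline scheme, not the paper's primal-dual extrapolation method.
import numpy as np

lo, hi = -1.0, 1.0                               # box C = [lo, hi]^2
M = np.array([[0.0, 2.0], [-2.0, 0.0]])          # skew-symmetric (monotone) part
q = np.array([1.0, -3.0])

def B(x):
    return x ** 3 + M @ x + q                    # monotone, only locally Lipschitz

def proj(x):                                     # resolvent of A = normal cone of C
    return np.clip(x, lo, hi)

x = np.zeros(2)
lam, theta = 1.0, 0.9
for k in range(200):
    Bx = B(x)
    while True:                                  # backtrack until Tseng's stepsize test holds
        y = proj(x - lam * Bx)
        if lam * np.linalg.norm(B(y) - Bx) <= theta * np.linalg.norm(y - x):
            break
        lam *= 0.5
    x = y - lam * (B(y) - Bx)
    lam *= 1.2                                   # let the stepsize recover between iterations

# natural residual of the inclusion: distance between x and proj(x - B(x))
print("x =", x, " residual =", np.linalg.norm(x - proj(x - B(x))))
```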

Accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient

In this paper we develop accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient (LLCG), which is beyond the well-studied class of convex optimization with Lipschitz continuous gradient. In particular, we first consider unconstrained convex optimization with LLCG and propose accelerated proximal gradient (APG) methods for solving it. The proposed APG methods are … Read more
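
To see why local Lipschitz continuity matters, take $f(x)=\sum_i x_i^4 + \tfrac12\|x-a\|^2$: it is convex, but its gradient is Lipschitz only on bounded sets, so no fixed stepsize $1/L$ is available a priori. The sketch below runs a textbook FISTA-style accelerated gradient method with backtracking (plus a heuristic stepsize recovery) on this function; it illustrates the coupling of acceleration with an adaptive stepsize and is not the APG methods developed in the paper.

```python
# Sketch: FISTA-style accelerated gradient method with backtracking on
# f(x) = sum(x_i^4) + 0.5*||x - a||^2, a convex function whose gradient is
# Lipschitz only on bounded sets. Textbook scheme with a heuristic stepsize
# recovery; not the APG methods developed in the paper.
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(50)

def f(x):
    return np.sum(x ** 4) + 0.5 * np.sum((x - a) ** 2)

def grad(x):
    return 4.0 * x ** 3 + (x - a)

x_prev = x = 10.0 * np.ones(50)      # start far out, where the local Lipschitz constant is large
t_prev = t = 1.0
L = 1.0                              # working estimate of a local Lipschitz constant

for k in range(500):
    L *= 0.9                         # heuristic: let the stepsize 1/L grow again
    y = x + ((t_prev - 1.0) / t) * (x - x_prev)
    g = grad(y)
    while True:                      # backtracking: enlarge L until the descent test holds
        x_new = y - g / L
        if f(x_new) <= f(y) - np.dot(g, g) / (2.0 * L):
            break
        L *= 2.0
    x_prev, x = x, x_new
    t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    if (k + 1) % 100 == 0:
        print(f"iter {k + 1}: f = {f(x):.4g}, L estimate = {L:.1f}")
```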