Capacity planning with uncertain endogenous technology learning

Optimal capacity expansion requires complex decision-making, often influenced by technology learning, which represents the reduction in expansion cost due to factors such as cumulative installed capacity. However, perfect foresight of such technology cost reductions is highly unlikely. In this work, we develop a multistage stochastic programming framework to model capacity planning problems with endogenous …
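
For context (standard modeling background, not from the truncated abstract): endogenous technology learning is commonly captured by a Wright-style learning curve, in which unit expansion cost falls as a power of cumulative installed capacity $Q$,
\[
c(Q) = c_0 \left( \frac{Q}{Q_0} \right)^{-b}, \qquad \text{learning rate} = 1 - 2^{-b},
\]
so each doubling of $Q$ cuts unit cost by a fixed fraction. One natural way to make the learning uncertain is to treat the exponent $b$ as a random parameter revealed over the stages of a multistage stochastic program.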

BilevelJuMP.jl: Modeling and Solving Bilevel Optimization in Julia

In this paper we present BilevelJuMP, a new Julia package that supports bilevel optimization within the JuMP framework. It enables the user to describe both the upper- and lower-level optimization problems using JuMP’s algebraic syntax. Due to the generality and flexibility our library inherits from JuMP’s syntax, our package …
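
As background, the generic problem class such a package targets is the bilevel program
\[
\min_{x,\, y} \; F(x, y) \quad \text{s.t.} \quad G(x, y) \le 0, \quad y \in \arg\min_{y'} \left\{ f(x, y') : g(x, y') \le 0 \right\},
\]
in which the upper level chooses $x$ while anticipating an optimal lower-level response $y$. Common solution strategies (a general note, not a claim about this package’s internals) reformulate the lower level through its KKT conditions or strong duality, turning the bilevel program into a single-level problem with complementarity constraints.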

Duality in convex stochastic optimization

This paper studies duality and optimality conditions in the general convex stochastic optimization problems introduced by Rockafellar and Wets (1976). We derive an explicit dual problem in terms of two dual variables, one of which is the shadow price of information while the other gives the marginal cost of a perturbation, much like in …
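
For readers unfamiliar with the terminology, the shadow price of information arises when the nonanticipativity constraint is dualized. Schematically (a sketch of the standard setup, not the paper’s exact dual), with $\mathcal{N}$ the space of decision processes $x$ adapted to the information filtration $(\mathcal{F}_t)$,
\[
\min_{x \in \mathcal{N}} \; \mathbb{E}\big[ f(x(\omega), \omega) \big]
\quad \leadsto \quad
\sup_{p \perp \mathcal{N}} \; \inf_{x} \; \mathbb{E}\big[ f(x(\omega), \omega) + p(\omega) \cdot x(\omega) \big],
\]
where $p \perp \mathcal{N}$ means $\mathbb{E}[p_t \mid \mathcal{F}_t] = 0$ at every stage $t$: the multiplier $p$ prices the advantage a non-adapted decision would gain from future information. The second dual variable mentioned in the abstract plays the usual role of a multiplier for explicit perturbations of the constraints.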

Small polygons with large area

A polygon is {\em small} if it has unit diameter. The maximal area of a small polygon with a fixed number of sides $n$ is not known when $n$ is even and $n \geq 14$. We determine an improved lower bound for the maximal area of a small $n$-gon for this case. The improvement affects the $1/n^3$ …
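
For scale (standard facts, not results of the paper): by the isodiametric inequality, any planar set of unit diameter has area at most $\pi/4$, and the regular $n$-gon inscribed in a circle of unit diameter, whose own diameter equals 1 exactly when $n$ is even, has area
\[
A_n = \frac{n}{8} \sin\frac{2\pi}{n} = \frac{\pi}{4} - \frac{\pi^3}{6 n^2} + O(n^{-4}).
\]
Since the regular polygon is known not to be optimal for even $n \geq 6$, improved constructions compete on lower-order terms of this gap, which is where the $1/n^3$ term mentioned above enters.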

Efficient Use of Quantum Linear System Algorithms in Interior Point Methods for Linear Optimization

Quantum computing has attracted significant interest in the optimization community because it can potentially solve classes of optimization problems faster than conventional supercomputers. Several researchers have proposed quantum computing methods, especially Quantum Interior Point Methods (QIPMs), to solve convex optimization problems such as Linear Optimization, Semidefinite Optimization, and Second-order Cone Optimization problems. Most of them have …
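
For orientation, the computational core that quantum linear system algorithms (QLSAs) are meant to accelerate inside an IPM is the Newton system solved at every iteration. For a standard-form LP $\min \{ c^\top x : Ax = b,\ x \ge 0 \}$ with dual variables $(y, s)$, this system is
\[
\begin{pmatrix} A & 0 & 0 \\ 0 & A^\top & I \\ S & 0 & X \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix}
=
\begin{pmatrix} b - A x \\ c - A^\top y - s \\ \sigma \mu e - X S e \end{pmatrix},
\]
where $X = \operatorname{diag}(x)$, $S = \operatorname{diag}(s)$, $\mu = x^\top s / n$, and $\sigma \in (0,1)$ is a centering parameter. The conditioning of this system deteriorates as $\mu \to 0$, which is exactly where the precision and readout costs of QLSAs become delicate.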

Exact SDP relaxations for quadratic programs with bipartite graph structures

For nonconvex quadratically constrained quadratic programs (QCQPs), we first show that, under certain feasibility conditions, the standard semidefinite programming (SDP) relaxation is exact for QCQPs with bipartite graph structures. The exact optimal solutions are obtained by examining the dual SDP relaxation and the rank of its optimal solution under strong duality. …
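
As a reminder of the object being analyzed, for a QCQP
\[
\min_{x \in \mathbb{R}^n} \; x^\top Q_0 x + 2 c_0^\top x \quad \text{s.t.} \quad x^\top Q_k x + 2 c_k^\top x + b_k \le 0, \quad k = 1, \dots, m,
\]
the standard (Shor) SDP relaxation introduces a matrix variable $X$ in place of $xx^\top$ and drops the rank-one requirement:
\[
\min_{X,\, x} \; \langle Q_0, X \rangle + 2 c_0^\top x \quad \text{s.t.} \quad \langle Q_k, X \rangle + 2 c_k^\top x + b_k \le 0, \quad \begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0.
\]
Exactness means the relaxation attains the QCQP’s optimal value with a rank-one optimal $X$, which is the property established here under bipartite graph structure.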

Generalizations of doubly nonnegative cones and their comparison

In this study, we examine various extensions of the doubly nonnegative (DNN) cone, which is frequently used in completely positive programming (CPP) to achieve a tighter relaxation than the positive semidefinite cone. To provide a similarly tight relaxation for generalized CPP (GCPP), inner-approximation hierarchies of the generalized copositive cone are exploited to obtain …
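
To fix notation (standard definitions, not specific to this paper): writing $\mathcal{S}^n_+$ for the positive semidefinite cone and $\mathcal{N}^n$ for the symmetric entrywise-nonnegative matrices, the completely positive and doubly nonnegative cones satisfy
\[
\mathcal{CP}_n = \left\{ B B^\top : B \ge 0 \right\} \;\subseteq\; \mathcal{DNN}_n = \mathcal{S}^n_+ \cap \mathcal{N}^n,
\]
with equality if and only if $n \le 4$. Replacing $\mathcal{CP}_n$ by $\mathcal{DNN}_n$ thus gives a relaxation tighter than using $\mathcal{S}^n_+$ alone, and since the dual of $\mathcal{CP}_n$ is the copositive cone, inner-approximation hierarchies on the (generalized) copositive side correspond dually to tractable cones usable in GCPP relaxations.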

Accelerating nuclear-norm regularized low-rank matrix optimization through Burer-Monteiro decomposition

This work proposes a rapid algorithm, BM-Global, for nuclear-norm-regularized convex low-rank matrix optimization problems. BM-Global efficiently decreases the objective value via low-cost steps that leverage the nonconvex but smooth Burer-Monteiro (BM) decomposition, while effectively escaping the saddle points and spurious local minima ubiquitous in the BM form, to obtain guarantees of fast convergence rates to the …
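
For context, the BM decomposition writes the matrix variable as a product of thin factors, $X = U V^\top$, and the nuclear norm has the well-known variational characterization
\[
\|X\|_* = \min_{U V^\top = X} \; \frac{1}{2} \left( \|U\|_F^2 + \|V\|_F^2 \right),
\]
so the convex problem $\min_X f(X) + \lambda \|X\|_*$ can be traded for the smooth but nonconvex surrogate $\min_{U, V} f(U V^\top) + \frac{\lambda}{2} ( \|U\|_F^2 + \|V\|_F^2 )$. The design question, as framed above, is how to keep the cheap factored iterations while retaining the global guarantees of the convex formulation.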

A nearly linearly convergent first-order method for nonsmooth functions with quadratic growth

Classical results show that gradient descent converges linearly to minimizers of smooth strongly convex functions. A natural question is whether there exists a locally nearly linearly convergent method for nonsmooth functions with quadratic growth. This work designs such a method for a wide class of nonsmooth and nonconvex locally Lipschitz functions, including max-of-smooth, Shapiro’s decomposable …
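
For reference, quadratic growth, the regularity condition at issue here, requires that near the solution set $\mathcal{X}^* = \arg\min f$,
\[
f(x) \;\ge\; \min f + \frac{\mu}{2} \, \operatorname{dist}(x, \mathcal{X}^*)^2 \quad \text{for some } \mu > 0,
\]
a condition strictly weaker than strong convexity: it bounds how fast function values must rise away from minimizers without demanding smoothness or convexity of $f$ itself, and it is the natural setting in which (nearly) linear rates can be hoped for.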

Error Bound and Isocost Imply Linear Convergence of DCA-based Algorithms to D-stationarity

We consider a class of structured nonsmooth difference-of-convex minimization problems that can be written as the difference of two convex functions, possibly both nonsmooth, with the second one taking the form of a maximum of finitely many smooth convex functions. We propose two proximal difference-of-convex algorithms with extrapolation for potential acceleration; these converge to a weak/standard d-stationary …
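
As background (a sketch of the standard template, not necessarily the paper’s exact schemes), for $f = g - h$ with $g$ convex and $h = \max_{1 \le i \le m} h_i$ a maximum of smooth convex functions, a proximal DCA step linearizes $h$ at the current iterate through an active piece $i_k \in \arg\max_i h_i(x^k)$:
\[
x^{k+1} \in \arg\min_{x} \left\{ g(x) - \langle \nabla h_{i_k}(x^k),\, x - x^k \rangle + \frac{1}{2\beta} \|x - x^k\|^2 \right\}.
\]
Extrapolated variants replace $x^k$ in the proximal term by $x^k + \theta_k (x^k - x^{k-1})$ for momentum-style acceleration, and reaching d-stationary points, i.e. $f'(x; d) \ge 0$ for all directions $d$, which is stronger than mere criticality for this class, typically requires accounting for every active piece rather than a single $i_k$.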