Using Two-Dimensional Projections for Stronger Separation and Propagation of Bilinear Terms

One of the most fundamental ingredients in mixed-integer nonlinear programming solvers is the well-known McCormick relaxation for a product of two variables x and y over a box-constrained domain. The starting point of this paper is the fact that the convex hull of the graph of xy can be much tighter when computed over … Read more
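
For reference (this is the classical relaxation the paper starts from, stated in standard notation rather than the authors'), the McCormick inequalities for $w = xy$ over the box $[x^L, x^U] \times [y^L, y^U]$ are
\[
w \ge y^L x + x^L y - x^L y^L, \qquad
w \ge y^U x + x^U y - x^U y^U,
\]
\[
w \le y^U x + x^L y - x^L y^U, \qquad
w \le y^L x + x^U y - x^U y^L,
\]
and they describe the convex hull of the graph of $xy$ over the full box; the abstract's observation is that this hull can be much tighter when computed over a restricted (projected) domain.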

An Enhanced Logical Benders Approach for Linear Programs with Complementarity Constraints

This work extends the logical Benders approaches of Hu et al. (2008) and Bai et al. (2013) for globally solving Linear Programs with Complementarity Constraints. By interpreting the logical Benders method as a reversed branch-and-bound method, in which the exploration starts from the leaf nodes of an enumeration tree, we provide … Read more
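
As background (in generic notation, not necessarily the authors'), a Linear Program with Complementarity Constraints (LPCC) can be written as
\[
\min_{x,\,y}\; c^\top x + d^\top y
\quad\text{s.t.}\quad Ax + By \ge b, \qquad 0 \le y \;\perp\; q + Nx + My \ge 0,
\]
where $y \perp w$ means $y^\top w = 0$. Roughly speaking, a logical Benders scheme works with a master problem over the binary choice of which side of each complementarity pair is forced to zero, adding logical cuts derived from the LP subproblems.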

Inertial Block Mirror Descent Method for Non-Convex Non-Smooth Optimization

In this paper, we propose inertial versions of block coordinate descent methods for solving non-convex non-smooth composite optimization problems. We use the general framework of Bregman distance functions to compute the proximal maps. Our method not only allows using two different extrapolation points to evaluate gradients and to add the inertial force, but also takes advantage … Read more
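
As a generic illustration of such a step (a sketch of an inertial Bregman proximal update, not the authors' exact scheme), with extrapolation parameter $\beta_k$, step size $t_k$, and Bregman distance $D_h(u,v) = h(u) - h(v) - \langle \nabla h(v), u - v \rangle$:
\[
y^k = x^k + \beta_k\,(x^k - x^{k-1}), \qquad
x^{k+1} \in \arg\min_x\; \langle \nabla f(y^k), x \rangle + g(x) + \tfrac{1}{t_k}\, D_h(x, x^k).
\]
Using two different extrapolation points would mean evaluating $\nabla f$ at one extrapolated point while anchoring the inertial force or the Bregman term at another.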

On Electricity Market Equilibria with Storage: Modeling, Uniqueness, and a Distributed ADMM

We consider spot-market trading of electricity including storage operators as additional agents besides producers and consumers. Storages allow for shifting produced electricity from one time period to a later one. Due to this, multiple market equilibria may occur even if classical uniqueness assumptions for the case without storages are satisfied. For models containing storage operators, … Read more
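
A minimal illustration of the shifting mechanism (illustrative notation, not the paper's model) is a state-of-charge balance over time periods $t$:
\[
s_{t+1} = s_t + c_t - d_t, \qquad 0 \le s_t \le \bar{S}, \quad c_t,\, d_t \ge 0,
\]
where $c_t$ and $d_t$ denote the energy charged into and discharged from the storage in period $t$, so electricity produced in one period can be sold in a later one.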

Limited-Memory BFGS with Displacement Aggregation

A displacement aggregation strategy is proposed for the curvature pairs stored in a limited-memory BFGS (a.k.a. L-BFGS) method such that the resulting (inverse) Hessian approximations are equal to those that would be derived from a full-memory BFGS method. This means that, if a sufficiently large number of pairs are stored, then an optimization algorithm employing … Read more
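
For context, the stored curvature pairs $(s_i, y_i) = (x_{i+1} - x_i,\ \nabla f(x_{i+1}) - \nabla f(x_i))$ are typically consumed by the standard two-loop recursion sketched below. This is the textbook L-BFGS recursion in plain Python with illustrative names, not the displacement-aggregation variant proposed in the paper.

import numpy as np

def two_loop_direction(grad, s_list, y_list):
    # Standard L-BFGS two-loop recursion (Nocedal & Wright, Alg. 7.4):
    # returns -H_k * grad, where H_k is the inverse Hessian approximation
    # implicitly defined by the stored curvature pairs (s_i, y_i).
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q = q - alpha * y
        alphas.append(alpha)
    # Initial scaling gamma_k = s^T y / y^T y from the newest pair.
    gamma = (np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
             if s_list else 1.0)
    r = gamma * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + (alpha - beta) * s
    return -r

A full-memory BFGS method instead maintains a dense approximation; the displacement aggregation of the paper reworks the stored pairs so that the limited-memory recursion reproduces the full-memory approximation.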

Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact

In many cases in which one wishes to minimize a complicated or expensive function, it is convenient to employ cheap approximations, at least when the current approximation to the solution is poor. Adequate strategies for deciding the accuracy desired at each stage of optimization are crucial for the global convergence and overall efficiency of the … Read more

A switching cost aware rounding method for relaxations of mixed-integer optimal control problems

This article investigates a class of Mixed-Integer Optimal Control Problems (MIOCPs) with switching costs. We introduce the problem class of Minimal-Switching-Cost Optimal Control Problems (MSCP) with an objective function that consists of two summands, a continuous term depending on the state vector and an encoding of the discrete switching costs. State vectors of Mixed-Integer Optimal … Read more
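
Schematically (with illustrative notation, not the paper's), such an objective takes the form
\[
\min_{x(\cdot),\,\omega(\cdot)}\; \int_0^T L\big(x(t)\big)\,\mathrm{d}t \;+\; \sum_{k} c_{\omega_k \rightarrow \omega_{k+1}},
\]
where $x(\cdot)$ is the state trajectory, $\omega(\cdot)$ the integer-valued control, and $c_{i \rightarrow j} \ge 0$ the cost incurred each time the control switches from mode $i$ to mode $j$. A switching-cost-aware rounding would then pick an integer control that stays close to the relaxed solution while keeping the accumulated switching cost small.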

High-Order Evaluation Complexity for Convexly-Constrained Optimization with Non-Lipschitzian Group Sparsity Terms

This paper studies high-order evaluation complexity for partially separable convexly-constrained optimization involving non-Lipschitzian group sparsity terms in a nonconvex objective function. We propose a partially separable adaptive regularization algorithm using a $p$-th order Taylor model and show that the algorithm can produce an $(\epsilon,\delta)$-approximate $q$-th-order stationary point in at most $O(\epsilon^{-(p+1)/(p-q+1)})$ evaluations of the objective … Read more
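
To make the bound concrete: for a $p$-th order Taylor model and a target of $q$-th-order stationarity, the stated evaluation bound is
\[
O\!\left(\epsilon^{-\frac{p+1}{p-q+1}}\right),
\]
so, for instance, $p = 2$ and $q = 1$ give $O(\epsilon^{-3/2})$, the familiar rate of cubic-regularization-type methods, while larger $p$ improves the exponent further.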

Minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

An adaptive regularization algorithm using inexact function and derivatives evaluations is proposed for the solution of composite nonsmooth nonconvex optimization. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem's functions and their derivatives for finding an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes that provided by [Bellavia, Gurioli, … Read more

Weak subgradient algorithm for solving nonsmooth nonconvex unconstrained optimization problems

This paper presents a weak subgradient based method for solving nonconvex unconstrained optimization problems. The method uses a weak subgradient of the objective function at the current point to generate a new point at each iteration. The concept of the weak subgradient is based on the idea of using supporting cones to the graph of … Read more
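
For orientation (the standard definition from the weak subgradient literature, not restated in this excerpt): a pair $(v, c) \in \mathbb{R}^n \times \mathbb{R}_+$ is a weak subgradient of $f$ at $\bar{x}$ if
\[
f(x) \;\ge\; f(\bar{x}) + \langle v, x - \bar{x} \rangle - c\,\|x - \bar{x}\| \qquad \text{for all } x,
\]
i.e., the cone-shaped surface defined by the right-hand side, with vertex at $(\bar{x}, f(\bar{x}))$, supports the graph of $f$ from below; unlike the classical subgradient, this requires no convexity of $f$.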