Admissibility of solution estimators for stochastic optimization

We look at stochastic optimization problems through the lens of statistical decision theory. In particular, we address admissibility, in the statistical decision theory sense, of the natural sample average estimator for a stochastic optimization problem (also known as the empirical risk minimization (ERM) rule in the learning literature). It is well known that for …
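
As a minimal illustration of the sample average / ERM rule mentioned above (our own sketch, not code from the paper): for the quadratic loss $F(x,\xi)=(x-\xi)^2$, minimizing the sample average of the loss recovers the sample mean.

```python
import numpy as np

def erm_solution(loss_grad, samples, x0, lr=0.1, iters=500):
    # Minimize the sample average (1/n) * sum_i F(x, xi_i) by gradient descent.
    x = x0
    for _ in range(iters):
        g = np.mean([loss_grad(x, xi) for xi in samples])
        x -= lr * g
    return x

# Quadratic loss F(x, xi) = (x - xi)^2: the ERM rule reduces to the sample mean.
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=200)
x_hat = erm_solution(lambda x, xi: 2.0 * (x - xi), samples, x0=0.0)
```

The admissibility question in the abstract asks whether such an estimator can be uniformly dominated by another decision rule.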

Hybrid Rebalancing with Dynamic Hubbing for Free-floating Bike Sharing Using Multi-objective Simulation Optimization

For the rebalancing problem of free-floating bike sharing systems, we propose dynamic hubbing (i.e., dynamically determining geofencing areas) and hybrid rebalancing (combining user-based and operator-based strategies), and solve the problem with a novel multi-objective simulation optimization approach. Given historical usage data and real-time bike GPS location information, dynamic geofenced areas (hubs) are determined to encourage users …

Discrete Approximation Scheme in Distributionally Robust Optimization

Discrete approximation, which has been the prevailing scheme in stochastic programming over the past decade, has recently been extended to distributionally robust optimization (DRO). In this paper we conduct a rigorous quantitative stability analysis of discrete approximation schemes for DRO, measuring the approximation error in terms of the discretization sample size. For the ambiguity set defined through …
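
To make the idea of discretization concrete, here is a hedged sketch (our own illustration, not the paper's scheme): once the ambiguity set is restricted to distributions supported on $n$ sample points, the worst-case expected loss becomes a linear program in the probability weights. The first-moment ambiguity set used here is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete approximation of a worst-case expectation in DRO: restrict the
# ambiguity set to distributions supported on n sample points, here with an
# assumed first-moment constraint |E[xi] - mu0| <= eps.
rng = np.random.default_rng(1)
xi = rng.normal(0.0, 1.0, size=50)      # discretized support points
loss = xi ** 2                          # loss evaluated at each support point
mu0, eps = 0.0, 0.1

n = xi.size
res = linprog(
    c=-loss,                            # maximize E_p[loss] -> minimize -loss @ p
    A_ub=np.vstack([xi, -xi]),          # moment band: mu0-eps <= xi @ p <= mu0+eps
    b_ub=[mu0 + eps, -(mu0 - eps)],
    A_eq=np.ones((1, n)),               # probabilities sum to one
    b_eq=[1.0],
    bounds=[(0, None)] * n,
)
worst_case = -res.fun
```

The stability question studied in the paper concerns how such a discretized worst-case value converges to the true one as the sample size grows.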

An Augmented Lagrangian Method for Quasi-Equilibrium Problems

In this paper, we propose an Augmented Lagrangian algorithm for solving a general class of possibly non-convex problems called quasi-equilibrium problems (QEPs). We define an Augmented Lagrangian bifunction associated with QEPs, introduce a secondary QEP as a measure of infeasibility, and discuss several special classes of QEPs within our theoretical framework. For obtaining global …
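
For context, the problem class can be stated in its standard form (background, not quoted from the paper):

```latex
% Given a closed convex set C, a set-valued map K : C \rightrightarrows C and a
% bifunction f : C \times C \to \mathbb{R}, the QEP is:
\text{find } x^{*} \in K(x^{*}) \ \text{ such that } \
f(x^{*}, y) \ge 0 \quad \text{for all } y \in K(x^{*}).
```

When $K(x) \equiv K$ is a fixed set this reduces to the classical equilibrium problem, and the choice $f(x,y) = \langle F(x), y - x \rangle$ recovers quasi-variational inequalities.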

A Status Report on Conflict Analysis in Mixed Integer Nonlinear Programming

Mixed integer nonlinear programs (MINLPs) are arguably among the hardest optimization problems, with a wide range of applications. MINLP solvers that are based on linear relaxations and spatial branching work similarly to mixed integer programming (MIP) solvers in the sense that they are based on a branch-and-cut algorithm, enhanced by various heuristics, domain propagation, and …

Local Rapid Learning for Integer Programs

Conflict learning algorithms are an important component of modern MIP and CP solvers. However, strong conflict information is typically gained by depth-first search. While this is the natural mode for CP solving, it is not for MIP solving. Rapid Learning is a hybrid CP/MIP approach where CP search is applied at the root to learn …

Towards an efficient Augmented Lagrangian method for convex quadratic programming

Interior point methods have attracted most of the attention in recent decades for solving large-scale convex quadratic programming problems. In this paper we take a different route and present an augmented Lagrangian method for convex quadratic programming based on recent developments for nonlinear programming. In our approach, box constraints are penalized while …
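
A minimal sketch of the general approach as we read it from the abstract, with all parameter choices ($\rho$, step size, iteration counts) our own assumptions: the box constraints are treated as penalized inequalities in an augmented Lagrangian, and each outer iteration pairs an approximate inner minimization with a first-order multiplier update.

```python
import numpy as np

def alm_box_qp(Q, c, lo, hi, rho=10.0, outer=30, inner=200, lr=0.05):
    # Augmented Lagrangian sketch for  min 0.5 x'Qx + c'x  s.t.  lo <= x <= hi,
    # with the box written as inequality constraints g(x) = (lo - x, x - hi) <= 0.
    n = c.size
    x = np.zeros(n)
    lam = np.zeros(2 * n)                   # multipliers for the 2n box inequalities
    for _ in range(outer):
        for _ in range(inner):              # inner loop: gradient descent on the AL
            g = np.concatenate([lo - x, x - hi])
            active = np.maximum(0.0, lam + rho * g)
            # gradient of the AL: grad f + sum_i max(0, lam_i + rho*g_i) * grad g_i,
            # where grad g is -I for the lower bounds and +I for the upper bounds
            grad = Q @ x + c - active[:n] + active[n:]
            x = x - lr * grad
        g = np.concatenate([lo - x, x - hi])
        lam = np.maximum(0.0, lam + rho * g)  # first-order multiplier update
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -2.0])
x_star = alm_box_qp(Q, c, lo=np.zeros(2), hi=0.5 * np.ones(2))
```

On this toy instance the unconstrained minimizer $(1,1)$ is clipped by the box, and the iterates approach $(0.5, 0.5)$ with the upper-bound multipliers converging to the KKT value $1$.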

Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems subject to convex constraints

Given a twice-continuously differentiable vector-valued function $r(x)$, a local minimizer of $\|r(x)\|_2$ within a convex set is sought. We propose and analyse tensor-Newton methods, in which $r(x)$ is replaced locally by its second-order Taylor approximation. Convergence is controlled by regularization of various orders. We establish global convergence to a constrained first-order critical point of $\|r(x)\|_2$, …
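
In symbols, the construction described above can be sketched as follows (illustrative notation: the exponent $p$ and weights $\sigma_k$ stand in for the abstract's "regularization of various orders" and are our assumptions, not necessarily the authors' exact choices):

```latex
% Each residual component is replaced by its second-order Taylor model at x_k,
t_i(x_k, s) = r_i(x_k) + \nabla r_i(x_k)^{T} s
              + \tfrac{1}{2}\, s^{T} \nabla^{2} r_i(x_k)\, s,
% and the step solves a regularized constrained subproblem:
s_k \in \arg\min_{x_k + s \in C} \ \tfrac{1}{2}\, \| t(x_k, s) \|_2^{2}
        + \tfrac{\sigma_k}{p}\, \| s \|_2^{p}.
```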

Convexification of polynomial optimization problems by means of monomial patterns

Convexification is a core technique in global polynomial optimization. Currently, two different approaches compete in practice and in the literature. First, there are general approaches rooted in nonlinear programming. They are comparatively cheap from a computational point of view, but typically do not provide good (tight) relaxations with respect to bounds for the original problem. Second, approaches …

Application of outer approximation to forecasting losses and scenarios for portfolios with highly nonlinear risk

The purpose of this paper is to find appropriate solutions to concave quadratic programs using an outer approximation algorithm, one of the algorithms of global optimization, targeting objective functions with strong concavity, i.e., portfolios with highly nonlinear risk. Firstly, the target model is a mathematical optimization program to …
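
As background on the problem class (standard facts, not specific to this paper):

```latex
% A concave quadratic program minimizes a concave objective over a polyhedron:
\min_{x} \ f(x) = c^{T} x + \tfrac{1}{2}\, x^{T} Q x
\quad \text{s.t.} \quad A x \le b, \qquad Q \preceq 0.
```

Since $f$ is concave, its global minimum over the feasible polytope is attained at a vertex; outer approximation methods for this class typically minimize over a sequence of enclosing polyhedra that are refined by cutting planes.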