Derivative-free Methods for Mixed-Integer Constrained Optimization Problems

Methods which do not use any derivative information are becoming popular among researchers, since they make it possible to solve many real-world engineering problems. Such problems are frequently characterized by the presence of discrete variables, which can further complicate the optimization process. In this paper, we propose derivative-free algorithms for solving continuously differentiable Mixed Integer NonLinear Programming … Read more
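The mixed-variable setting can be illustrated with a deliberately naive derivative-free local search (a generic sketch for illustration only, not the algorithms proposed in the paper): continuous coordinates are polled on a shrinking mesh, integer coordinates with unit moves, and only function values are used.

```python
def mixed_integer_search(f, x_cont, x_int, step=1.0, min_step=1e-6):
    """Naive derivative-free local search over mixed variables: continuous
    coordinates are polled with +/- step moves (the step is halved when no
    move improves), integer coordinates with +/- 1 moves, and only values
    of f(x_cont, x_int) are ever used -- no derivatives."""
    x_cont, x_int = list(x_cont), list(x_int)
    best = f(x_cont, x_int)
    while step >= min_step:
        improved = False
        for i in range(len(x_cont)):          # poll continuous coordinates
            for d in (step, -step):
                trial = x_cont.copy()
                trial[i] += d
                val = f(trial, x_int)
                if val < best:
                    x_cont, best, improved = trial, val, True
        for j in range(len(x_int)):           # poll integer coordinates
            for d in (1, -1):
                trial = x_int.copy()
                trial[j] += d
                val = f(x_cont, trial)
                if val < best:
                    x_int, best, improved = trial, val, True
        if not improved:
            step *= 0.5                       # refine the continuous mesh only
    return x_cont, x_int, best
```

Note that the integer moves are never refined below one unit, which is exactly why discrete variables complicate the analysis: there is no meaningful "mesh refinement" for them.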

SQP Methods for Parametric Nonlinear Optimization

Sequential quadratic programming (SQP) methods are known to be efficient for solving a series of related nonlinear optimization problems because of desirable hot and warm start properties: a solution for one problem is a good estimate of the solution of the next. However, standard SQP solvers contain elements to enforce global convergence that can interfere … Read more
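The warm-start effect can be illustrated on a toy parametric problem (a hypothetical example, not the solver studied in the paper): Newton's method on the KKT system of min (x1-1)^2 + (x2-2)^2 subject to x1^2 + x2^2 = t, re-solved for a nearby value of t starting from the previous solution.

```python
import numpy as np

def sqp_newton(t, x0, lam0=0.0, tol=1e-10, max_iter=50):
    """Newton's method on the KKT conditions of
        min (x1-1)^2 + (x2-2)^2   s.t.  x1^2 + x2^2 = t,
    a toy stand-in for one problem in a parametric sequence.
    Returns the primal point, the multiplier, and the iteration count."""
    x, lam = np.asarray(x0, float).copy(), float(lam0)
    for k in range(1, max_iter + 1):
        grad_f = 2.0 * (x - np.array([1.0, 2.0]))
        grad_c = 2.0 * x
        c = x @ x - t
        F = np.concatenate([grad_f + lam * grad_c, [c]])  # KKT residual
        if np.linalg.norm(F) < tol:
            return x, lam, k - 1
        H = (2.0 + 2.0 * lam) * np.eye(2)       # Hessian of the Lagrangian
        J = np.block([[H, grad_c[:, None]],
                      [grad_c[None, :], np.zeros((1, 1))]])
        step = np.linalg.solve(J, -F)           # Newton step on (x, lam)
        x, lam = x + step[:2], lam + step[2]
    return x, lam, max_iter
```

Solving for t = 1.0 from a cold start and then for t = 1.1 from the previous (x, lam) typically needs fewer Newton iterations than a fresh cold start, which is the property the abstract refers to.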

A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework

We propose a new family of subgradient- and gradient-based methods which converges with optimal complexity for convex optimization problems whose feasible region is simple enough. This includes cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the way of constructing the subproblems, which … Read more
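A minimal member of such a family is the projected subgradient method with the classic diminishing step alpha0 / sqrt(k) (shown here on a box-constrained l1 problem as an illustration; the paper's framework is far more general):

```python
import numpy as np

def projected_subgradient(c, lo=0.0, hi=1.0, alpha0=0.5, iters=2000):
    """Projected subgradient method for  min_x ||x - c||_1  over the box
    [lo, hi]^n.  A subgradient of the non-smooth objective is sign(x - c);
    projection onto the box is a coordinate-wise clip."""
    c = np.asarray(c, float)
    x = np.clip(np.zeros_like(c), lo, hi)
    best_x, best_f = x.copy(), np.abs(x - c).sum()
    for k in range(1, iters + 1):
        g = np.sign(x - c)                       # subgradient of ||x - c||_1
        x = np.clip(x - alpha0 / np.sqrt(k) * g, lo, hi)
        fx = np.abs(x - c).sum()
        if fx < best_f:                          # subgradient steps need not
            best_x, best_f = x.copy(), fx        # descend, so track the best
    return best_x, best_f
```

Tracking the best iterate rather than the last one is essential: unlike gradient descent on a smooth function, subgradient steps can increase the objective.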

A derivative-free trust-funnel method for equality-constrained nonlinear optimization

In this work, we look into new derivative-free methods to solve equality-constrained optimization problems. Of particular interest are trust-region techniques, which have been investigated for the unconstrained and bound-constrained cases. For solving equality-constrained optimization problems, we introduce a derivative-free adaptation of the trust-funnel method combined with a self-correcting geometry scheme and present some encouraging … Read more

A trust-region derivative-free algorithm for constrained optimization

We propose a trust-region algorithm for constrained optimization problems in which the derivatives of the objective function are not available. In each iteration, the objective function is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region. Since the constraints are handled … Read more
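The interpolation-based iteration described above can be sketched in one dimension (an illustrative simplification of the general scheme, not the authors' algorithm): fit a quadratic through three sample points, minimize it within the trust region, and update the radius from the ratio of actual to predicted reduction.

```python
def dfo_trust_region_1d(f, x0, delta=1.0, tol=1e-8, max_iter=200):
    """Derivative-free trust-region iteration in one variable: interpolate
    f at x - delta, x, x + delta by a quadratic model m(s) = a s^2 + b s + fx
    (s is the step from x), minimize m over |s| <= delta, and accept or
    reject the trial point by the ratio rho of actual to predicted reduction."""
    x, fx = float(x0), f(x0)
    while delta >= tol and max_iter > 0:
        max_iter -= 1
        vm, vp = f(x - delta), f(x + delta)
        a = (vm - 2.0 * fx + vp) / (2.0 * delta * delta)  # interpolation
        b = (vp - vm) / (2.0 * delta)                     # coefficients
        m = lambda s: (a * s + b) * s + fx                # m(0) = fx
        cand = [-delta, delta]
        if a > 0.0:                               # interior stationary point,
            cand.append(min(max(-b / (2.0 * a), -delta), delta))  # clamped
        s = min(cand, key=m)
        ft = f(x + s)
        pred = fx - m(s)                          # predicted reduction, >= 0
        rho = (fx - ft) / pred if pred > 0.0 else 0.0
        if rho > 0.1:                             # accept the trial step
            x, fx = x + s, ft
            if rho > 0.75:
                delta *= 2.0                      # very successful: expand
        else:
            delta *= 0.5                          # reject: shrink the region
    return x, fx
```

Constraint handling, the real subject of the paper, is deliberately absent here: the sketch only shows the model-minimize-ratio-update loop common to trust-region derivative-free methods.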

Problem Formulations for Simulation-based Design Optimization using Statistical Surrogates and Direct Search

Typical challenges of simulation-based design optimization include unavailable gradients and unreliable approximations thereof, expensive function evaluations, numerical noise, multiple local optima and the failure of the analysis to return a value to the optimizer. One possible remedy to alleviate these issues is to use surrogate models in lieu of the computational models or simulations and … Read more
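The surrogate idea can be shown with a deliberately simple one-dimensional sketch (a quadratic least-squares surrogate standing in for the statistical models discussed in the paper): sample the expensive simulation at a few points, fit a cheap model, and spend further expensive evaluations only at the surrogate's minimizer.

```python
import numpy as np

def surrogate_step(f_expensive, xs):
    """One cycle of surrogate-based optimization: fit a cheap quadratic
    surrogate to sampled (x, f(x)) data, minimize the surrogate in closed
    form, and evaluate the expensive function only at its minimizer."""
    xs = np.asarray(xs, float)
    ys = np.array([f_expensive(x) for x in xs])   # the only expensive calls
    a, b, c = np.polyfit(xs, ys, 2)               # surrogate m(x) = ax^2+bx+c
    if a <= 0:
        raise ValueError("surrogate is not convex; resample")
    x_new = -b / (2 * a)                          # surrogate minimizer
    return x_new, f_expensive(x_new)
```

Even this crude loop illustrates the economics: the surrogate absorbs the numerical noise and never fails to return a value, while the simulation budget is spent only at promising candidates.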

On Calmness of the Argmin Mapping in Parametric Optimization Problems

Recently, Cánovas et al. (2013) presented an interesting result: the argmin mapping of a linear semi-infinite program under canonical perturbations is calm if and only if some associated linear semi-infinite inequality system is calm. Using classical tools from parametric optimization, we show that the if-direction of this condition holds in a much more general framework … Read more

Improving direct search algorithms by multilevel optimization techniques

Direct Search algorithms are classical derivative-free methods for optimization. Though endowed with solid theoretical properties, they are not well suited for large-scale problems due to slow convergence and scaling issues. In this paper, we discuss how such limitations can be circumvented, on problems for which a hierarchy of objective functions is available, by using multilevel … Read more
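The classical direct-search iteration the abstract starts from can be sketched as a plain compass search (the single-level baseline, not the multilevel variant the paper develops):

```python
import numpy as np

def compass_search(f, x0, step=1.0, min_step=1e-6, max_iter=10000):
    """Classical compass (coordinate) search: poll the 2n points
    x +/- step * e_i, move to the first improving point, and halve the
    step when no poll point improves.  Only function values are used."""
    x = np.asarray(x0, float).copy()
    fx = f(x)
    for _ in range(max_iter):
        if step < min_step:
            break
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step          # poll along coordinate i
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                          # refine the mesh
    return x, fx
```

The 2n polls per iteration and the purely local mesh refinement make the scaling issue mentioned in the abstract concrete: cost per iteration grows linearly in the dimension while progress per iteration does not.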

A Note on Lerner Index, Cross-Elasticity and Revenue Optimization Invariants

We study common properties of retail pricing models in a general framework of calculus of variations. In particular, we observe that for any demand model, optimal de-seasoned revenue rate divided by price elasticity is time invariant. We also obtain a generalization of a well-known inverse relationship between price elasticity of demand and Lerner index. … Read more
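The well-known relationship the note generalizes is the standard textbook derivation from the first-order condition for profit maximization with demand q(p) and constant marginal cost c:

```latex
% Profit and its first-order condition at the optimal price p^*:
\pi(p) = (p - c)\, q(p), \qquad
\pi'(p^*) = q(p^*) + (p^* - c)\, q'(p^*) = 0,
% which rearranges to the inverse-elasticity rule for the Lerner index:
L \;=\; \frac{p^* - c}{p^*}
  \;=\; -\,\frac{q(p^*)}{p^*\, q'(p^*)}
  \;=\; -\,\frac{1}{\varepsilon},
\qquad
\varepsilon \;=\; \frac{p\, q'(p)}{q(p)} .
```

Since demand slopes downward, the elasticity is negative and the Lerner index L = -1/ε is positive: the less elastic the demand, the larger the optimal markup.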

A Stochastic Quasi-Newton Method for Large-Scale Optimization

The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust … Read more
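The idea of decoupling curvature estimation from gradient noise can be sketched as follows (a simplified illustration on a quadratic, with an exact Hessian-vector product standing in for a subsampled one; the function name and all parameters are illustrative, not the authors' exact method): noisy gradients drive the iteration, while curvature pairs are formed only every L steps from differences of averaged iterates.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_loop(g, pairs):
    """L-BFGS two-loop recursion: apply the inverse-Hessian estimate
    built from curvature pairs (s, y) to the vector g."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):               # newest pair first
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    if pairs:
        s, y = pairs[-1]
        q *= (y @ s) / (y @ y)                 # standard initial scaling
    for (s, y), a in zip(pairs, reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def sqn(A, b, x0, steps=400, L=10, mem=5, lr=0.1, noise=0.1):
    """Sketch of a stochastic quasi-Newton loop on f(x) = 0.5 x'Ax - b'x.
    Noisy gradients drive the steps; every L iterations a curvature pair
    (s, y) is built from *averaged* iterates and a Hessian-vector product
    y = A s, keeping the curvature estimate clean despite gradient noise."""
    x = np.asarray(x0, float).copy()
    pairs, avg, prev_avg = [], np.zeros_like(x), None
    for k in range(1, steps + 1):
        g = A @ x - b + noise * rng.standard_normal(x.size)  # noisy gradient
        d = two_loop(g, pairs) if pairs else g
        x -= (lr / np.sqrt(k)) * d
        avg += x
        if k % L == 0:
            avg /= L                           # mean of the last L iterates
            if prev_avg is not None:
                s = avg - prev_avg
                y = A @ s                      # exact HVP; subsampled in SQN
                if s @ y > 1e-10:              # keep only positive curvature
                    pairs.append((s, y))
                    pairs = pairs[-mem:]       # limited memory
            prev_avg, avg = avg, np.zeros_like(x)
    return x
```

The key design choice this mirrors is that y comes from a Hessian-vector product on the smoothed displacement s, not from differences of noisy gradients, so the (s, y) pairs stay consistent even when individual gradients are unreliable.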