Worst case complexity of direct search under convexity

In this paper we prove that the broad class of direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, exhibits the same global rate or worst-case complexity bound as the gradient method for the unconstrained minimization of a convex and smooth function. More precisely, it will be shown that …
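The sufficient-decrease acceptance rule mentioned above can be sketched as follows. This is a minimal illustration of directional direct search, not the paper's exact algorithm; the function name, polling set, and forcing function rho are assumptions for the example.

```python
import numpy as np

def direct_search(f, x0, alpha0=1.0, rho=lambda a: 1e-4 * a**2,
                  tol=1e-8, max_iter=1000):
    """Directional direct search with a sufficient-decrease test (sketch).

    Polls the coordinate directions and their negatives; a trial point is
    accepted only if it decreases f by at least rho(alpha), otherwise the
    step size alpha is halved.
    """
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])  # positive spanning set
    for _ in range(max_iter):
        if alpha < tol:
            break
        fx = f(x)
        for d in directions:
            trial = x + alpha * d
            if f(trial) < fx - rho(alpha):  # sufficient decrease achieved
                x = trial
                break
        else:
            alpha *= 0.5  # unsuccessful poll: shrink the step size
    return x
```

On a smooth convex quadratic this iteration drives the step size, and hence a stationarity measure, below any threshold, which is the regime the complexity bound above addresses.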

A merit function approach for direct search

In this paper we propose to equip direct-search methods with a general procedure for minimizing an objective function, possibly non-smooth, without using derivatives and subject to constraints on the variables. The aim is to handle constraints, most likely nonlinear or non-smooth, for which the derivatives of the corresponding functions are also unavailable. The novelty of …

Convergence of trust-region methods based on probabilistic models

In this paper we consider the use of probabilistic or random models within a classical trust-region framework for optimization of deterministic smooth general nonlinear functions. Our method and setting differ from many stochastic optimization approaches in two principal ways. Firstly, we assume that the value of the function itself can be computed without noise, in …

Globally convergent DC trust-region methods

In this paper, we investigate the use of DC (Difference of Convex functions) models and algorithms in the solution of nonlinear optimization problems by trust-region methods. We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of …
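The DC idea underlying such models can be illustrated with the classical DCA iteration (this is a generic sketch of DC programming, not the authors' primal-dual subgradient trust-region scheme; the function names are hypothetical): for f = g - h with g, h convex, one linearizes h at the current point and minimizes the convex surrogate, i.e. solves g'(x) = h'(x_k).

```python
import numpy as np

def dca(g_grad_inv, h_grad, x0, iters=50):
    """Generic DCA iteration for f = g - h with g, h convex (sketch).

    Each step linearizes h at the current point and minimizes the convex
    surrogate g(x) - h'(x_k) * x, whose optimality condition is
    g'(x) = h'(x_k); g_grad_inv solves that equation in closed form.
    """
    x = x0
    for _ in range(iters):
        x = g_grad_inv(h_grad(x))  # convex subproblem, solved exactly
    return x

# Example: f(x) = x**4 - x**2, with g(x) = x**4 and h(x) = x**2.
g_grad_inv = lambda s: np.sign(s) * (abs(s) / 4.0) ** (1.0 / 3.0)  # solves 4 x^3 = s
h_grad = lambda x: 2.0 * x
```

Starting from x0 = 1, the iterates converge to the stationary point x = 1/sqrt(2) of f, illustrating how a nonconvex problem is reduced to a sequence of convex subproblems.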

Efficient Cardinality/Mean-Variance Portfolios

A number of variants of the classical Markowitz mean-variance optimization model for portfolio selection have been investigated to render it more realistic. Recently, the imposition of a cardinality constraint, setting an upper bound on the number of active positions taken in the portfolio, has been studied in an attempt to improve its performance and …

Globally Convergent Evolution Strategies and CMA-ES

In this paper we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parents by means of a weighted sum, around which the offspring are computed by …
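The weighted-sum recombination step described above can be sketched as one generation of such an ES (a minimal illustration under assumed parameter names; the globalization modifications the paper introduces, such as a sufficient-decrease test on the step size, are not shown):

```python
import numpy as np

def es_generation(f, parents, weights, sigma, lam, rng):
    """One generation of a weighted-recombination evolution strategy (sketch).

    The parents are recombined into a single mean by a weighted sum, lam
    offspring are sampled around that mean with step size sigma, and the
    best mu of them (by objective value) become the next parents.
    """
    mean = weights @ parents                       # weighted-sum recombination
    offspring = mean + sigma * rng.standard_normal((lam, mean.size))
    order = np.argsort([f(y) for y in offspring])  # rank offspring by f
    mu = len(weights)
    return offspring[order[:mu]]                   # select the mu best
```

Iterating this on a smooth function steadily improves the best parent, which is the behavior the convergence analysis above makes rigorous.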

Smoothing and Worst Case Complexity for Direct-Search Methods in Non-Smooth Optimization

For smooth objective functions it has been shown that the worst-case cost of direct-search methods is of the same order as that of steepest descent, when measured in the number of iterations needed to achieve a certain threshold of stationarity. Motivated by the lack of such a result in the non-smooth case, we propose, analyze, …

A surrogate management framework using rigorous trust-region steps

Surrogate models and heuristics are frequently used in the engineering optimization community as convenient approaches to deal with functions whose evaluations are expensive or noisy, or which lack convexity. These methodologies typically do not guarantee any type of convergence under reasonable assumptions and frequently exhibit slow convergence. In this paper we will show how to …

Inexact solution of NLP subproblems in MINLP

In the context of convex mixed-integer nonlinear programming (MINLP), we investigate how the outer approximation method and the generalized Benders decomposition method are affected when the respective NLP subproblems are solved inexactly. We show that the cuts in the corresponding master problems can be changed to incorporate the inexact residuals, still rendering equivalence and finiteness …

On partially sparse recovery

In this paper we consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the l1-norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to the classical problem in Compressed Sensing where the l1-norm of the …
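The partial l1-minimization problem above, min ||x_B||_1 subject to Ax = b where B indexes the part known to be sparse, can be cast as a linear program with auxiliary variables t >= |x_B|. A minimal sketch (the function name is illustrative, and SciPy's LP solver stands in for whatever solver one would actually use):

```python
import numpy as np
from scipy.optimize import linprog

def partial_l1_recovery(A, b, sparse_idx):
    """Solve  min ||x_B||_1  s.t.  A x = b  as a linear program (sketch).

    Only the entries of x indexed by sparse_idx are penalized; auxiliary
    variables t bound their absolute values via x_B - t <= 0, -x_B - t <= 0.
    """
    m, n = A.shape
    k = len(sparse_idx)
    # Decision vector z = [x, t]; objective is the sum of t.
    c = np.concatenate([np.zeros(n), np.ones(k)])
    A_eq = np.hstack([A, np.zeros((m, k))])        # A x = b
    S = np.zeros((k, n))                           # selects the sparse block
    S[np.arange(k), sparse_idx] = 1.0
    A_ub = np.vstack([np.hstack([S, -np.eye(k)]),  #  x_B - t <= 0
                      np.hstack([-S, -np.eye(k)])])  # -x_B - t <= 0
    b_ub = np.zeros(2 * k)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * (n + k))  # all variables free
    return res.x[:n]
```

Note that the entries outside B are left unpenalized (free), which is exactly what distinguishes this formulation from standard l1-minimization in Compressed Sensing.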