Global Search Strategies for Solving Multilinear Least-squares Problems

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem in which a multilinear operator takes the place of the matrix-vector product. The MLLS problem is typically large-scale and characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present … Read more
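The abstract is cut off above, but the alternating structure it alludes to can be illustrated. A minimal sketch, assuming nothing about the paper's actual algorithm: alternating least squares on the simplest (bilinear) instance, where each block subproblem is an ordinary linear least-squares solve, yet the method only reaches a local minimizer — the behavior that motivates global search strategies. All names and data below are made up for illustration.

```python
import numpy as np

# Alternating least-squares sketch for the simplest multilinear case:
# minimize ||b - (A x) * (B y)||^2 over x and y, where * is the
# elementwise product, so the residual is bilinear in (x, y).
# Fixing one block makes the subproblem an ordinary linear LS, but the
# iteration only converges to a local minimizer. Illustrative data only.
rng = np.random.default_rng(0)
m, n = 20, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = rng.standard_normal(n)
y = rng.standard_normal(n)
res0 = np.linalg.norm(b - (A @ x) * (B @ y))  # initial residual
for _ in range(100):
    # Fix y: the model is diag(B y) A x, linear in x.
    x, *_ = np.linalg.lstsq((B @ y)[:, None] * A, b, rcond=None)
    # Fix x: the model is diag(A x) B y, linear in y.
    y, *_ = np.linalg.lstsq((A @ x)[:, None] * B, b, rcond=None)
res_final = np.linalg.norm(b - (A @ x) * (B @ y))
```

Each exact block solve cannot increase the objective, so the residual is monotonically non-increasing; which stationary point it reaches, however, depends on the starting point.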

Inexact Restoration method for Derivative-Free Optimization with smooth constraints

A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only … Read more

Sobolev Seminorm of Quadratic Functions with Applications to Derivative-Free Optimization

This paper studies the $H^1$ Sobolev seminorm of quadratic functions. The research is motivated by the least-norm interpolation that is widely used in derivative-free optimization. We express the $H^1$ seminorm of a quadratic function explicitly in terms of the Hessian and the gradient when the underlying domain is a ball. The seminorm gives new insights … Read more
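The closed form itself sits behind the truncation, but for a ball centered at the origin it can be rederived directly. A sketch under that assumption (my own derivation, not a quote of the paper's theorem): for q(x) = c + g'x + 0.5 x'Gx on the ball B(0, r) in R^n, the cross term integrates to zero by symmetry, giving |q|^2_{H^1(B)} = vol(B) * (||g||^2 + r^2/(n+2) * ||G||_F^2). The snippet checks this identity against a Monte Carlo estimate.

```python
import numpy as np
from math import gamma, pi

# H^1 seminorm squared of q(x) = c + g'x + 0.5 x'Gx over the ball B(0, r):
#   |q|_{H^1(B)}^2 = int_B ||grad q||^2 dx = int_B ||g + G x||^2 dx
#                  = vol(B) * (||g||^2 + r^2/(n+2) * ||G||_F^2),
# using int_B x dx = 0 (cross term vanishes on a centered ball) and
# int_B x x' dx = vol(B) * r^2/(n+2) * I. Derivation sketch only; the
# paper's exact statement is behind the truncation.
rng = np.random.default_rng(1)
n, r = 3, 2.0
g = rng.standard_normal(n)
G = rng.standard_normal((n, n))
G = (G + G.T) / 2                      # symmetric Hessian

vol = pi ** (n / 2) / gamma(n / 2 + 1) * r ** n
closed_form = vol * (g @ g + r ** 2 / (n + 2) * np.sum(G * G))

# Monte Carlo check: uniform points in the ball by rejection from the cube.
pts = rng.uniform(-r, r, size=(400_000, n))
pts = pts[np.linalg.norm(pts, axis=1) <= r]
mc_estimate = vol * np.sum((g + pts @ G) ** 2, axis=1).mean()
```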

Robust inversion, dimensionality reduction, and randomized sampling

We consider a class of inverse problems in which the forward model is the solution operator to linear ODEs or PDEs. This class admits several dimensionality-reduction techniques based on data averaging or sampling, which are especially useful for large-scale problems. We survey these approaches and their connection to stochastic optimization. The data-averaging approach is only … Read more

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A hybrid HS and PRP type conjugate gradient method for smooth optimization is presented, which reduces to the classical PRP or HS method if an exact line search is used and converges globally and R-linearly for nonconvex functions with an inexact backtracking line search under standard assumptions. An inexact version of the proposed method which admits … Read more
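The precise hybridization rule is behind the truncation; a common scheme in this family takes beta_k = max(0, min(beta_HS, beta_PRP)), and with an exact line search on a quadratic the two parameters coincide, matching the abstract's remark that the method reduces to classical HS/PRP. A hedged sketch, illustrative rather than the paper's exact method:

```python
import numpy as np

# Illustrative hybrid of the HS and PRP conjugate-gradient parameters
# (not necessarily the paper's exact rule):
#   beta_HS  = g_{k+1}' y_k / (d_k' y_k),   y_k = g_{k+1} - g_k
#   beta_PRP = g_{k+1}' y_k / ||g_k||^2
# combined via max(0, min(beta_HS, beta_PRP)), a common truncation.
def hybrid_beta(g_new, g_old, d_old):
    y = g_new - g_old
    return max(0.0, min((g_new @ y) / (d_old @ y),
                        (g_new @ y) / (g_old @ g_old)))

# Demo on a convex quadratic f(x) = 0.5 x'Ax - b'x, where the step
# t = -g'd / d'Ad is an exact line search; in this setting the hybrid
# coincides with classical HS/PRP and terminates in n steps.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
g = A @ x - b
d = -g
for _ in range(10):
    if np.linalg.norm(g) < 1e-12:
        break
    t = -(g @ d) / (d @ A @ d)        # exact line search on a quadratic
    x_new = x + t * d
    g_new = A @ x_new - b
    d = -g_new + hybrid_beta(g_new, g, d) * d
    x, g = x_new, g_new
```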

Global Error bounds for systems of convex polynomials over polyhedral constraints

This paper is devoted to studying Lipschitzian/Hölderian-type global error bounds for systems of finitely many convex polynomial inequalities over a polyhedral constraint. Firstly, for systems of this type, we show that under a suitable asymptotic qualification condition, the Lipschitzian-type global error bound property is equivalent to the Abadie qualification condition; in particular, … Read more

TACO – A Toolkit for AMPL Control Optimization

We describe a set of extensions to the AMPL modeling language to conveniently model mixed-integer optimal control problems for ODE or DAE dynamic processes. These extensions are realized as AMPL user functions and suffixes and do not require intrusive changes to the AMPL language standard or implementation itself. We describe and provide TACO, a Toolkit … Read more

New optimality conditions for the semivectorial bilevel optimization problem

The paper is concerned with the optimistic formulation of a bilevel optimization problem with a multiobjective lower-level problem. Considering the scalarization approach for the multiobjective program, we transform our problem into a scalar-objective optimization problem with inequality constraints by means of the well-known optimal value reformulation. Completely detailed first-order necessary optimality conditions are then derived in … Read more

Sensitivity analysis for two-level value functions with applications to bilevel programming

This paper contributes to a deeper understanding of the link between a now conventional framework in hierarchical optimization, spread under the name of the optimistic bilevel problem, and its initial, more difficult formulation that we call here the original optimistic bilevel optimization problem. It follows from this research that, although the process of deriving necessary … Read more

Sample Size Selection in Optimization Methods for Machine Learning

This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance … Read more
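The criterion itself is truncated above; one variance test in this spirit grows the sample when the per-example gradient variance dominates the squared norm of the averaged gradient. The sketch below is an illustrative rule of that shape, not the paper's exact formula.

```python
import numpy as np

# Variance test in the spirit of dynamic sample-size selection: grow the
# sample S when the (scaled) variance of the per-example gradients
# dominates the squared norm of the averaged sample gradient.
# Illustrative criterion only, not a quote of the paper's exact rule.
def needs_larger_sample(per_example_grads, theta=0.5):
    S = per_example_grads.shape[0]
    g_bar = per_example_grads.mean(axis=0)
    var_sum = per_example_grads.var(axis=0, ddof=1).sum()
    return var_sum / S > theta ** 2 * (g_bar @ g_bar)

true_grad = np.array([1.0, -2.0])
# Low-noise sample: the averaged gradient is reliable, keep the sample size.
steady = true_grad + 0.01 * np.array([[1., -1.], [-1., 1.], [1., 1.], [-1., -1.]])
keep = not needs_larger_sample(steady)
# High-noise sample: per-example gradients nearly cancel, so the average
# is untrustworthy and the criterion asks for a larger sample.
noisy = np.array([[5., -5.], [-5., 5.], [5., -5.], [-5., 5.]])
grow = needs_larger_sample(noisy)
```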