Modifications of the limited-memory BNS method for better satisfaction of previous quasi-Newton conditions

Several modifications of the limited-memory variable metric BNS method for large-scale unconstrained optimization are proposed. They consist in corrections (derived from the idea of conjugate directions) of the difference vectors used, to improve satisfaction of previous quasi-Newton conditions by utilizing information from previous or subsequent iterations. In case of quadratic objective functions, conjugacy of all …
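
To make the role of "previous quasi-Newton conditions" concrete, here is a minimal sketch (not the paper's BNS method, just standard BFGS): each update enforces the most recent secant condition H_{k+1} y_k = s_k exactly, but earlier conditions generally break, which is what corrected difference vectors aim to mitigate. All problem data below are illustrative.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One standard BFGS update of the inverse Hessian approximation H."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)       # SPD "true" Hessian of a model quadratic
H = np.eye(n)

s_prev = rng.standard_normal(n)
y_prev = B @ s_prev               # exact curvature pair for a quadratic
H = bfgs_inverse_update(H, s_prev, y_prev)

s_new = rng.standard_normal(n)
y_new = B @ s_new
H = bfgs_inverse_update(H, s_new, y_new)

# The latest secant condition holds to machine precision ...
print(np.linalg.norm(H @ y_new - s_new))    # tiny (round-off level)
# ... while the previous one is usually violated after the second update:
print(np.linalg.norm(H @ y_prev - s_prev))
```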

A Family of Newton Methods for Nonsmooth Constrained Systems with Nonisolated Solutions

We propose a new family of Newton-type methods for the solution of constrained systems of equations. Under suitable conditions that do not include differentiability or local uniqueness of solutions, local quadratic convergence to a solution of the system of equations can be established. We show that as particular instances of the method we obtain inexact …

A new family of high order directions for unconstrained optimization inspired by Chebyshev and Shamanskii methods

The Newton-Raphson method, dating back to 1669-1670, is still used to solve systems of equations and unconstrained optimization problems. Since then, other algorithms inspired by Newton's have been proposed: in 1839 Chebyshev developed a high-order algorithm with cubic convergence, and in 1967 Shamanskii proposed an acceleration of Newton's method. By considering Newton-type methods as displacement directions, …
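
The Shamanskii acceleration mentioned above can be sketched as follows: the Jacobian is factored once and then reused for m consecutive corrector steps, with m = 1 recovering Newton's method. The test system F below is an illustrative choice, not from the paper.

```python
import numpy as np

def F(x):
    """Illustrative system: the unit circle intersected with the line x0 = x1."""
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):
    """Jacobian of F."""
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

def shamanskii(x, m=2, iters=20, tol=1e-12):
    """m = 1 is Newton's method; m > 1 reuses each Jacobian for m steps."""
    for _ in range(iters):
        Jx = J(x)                       # evaluate/factor the Jacobian once ...
        for _ in range(m):              # ... and reuse it m times
            x = x - np.linalg.solve(Jx, F(x))
            if np.linalg.norm(F(x)) < tol:
                return x
    return x

x = shamanskii(np.array([2.0, 0.5]), m=2)
print(x)    # approaches (1/sqrt(2), 1/sqrt(2))
```

The saving is that expensive Jacobian factorizations happen once per outer iteration instead of once per step, at the cost of slightly slower per-step contraction.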

On the Difficulty of Deciding Asymptotic Stability of Cubic Homogeneous Vector Fields

It is well-known that asymptotic stability (AS) of homogeneous polynomial vector fields of degree one (i.e., linear systems) can be decided in polynomial time e.g. by searching for a quadratic Lyapunov function. Since homogeneous vector fields of even degree can never be AS, the next interesting degree to consider is equal to three. In this …
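
The degree-one case referenced above is easy to make explicit: a linear system x' = Ax is asymptotically stable iff the Lyapunov equation A^T P + P A = -I has a symmetric positive definite solution P. A minimal sketch (not tied to the paper), solving the equation by vectorization:

```python
import numpy as np

def lyapunov_certificate(A):
    """Solve A^T P + P A = -I; report whether P is positive definite.

    Uses row-major vectorization: vec(A^T P) = kron(A^T, I) vec(P) and
    vec(P A) = kron(I, A^T) vec(P).  Solvable when no two eigenvalues of A
    sum to zero.
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    P = np.linalg.solve(M, -I.flatten()).reshape(n, n)
    P = 0.5 * (P + P.T)                 # symmetrize against round-off
    return P, bool(np.all(np.linalg.eigvalsh(P) > 0))

A = np.array([[-1.0, 2.0], [0.0, -3.0]])    # Hurwitz: eigenvalues -1, -3
P, stable = lyapunov_certificate(A)
print(stable)    # True: quadratic Lyapunov function x^T P x certifies AS
```

For cubic vector fields no analogous polynomial-time certificate search is available, which is the gap the paper studies.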

Global Search Strategies for Solving Multilinear Least-squares Problems

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present …
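
The structure can be illustrated with the simplest multilinear case, a bilinear model: fit M ≈ u v^T by alternating least squares (ALS), which fixes one factor and solves a linear least-squares problem in the other. ALS is a purely local strategy; escaping its stationary points is what global search strategies address. The data below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(6), rng.standard_normal(4))  # exactly rank 1

u = rng.standard_normal(6)
v = rng.standard_normal(4)
for _ in range(50):
    u = M @ v / (v @ v)       # least-squares solution in u with v fixed
    v = M.T @ u / (u @ u)     # least-squares solution in v with u fixed

print(np.linalg.norm(M - np.outer(u, v)))   # near 0 for this rank-1 data
```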

Inexact Restoration method for Derivative-Free Optimization with smooth constraints

A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only …

Sobolev Seminorm of Quadratic Functions with Applications to Derivative-Free Optimization

This paper studies the $H^1$ Sobolev seminorm of quadratic functions. The research is motivated by the least-norm interpolation that is widely used in derivative-free optimization. We express the $H^1$ seminorm of a quadratic function explicitly in terms of the Hessian and the gradient when the underlying domain is a ball. The seminorm gives new insights …
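
The kind of closed form involved can be sketched independently for the unit ball $B \subset \mathbb{R}^n$ (the paper treats a general ball, so its normalization may differ). Writing $q(x) = c + g^\top x + \tfrac{1}{2} x^\top H x$ with $H$ symmetric:

```latex
|q|_{H^1(B)}^2
  = \int_B \|\nabla q(x)\|^2 \, dx
  = \int_B \|g + Hx\|^2 \, dx
  = V_n \|g\|^2 + \int_B x^\top H^2 x \, dx
  = V_n \left( \|g\|^2 + \frac{\|H\|_F^2}{n+2} \right),
```

using $\int_B dx = V_n$ (the volume of $B$), $\int_B x \, dx = 0$ (so the cross term $2 g^\top H \int_B x \, dx$ vanishes by symmetry), and $\int_B x x^\top dx = \tfrac{V_n}{n+2} I$, whence $\int_B x^\top H^2 x \, dx = \operatorname{tr}\!\big(H^2 \int_B x x^\top dx\big) = \tfrac{V_n}{n+2} \|H\|_F^2$.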

Robust inversion, dimensionality reduction, and randomized sampling

We consider a class of inverse problems in which the forward model is the solution operator to linear ODEs or PDEs. This class admits several dimensionality-reduction techniques based on data averaging or sampling, which are especially useful for large-scale problems. We survey these approaches and their connection to stochastic optimization. The data-averaging approach is only …
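
A minimal sketch of the sampling idea, under the assumption that the forward model for each experiment is a plain matrix $F_k$ standing in for an ODE/PDE solution operator: the full misfit averages over all $K$ experiments, while the reduced misfit averages over a random subset, giving an unbiased but cheaper estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
K, m, n = 100, 15, 5
Fs = rng.standard_normal((K, m, n))         # K linear forward models
x_true = rng.standard_normal(n)
ds = np.einsum('kmn,n->km', Fs, x_true)     # noiseless data for each model

def full_misfit(x):
    """Average misfit over all K experiments."""
    r = np.einsum('kmn,n->km', Fs, x) - ds
    return np.mean(np.sum(r**2, axis=1))

def sampled_misfit(x, batch):
    """Average misfit over a random subset of the experiments."""
    idx = rng.choice(K, size=batch, replace=False)
    r = np.einsum('kmn,n->km', Fs[idx], x) - ds[idx]
    return np.mean(np.sum(r**2, axis=1))

x = rng.standard_normal(n)
print(full_misfit(x), sampled_misfit(x, 20))  # sampled approximates full
```

Data averaging replaces the subset by random weighted combinations of experiments; both reductions connect the inverse problem to stochastic optimization over the experiment index.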

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A hybrid HS and PRP type conjugate gradient method for smooth optimization is presented. It reduces to the classical PRP or HS method if an exact line search is used, and it converges globally and R-linearly for nonconvex functions with an inexact backtracking line search under standard assumptions. An inexact version of the proposed method which admits …
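
A hedged sketch of a hybrid HS/PRP conjugate gradient step on a quadratic. The combination rule beta = max(0, min(beta_HS, beta_PRP)) is one common hybridization, not necessarily the paper's precise rule, and the problem data are illustrative. Note that with an exact line search g_{k+1}^T d_k = 0, which makes the HS and PRP denominators coincide, consistent with the reduction stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD Hessian of the model quadratic
b = rng.standard_normal(n)
grad = lambda x: A @ x - b             # gradient of 0.5 x^T A x - b^T x

x = np.zeros(n)
g = grad(x)
d = -g
for _ in range(100):
    alpha = -(g @ d) / (d @ A @ d)     # exact line search (quadratic case)
    x_new = x + alpha * d
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-10:
        x, g = x_new, g_new
        break
    y = g_new - g
    beta_hs = (g_new @ y) / (d @ y)    # Hestenes-Stiefel
    beta_prp = (g_new @ y) / (g @ g)   # Polak-Ribiere-Polyak
    beta = max(0.0, min(beta_hs, beta_prp))
    d = -g_new + beta * d
    x, g = x_new, g_new

print(np.linalg.norm(A @ x - b))       # near 0: the iteration solves A x = b
```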

Global Error bounds for systems of convex polynomials over polyhedral constraints

This paper is devoted to studying Lipschitzian/Hölderian type global error bounds for systems of finitely many convex polynomial inequalities over a polyhedral constraint. First, for systems of this type, we show that under a suitable asymptotic qualification condition, the Lipschitzian type global error bound property is equivalent to the Abadie qualification condition; in particular, …