AINVk: a Class of Approximate Inverse Preconditioners Based on Krylov-Subspace Methods for Large Indefinite Linear Systems

We propose a class of preconditioners for symmetric linear systems arising in numerical analysis and nonconvex optimization frameworks. Our preconditioners are specifically suited for large indefinite linear systems and may be obtained as a by-product of Krylov-subspace solvers, as well as by applying L-BFGS updates. Moreover, our proposal is suited for the solution of a …
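
The mention of L-BFGS updates suggests how such an approximate inverse can be applied in practice. The sketch below is not the authors' AINVk construction; it is the generic two-loop recursion that applies an L-BFGS-style approximate inverse built from pairs (s_i, y_i) with s^T y > 0, so it does not by itself handle the indefinite case the paper targets. All names and the toy demo are illustrative assumptions.

```python
import numpy as np

def lbfgs_apply(q, pairs, gamma=1.0):
    """Apply an L-BFGS approximate inverse H ~ A^{-1} to the vector q
    via the standard two-loop recursion.  `pairs` is a list of (s, y)
    vectors with s^T y > 0; the initial matrix is H0 = gamma * I."""
    q = q.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in pairs]
    alphas = []
    for (s, y), rho in zip(reversed(pairs), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q = q - alpha * y
    r = gamma * q
    for (s, y), rho, alpha in zip(pairs, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + (alpha - beta) * s
    return r

# illustrative demo: for a quadratic, y = A s, so H approximates A^{-1}
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
pairs = [(s, A @ s) for s in rng.standard_normal((3, 5))]
v = rng.standard_normal(5)
print(lbfgs_apply(v, pairs))        # compare with np.linalg.solve(A, v)
```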

A Low-Memory Approach For Best-State Estimation Of Hidden Markov Models With Model Error

We present a low-memory approach for the best-state estimate (data assimilation) of hidden Markov models in which model error is taken into account. In particular, our findings apply to the 4D-Var framework. The novelty of our approach is that the storage needed by our estimation framework, while including model error, is dramatically reduced from …

Global convergence and the Powell singular function

The Powell singular function was introduced in 1962 by M.J.D. Powell as an unconstrained optimization problem. The function is also used as a nonlinear least-squares problem and as a system of nonlinear equations. It is a classic test function included in collections of test problems in optimization, as well as an example problem in textbooks. In …
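
For reference, the function is short enough to state in full. The definition and starting point below are the classical ones from the test-problem literature: $f(x) = \sum_i r_i(x)^2$ attains its minimum value 0 at the origin, where the Jacobian of the residuals has rank 2 and the Hessian of $f$ is singular (hence the name).

```python
import numpy as np

def powell_singular_residuals(x):
    """Residuals r(x) of the Powell singular function; f(x) = ||r(x)||^2."""
    x1, x2, x3, x4 = x
    return np.array([
        x1 + 10.0 * x2,
        np.sqrt(5.0) * (x3 - x4),
        (x2 - 2.0 * x3) ** 2,
        np.sqrt(10.0) * (x1 - x4) ** 2,
    ])

f = lambda x: float(np.sum(powell_singular_residuals(x) ** 2))
x0 = np.array([3.0, -1.0, 0.0, 1.0])   # standard starting point
print(f(x0))                           # 215.0; the minimum is 0 at the origin
```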

Conjugate Gradient with Subspace Optimization

In this paper we present a variant of the conjugate gradient (CG) algorithm in which we invoke a subspace minimization subproblem on each iteration. We call this algorithm CGSO for “conjugate gradient with subspace optimization”. It is related to earlier work by Nemirovsky and Yudin. We apply the algorithm to solve unconstrained strictly convex problems. …
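
The abstract does not specify the subspace, so the sketch below is only one plausible reading, restricted to the quadratic case: at each iteration the objective is minimized exactly over the span of the current gradient and the previous step, which for a strictly convex quadratic reproduces linear CG. CGSO itself targets general strictly convex functions, which this toy does not cover.

```python
import numpy as np

def cg_subspace(A, b, x0, iters=50):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A SPD) by exact minimization
    over span{current gradient, previous step} at each iteration."""
    x = x0.copy()
    p_prev = np.zeros_like(x)
    for _ in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-12:
            break
        # subspace basis (drop the zero previous step on iteration one)
        S = np.column_stack([g, p_prev]) if p_prev.any() else g[:, None]
        # exact minimizer of f(x + S t): solve (S^T A S) t = -S^T g
        t = np.linalg.solve(S.T @ A @ S, -S.T @ g)
        p_prev = S @ t
        x = x + p_prev
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20)); A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = cg_subspace(A, b, np.zeros(20))
print(np.linalg.norm(A @ x - b))       # essentially zero
```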

Smoothing SQP Algorithm for Non-Lipschitz Optimization with Complexity Analysis

In this paper, we propose a smoothing sequential quadratic programming (SSQP) algorithm for solving a class of nonsmooth, nonconvex, possibly non-Lipschitz minimization problems with wide applications in statistics and sparse reconstruction. At each step, the SSQP algorithm solves a strongly convex quadratic minimization problem with a diagonal Hessian matrix, which has a simple …
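
The appeal of a diagonal Hessian is that the quadratic subproblem separates by coordinate and is solved in closed form. The snippet below illustrates just that structure, together with one common smoothing of the non-Lipschitz term $|t|^p$; both are generic devices, not the paper's specific subproblem or smoothing function.

```python
import numpy as np

def smoothed_abs_p(t, mu, p=0.5):
    """Smooth, everywhere-differentiable stand-in for |t|^p (p < 1 is
    non-Lipschitz at 0); mu > 0 is the smoothing parameter."""
    return (t * t + mu * mu) ** (p / 2.0)

def diagonal_qp_step(grad, diag_hess):
    """Minimizer of  grad^T d + 0.5 d^T diag(diag_hess) d  with
    diag_hess > 0: the problem separates, so d_i = -grad_i / diag_hess_i."""
    return -np.asarray(grad) / np.asarray(diag_hess)

print(diagonal_qp_step([1.0, -4.0], [2.0, 8.0]))   # [-0.5  0.5]
```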

Globally Convergent Evolution Strategies and CMA-ES

In this paper we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parents by means of a weighted sum, around which the offspring are computed by …
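
A bare-bones version of the recombination step described here (a weighted sum of the best-ranked parents, with Gaussian offspring sampled around the recombined mean) might look as follows. The globally convergent modification in the paper adds safeguards on top of such a step, which are not shown, and the weight choice below is just one common convention.

```python
import numpy as np

def es_weighted_step(f, mean, sigma, lam=10, mu=5, rng=None):
    """One basic (mu/mu_w, lambda)-ES iteration: sample lam offspring
    around `mean`, rank them by f, and return the weighted recombination
    of the best mu as the new mean."""
    rng = rng or np.random.default_rng()
    offspring = mean + sigma * rng.standard_normal((lam, mean.size))
    order = np.argsort([f(x) for x in offspring])
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))  # common weighting
    w /= w.sum()
    return w @ offspring[order[:mu]]

f = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
m = np.full(5, 3.0)
for _ in range(200):
    m = es_weighted_step(f, m, sigma=0.3, rng=rng)
print(f(m))   # the mean drifts toward the minimizer at the origin
```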

Smoothing and Worst Case Complexity for Direct-Search Methods in Non-Smooth Optimization

For smooth objective functions it has been shown that the worst-case cost of direct-search methods is of the same order as that of steepest descent, when measured in the number of iterations needed to achieve a certain threshold of stationarity. Motivated by the lack of such a result in the non-smooth case, we propose, analyze, …

Modifications of the limited-memory BNS method for better satisfaction of previous quasi-Newton conditions

Several modifications of the limited-memory variable metric BNS method for large-scale unconstrained optimization are proposed, which consist of corrections (derived from the idea of conjugate directions) of the difference vectors used, to improve satisfaction of previous quasi-Newton conditions, utilizing information from previous or subsequent iterations. In the case of quadratic objective functions, conjugacy of all …
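
To see why such corrections matter: a quasi-Newton update enforces the secant condition H_{k+1} y_k = s_k only for the most recent pair, and older conditions are generally violated. The self-contained check below demonstrates this for plain BFGS built from random pairs; it illustrates the motivation only, not the BNS corrections themselves.

```python
import numpy as np

def bfgs_inverse(pairs, n):
    """Dense inverse-Hessian approximation from BFGS updates
    H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T, starting from I."""
    H = np.eye(n)
    for s, y in pairs:
        rho = 1.0 / (y @ s)
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)
    return H

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = M @ M.T + 6 * np.eye(6)
pairs = [(s, A @ s) for s in rng.standard_normal((4, 6))]
H = bfgs_inverse(pairs, 6)
for i, (s, y) in enumerate(pairs):
    print(i, np.linalg.norm(H @ y - s))   # only the last residual is ~0
```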

A new family of high order directions for unconstrained optimization inspired by Chebyshev and Shamanskii methods

Newton-Raphson's method, dating back to 1669-1670, is still used to solve systems of equations and unconstrained optimization problems. Since then, other algorithms inspired by Newton's method have been proposed: in 1839 Chebyshev developed a high-order algorithm with cubic convergence, and in 1967 Shamanskii proposed an acceleration of Newton's method. By considering Newton-type methods as displacement directions, …
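
Shamanskii's acceleration is simple to state: factor the Jacobian once, then reuse the factorization for several corrector steps. With m inner steps the local convergence order rises from 2 (plain Newton, m = 1) to m + 1, so m = 2 already matches the cubic order of Chebyshev's method without second derivatives. The sketch below is a generic implementation of that idea, not the paper's new family of directions.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shamanskii(F, J, x, m=2, iters=20, tol=1e-12):
    """Shamanskii iteration for F(x) = 0: one Jacobian factorization
    per outer cycle, reused for m cheap corrector steps."""
    for _ in range(iters):
        lu = lu_factor(J(x))
        for _ in range(m):
            fx = F(x)
            if np.linalg.norm(fx) < tol:
                return x
            x = x - lu_solve(lu, fx)
    return x

# usage on a small system: intersect the unit circle with the line y = x
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(shamanskii(F, J, np.array([1.0, 0.0])))   # ~ (0.7071, 0.7071)
```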

Sobolev Seminorm of Quadratic Functions with Applications to Derivative-Free Optimization

This paper studies the $H^1$ Sobolev seminorm of quadratic functions. The research is motivated by the least-norm interpolation that is widely used in derivative-free optimization. We express the $H^1$ seminorm of a quadratic function explicitly in terms of the Hessian and the gradient when the underlying domain is a ball. The seminorm gives new insights …
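
Such a closed form can be reconstructed under the standard definition $|q|_{H^1(\mathcal{B})}^2 = \int_{\mathcal{B}} \|\nabla q\|^2$, though the paper may use a different normalization: for $q(x) = c + g^T x + \frac{1}{2} x^T G x$ on the ball $\mathcal{B} = B(0,r) \subset \mathbb{R}^n$, the cross term vanishes by symmetry and $\int_{\mathcal{B}} x x^T \,dx = \mathrm{vol}(\mathcal{B}) \frac{r^2}{n+2} I$, which gives $|q|_{H^1}^2 = \mathrm{vol}(\mathcal{B}) \big( \|g\|^2 + \frac{r^2}{n+2} \|G\|_F^2 \big)$. A Monte Carlo sanity check of this reconstruction:

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(0)
n, r = 4, 2.0
g = rng.standard_normal(n)
G = rng.standard_normal((n, n)); G = (G + G.T) / 2.0   # symmetric Hessian

vol = pi ** (n / 2) / gamma(n / 2 + 1) * r ** n        # volume of B(0, r)
closed = vol * (g @ g + r * r / (n + 2) * np.sum(G * G))

# Monte Carlo: uniform points in the ball by rejection from the cube
pts = rng.uniform(-r, r, size=(1_000_000, n))
pts = pts[np.einsum('ij,ij->i', pts, pts) <= r * r]
grads = g + pts @ G                                    # grad q(x) = g + G x
mc = vol * np.mean(np.einsum('ij,ij->i', grads, grads))
print(closed, mc)   # the two values agree to a few digits
```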