A note on solving nonlinear optimization problems in variable precision

This short note considers an efficient variant of the trust-region algorithm with dynamic accuracy proposed by Carter (1993) and by Conn, Gould and Toint (2000) as a tool for very high-performance computing, an area where allowing multi-precision computation is critical to keeping energy dissipation under control. Numerical experiments are presented indicating that the use …
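To make the dynamic-accuracy idea concrete, here is a minimal Python sketch (an illustration, not the note's algorithm): a toy trust-region loop in which the accuracy requested from an inexact objective oracle is tied to the model's predicted reduction, so that cheap, low-precision evaluations suffice far from a solution. The test function, the oracle and all constants are illustrative assumptions.

    import numpy as np

    def f_exact(x):
        # Rosenbrock test function (illustrative choice).
        return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

    def grad(x):
        return np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                         200.0 * (x[1] - x[0]**2)])

    def f_inexact(x, tol, rng):
        # Stand-in for a variable-precision evaluation whose error is
        # bounded by the requested tolerance tol.
        return f_exact(x) + rng.uniform(-tol, tol)

    def tr_dynamic_accuracy(x, delta=1.0, eta=0.1, kappa=0.25, iters=500):
        rng = np.random.default_rng(0)
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) <= 1e-6:
                break
            s = -delta * g / np.linalg.norm(g)   # Cauchy step on the linear model
            pred = -(g @ s)                      # predicted reduction (> 0)
            tol = kappa * eta * pred             # accuracy tied to predicted reduction
            rho = (f_inexact(x, tol, rng) - f_inexact(x + s, tol, rng)) / pred
            if rho >= eta:
                x, delta = x + s, 2.0 * delta    # accept step, enlarge region
            else:
                delta *= 0.5                     # reject step, shrink region
        return x

    print(tr_dynamic_accuracy(np.array([-1.2, 1.0])))

The point of the accuracy rule is that the evaluation error is only required to be small relative to the decrease the model predicts, so precision (and hence energy) is spent only where the ratio test actually needs it.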

Adaptive regularization algorithms with inexact evaluations for nonconvex optimization

A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement have negligible cost) under the assumption that the derivative of highest degree is $\beta$-Hölder continuous. It features a …
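As a schematic illustration (my notation, not necessarily the paper's), such algorithms typically work with approximate values $\bar f$ and approximate derivatives required to satisfy an adaptive accuracy condition of the form
\[
  |\bar f(x_k) - f(x_k)| \;\le\; \omega_k
  \qquad\text{and}\qquad
  \big\|\overline{\nabla^j_x f}(x_k) - \nabla^j_x f(x_k)\big\| \;\le\; \omega_k,
  \quad 1 \le j \le p,
\]
where the tolerance $\omega_k$ is tightened only when the decrease predicted by the regularized degree-$p$ Taylor model requires it.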

Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints

We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e. problems where the cost of evaluating/enforcing the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds …
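For orientation, the headline rate in this line of work (stated informally here; see the paper for the precise assumptions and constants) is that, using derivatives up to degree $p$ to seek approximate criticality of order $q \le p$, at most
\[
  O\!\left(\epsilon^{-\frac{p+1}{p-q+1}}\right)
\]
evaluations are needed, which recovers the familiar $O(\epsilon^{-3/2})$ bound for $p = 2$, $q = 1$.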

Minimizing convex quadratics with variable precision Krylov methods

Iterative algorithms for the solution of convex quadratic optimization problems are investigated which exploit inaccurate matrix-vector products. Theoretical bounds on the performance of a Conjugate Gradient method and a Full-Orthogonalization method are derived, the quantities occurring in these bounds are estimated, and new practical algorithms are obtained. Numerical experiments suggest that the new methods have significant …
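The flavour of such methods can be sketched in a few lines of Python (an illustration, not the paper's algorithms): a conjugate-gradient loop in which the matrix-vector product carries a bounded error whose tolerance is allowed to grow as the residual shrinks, the pattern familiar from inexact-Krylov analyses. The relaxation rule, the test matrix and the tolerances are assumptions made for this sketch.

    import numpy as np

    def inexact_matvec(A, p, tol, rng):
        # Exact product plus a perturbation of norm at most tol, mimicking
        # a variable-precision matrix-vector product.
        exact = A @ p
        noise = rng.standard_normal(len(p))
        return exact + tol * noise / np.linalg.norm(noise)

    def cg_variable_precision(A, b, tol_outer=1e-8, max_iter=100):
        rng = np.random.default_rng(0)
        x = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            # Relaxation rule: a larger matvec error is tolerated as the
            # residual decreases.
            tol_k = min(1e-2, tol_outer / max(np.sqrt(rs), tol_outer))
            Ap = inexact_matvec(A, p, tol_k, rng)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol_outer:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    n = 50
    Q = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))[0]
    A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # SPD test matrix
    b = np.ones(n)
    x = cg_variable_precision(A, b)
    print(np.linalg.norm(A @ x - b))                    # final exact residual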

A note on using performance and data profiles for training algorithms

It is shown how the performance and data profile benchmarking tools can be used to improve algorithms' performance. An illustration for the BFO derivative-free optimizer suggests that the obtained gains are potentially significant. Citation: ACM Transactions on Mathematical Software, 45:2 (2019), Article 20.
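As a reminder of the underlying tool (the standard Dolan-Moré construction, shown here with made-up data), a performance profile reports, for each solver, the fraction of test problems it solves within a factor $\tau$ of the best solver's cost:

    import numpy as np

    def performance_profile(T, taus):
        # T[i, s]: cost of solver s on problem i (np.inf marks a failure).
        best = T.min(axis=1, keepdims=True)
        ratios = T / best
        # rho_s(tau): fraction of problems with cost ratio at most tau.
        return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

    T = np.array([[10.0, 12.0, np.inf],
                  [55.0, 40.0, 60.0],
                  [ 7.0, 21.0, 14.0]])
    print(performance_profile(T, np.array([1.0, 2.0, 4.0, 8.0])))
    # rows correspond to tau values, columns to solvers

Roughly speaking, "training" then amounts to choosing an algorithm's internal parameters so as to maximize a merit derived from such profiles over a set of training problems.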

A note on preconditioning weighted linear least squares, with consequences for weakly-constrained variational data assimilation

The effect of preconditioning weighted linear least-squares problems using an approximation of the model matrix is analyzed, showing the interplay between the eigenstructures of the model and weighting matrices. A small example illustrates the potential inefficiency of such preconditioners. Consequences of these results in the context of the weakly-constrained 4D-Var data assimilation …
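In schematic form (notation mine, not necessarily the note's), the setting is the weighted problem and its normal equations,
\[
  \min_x \tfrac{1}{2}\,\|Ax - b\|_W^2
  \qquad\Longleftrightarrow\qquad
  A^\top W A\, x \;=\; A^\top W b,
\]
with a preconditioner built from $\tilde A^\top W \tilde A$ for some approximation $\tilde A \approx A$ of the model matrix; how well this works is governed by the joint eigenstructure of $A$ and $W$, which is exactly the interplay the note analyzes.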

Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization

We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization from the point of view of worst-case evaluation complexity, improving and generalizing the results of Cartis, Gould and Toint (2010, 2011). To this aim, we consider a new general class of inexact second-order algorithms for unconstrained optimization that includes regularization and trust-region …
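For context, the benchmark rate at stake (stated informally) is that well-designed second-order methods need at most
\[
  O\!\left(\epsilon^{-3/2}\right)
\]
evaluations to produce an iterate with $\|\nabla f(x)\| \le \epsilon$, and matching lower-bound examples show that this order cannot be improved within the class of methods considered.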

On the use of the saddle formulation in weakly-constrained 4D-VAR data assimilation

This paper discusses the practical use of the saddle variational formulation for the weakly-constrained 4D-VAR method in data assimilation. It is shown that the method, in its original form, may produce erratic results or diverge because of the inherent lack of monotonicity of the produced objective-function values. Convergent, variationally coherent variants of the algorithm …

Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The unconstrained minimization of a sufficiently smooth objective function $f(x)$ is considered, for which derivatives up to order $p$, $p\geq 2$, are assumed to be available. An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order $p$ and that is guaranteed to find a first- and second-order critical point in …
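The model in question has the standard adaptive-regularization form (shown here schematically):
\[
  m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\]
where $T_p(x_k, s)$ is the degree-$p$ Taylor expansion of $f$ around the iterate $x_k$ and $\sigma_k > 0$ is a regularization weight adjusted from iteration to iteration.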

Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization

Necessary conditions for high-order optimality in smooth nonlinear constrained optimization are explored and their inherent intricacy discussed. A two-phase minimization algorithm is proposed which can achieve approximate first-, second- and third-order criticality and its evaluation complexity is analyzed as a function of the choice (among existing methods) of an inner algorithm for solving subproblems in …