Complexity bounds for second-order optimality in unconstrained optimization

This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) and Cartis, Gould and Toint (2010) show that at most O(epsilon^{-3}) iterations may be needed to find an iterate that is within epsilon of satisfying second-order optimality conditions. We first show …
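For context, an iterate is typically said to be within epsilon of second-order optimality when it satisfies conditions of the following standard form (this is the usual convention in this literature; the truncated abstract does not spell it out, and the notation f, x_k is assumed here):
\[
\|\nabla f(x_k)\| \le \epsilon
\qquad\text{and}\qquad
\lambda_{\min}\!\left(\nabla^2 f(x_k)\right) \ge -\epsilon .
\]
The O(epsilon^{-3}) bound then refers to the number of iterations (and evaluations) needed to produce such a point.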

Preconditioning and Globalizing Conjugate Gradients in Dual Space for Quadratically Penalized Nonlinear-Least Squares Problems

When solving nonlinear least-squares problems, it is often useful to regularize the problem using a quadratic term, a practice which is especially common in applications arising in inverse calculations. A solution method derived from a trust-region Gauss-Newton algorithm is analyzed for such applications, where, contrary to the standard algorithm, the least-squares subproblem solved at each …
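A quadratically penalized nonlinear least-squares problem of the kind described here typically takes the following form; the notation (Jacobian J, penalty weight mu, reference point x_b) is assumed for illustration and is not taken from the truncated abstract:
\[
\min_{x}\; \tfrac12\|F(x)\|^2 + \tfrac{\mu}{2}\|x - x_b\|^2 ,
\]
and a trust-region Gauss-Newton step approximately solves
\[
\min_{s}\; \tfrac12\|F(x_k) + J(x_k)\,s\|^2 + \tfrac{\mu}{2}\|x_k + s - x_b\|^2
\quad\text{subject to}\quad \|s\| \le \Delta_k .
\]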

Solving structured nonlinear least-squares and nonlinear feasibility problems with expensive functions

We present an algorithm for nonlinear least-squares and nonlinear feasibility problems, i.e. for systems of nonlinear equations and nonlinear inequalities, which depend on the outcome of expensive functions for which derivatives are assumed to be unavailable. Our algorithm combines derivative-free techniques with filter trust-region methods to keep the number of expensive function evaluations low and …
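One standard way to cast such a feasibility problem as a least-squares problem, sketched here under assumed notation (equality constraints c_E and inequality constraints c_I, neither named in the truncated abstract), is
\[
\min_{x}\; \tfrac12\|c_E(x)\|^2 + \tfrac12\|\max(0,\,c_I(x))\|^2 ,
\]
with the max taken componentwise; any point where this merit function vanishes satisfies the system of equations and inequalities.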

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

The (optimal) worst-case function/gradient evaluation complexity analysis available for the Adaptive Regularization algorithm with Cubics (ARC) for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtained for derivative-free methods by Vicente …
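For reference, the ARC step is obtained by (approximately) minimizing the cubic model
\[
m_k(s) = f(x_k) + g_k^{\top}s + \tfrac12\, s^{\top}B_k s + \tfrac{\sigma_k}{3}\|s\|^3 ,
\]
and in finite-difference variants the gradient $g_k$ (and possibly the Hessian approximation $B_k$) is replaced by finite differences of function values, e.g. $(g_k)_i \approx \big(f(x_k + h e_i) - f(x_k)\big)/h$. This is the standard form of the model, included here for context rather than quoted from the abstract.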

Efficient preconditioner updates for shifted linear systems

We present a new technique for building effective and low-cost preconditioners for sequences of shifted linear systems (A+aI)x=b, where A is symmetric positive definite and a>0. This technique updates a preconditioner for A, available in the form of an LDL^T factorization, by modifying only the nonzero entries of the L factor in such a …
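As a minimal sketch of the setting only (not of the paper's L-factor update), the following Python snippet reuses a single factorization of A as a fixed preconditioner for conjugate gradients on each shifted system; the matrix, shifts, and names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative SPD matrix A (1-D Laplacian) and a sequence of positive shifts.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
shifts = [0.1, 1.0, 10.0]

# Factor A once and reuse it as a (fixed) preconditioner for every shifted system.
lu = spla.splu(A)
M = spla.LinearOperator((n, n), matvec=lu.solve)

for a in shifts:
    A_shift = A + a * sp.identity(n, format="csc")
    x, info = spla.cg(A_shift, b, M=M)
    print(f"shift {a}: cg info = {info}, residual = {np.linalg.norm(A_shift @ x - b):.2e}")
```

The paper's technique goes further by cheaply adapting the factors to each shift; the snippet only illustrates why reusing information across the sequence is attractive.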

A Non-monotonic Method for Large-scale Nonnegative Least Squares

We present a new algorithm for nonnegative least-squares (NNLS). Our algorithm extends the unconstrained quadratic optimization algorithm of Barzilai and Borwein (BB) (J. Barzilai and J. M. Borwein, Two-Point Step Size Gradient Methods, IMA J. Numerical Analysis, 1988) to handle nonnegativity constraints. Our extension differs in several basic aspects from other constrained BB variants. The …
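A minimal sketch of a projected BB-type iteration for NNLS, min_x 0.5||Ax-b||^2 subject to x >= 0, is given below. It is a generic illustration of combining BB step sizes with projection onto the nonnegative orthant, not the specific algorithm of this paper; the function name, stopping rule, and test data are illustrative.

```python
import numpy as np

def projected_bb_nnls(A, b, max_iter=500, tol=1e-8):
    """Projected gradient iteration with a Barzilai-Borwein (BB) step size for
    min 0.5*||A x - b||^2 subject to x >= 0. Generic sketch, not the paper's method."""
    n = A.shape[1]
    x = np.zeros(n)
    g = A.T @ (A @ x - b)                        # gradient of the quadratic objective
    alpha = 1.0                                  # initial step size
    for _ in range(max_iter):
        x_new = np.maximum(x - alpha * g, 0.0)   # gradient step + projection onto x >= 0
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
        if np.linalg.norm(s) <= tol * (1.0 + np.linalg.norm(x)):
            break
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0  # BB1 step; fall back if no positive curvature
    return x

# Illustrative use on a small random instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 25))
b = A @ np.maximum(rng.standard_normal(25), 0.0)
x = projected_bb_nnls(A, b)
print("min(x) =", x.min(), " residual =", np.linalg.norm(A @ x - b))
```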

On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

A class of trust-region methods for large-scale bound-constrained systems of nonlinear equations is presented. The methods in this class follow the so-called affine-scaling approach and can efficiently handle large-scale problems. At each iteration, a suitably scaled region around the current approximate solution is defined and, within such a region, the norm of …
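In affine-scaling methods of this kind, the trust-region subproblem typically takes a form such as the following (a sketch with assumed notation, since the abstract is truncated):
\[
\min_{p}\; \|F(x_k) + F'(x_k)\,p\|
\quad\text{subject to}\quad \|D_k^{-1} p\| \le \Delta_k,\qquad l \le x_k + p \le u ,
\]
where the diagonal scaling matrix $D_k$ is built from the distances of the current iterate to the bounds $l$ and $u$.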

Constrained Dogleg Methods for nonlinear systems with simple bounds

We focus on the numerical solution of medium-scale bound-constrained systems of nonlinear equations. In this context, we consider an affine-scaling trust-region approach that allows a great flexibility in choosing the scaling matrix used to handle the bounds. The method is based on a dogleg procedure tailored for constrained problems and so it is …
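For readers unfamiliar with the dogleg idea, the following Python sketch shows the classical unconstrained dogleg step that such methods start from; the constrained tailoring described in the abstract (scaling matrices, bound handling) is not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Classical dogleg step for min g.T p + 0.5 p.T B p subject to ||p|| <= delta,
    with B assumed positive definite (e.g. a Gauss-Newton model in the least-squares case)."""
    p_newton = np.linalg.solve(B, -g)              # full Newton / Gauss-Newton step
    if np.linalg.norm(p_newton) <= delta:
        return p_newton
    p_cauchy = -(g @ g) / (g @ B @ g) * g          # model minimizer along the steepest descent direction
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)      # truncated steepest-descent step
    # Move from the Cauchy point toward the Newton step until the trust-region boundary is hit.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2.0 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + tau * d

# Small illustrative call.
B = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, -2.0])
print(dogleg_step(g, B, delta=0.5))
```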

A sufficiently exact inexact Newton step based on reusing matrix information

Newton's method is a classical method for solving a nonlinear equation $F(z)=0$. We derive inexact Newton steps that lead to an inexact Newton method, applicable near a solution. The method is based on solving for a particular $F'(z_{k'})$ during $p$ consecutive iterations $k=k',k'+1,\dots,k'+p-1$. One such $p$-cycle requires $2^p-1$ solves with the matrix $F'(z_{k'})$. If matrix …
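To make the stated cost concrete (this simply instantiates the $2^p-1$ count quoted above):
\[
p=2:\; 2^2-1=3\ \text{solves},\qquad
p=3:\; 2^3-1=7\ \text{solves},\qquad
p=4:\; 2^4-1=15\ \text{solves},
\]
in each case using only the single matrix $F'(z_{k'})$ throughout the cycle.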

Data Fitting and Experimental Design in Dynamical Systems with EASY-FIT ModelDesign

EASY-FIT is an interactive software system to identify parameters and compute optimal designs in explicit model functions, steady-state systems, Laplace transformations, systems of ordinary differential equations, differential-algebraic equations, or systems of one-dimensional time-dependent partial differential equations with or without algebraic equations. Proceeding from given experimental data, i.e. observation times and measurements, the minimum …
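As a generic illustration of the underlying data-fitting task (not of EASY-FIT's own interface or model format), the following Python sketch fits the parameters of a small ODE model to observation times and measurements by nonlinear least squares; the model, data, and parameter names are made up for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical model: logistic growth dy/dt = r*y*(1 - y/K) with parameters p = (r, K).
def model_output(p, t_obs, y0=0.1):
    sol = solve_ivp(lambda t, y: p[0] * y * (1.0 - y / p[1]),
                    (0.0, t_obs[-1]), [y0], t_eval=t_obs)
    return sol.y[0]

# Synthetic observation times and noisy measurements standing in for experimental data.
t_obs = np.linspace(0.0, 10.0, 15)
rng = np.random.default_rng(1)
y_obs = model_output((0.8, 2.0), t_obs) + 0.02 * rng.standard_normal(t_obs.size)

# Least-squares fit of the parameters to the measurements.
fit = least_squares(lambda p: model_output(p, t_obs) - y_obs,
                    x0=[0.5, 1.0], bounds=(1e-3, np.inf))
print("estimated (r, K):", fit.x)
```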