A surrogate management framework using rigorous trust-region steps

Surrogate models and heuristics are frequently used in the engineering optimization community as convenient approaches to deal with functions whose evaluations are expensive or noisy, or that lack convexity. These methodologies typically do not guarantee any type of convergence under reasonable assumptions and often converge slowly. In this paper we show how to …
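
Such frameworks typically inherit the classical trust-region globalization mechanism: the surrogate model is trusted only insofar as it predicts actual decrease. A generic sketch in standard notation (illustrative, not the paper's specific surrogate construction):

```latex
% Generic trust-region acceptance test with surrogate model m_k.
\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(x_k) - m_k(x_k + s_k)}, \qquad
x_{k+1} =
\begin{cases}
  x_k + s_k & \text{if } \rho_k \ge \eta_1,\\
  x_k       & \text{otherwise,}
\end{cases}
```

with the trust-region radius enlarged after successful steps and shrunk otherwise; it is this mechanism that yields rigorous convergence guarantees.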

Second-Order-Cone Constraints for Extended Trust-Region Subproblems

The classical trust-region subproblem (TRS) minimizes a nonconvex quadratic objective over the unit ball. In this paper, we consider extensions of the TRS with additional constraints. When two parallel cuts are added to the TRS, we show that the resulting nonconvex problem admits an exact representation as a semidefinite program with additional linear and second-order-cone constraints. For …
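
For concreteness, here is a hedged sketch of the standard SOC-RLT reasoning behind such representations (generic notation; the paper's exact formulation may differ). Lifting $X = xx^\top$ and multiplying the ball constraint by the nonnegative cut residuals gives valid second-order-cone constraints:

```latex
% TRS with two parallel cuts (generic sketch):
\min_x \; x^\top Q x + 2c^\top x
\quad \text{s.t.} \quad \|x\| \le 1, \;\; l \le a^\top x \le u.
% Lifting X = x x^T and relaxing yields, among others, the SOC-RLT cuts
\|u\,x - Xa\| \le u - a^\top x, \qquad
\|Xa - l\,x\| \le a^\top x - l, \qquad
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0 .
```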

On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds

We introduce an inexact Gauss-Newton trust-region method for solving bound-constrained nonlinear least-squares problems in which, at each iteration, a trust-region subproblem is approximately solved by the Conjugate Gradient method. Provided the accuracy to which the subproblems are solved is suitably controlled, we prove that the method has global and fast asymptotic convergence properties. …
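
A minimal sketch of the kind of truncated Conjugate Gradient (Steihaug-style) subproblem solve the abstract describes, for $\min_s \tfrac12\|Js + r\|^2$ subject to $\|s\| \le \Delta$; the function names and the simple residual-based stopping rule are illustrative, not the paper's accuracy control:

```python
# Steihaug-style truncated CG for the Gauss-Newton trust-region
# subproblem  min_s 0.5*||J s + r||^2  s.t.  ||s|| <= delta.
# Generic sketch: stopping tolerance stands in for the paper's
# inexactness control.
import numpy as np

def steihaug_cg(J, r, delta, tol=1e-8, max_iter=None):
    n = J.shape[1]
    max_iter = max_iter or 2 * n
    s = np.zeros(n)
    g = J.T @ r                 # gradient of the model at s = 0
    d = -g
    g2 = g @ g
    for _ in range(max_iter):
        Jd = J @ d
        dBd = Jd @ Jd           # curvature d^T (J^T J) d >= 0
        if dBd <= 1e-16:        # numerically zero curvature: go to boundary
            return _to_boundary(s, d, delta)
        alpha = g2 / dBd
        if np.linalg.norm(s + alpha * d) >= delta:
            return _to_boundary(s, d, delta)   # step hits the boundary
        s = s + alpha * d
        g = g + alpha * (J.T @ Jd)
        g2_new = g @ g
        if np.sqrt(g2_new) <= tol:             # inexact solve: stop early
            return s
        d = -g + (g2_new / g2) * d
        g2 = g2_new
    return s

def _to_boundary(s, d, delta):
    # Solve ||s + tau*d|| = delta for tau >= 0.
    a, b, c = d @ d, 2 * (s @ d), s @ s - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s + tau * d
```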

Derivative-free Optimization of Expensive Functions with Computational Error Using Weighted Regression

We propose a derivative-free algorithm for optimizing computationally expensive functions that are subject to computational error. The algorithm is based on the trust-region regression method of Conn, Scheinberg, and Vicente [4], but uses weighted regression to obtain more accurate model functions at each trust-region iteration. A heuristic weighting scheme is proposed that simultaneously handles i) differing …
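
As a toy illustration of building a model by weighted regression (the paper fits quadratic models with a specific weighting scheme; this sketch fits only a linear model with user-supplied weights):

```python
# Weighted-regression model fit on sample points Y with values f and
# positive weights w (generic sketch, not the paper's weighting scheme).
import numpy as np

def weighted_linear_model(Y, f, w):
    """Y: (m, n) sample points, f: (m,) values, w: (m,) positive weights."""
    A = np.hstack([np.ones((Y.shape[0], 1)), Y])   # columns for c and g
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f, rcond=None)
    return coef[0], coef[1:]   # constant term c, model gradient g
```

Down-weighting points that are far from the current iterate, or known to be noisier, then improves the local accuracy of the model.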

On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming

We estimate the worst-case complexity of minimizing an unconstrained, nonconvex composite objective with a structured nonsmooth term by means of some first-order methods. We find that the complexity is unaffected by the nonsmoothness of the objective, in that a first-order trust-region or quadratic regularization method applied to it takes at most $O(\epsilon^{-2})$ function evaluations to reduce the …
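
In generic notation (symbols are illustrative), the setting the abstract sketches is

```latex
% Composite objective: smooth g and c, structured nonsmooth convex h.
\min_{x \in \mathbb{R}^n} \; f(x) \;=\; g(x) + h\big(c(x)\big),
```

and the bound says that at most $O(\epsilon^{-2})$ function evaluations are needed to drive a suitable first-order criticality measure below $\epsilon$, the same order as for smooth unconstrained minimization.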

On the convergence of trust region algorithms for unconstrained minimization without derivatives

We consider iterative trust-region algorithms for the unconstrained minimization of an objective function F(x) of n variables, when F is differentiable but no derivatives are available, and when each model of F is a linear or quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. …
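
The linear-model case is easy to make concrete: with n+1 poised interpolation points, the model m(y) = c + gᵀy is determined by a square linear system. A minimal sketch with generic names:

```python
# Build the unique linear interpolation model c + g^T y from n+1 points
# (generic sketch of the construction the abstract refers to).
import numpy as np

def linear_interpolation_model(Y, fvals):
    """Y: (n+1, n) interpolation points, fvals: (n+1,) values of F."""
    A = np.hstack([np.ones((Y.shape[0], 1)), Y])   # unknowns: [c, g_1..g_n]
    coef = np.linalg.solve(A, fvals)               # unique if points are poised
    return coef[0], coef[1:]
```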

Preconditioning and Globalizing Conjugate Gradients in Dual Space for Quadratically Penalized Nonlinear Least-Squares Problems

When solving nonlinear least-squares problems, it is often useful to regularize the problem using a quadratic term, a practice that is especially common in applications arising in inverse calculations. A solution method derived from a trust-region Gauss-Newton algorithm is analyzed for such applications, where, contrary to the standard algorithm, the least-squares subproblem solved at each …
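
The dual-space idea can be sketched in generic notation (this illustrates the standard matrix identity such methods exploit, not the paper's exact subproblem):

```latex
% The regularized Gauss-Newton step, with J of size m x n,
(J^\top J + \lambda I_n)\, s = -J^\top r,
% can equivalently be obtained from the m x m "dual" system via
s = -J^\top (J J^\top + \lambda I_m)^{-1} r,
```

so conjugate gradients can be run in the m-dimensional observation space, which is much cheaper when m is far smaller than n.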

Solving structured nonlinear least-squares and nonlinear feasibility problems with expensive functions

We present an algorithm for nonlinear least-squares and nonlinear feasibility problems, i.e., for systems of nonlinear equations and nonlinear inequalities, that depend on the outcome of expensive functions whose derivatives are assumed to be unavailable. Our algorithm combines derivative-free techniques with filter trust-region methods to keep the number of expensive function evaluations low and …
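
As a small illustration of the filter mechanism such methods use (a generic sketch; the margin gamma and the exact envelope are illustrative): a trial point with constraint violation theta and objective value f is accepted only if no stored pair dominates it.

```python
# Filter acceptance test: (theta, f) is acceptable if, for every stored
# pair, it sufficiently improves either feasibility or the objective.
def acceptable(theta, f, filter_entries, gamma=1e-5):
    return all(theta <= (1 - gamma) * th or f <= fe - gamma * th
               for th, fe in filter_entries)
```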

Efficient Block-coordinate Descent Algorithms for the Group Lasso

We present two algorithms for solving the Group Lasso problem [Yuan & Lin]. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach to optimizing each subproblem. We show that it exhibits excellent performance when the groups are of moderate size. For large …
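
To make the per-block subproblem concrete in the simplest case: when each group's design block has orthonormal columns, the BCD update reduces to closed-form block soft-thresholding. A hedged sketch (the paper's general algorithm does not require orthonormal blocks):

```python
# One BCD sweep for the group lasso  min 0.5*||y - X w||^2 + lam * sum_g ||w_g||,
# assuming each X_g has orthonormal columns so the subproblem is closed-form.
import numpy as np

def bcd_sweep(X_blocks, y, w_blocks, lam):
    resid = y - sum(X @ w for X, w in zip(X_blocks, w_blocks))
    for g, (Xg, wg) in enumerate(zip(X_blocks, w_blocks)):
        rg = Xg.T @ (resid + Xg @ wg)          # partial residual for group g
        norm = np.linalg.norm(rg)
        wg_new = np.zeros_like(wg) if norm <= lam else (1 - lam / norm) * rg
        resid += Xg @ (wg - wg_new)            # incremental residual update
        w_blocks[g] = wg_new
    return w_blocks
```

The dichotomy in the update (a group is zeroed out whenever the partial gradient norm falls below lam) is what produces group-level sparsity.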

A Retrospective Filter Trust Region Algorithm for Unconstrained Optimization

In this paper, we propose a retrospective filter trust-region algorithm for unconstrained optimization, which combines the framework of the retrospective trust-region method with the multidimensional filter technique. The new algorithm gives a good estimate of the trust-region radius and relaxes the condition for accepting a trial step …
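
The retrospective ingredient can be stated in generic notation (a sketch of the underlying retrospective trust-region ratio, not the paper's full update): after the new model m_{k+1} is built at the accepted iterate, the radius for the next iteration is chosen using

```latex
% Retrospective ratio: measure how well the *new* model m_{k+1} explains
% the step just taken, and set \Delta_{k+1} accordingly.
\tilde{\rho}_{k+1} =
  \frac{f(x_k) - f(x_{k+1})}{m_{k+1}(x_k) - m_{k+1}(x_{k+1})},
```

so the radius reflects the quality of the model that will actually be used on the next iteration.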