Beyond symmetric Broyden for updating quadratic models in minimization without derivatives

Some highly successful algorithms for unconstrained minimization without derivatives construct changes to the variables by applying trust region methods to quadratic approximations to the objective function F(x), x in R^n. A quadratic model has (n+1)(n+2)/2 independent parameters, but each new model may interpolate only 2n+1 values of F, for instance. The symmetric Broyden method takes …
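
As a point of reference, the classical symmetric Broyden (Powell-symmetric-Broyden) formula makes the smallest symmetric change, in the Frobenius norm, to the current Hessian approximation that is consistent with one secant condition; the paper works instead with least-change updates of full quadratic models subject to interpolation conditions. A minimal numpy sketch of the classical matrix form, with illustrative names:

    import numpy as np

    def symmetric_broyden_update(B, s, y):
        """Powell-symmetric-Broyden update: the symmetric matrix closest
        to B in the Frobenius norm that satisfies the secant condition
        B_new @ s = y."""
        r = y - B @ s                      # residual of the secant condition
        ss = s @ s
        return (B + (np.outer(r, s) + np.outer(s, r)) / ss
                  - (r @ s) * np.outer(s, s) / ss**2)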

A Retrospective Filter Trust Region Algorithm For Unconstrained Optimization

In this paper, we propose a retrospective filter trust region algorithm for unconstrained optimization, based on the framework of the retrospective trust region method combined with the multidimensional filter technique. The new algorithm gives a good estimate of the trust region radius and relaxes the condition for accepting a trial step …
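
For readers unfamiliar with filter mechanisms: a multidimensional filter stores vectors of nonnegative measures (for instance the absolute values of the gradient components) and accepts a trial point only if no stored entry dominates it componentwise. A schematic sketch of such an acceptance test, where the margin gamma and the stored quantities are illustrative assumptions rather than the paper's exact rule:

    def acceptable(trial, filter_entries, gamma=1e-5):
        """Accept `trial` (a tuple of nonnegative measures) unless some
        filter entry dominates it componentwise within a small margin."""
        for entry in filter_entries:
            if all(t >= (1.0 - gamma) * e for t, e in zip(trial, entry)):
                return False               # dominated: reject the trial step
        return True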

On the convergence of a wide range of trust region methods for unconstrained optimization

We consider trust region methods for seeking the unconstrained minimum of an objective function F(x), x being the vector of variables, when the gradient grad F is available. The methods are iterative with a starting point x_1 being given. The new vector of variables x_(k+1) is derived from a quadratic approximation to F that interpolates …
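
A minimal trust region iteration of the kind analysed, here with the Cauchy-point step on a model with unit Hessian and the conventional ratio test; the constants are standard textbook choices, not taken from the paper:

    import numpy as np

    def trust_region(f, grad, x, delta=1.0, tol=1e-8, max_iter=500):
        """Basic trust region method with a Cauchy-point step on the
        model m(s) = f(x) + g.s + 0.5*s.B.s, here with B = I."""
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            t = min(delta / np.linalg.norm(g), 1.0)
            s = -t * g                              # Cauchy step for B = I
            pred = -(g @ s + 0.5 * s @ s)           # predicted reduction
            rho = (f(x) - f(x + s)) / pred          # actual / predicted
            if rho > 0.1:                           # accept the step
                x = x + s
            delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
        return x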

A three-term conjugate gradient method with sufficient descent property for unconstrained optimization

Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods which always generate a sufficient descent direction. We give a sufficient condition for the global convergence of the proposed general method. …
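
One well-known member of such a family is the three-term PRP direction of Zhang, Zhou and Li (2006), for which g_k' d_k = -||g_k||^2 holds identically, so sufficient descent is independent of the line search. A small numpy sketch, with names chosen for illustration:

    import numpy as np

    def three_term_direction(g, g_prev, d_prev):
        """Three-term PRP direction d = -g + beta*d_prev - theta*y with
        y = g - g_prev; by construction g @ d = -||g||^2, a sufficient
        descent direction regardless of the line search."""
        y = g - g_prev
        denom = g_prev @ g_prev
        beta = (g @ y) / denom
        theta = (g @ d_prev) / denom
        return -g + beta * d_prev - theta * y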

A Proximal Algorithm with Quasi Distance. Application to Habit’s Formation

We consider a proximal algorithm with a quasi distance applied to nonconvex and nonsmooth functions with analytic properties for an unconstrained minimization problem. We show the behavioral importance of this proximal point model for habit formation in the Decision Making Sciences.
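
A quasi distance drops the symmetry axiom, so the cost of moving from x to y may differ from the cost of moving back, which is what lets the model capture habits and inertia in behavior. A schematic proximal iteration with an illustrative asymmetric quasi distance (the function q, the weights a and b, and the regularization lam below are assumptions, not the paper's choices):

    import numpy as np
    from scipy.optimize import minimize

    def quasi_distance(x, y, a=1.0, b=3.0):
        """Asymmetric 'cost to move' from x to y: increases are charged
        b per unit, decreases a per unit, so q(x, y) != q(y, x)."""
        d = y - x
        return np.sum(np.where(d >= 0, b * d, -a * d))

    def proximal_point(f, x0, lam=1.0, n_iter=50):
        """x_{k+1} minimizes f(x) + lam * q(x_k, x)^2."""
        x = np.asarray(x0, float)
        for _ in range(n_iter):
            obj = lambda z, xk=x: f(z) + lam * quasi_distance(xk, z) ** 2
            x = minimize(obj, x, method="Nelder-Mead").x
        return x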

A Limited Memory Steepest Descent Method

The possibilities inherent in steepest descent methods have been considerably amplified by the introduction of the Barzilai-Borwein choice of step-size, and other related ideas. These methods have proved to be competitive with conjugate gradient methods for large-scale unconstrained minimization problems. This paper suggests a method which is able to take advantage …
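
For reference, the first Barzilai-Borwein step length is alpha_k = (s's)/(s'y) with s = x_k - x_{k-1} and y = g_k - g_{k-1}, a Rayleigh-quotient-like approximation to an inverse Hessian eigenvalue. A minimal gradient method using it, with safeguards kept deliberately simple:

    import numpy as np

    def bb_gradient_descent(grad, x, alpha0=1e-4, tol=1e-8, max_iter=1000):
        """Steepest descent with the Barzilai-Borwein step
        alpha = (s @ s)/(s @ y), falling back to alpha0 when s @ y <= 0."""
        g = grad(x)
        alpha = alpha0
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            x_new = x - alpha * g
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            sy = s @ y
            alpha = (s @ s) / sy if sy > 0 else alpha0
            x, g = x_new, g_new
        return x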

Using approximate secant equations in limited memory methods for multilevel unconstrained optimization

The properties of multilevel optimization problems defined on a hierarchy of discretization grids can be used to define approximate secant equations, which describe the second-order behaviour of the objective function. Following earlier work by Gratton and Toint (2009), we introduce a quasi-Newton method (with a linesearch) and a nonlinear conjugate gradient method that both take …
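
However the pairs (s_i, y_i) are produced, exact differences on the fine grid or approximate secant information transferred between discretization levels, a limited memory method consumes them through the standard two-loop recursion. A sketch, with the multilevel construction of the pairs left abstract:

    import numpy as np

    def lbfgs_direction(g, pairs):
        """L-BFGS two-loop recursion: returns -H @ g, where H is the
        inverse Hessian approximation built from secant pairs (s, y)
        satisfying, perhaps only approximately, B s = y.
        `pairs` is ordered oldest to newest."""
        q = g.copy()
        alphas = []
        for s, y in reversed(pairs):
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((rho, a))
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)                 # initial scaling of H0
        for (s, y), (rho, a) in zip(pairs, reversed(alphas)):
            b = rho * (y @ q)
            q += (a - b) * s
        return -q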

Alternating direction algorithms for total variation deconvolution in image reconstruction

Image restoration and reconstruction from blurry and noisy observations is known to be ill-posed. To stabilize the recovery, total variation (TV) regularization was introduced by Rudin, Osher and Fatemi (1992), which has demonstrated superiority in preserving image edges. However, the nondifferentiability of TV makes the underlying optimization problems difficult to solve. In this paper, …
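
The central device in such alternating direction schemes is that introducing the splitting w = Du isolates the nonsmooth TV term in a subproblem solved exactly by soft-thresholding. A minimal 1D denoising sketch (identity blur operator; the penalty and iteration counts are chosen for illustration, not the paper's parameters):

    import numpy as np

    def shrink(v, t):
        """Soft-thresholding: the closed-form minimizer of
        |w| + (1/(2t)) * (w - v)^2, applied componentwise."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def tv_denoise_1d(f, mu=20.0, beta=10.0, n_iter=200):
        """Alternating minimization for min_u sum|Du| + (mu/2)||u - f||^2
        via the splitting w = Du with quadratic penalty beta."""
        n = len(f)
        D = np.diff(np.eye(n), axis=0)             # forward difference matrix
        A = mu * np.eye(n) + beta * D.T @ D        # normal equations for the u-step
        u = f.copy()
        for _ in range(n_iter):
            w = shrink(D @ u, 1.0 / beta)          # w-step: shrinkage
            u = np.linalg.solve(A, mu * f + beta * D.T @ w)  # u-step: least squares
        return u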

On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization

It is shown that the steepest descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to O(epsilon^{-2}) to drive the norm of the gradient below epsilon. This shows that the upper bound of O(epsilon^{-2}) evaluations known for the steepest descent …
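
For context, the matching O(epsilon^{-2}) upper bound for steepest descent follows from the standard descent lemma: with an L-Lipschitz gradient and step length 1/L, each iteration decreases F by at least ||grad F||^2/(2L), and summing over K iterations gives (the textbook argument, not the paper's construction):

    \min_{0 \le k < K} \|\nabla F(x_k)\|
      \le \sqrt{\frac{2L\,\bigl(F(x_0) - F_{\mathrm{low}}\bigr)}{K}},

so K of order 2L (F(x_0) - F_low) epsilon^{-2} iterations suffice to drive the gradient norm below epsilon; the paper shows this order cannot be improved.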

A globally convergent modified conjugate-gradient line-search algorithm with inertia controlling

In this paper we address the problem of unboundedness in the search direction when the Hessian is indefinite or near-singular. We propose a new algorithm that naturally handles singular Hessian matrices and is theoretically equivalent to the trust-region approach. This is accomplished by performing explicit, adaptive matrix modifications that mimic the implicit …
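
One standard explicit modification in this spirit adds a multiple of the identity until a Cholesky factorization succeeds, which keeps the search direction bounded and descent; the paper's inertia-controlling modification is more refined, so the following numpy sketch is only illustrative:

    import numpy as np

    def modified_newton_direction(H, g, tau0=1e-3):
        """Solve (H + tau*I) d = -g for the smallest tau from a geometric
        sequence that makes H + tau*I positive definite."""
        tau = 0.0
        while True:
            try:
                L = np.linalg.cholesky(H + tau * np.eye(len(g)))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, tau0)     # inflate until positive definite
        return np.linalg.solve(L.T, np.linalg.solve(L, -g))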