Necessary conditions for local optimality in d.c. programming

Using $\varepsilon$-subdifferential calculus for difference-of-convex (d.c.) programming, Dür proposed a condition sufficient for local optimality, and showed that this condition is not necessary in general. Here it is proved that whenever the convex part is strongly convex, this condition is also necessary. Strong convexity can always be ensured by changing the given d.c. decomposition slightly. … Read more
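As a brief illustration of the last point (the notation is assumed here, not taken from the abstract): if $f = g - h$ with $g$ and $h$ convex, then for any $\rho > 0$ the same quadratic can be added to both parts,

\[
f(x) \;=\; \Bigl(g(x) + \tfrac{\rho}{2}\lVert x\rVert^2\Bigr) \;-\; \Bigl(h(x) + \tfrac{\rho}{2}\lVert x\rVert^2\Bigr),
\]

which leaves $f$ unchanged while making the convex part $\rho$-strongly convex.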

SESOP-TN: Combining Sequential Subspace Optimization with Truncated Newton method

SESOP-TN is a method for very large-scale unconstrained optimization of smooth functions. It combines ideas of Sequential Subspace Optimization (SESOP) [Narkiss-Zibulevsky-2005] with those of the Truncated Newton (TN) method. Replacing the TN line search with a subspace optimization, we allow Conjugate Gradient (CG) iterations to stay matched across consecutive TN steps. This resolves the problem … Read more
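A minimal sketch of such a subspace-optimization step, under generic assumptions (the function and direction names below are illustrative, not the authors' implementation): instead of a one-dimensional line search along the TN direction, the objective is minimized over a small subspace that can include the TN direction, the gradient, and previous steps.

```python
import numpy as np
from scipy.optimize import minimize

def subspace_step(f, x, directions):
    """Minimize f over the affine set x + D @ alpha (illustrative sketch)."""
    D = np.column_stack(directions)           # n-by-k basis of the search subspace
    phi = lambda alpha: f(x + D @ alpha)      # reduced k-dimensional problem
    res = minimize(phi, np.zeros(D.shape[1]), method="BFGS")
    return x + D @ res.x

# Hypothetical usage on a simple quadratic: subspace spanned by the (here exact)
# Newton direction, the negative gradient, and a previous step.
A = np.diag([1.0, 10.0, 100.0])
f = lambda z: 0.5 * z @ A @ z
x = np.array([1.0, 1.0, 1.0])
g = A @ x
newton_dir = -np.linalg.solve(A, g)
prev_step = np.array([0.1, 0.0, 0.0])
print(subspace_step(f, x, [newton_dir, -g, prev_step]))
```

The point of such a step is that information from earlier iterations enters the next iterate through the extra subspace directions rather than being discarded by a scalar line search.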

A nonmonotone truncated Newton-Krylov method exploiting negative curvature directions, for large scale unconstrained optimization: complete results

We propose a new truncated Newton method for large scale unconstrained optimization, where a Conjugate Gradient (CG)-based technique is adopted to solve Newton’s equation. In the current iteration, the Krylov method computes a pair of search directions: the first approximates the Newton step of the quadratic convex model, while the second is a suitable negative … Read more
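A compact sketch of the underlying mechanism, under generic assumptions (this is textbook CG with a curvature test, not the paper's full algorithm): run CG on the Newton equation $Hd = -g$ and, if a conjugate direction of nonpositive curvature appears, return it alongside the current CG iterate.

```python
import numpy as np

def cg_with_negative_curvature(H, g, tol=1e-8, max_iter=100):
    """Approximately solve H d = -g by CG and report a direction of
    nonpositive curvature if one is encountered (illustrative sketch)."""
    r = -np.asarray(g, dtype=float)   # residual of H d = -g, with d = 0
    d = np.zeros_like(r)              # approximate Newton step
    p = r.copy()                      # current conjugate direction
    neg_curv = None
    for _ in range(max_iter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:                 # model is not convex along p
            neg_curv = p.copy()
            break
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d, neg_curv

# Hypothetical example with an indefinite Hessian: CG truncates and the
# negative-curvature direction is returned together with the partial step.
print(cg_with_negative_curvature(np.diag([2.0, 1.0, -0.5]), np.array([1.0, 1.0, 1.0])))
```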

Descent heuristics for unconstrained minimization

Semidefinite relaxations often provide excellent starting points for nonconvex problems with multiple local minimizers. This work aims to find a local minimizer within a certain neighborhood of the starting point and with a small objective value. Several approaches are motivated and compared with each other. Citation: Report, Mathematisches Institut, Universitaet Duesseldorf, August 2008.

Optimal steepest descent algorithms for unconstrained convex problems: fine tuning Nesterov’s method

We modify the first-order algorithm for convex programming proposed by Nesterov. The resulting algorithm retains the optimal complexity obtained by Nesterov without requiring a known Lipschitz constant for the gradient, and performs better on practically all examples in a set of test problems. Citation: Technical Report, Federal University of Santa Catarina, 2008.
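A hedged sketch of the general mechanism (a generic accelerated gradient scheme with backtracking on the Lipschitz estimate; the paper's fine-tuned variant may differ): the constant $L$ is replaced by a running estimate that is increased until a standard sufficient-decrease inequality holds.

```python
import numpy as np

def accelerated_gradient(f, grad, x0, L0=1.0, iters=100):
    """Nesterov-style accelerated gradient with backtracking on the
    Lipschitz estimate L (generic illustrative sketch)."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        g = grad(y)
        # Increase L until the gradient step satisfies
        # f(y - g/L) <= f(y) - ||g||^2 / (2L).
        while f(y - g / L) > f(y) - (g @ g) / (2 * L):
            L *= 2.0
        x_new = y - g / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Hypothetical example: ill-conditioned quadratic, no Lipschitz constant supplied.
A = np.diag([1.0, 100.0])
print(accelerated_gradient(lambda z: 0.5 * z @ A @ z, lambda z: A @ z, np.array([1.0, 1.0])))
```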

A stochastic algorithm for function minimization

Starting from conditions that an optimization problem may satisfy, so-called convergence conditions are proposed, and a stochastic optimization algorithm, named the DSZ algorithm, is then presented to handle both unconstrained and constrained optimization. Its principle is discussed within the theoretical model of the DSZ algorithm, from which we present a practical … Read more

ORBIT: Optimization by Radial Basis Function Interpolation in Trust-Regions

We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by … Read more
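As a hedged illustration of the kind of model involved (a generic cubic RBF interpolant with a linear tail; ORBIT's actual model selection and management are more involved), the model coefficients come from one small linear system over the interpolation points:

```python
import numpy as np

def rbf_model(points, values):
    """Cubic RBF interpolant with a linear polynomial tail (illustrative sketch)."""
    Y = np.asarray(points, dtype=float)       # m interpolation points, shape (m, n)
    fvals = np.asarray(values, dtype=float)
    m, n = Y.shape
    Phi = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2) ** 3   # phi(r) = r^3
    P = np.hstack([np.ones((m, 1)), Y])       # linear tail basis [1, x]
    # Standard RBF interpolation system for the coefficients (lambda, c).
    K = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    coef = np.linalg.solve(K, np.concatenate([fvals, np.zeros(n + 1)]))
    lam, c = coef[:m], coef[m:]

    def model(x):
        r = np.linalg.norm(Y - x, axis=1)
        return lam @ r ** 3 + c[0] + c[1:] @ x
    return model

# Hypothetical example: interpolate a nonlinear function at six points in R^2.
pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(6, 2))
vals = [np.sin(p[0]) + p[1] ** 2 for p in pts]
s = rbf_model(pts, vals)
print(s(pts[0]), vals[0])   # the model interpolates, so these should agree
```

Such a model can capture curvature of a nonlinear function from relatively few samples, which is why fewer function evaluations may suffice compared with fixed-degree polynomial models.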

Extended Barzilai-Borwein method for unconstrained minimization problems

In 1988, Barzilai and Borwein presented a new choice of step size for the gradient method for solving unconstrained minimization problems. Their method aimed to accelerate the convergence of the steepest descent method. The Barzilai-Borwein method requires few storage locations and inexpensive computations. Therefore, several authors have paid attention to the Barzilai-Borwein method and have … Read more
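The classical step-size choice itself is compact enough to sketch (the first BB formula; the extended method of this entry is not reproduced here): with $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})$, the step is $\alpha_k = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1}$.

```python
import numpy as np

def bb_gradient_method(grad, x0, alpha0=1.0, iters=50):
    """Gradient method with the (first) Barzilai-Borwein step size
    alpha_k = (s's) / (s'y). Illustrative sketch of the classical scheme."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                 # simple safeguard: keep the step positive
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x

# Hypothetical example: ill-conditioned quadratic where plain steepest descent crawls.
A = np.diag([1.0, 500.0])
print(bb_gradient_method(lambda z: A @ z, np.array([1.0, 1.0])))
```

Only the previous iterate and gradient need to be stored, which is the low-storage property the abstract refers to.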

Accelerated line-search and trust-region methods

In numerical optimization, line-search and trust-region methods are two important classes of descent schemes, with well-understood global convergence properties. Here we consider “accelerated” versions of these methods, where the conventional iterate is allowed to be replaced by any point that produces at least as much decrease in the cost function as a fixed fraction of … Read more
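In symbols, one natural reading of the acceptance rule (the notation is illustrative, and the truncated sentence is completed here only as an assumption): if $x_{k+1}$ denotes the conventional line-search or trust-region iterate and $c \in (0, 1]$ is the fixed fraction, then any point $\tilde{x}_{k+1}$ is admissible provided

\[
f(x_k) - f(\tilde{x}_{k+1}) \;\ge\; c \,\bigl(f(x_k) - f(x_{k+1})\bigr),
\]

which is the kind of condition under which the convergence guarantees of the conventional scheme can typically be preserved.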

A subspace minimization method for the trust-region step

We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region constraint. The Steihaug-Toint method uses the conjugate-gradient (CG) algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. … Read more
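For reference, a compact textbook-style sketch of the Steihaug-Toint iteration described above (generic version, not the subspace method proposed in this entry): CG is applied to the quadratic model and truncated at the trust-region boundary or when negative curvature is met.

```python
import numpy as np

def steihaug_toint(H, g, delta, tol=1e-8, max_iter=100):
    """Approximately minimize 0.5 d'Hd + g'd subject to ||d|| <= delta
    by truncated CG (illustrative textbook sketch)."""
    r = -np.asarray(g, dtype=float)   # residual of H d = -g, with d = 0
    d = np.zeros_like(r)
    p = r.copy()
    if np.linalg.norm(r) < tol:
        return d
    for _ in range(max_iter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:                                 # negative curvature: stop on the boundary
            return d + _to_boundary(d, p, delta) * p
        alpha = (r @ r) / curv
        if np.linalg.norm(d + alpha * p) >= delta:    # step would cross the boundary
            return d + _to_boundary(d, p, delta) * p
        d = d + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            return d
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def _to_boundary(d, p, delta):
    """Positive tau with ||d + tau*p|| = delta (root of a scalar quadratic)."""
    a, b, c = p @ p, 2.0 * (d @ p), d @ d - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Hypothetical example: indefinite model Hessian, trust-region radius 1.
print(steihaug_toint(np.diag([2.0, -1.0]), np.array([1.0, 1.0]), 1.0))
```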