A class of derivative-free nonmonotone optimization algorithms employing coordinate rotations and gradient approximations

In this paper we study a class of derivative-free unconstrained minimization algorithms employing nonmonotone inexact linesearch techniques along a set of suitable search directions. In particular, we define globally convergent nonmonotone versions of some well-known derivative-free methods and we propose a new algorithm combining coordinate rotations with approximate simplex gradients. Through extensive numerical experimentation, we …
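The nonmonotone ingredient can be made concrete with a small sketch: a plain coordinate (compass) search in which a trial step is accepted whenever it achieves sufficient decrease with respect to the maximum of the last $M$ function values rather than the current one. This illustrates only the nonmonotone acceptance idea under simplifying assumptions (fixed coordinate directions, no rotations, no simplex gradients); all names and parameter choices below are ours, not the paper's.

```python
import numpy as np

def nonmonotone_coordinate_search(f, x0, alpha0=1.0, M=5, gamma=1e-4,
                                  tol=1e-8, max_iter=1000):
    """Derivative-free coordinate search with a nonmonotone acceptance test:
    a step along +/- e_i is accepted if it improves on the maximum of the
    last M function values (illustrative sketch, not the paper's algorithm)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    alpha = np.full(n, alpha0)          # one stepsize per coordinate
    history = [f(x)]                    # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        f_ref = max(history[-M:])       # nonmonotone reference value
        moved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * alpha[i]
                ft = f(trial)
                # sufficient decrease relative to the reference value
                if ft <= f_ref - gamma * alpha[i] ** 2:
                    x, moved = trial, True
                    alpha[i] *= 2.0     # expand on success
                    history.append(ft)
                    break
            else:
                alpha[i] *= 0.5         # shrink on failure
        if not moved and alpha.max() < tol:
            break
    return x

# usage: minimize a simple smooth nonconvex test function
rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
print(nonmonotone_coordinate_search(rosen, [-1.2, 1.0]))
```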

On the connection between the conjugate gradient method and quasi-Newton methods on quadratic problems

It is well known that the conjugate gradient method and a quasi-Newton method, using any well-defined update matrix from the one-parameter Broyden family of updates, produce identical iterates on a quadratic problem with positive-definite Hessian. This equivalence does not hold for an arbitrary quasi-Newton method. We define precisely the conditions on the update matrix in the …
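The equivalence is easy to verify numerically. The sketch below (our own construction, not from the paper) runs CG and a BFGS method with exact linesearches, BFGS standing in for one well-defined member of the Broyden family, on a random convex quadratic and checks that the iterates coincide up to round-off.

```python
import numpy as np

def cg_iterates(A, b, k):
    """First k CG iterates for min 0.5 x^T A x - b^T x, started at x0 = 0."""
    x = np.zeros_like(b); r = b - A @ x; p = r.copy(); out = []
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        out.append(x.copy())
    return out

def bfgs_exact_iterates(A, b, k):
    """First k BFGS iterates with exact linesearch on the same quadratic."""
    n = b.size
    x = np.zeros(n); H = np.eye(n); out = []
    for _ in range(k):
        g = A @ x - b
        d = -H @ g
        alpha = -(g @ d) / (d @ (A @ d))   # exact linesearch for a quadratic
        x_new = x + alpha * d
        s = x_new - x
        y = A @ x_new - A @ x              # y = g_new - g
        rho = 1.0 / (y @ s)
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse-Hessian update
        x = x_new
        out.append(x.copy())
    return out

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = M @ M.T + 6 * np.eye(6)  # SPD Hessian
b = rng.standard_normal(6)
for xc, xq in zip(cg_iterates(A, b, 5), bfgs_exact_iterates(A, b, 5)):
    print(np.linalg.norm(xc - xq))   # identical up to round-off
```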

The Generalized Trust Region Subproblem

The \emph{interval bounded generalized trust region subproblem} (GTRS) consists in minimizing a general quadratic objective, $q_0(x) \rightarrow \min$, subject to an upper and lower bounded general quadratic constraint, $\ell \leq q_1(x) \leq u$. This means that there are no definiteness assumptions on either quadratic function. We first study characterizations of optimality for this \emph{implicitly} convex …
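Writing each quadratic explicitly (our notation, not the abstract's) as $q_i(x) = x^T A_i x + 2\,b_i^T x + c_i$ with $A_i$ symmetric but not necessarily definite, the GTRS reads

\[
\min_{x \in \mathbb{R}^n} \; q_0(x)
\quad \text{s.t.} \quad \ell \leq q_1(x) \leq u,
\qquad
q_i(x) = x^T A_i x + 2\,b_i^T x + c_i, \quad i = 0, 1,
\]

which reduces to the classical trust region subproblem when $A_1 = I$, $b_1 = 0$, $c_1 = 0$, and $\ell = -\infty$.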

Coordinate Search Algorithms in Multilevel Optimization

Many optimization problems of practical interest arise from the discretization of continuous problems. Classical examples can be found in calculus of variations, optimal control and image processing. In recent years a number of strategies have been proposed for the solution of such problems, broadly known as multilevel methods. Inspired by classical multigrid schemes for linear …
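As a toy illustration of the multilevel idea (our construction, not the paper's algorithm), one can run a derivative-free coordinate search on a coarse discretization of a one-dimensional Dirichlet energy and prolong the result to a finer grid as a warm start:

```python
import numpy as np

def coord_search(f, x, alpha=0.5, tol=1e-6, max_iter=2000):
    """Plain monotone coordinate (compass) search on f."""
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+1.0, -1.0):
                t = x.copy(); t[i] += s * alpha
                ft = f(t)
                if ft < fx:
                    x, fx, improved = t, ft, True
                    break
        if not improved:
            alpha *= 0.5                 # refine the mesh of steps
            if alpha < tol:
                break
    return x

def energy(u, h):
    """Discrete Dirichlet energy 0.5*sum((u')^2) - sum(u), zero boundary values."""
    up = np.concatenate(([0.0], u, [0.0]))        # pad with boundary zeros
    return 0.5 * np.sum(np.diff(up) ** 2) / h - h * np.sum(u)

# coarse solve (7 interior points), then prolong to the fine grid (15 points)
n_c, n_f = 7, 15
h_c, h_f = 1.0 / (n_c + 1), 1.0 / (n_f + 1)
u_c = coord_search(lambda u: energy(u, h_c), np.zeros(n_c))
x_c = np.linspace(0, 1, n_c + 2)                  # coarse nodes incl. boundary
x_f = np.linspace(0, 1, n_f + 2)[1:-1]            # fine interior nodes
u_f0 = np.interp(x_f, x_c, np.concatenate(([0], u_c, [0])))  # prolongation
u_f = coord_search(lambda u: energy(u, h_f), u_f0)
print(np.max(np.abs(u_f - x_f * (1 - x_f) / 2)))  # close to the analytic x(1-x)/2
```

The coarse solve is cheap and its interpolant already sits near the fine-grid minimizer, so the fine-level search starts from a good point; this is the basic rationale behind multilevel schemes.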

Iterative Reweighted Minimization Methods for $l_p$ Regularized Unconstrained Nonlinear Programming

In this paper we study general $l_p$ regularized unconstrained minimization problems. In particular, we derive lower bounds for nonzero entries of first- and second-order stationary points, and hence also of local minimizers of the $l_p$ minimization problems. We extend some existing iterative reweighted $l_1$ (IRL1) and $l_2$ (IRL2) minimization methods to solve these problems and …
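For the least-squares instance $f(x) = \tfrac{1}{2}\|Ax-b\|^2$, one standard IRL1 scheme replaces $|x_i|^p$ at outer iteration $k$ by the weighted term $w_i |x_i|$ with $w_i = \lambda p (|x_i^k| + \epsilon)^{p-1}$, and solves each weighted $l_1$ subproblem by proximal gradient steps. The sketch below follows this generic recipe; the stepsizes, $\epsilon$, and inner solver are our illustrative choices, not the paper's.

```python
import numpy as np

def weighted_ista(A, b, w, x0, n_steps=500):
    """Proximal gradient for 0.5||Ax-b||^2 + sum_i w_i |x_i|."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_steps):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # weighted soft-threshold
    return x

def irl1(A, b, lam=0.1, p=0.5, eps=1e-3, outer=20):
    """Iterative reweighted l1 for 0.5||Ax-b||^2 + lam * sum_i |x_i|^p."""
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        w = lam * p * (np.abs(x) + eps) ** (p - 1.0)   # reweighting step
        x = weighted_ista(A, b, w, x)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]
b = A @ x_true
print(np.nonzero(np.round(irl1(A, b), 3))[0])   # should recover a sparse support
```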

Variable Neighborhood Search for parameter tuning in Support Vector Machines

As in most Data Mining procedures, how to tune the parameters of a Support Vector Machine (SVM) is a critical, though not sufficiently explored, issue. The default approach is a grid search in the parameter space, which becomes prohibitively time-consuming even when just a few parameters are to be tuned. For this reason, for models …
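A basic VNS loop for the RBF-SVM parameters $(C, \gamma)$, searched in $\log_2$ space, looks as follows. This is a generic textbook VNS (shaking in neighborhoods of growing radius followed by a crude random local search), not the scheme of the paper; the radii, stepsizes, and dataset are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def cv_error(theta):
    """5-fold CV error of an RBF-SVM at theta = (log2 C, log2 gamma)."""
    clf = SVC(C=2.0 ** theta[0], gamma=2.0 ** theta[1])
    return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

def vns(f, theta0, radii=(0.5, 1.0, 2.0, 4.0), n_local=10, seed=0):
    """Basic VNS: shake in growing neighborhoods, do a random local search,
    and restart from the smallest neighborhood whenever the incumbent improves."""
    rng = np.random.default_rng(seed)
    theta, best = np.array(theta0, float), f(theta0)
    k = 0
    while k < len(radii):
        cand = theta + rng.uniform(-radii[k], radii[k], size=2)  # shaking
        f_cand = f(cand)
        for _ in range(n_local):                                 # local search
            t = cand + rng.normal(scale=0.25, size=2)
            ft = f(t)
            if ft < f_cand:
                cand, f_cand = t, ft
        if f_cand < best:
            theta, best, k = cand, f_cand, 0   # move and restart neighborhoods
        else:
            k += 1                             # try a larger neighborhood
    return theta, best

theta, err = vns(cv_error, theta0=(0.0, -7.0))
print("log2 C, log2 gamma:", theta, "CV error:", err)
```

Unlike a grid, the search concentrates evaluations around promising regions, which is the main argument for metaheuristics over grid search when each evaluation (a full cross-validation) is expensive.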

On the complexity of the steepest-descent with exact linesearches

The worst-case complexity of the steepest-descent algorithm with exact linesearches for unconstrained smooth optimization is analyzed, and it is shown that the number of iterations of this algorithm that may be necessary to find an iterate at which the norm of the objective function’s gradient is less than a prescribed $\epsilon$ is, essentially, a multiple …
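For a convex quadratic the exact linesearch has a closed form, $\alpha = g^T g / g^T A g$, which makes the analyzed method easy to state and run; the classical ill-conditioned example below also hints at why many iterations may be needed. (The sketch illustrates the algorithm only; the complexity bound itself is not reproduced here.)

```python
import numpy as np

def steepest_descent_exact(A, b, x0, eps=1e-6, max_iter=10_000):
    """Steepest descent with exact linesearch on q(x) = 0.5 x^T A x - b^T x.
    For a quadratic, the exact stepsize is g^T g / g^T A g."""
    x = np.asarray(x0, float)
    for k in range(max_iter):
        g = A @ x - b                    # gradient of q
        if np.linalg.norm(g) < eps:      # stop once ||grad q|| < eps
            return x, k
        x = x - (g @ g) / (g @ (A @ g)) * g
    return x, max_iter

A = np.diag([1.0, 100.0])                # ill-conditioned: slow zig-zagging
x, iters = steepest_descent_exact(A, np.zeros(2), np.array([100.0, 1.0]))
print(iters, "iterations")               # hundreds of steps for a 2x2 problem
```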

Linearizing the Method of Conjugate Gradients

The method of conjugate gradients (CG) is widely used for the iterative solution of large sparse systems of equations $Ax=b$, where $A\in\Re^{n\times n}$ is symmetric positive definite. Let $x_k$ denote the $k$–th iterate of CG. In this paper we obtain an expression for $J_k$, the Jacobian matrix of $x_k$ with respect to $b$. We use …
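Since each CG stepsize depends on $b$, the iterate $x_k$ is a nonlinear function of $b$ and $J_k$ is nontrivial for $k < n$. The sketch below approximates $J_k$ by central finite differences through a plain CG implementation (our construction; the paper derives an explicit expression). As a sanity check, $J_n$ should approach $A^{-1}$, since CG reaches the exact solution $A^{-1}b$ in $n$ steps.

```python
import numpy as np

def cg_iterate(A, b, k):
    """Return x_k, the k-th CG iterate for Ax = b started at x_0 = 0."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def jacobian_fd(A, b, k, h=1e-6):
    """Central finite-difference approximation of J_k = d x_k / d b."""
    n = b.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (cg_iterate(A, b + e, k) - cg_iterate(A, b - e, k)) / (2 * h)
    return J

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
J3 = jacobian_fd(A, b, 3)                # Jacobian of an intermediate iterate
print(np.linalg.norm(jacobian_fd(A, b, 5) - np.linalg.inv(A)))  # ~0 for k = n
```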

How much patience do you have? A worst-case perspective on smooth nonconvex optimization

The paper presents a survey of recent results in the field of worst-case complexity of algorithms for nonlinear (and possibly nonconvex) smooth optimization. Both the constrained and unconstrained cases are considered.

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this …
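The defining feature of an RSG-type method is that the returned point is an iterate selected at random, so the stationarity guarantee holds in expectation over that selection. The sketch below uses a uniformly random index and a constant stepsize for simplicity; the actual method prescribes its own stepsizes and selection distribution, and the objective and noise model here are our toy choices.

```python
import numpy as np

def rsg(stoch_grad, x0, n_iter=1000, gamma=0.01, seed=0):
    """Randomized stochastic gradient sketch: run SGD with stochastic
    gradients and return the iterate at a randomly chosen index R."""
    rng = np.random.default_rng(seed)
    R = rng.integers(n_iter)            # random output index (uniform here)
    x = np.asarray(x0, float)
    for k in range(n_iter):
        if k == R:
            x_out = x.copy()            # the randomly selected iterate
        x = x - gamma * stoch_grad(x, rng)
    return x_out

# toy nonconvex objective f(x) = sum(x_i^2 - cos(3 x_i)) with gradient noise
def stoch_grad(x, rng):
    return 2 * x + 3 * np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)

x = rsg(stoch_grad, x0=np.full(5, 2.0))
print(np.linalg.norm(2 * x + 3 * np.sin(3 * x)))   # true gradient norm, typically small
```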