The Generalized Trust Region Subproblem

The \emph{interval bounded generalized trust region subproblem} (GTRS) consists in minimizing a general quadratic objective, $q_0(x) \rightarrow \min$, subject to an upper and lower bounded general quadratic constraint, $\ell \leq q_1(x) \leq u$. This means that there are no definiteness assumptions on either quadratic function. We first study characterizations of optimality for this \emph{implicitly} convex … Read more

Coordinate Search Algorithms in Multilevel Optimization

Many optimization problems of practical interest arise from the discretization of continuous problems. Classical examples can be found in calculus of variations, optimal control and image processing. In recent years a number of strategies have been proposed for the solution of such problems, broadly known as multilevel methods. Inspired by classical multigrid schemes for linear … Read more

Iterative Reweighted Minimization Methods for $l_p$ Regularized Unconstrained Nonlinear Programming

In this paper we study general $l_p$ regularized unconstrained minimization problems. In particular, we derive lower bounds for nonzero entries of first- and second-order stationary points, and hence also of local minimizers of the $l_p$ minimization problems. We extend some existing iterative reweighted $l_1$ (IRL1) and $l_2$ (IRL2) minimization methods to solve these problems and … Read more
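
As a rough illustration of the IRL1 idea (a generic sketch, not the authors' exact scheme), the code below treats $\min_x \frac{1}{2}\|Ax-b\|_2^2 + \lambda \sum_i |x_i|^p$ with $0 < p < 1$: each outer iteration linearizes $|x_i|^p$ at the current point, giving weights $w_i = p(|x_i| + \epsilon)^{p-1}$, and then takes proximal gradient (ISTA) steps on the resulting weighted $l_1$ problem. The smoothing parameter, step size, and iteration counts are illustrative choices.

import numpy as np

def irl1_lp(A, b, lam, p=0.5, eps=1e-3, outer=20, inner=200):
    """Iterative reweighted l1 sketch for 0.5*||Ax-b||^2 + lam*sum |x_i|^p."""
    m, n = A.shape
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L with L = ||A||_2^2
    for _ in range(outer):
        # Reweighting: linearize |x_i|^p around the current iterate.
        w = p * (np.abs(x) + eps) ** (p - 1.0)
        for _ in range(inner):
            # Proximal gradient (ISTA) step on the weighted l1 subproblem.
            g = A.T @ (A @ x - b)
            z = x - t * g
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam * w, 0.0)
    return x

# Tiny usage example: recover a 5-sparse vector from random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.count_nonzero(np.abs(irl1_lp(A, b, lam=0.1)) > 1e-6))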

Variable Neighborhood Search for parameter tuning in Support Vector Machines

As in most Data Mining procedures, how to tune the parameters of a Support Vector Machine (SVM) is a critical, though not sufficiently explored, issue. The default approach is a grid search in the parameter space, which becomes prohibitively time-consuming even when just a few parameters are to be tuned. For this reason, for models … Read more
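
As a generic illustration of the VNS idea applied to two SVM hyperparameters (not the paper's specific algorithm), the sketch below searches over a real parameter vector such as $(\log C, \log \gamma)$ using neighborhoods of growing radius; cv_error stands for any cross-validated error estimate and is a hypothetical placeholder here.

import random

def vns_tune(cv_error, x0, radii=(0.5, 1.0, 2.0, 4.0), max_iter=50, seed=0):
    """Basic Variable Neighborhood Search over a real parameter vector.

    cv_error: callable mapping a parameter vector (e.g. (log C, log gamma))
              to a cross-validated error estimate -- a placeholder here.
    """
    rng = random.Random(seed)
    best, best_err = list(x0), cv_error(x0)
    for _ in range(max_iter):
        k = 0
        while k < len(radii):
            # Shake: sample a point in the k-th neighborhood of the incumbent.
            cand = [v + rng.uniform(-radii[k], radii[k]) for v in best]
            err = cv_error(cand)
            if err < best_err:       # improvement: recenter, restart at k = 0
                best, best_err = cand, err
                k = 0
            else:                    # no improvement: enlarge the neighborhood
                k += 1
    return best, best_err

# Usage with a synthetic stand-in for the CV error surface.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
print(vns_tune(f, [0.0, 0.0]))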

On the complexity of the steepest-descent with exact linesearches

The worst-case complexity of the steepest-descent algorithm with exact linesearches for unconstrained smooth optimization is analyzed, and it is shown that the number of iterations of this algorithm that may be necessary to find an iterate at which the norm of the objective function’s gradient is less than a prescribed $\epsilon$ is, essentially, a multiple … Read more
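
For a strictly convex quadratic $f(x) = \frac{1}{2} x^\top A x - b^\top x$, the exact line search along the negative gradient has the closed form $t_k = g_k^\top g_k / (g_k^\top A g_k)$; the sketch below implements this textbook special case (the paper's analysis concerns general smooth objectives, for which exact linesearches have no such closed form).

import numpy as np

def steepest_descent_quadratic(A, b, x0, eps=1e-8, max_iter=10000):
    """Steepest descent with exact line search on f(x) = 0.5*x'Ax - b'x, A SPD."""
    x = x0.astype(float)
    for k in range(max_iter):
        g = A @ x - b                   # gradient of the quadratic
        if np.linalg.norm(g) < eps:     # stop once ||grad f|| < eps
            return x, k
        t = (g @ g) / (g @ (A @ g))     # exact minimizer along -g
        x = x - t * g
    return x, max_iter

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x, iters = steepest_descent_quadratic(A, b, np.zeros(2))
print(x, iters)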

Linearizing the Method of Conjugate Gradients

The method of conjugate gradients (CG) is widely used for the iterative solution of large sparse systems of equations $Ax=b$, where $A\in\Re^{n\times n}$ is symmetric positive definite. Let $x_k$ denote the $k$-th iterate of CG. In this paper we obtain an expression for $J_k$, the Jacobian matrix of $x_k$ with respect to $b$. We use … Read more
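
For reference, a minimal textbook CG recursion for $Ax=b$ with symmetric positive definite $A$ looks as follows; the paper studies the derivative of exactly such iterates $x_k$ with respect to the right-hand side $b$.

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Textbook CG for Ax = b with A symmetric positive definite."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                   # residual r_k = b - A x_k
    p = r.copy()                    # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # update direction (beta = rs_new / rs)
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))     # solves the 2x2 system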

How much patience do you have? A worst-case perspective on smooth nonconvex optimization

The paper presents a survey of recent results in the field of worst-case complexity of algorithms for nonlinear (and possibly nonconvex) smooth optimization. Both the constrained and the unconstrained cases are considered.

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this … Read more
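
A distinctive feature of RSG-type methods is that, after running a stochastic gradient recursion for $N$ steps, the output is an iterate selected at random rather than the last one. The sketch below shows this pattern with uniform selection, which is one simple instance; the paper's method uses a particular selection distribution and stepsize policy, omitted here.

import numpy as np

def randomized_stochastic_gradient(grad_sample, x0, stepsize, N, seed=0):
    """Run N stochastic gradient steps and return a uniformly random iterate.

    grad_sample(x, rng): returns an unbiased stochastic gradient at x.
    Uniform selection is a simplification of the paper's distribution.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(N):
        x = x - stepsize * grad_sample(x, rng)
        iterates.append(x.copy())
    R = rng.integers(len(iterates))   # random output index R
    return iterates[R]

# Usage on f(x) = 0.5*||x||^2 with additive gradient noise.
g = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
print(randomized_stochastic_gradient(g, np.ones(3), stepsize=0.1, N=1000))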

AINVk: a Class of Approximate Inverse Preconditioners based on Krylov-subspace methods, for Large Indefinite Linear Systems

We propose a class of preconditioners for symmetric linear systems arising from numerical analysis and nonconvex optimization frameworks. Our preconditioners are specifically suited for large indefinite linear systems and may be obtained as a by-product of Krylov-subspace solvers, as well as by applying L-BFGS updates. Moreover, our proposal is also suited for the solution of a … Read more
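
While the AINVk construction itself is not reproduced here, the general flavor of obtaining an approximate inverse from L-BFGS updates can be illustrated with the standard two-loop recursion, which applies an implicitly stored inverse-Hessian approximation to a vector and can serve as a preconditioner apply. This is a generic sketch, not the authors' preconditioner.

import numpy as np

def lbfgs_apply(v, s_list, y_list):
    """Apply the L-BFGS inverse approximation H to v via the two-loop recursion.

    s_list, y_list: stored pairs s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k.
    Generic illustration of an implicit approximate inverse, not AINVk.
    """
    q = v.astype(float).copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial scaling H0 = gamma * I with the usual gamma = s'y / y'y.
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q   # approximates H * v

# Usage: pairs from a quadratic f(x) = 0.5*x'Ax; the result approximates A^{-1}v.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Y = [A @ s for s in S]
print(lbfgs_apply(np.array([1.0, 2.0]), S, Y))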

A Low-Memory Approach For Best-State Estimation Of Hidden Markov Models With Model Error

We present a low-memory approach for the best-state estimate (data assimilation) of hidden Markov models where model error is considered. In particular, our findings apply to the 4D-Var framework. The novelty of our approach resides in the fact that the storage needed by our estimation framework, while including model error, is dramatically reduced from … Read more
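
For context, the weak-constraint (model-error) 4D-Var estimate is commonly obtained by minimizing a cost of the following standard form; carrying the model-error terms across the assimilation window is precisely what drives the storage the paper seeks to reduce:
\[
J(x_0,\dots,x_N) = \tfrac{1}{2}\,(x_0 - x_b)^\top B^{-1} (x_0 - x_b)
+ \tfrac{1}{2}\sum_{k=0}^{N} (y_k - H_k x_k)^\top R_k^{-1} (y_k - H_k x_k)
+ \tfrac{1}{2}\sum_{k=1}^{N} \eta_k^\top Q_k^{-1} \eta_k,
\]
where $\eta_k = x_k - M_k(x_{k-1})$ is the model error, $x_b$ the background state, and $B$, $R_k$, $Q_k$ the background, observation, and model-error covariances.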