The divergence of the BFGS and Gauss-Newton methods

We present examples of divergence for the BFGS and Gauss-Newton methods. These examples have objective functions with bounded level sets and share other properties with examples published recently in this journal, such as unit steps and convexity along the search lines. As in those examples, the iterates, function values and gradients in the new examples … Read more
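For orientation, here is a minimal sketch of the classical Gauss-Newton iteration with unit steps, the setting these divergence examples address; the residual map r and Jacobian J below are hypothetical stand-ins, not the paper's examples.

    import numpy as np

    def gauss_newton(r, J, x, iters=50):
        # Classical Gauss-Newton with unit steps: x <- x - (J^T J)^{-1} J^T r(x).
        for _ in range(iters):
            Jx, rx = J(x), r(x)
            x = x - np.linalg.solve(Jx.T @ Jx, Jx.T @ rx)  # unit step, no line search
        return x

    # Hypothetical residuals with a root at (1, 1).
    r = lambda x: np.array([x[0]**2 - 1.0, x[0]*x[1] - 1.0])
    J = lambda x: np.array([[2*x[0], 0.0], [x[1], x[0]]])
    print(gauss_newton(r, J, np.array([2.0, 2.0])))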

A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems

We propose an efficient computational method for linearly constrained quadratic optimization problems (QOPs) with complementarity constraints based on their Lagrangian and doubly nonnegative (DNN) relaxation and first-order algorithms. The simplified Lagrangian-CPP relaxation of such QOPs proposed by Arima, Kim, and Kojima in 2012 takes one of the simplest forms, an unconstrained conic linear optimization problem … Read more
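For reference, a QOP of the class described can be written in the following generic form (our notation, a sketch rather than the authors' exact model); the DNN relaxation then replaces the rank-one matrix $xx^{\mathsf T}$ by a matrix variable that is both positive semidefinite and entrywise nonnegative:

\[
\min_{x \in \mathbb{R}^n_+} \; x^{\mathsf T} Q x + 2 c^{\mathsf T} x
\quad \text{s.t.} \quad Ax = b, \;\; x_i x_j = 0 \;\; \big((i,j) \in E\big),
\]

where $E$ lists the complementary pairs of variables.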

Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming

In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order … Read more
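As background, here is a sketch of the textbook convex variant of Nesterov's AG method with constant stepsize $1/L$; the nonconvex and stochastic stepsize policies that are the paper's contribution are not reproduced here.

    import numpy as np

    def accelerated_gradient(grad, x0, L, iters=100):
        # Textbook Nesterov AG for an L-smooth convex f.
        x_prev, x, t_prev = x0, x0, 1.0
        for _ in range(iters):
            t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev**2))
            y = x + ((t_prev - 1.0) / t) * (x - x_prev)  # momentum extrapolation
            x_prev, x, t_prev = x, y - grad(y) / L, t    # gradient step at y
        return x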

Constant rank constraint qualifications: a geometric introduction

Constraint qualifications (CQs) are assumptions on the algebraic description of the feasible set of an optimization problem that ensure that the KKT conditions hold at any local minimum. In this work we show that constraint qualifications based on the notion of constant rank can be understood as assumptions that ensure that the polar of the … Read more
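For reference, these are the KKT conditions that a CQ certifies at a local minimum $x^*$ of $\min f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$ (standard notation):

\[
\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0,
\qquad \lambda_i \ge 0, \qquad \lambda_i\, g_i(x^*) = 0 \;\; \text{for all } i.
\]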

Complementarity Formulations of l0-norm Optimization Problems

In a number of application areas, it is desirable to obtain sparse solutions. Minimizing the number of nonzeros of the solution (its $\ell_0$-norm) is a difficult nonconvex optimization problem and is often approximated by the convex problem of minimizing the $\ell_1$-norm. In contrast, we consider exact formulations as mathematical programs with complementarity constraints and their … Read more
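One standard way to express the $\ell_0$-norm through complementarity constraints, given here as a sketch of the general pattern rather than the paper's exact formulation:

\[
\|x\|_0 = \min_{\xi \in \mathbb{R}^n} \; \mathbf{1}^{\mathsf T}(\mathbf{1} - \xi)
\quad \text{s.t.} \quad 0 \le \xi \le \mathbf{1}, \;\; \xi_i\, x_i = 0 \;\; (i = 1,\dots,n),
\]

since $\xi_i$ is forced to $0$ exactly at the nonzero entries of $x$.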

A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing

We present a new recovery analysis, based on the fixed points of the algorithm, for a standard compressed sensing algorithm, Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2008). In the context of arbitrary measurement matrices, we derive a sufficient condition for convergence of IHT to a fixed point and a necessary condition for the existence … Read more
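For concreteness, the basic IHT iteration in its simplest form (unit stepsize; in practice convergence requires conditions on $A$, e.g. a spectral norm below one):

    import numpy as np

    def iht(A, y, s, iters=200):
        # Iterative Hard Thresholding: x <- H_s(x + A^T (y - A x)),
        # where H_s keeps only the s largest-magnitude entries.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + A.T @ (y - A @ x)           # gradient step on (1/2)||y - Ax||^2
            small = np.argsort(np.abs(x))[:-s]  # all but the s largest entries
            x[small] = 0.0                      # hard threshold
        return x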

Rounding on the standard simplex: regular grids for global optimization

Given a point on the standard simplex, we calculate a closest point on the regular grid with respect to any norm in a large class, including all $\ell^p$-norms for $p\ge 1$. We show that the minimal $\ell^p$-distance to the regular grid on the standard simplex can exceed one, even for very fine … Read more
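A natural rounding procedure of the kind such an analysis concerns, sketched below under our own conventions (grid points are $z/m$ with $z \in \mathbb{Z}^n_+$ summing to $m$); this is not claimed to be the authors' exact method:

    import numpy as np

    def round_to_grid(x, m):
        # Floor the scaled point, then hand the remaining units
        # to the coordinates with the largest fractional parts.
        scaled = m * np.asarray(x, dtype=float)
        z = np.floor(scaled).astype(int)
        frac = scaled - z
        for i in np.argsort(frac)[::-1][:m - z.sum()]:
            z[i] += 1
        return z / m

    print(round_to_grid([0.46, 0.34, 0.20], m=5))  # -> [0.4 0.4 0.2]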

On the use of iterative methods in cubic regularization for unconstrained optimization

In this paper we consider the problem of minimizing a smooth function by using the Adaptive Cubic Regularization framework (ARC). We focus on the computation of the trial step as a suitable approximate minimizer of the cubic model and discuss the use of matrix-free iterative methods. Our approach is an alternative to the implementation proposed in … Read more
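For reference, the cubic model minimized (approximately) at each ARC iteration, in standard notation:

\[
m_k(s) = f(x_k) + g_k^{\mathsf T} s + \tfrac{1}{2}\, s^{\mathsf T} B_k s + \tfrac{\sigma_k}{3}\, \|s\|^3,
\]

where $g_k = \nabla f(x_k)$, $B_k$ approximates the Hessian, and $\sigma_k > 0$ is the adaptive regularization weight.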

Trust-Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

The trust-region problem, which minimizes a nonconvex quadratic function over a ball, is a key subproblem in trust-region methods for solving nonlinear optimization problems. It enjoys many attractive properties such as an exact semi-definite linear programming relaxation (SDP-relaxation) and strong duality. Unfortunately, such properties do not, in general, hold for an extended trust-region problem having … Read more
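In our notation, the extended trust-region problem in question augments the classical ball constraint with linear inequalities:

\[
\min_{x \in \mathbb{R}^n} \; x^{\mathsf T} Q x + 2 c^{\mathsf T} x
\quad \text{s.t.} \quad \|x\| \le r, \;\; Ax \le b,
\]

and the question is when the SDP-relaxation of this problem remains exact.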

An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization

We study a Newton-like method for the minimization of an objective function $\phi$ that is the sum of a smooth convex function and an $\ell_1$ regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model $q_k$ of the objective … Read more
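In standard form, with $\phi(x) = f(x) + \mu\|x\|_1$ for smooth convex $f$, the piecewise quadratic model reads (notation ours):

\[
q_k(x) = f(x_k) + \nabla f(x_k)^{\mathsf T} (x - x_k) + \tfrac{1}{2}\,(x - x_k)^{\mathsf T} H_k\, (x - x_k) + \mu \|x\|_1,
\]

where $H_k$ approximates $\nabla^2 f(x_k)$; the paper's focus is on how inexactly $q_k$ can be minimized while retaining convergence.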