Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience

In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245.]. …
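
For orientation, a minimal Euclidean sketch of the classical AL framework such methods build on is given below. The specific adaptive penalty rules of the paper are not reproduced; the subproblem solver, the feasibility-based update factor of 10, and the tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, lam0, mu0=10.0, outer_iters=20, tol=1e-8):
    """Basic AL loop for min f(x) subject to c(x) = 0 (equality constraints only).

    AL function: L(x, lam; mu) = f(x) - lam^T c(x) + (mu/2) ||c(x)||^2.
    A genuinely adaptive method would tune mu from optimality and feasibility
    measures; here mu is simply increased whenever feasibility stalls.
    """
    x, lam, mu = np.asarray(x0, float), np.asarray(lam0, float), mu0
    prev_viol = np.inf
    for _ in range(outer_iters):
        aug = lambda z: f(z) - lam @ c(z) + 0.5 * mu * np.dot(c(z), c(z))
        x = minimize(aug, x, method="BFGS").x          # approximate subproblem solve
        viol = np.linalg.norm(c(x))
        if viol < tol:
            break
        lam = lam - mu * c(x)                          # first-order multiplier update
        if viol > 0.25 * prev_viol:                    # feasibility not improving enough
            mu *= 10.0                                 # increase the penalty parameter
        prev_viol = viol
    return x, lam, mu
```

For example, with f(x) = x[0]**2 + x[1]**2 and c(x) = np.array([x[0] + x[1] - 1.0]), the loop drives the iterate toward (0.5, 0.5).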

Block stochastic gradient iteration for convex and nonconvex optimization

The stochastic gradient (SG) method can minimize an objective function composed of a large number of differentiable functions, or solve a stochastic optimization problem, to moderate accuracy. The block coordinate descent/update (BCD) method, on the other hand, handles problems with multiple blocks of variables by updating them one at a time; when the blocks …
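
A hedged sketch of combining the two ideas, assuming a finite-sum objective whose variables are partitioned into coordinate blocks; the sampling scheme, fixed step size, and cyclic block order below are illustrative choices, not the ones analyzed in the paper.

```python
import numpy as np

def block_stochastic_gradient(grad_batch, n_samples, blocks, x0, step=1e-2,
                              epochs=50, batch=32, seed=0):
    """One stochastic-gradient step per coordinate block, cycling over the blocks.

    grad_batch(x, idx) should return the gradient (over all variables) of the
    mini-batch average of the sampled component functions at x; `blocks` is a
    list of index arrays partitioning the variables.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    for _ in range(epochs):
        for blk in blocks:                                  # update one block at a time
            idx = rng.choice(n_samples, size=batch, replace=False)
            g = grad_batch(x, idx)                          # stochastic gradient estimate
            x[blk] -= step * g[blk]                         # move only the current block
    return x
```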

A proximal point algorithm for DC functions on Hadamard manifolds

An extension of a proximal point algorithm for the difference of two convex functions is presented in the context of Riemannian manifolds of nonpositive sectional curvature. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the function (not necessarily convex) under consideration, even if …
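
In the Euclidean special case, one proximal point step for a DC function f = g - h (with g, h convex) linearizes the concave part and applies the proximal operator of g. The sketch below, with user-supplied prox_g and grad_h as illustrative callables, shows that flat-space step; the manifold version in the paper replaces it with exponential maps and geodesic distances.

```python
import numpy as np

def dc_proximal_point(prox_g, grad_h, x0, step=1.0, iters=200, tol=1e-8):
    """Euclidean proximal point iteration for f(x) = g(x) - h(x), g and h convex.

    Each step linearizes -h at x_k and takes a proximal step on g:
        x_{k+1} = prox_{step*g}( x_k + step * grad_h(x_k) ).
    prox_g(v, t) must return argmin_u g(u) + ||u - v||^2 / (2t).
    """
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        x_new = prox_g(x + step * grad_h(x), step)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```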

iPiano: Inertial Proximal Algorithm for Nonconvex Optimization

In this paper we study an algorithm for solving a minimization problem composed of a differentiable (possibly nonconvex) and a convex (possibly nondifferentiable) function. The algorithm iPiano combines forward-backward splitting with an inertial force. It can be seen as a nonsmooth split version of Polyak's heavy-ball method. A rigorous analysis of the algorithm …
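
The inertial forward-backward step described here has a compact form; a minimal iPiano-style sketch for min f(x) + g(x) is below, with placeholder values for the step size alpha and the inertia beta (their admissible ranges are part of the paper's analysis).

```python
import numpy as np

def ipiano(grad_f, prox_g, x0, alpha=0.01, beta=0.5, iters=500):
    """Inertial proximal (iPiano-style) iteration for min f(x) + g(x):

        x_{k+1} = prox_{alpha*g}( x_k - alpha*grad_f(x_k) + beta*(x_k - x_{k-1}) ).

    prox_g(v, t) must return argmin_u g(u) + ||u - v||^2 / (2t).
    """
    x_prev = x = np.asarray(x0, float).copy()
    for _ in range(iters):
        y = x - alpha * grad_f(x) + beta * (x - x_prev)    # forward step plus inertia
        x_prev, x = x, prox_g(y, alpha)                    # backward (proximal) step
    return x
```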

On the Minimization Over Sparse Symmetric Sets: Projections, Optimality Conditions and Algorithms

We consider the problem of minimizing a general continuously differentiable function over symmetric sets under sparsity constraints. These types of problems are generally hard to solve, as the sparsity constraint induces a combinatorial structure and renders the feasible set nonconvex. We begin with a study of the properties of the orthogonal …
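
For the simplest symmetric set, the whole space, the orthogonal projection onto the sparsity constraint reduces to hard thresholding: keep the s entries of largest magnitude and zero the rest. A small sketch of that building block follows; projections onto intersections with more general symmetric sets are the harder objects treated in the paper.

```python
import numpy as np

def project_sparse(x, s):
    """Orthogonal projection of x onto {z : ||z||_0 <= s} (not unique in general):
    keep the s largest-magnitude entries and zero out the rest."""
    x = np.asarray(x, float)
    out = np.zeros_like(x)
    if s <= 0:
        return out
    keep = np.argsort(np.abs(x))[-s:]     # indices of the s largest |x_i|
    out[keep] = x[keep]
    return out
```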

A Quasi-Newton Algorithm for Nonconvex, Nonsmooth Optimization with Global Convergence Guarantees

A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, and together with the adaptive GS strategy …
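
As one illustrative piece, a basic gradient sampling step chooses the search direction as the negative of the minimum-norm element of the convex hull of gradients sampled near the current point. The sketch below shows only that building block, using an off-the-shelf solver for the small QP over the simplex; the BFGS/GS hybridization and the adaptive sampling rules are the subject of the paper itself.

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_direction(grad, x, radius=1e-4, m=10, seed=0):
    """Direction of a basic gradient sampling step at x: the negative of the
    minimum-norm vector in the convex hull of gradients sampled in a small ball."""
    rng = np.random.default_rng(seed)
    pts = x + radius * rng.standard_normal((m, x.size))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])        # sampled gradients (rows)
    # Minimum-norm element of conv{rows of G}: a small QP over the unit simplex.
    obj = lambda lam: float(np.dot(lam @ G, lam @ G))
    cons = ({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},)
    lam0 = np.full(G.shape[0], 1.0 / G.shape[0])
    res = minimize(obj, lam0, method="SLSQP",
                   bounds=[(0.0, None)] * G.shape[0], constraints=cons)
    return -(res.x @ G)                                       # approximate descent direction
```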

Copositive relaxation beats Lagrangian dual bounds in quadratically and linearly constrained QPs

We study non-convex quadratic minimization problems under (possibly non-convex) quadratic and linear constraints, and characterize both Lagrangian and Semi-Lagrangian dual bounds in terms of conic optimization. While the Lagrangian dual is equivalent to the SDP relaxation (which has been known for quite a while, although the presented form, explicitly incorporating linear constraints, seems to be …
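
To fix notation, a generic instance of the problem class and its Lagrangian dual bound can be written as follows; this is a standard textbook form, and the paper's own formulation (with linear constraints handled explicitly, and the Semi-Lagrangian and copositive counterparts) may differ in detail.

```latex
\min_{x \in \mathbb{R}^n} \; x^\top Q x + 2 c^\top x
\quad \text{s.t.} \quad
x^\top A_i x + 2 a_i^\top x \le b_i \;\; (i = 1,\dots,m), \qquad D x \le e,
```
with Lagrangian dual bound
```latex
\sup_{\lambda \ge 0,\; \mu \ge 0} \;\; \inf_{x \in \mathbb{R}^n}
\Big[ x^\top Q x + 2 c^\top x
      + \sum_{i=1}^m \lambda_i \bigl( x^\top A_i x + 2 a_i^\top x - b_i \bigr)
      + \mu^\top (D x - e) \Big].
```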

Narrowing the difficulty gap for the Celis-Dennis-Tapia problem

We study the Celis-Dennis-Tapia (CDT) problem: minimize a non-convex quadratic function over the intersection of two ellipsoids. In contrast to the well-studied trust region problem, where the feasible set is just one ellipsoid, the CDT problem is not yet fully understood. Our main objective in this paper is to narrow the difficulty gap that …
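
In one common normalization (the scalings and centers below are illustrative), the CDT problem reads:

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac12\, x^\top Q x + q^\top x
\quad \text{s.t.} \quad \|x\|_2 \le 1, \qquad \|C x + d\|_2 \le 1,
```
with Q symmetric but possibly indefinite; dropping the second constraint recovers the classical trust region problem.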

Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming

In this paper, we generalize Nesterov's well-known accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order …
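
For reference, the classical convex-case accelerated gradient template being generalized looks as follows; the momentum schedule and fixed step below are the standard convex ones and are shown only to fix ideas, since the paper's contribution is the stepsize policy that extends the scheme to nonconvex and stochastic problems.

```python
import numpy as np

def accelerated_gradient(grad_f, x0, step, iters=500):
    """Textbook Nesterov-style accelerated gradient iteration for smooth f."""
    x = x_prev = np.asarray(x0, float).copy()
    t_prev = 1.0
    for _ in range(iters):
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))   # momentum schedule
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)          # extrapolation point
        x_prev, x, t_prev = x, y - step * grad_f(y), t       # gradient step at y
    return x
```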

A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing

We present a new recovery analysis for a standard compressed sensing algorithm, Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2008), which considers the fixed points of the algorithm. In the context of arbitrary measurement matrices, we derive a sufficient condition for convergence of IHT to a fixed point and a necessary condition for the existence …
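
The IHT iteration itself is short: a gradient step on the least-squares residual followed by hard thresholding to the s largest-magnitude entries. A minimal sketch, assuming a suitably scaled measurement matrix so that a unit step is admissible:

```python
import numpy as np

def iterative_hard_thresholding(A, y, s, iters=200, step=1.0):
    """IHT for y = A x with an s-sparse x:
        x_{k+1} = H_s( x_k + step * A^T (y - A x_k) ),
    where H_s keeps the s largest-magnitude entries and zeros the rest."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        z = x + step * A.T @ (y - A @ x)          # gradient step on (1/2)||y - Ax||^2
        x = np.zeros(n)
        keep = np.argsort(np.abs(z))[-s:]         # hard thresholding: keep s largest
        x[keep] = z[keep]
    return x
```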