Convergence and Complexity Analysis of a Levenberg-Marquardt Algorithm for Inverse Problems

The Levenberg-Marquardt algorithm is one of the most popular algorithms for solving nonlinear least squares problems. Across its various modified versions, the algorithm enjoys global convergence, a competitive worst-case iteration complexity rate, and a guaranteed rate of local convergence for both zero and small nonzero residual problems, under …
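
As a point of reference, below is a minimal textbook sketch of the basic Levenberg-Marquardt iteration for min_x 0.5*||r(x)||^2; it is not the modified variant analyzed in the paper, and the interface (r, J, the damping update) is purely illustrative.

    import numpy as np

    def levenberg_marquardt(r, J, x0, lam=1e-2, tol=1e-8, max_iter=100):
        # Minimize 0.5*||r(x)||^2 with damped Gauss-Newton steps.
        x = x0.copy()
        for _ in range(max_iter):
            rx, Jx = r(x), J(x)
            g = Jx.T @ rx                      # gradient of the objective
            if np.linalg.norm(g) < tol:
                break
            # Damped normal equations: (J^T J + lam*I) step = -J^T r.
            step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -g)
            if 0.5 * r(x + step) @ r(x + step) < 0.5 * rx @ rx:
                x, lam = x + step, lam * 0.5   # accept step, relax damping
            else:
                lam *= 10.0                    # reject step, increase damping
        return x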

Asynchronous Parallel Algorithms for Nonconvex Big-Data Optimization. Part I: Model and Convergence

We propose a novel asynchronous parallel algorithmic framework for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed framework hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of modern computational architectures and asynchronous …
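
To make the problem class concrete: with f smooth and possibly nonconvex and g(x) = reg*||x||_1, a successive-convex-approximation step that linearizes f and adds a quadratic proximal term reduces to a proximal-gradient update. The synchronous sketch below only illustrates this; the paper's framework is asynchronous and substantially more general.

    import numpy as np

    def prox_l1(v, t):
        # Proximal operator of t*||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sca_prox_gradient(grad_f, x0, reg=0.1, step=0.01, iters=500):
        # Each iteration minimizes a strongly convex surrogate of f (its
        # linearization plus a proximal term) plus the l1 regularizer,
        # which is exactly a proximal-gradient step. Single-threaded and
        # illustrative only.
        x = x0.copy()
        for _ in range(iters):
            x = prox_l1(x - step * grad_f(x), step * reg)
        return x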

Asynchronous Parallel Algorithms for Nonconvex Big-Data Optimization. Part II: Complexity and Numerical Results

We present complexity and numerical results for a new asynchronous parallel algorithmic method for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer, subject to both convex and nonconvex constraints. The proposed method hinges on successive convex approximation techniques and a novel probabilistic model that captures key elements of …

A predictor-corrector path-following algorithm for dual-degenerate parametric optimization problems

Most path-following algorithms for tracing a solution path of a parametric nonlinear optimization problem are only certifiably convergent under strong regularity assumptions about the problem functions, in particular, the linear independence of the constraint gradients at the solutions, which implies a unique multiplier solution for every nonlinear program. In this paper we propose and prove …
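
For context, a bare-bones predictor-corrector continuation for a parametric root-finding system F(x, t) = 0 (for instance, the stationarity conditions of a parametric program) looks as follows; the names F, Fx, Ft are illustrative, and this naive scheme requires exactly the strong regularity that the paper's algorithm is designed to avoid.

    import numpy as np

    def trace_path(F, Fx, Ft, x0, t0, t1, steps=50, newton_iters=5):
        # Trace x(t) with F(x(t), t) = 0 from t0 to t1. Fx and Ft are the
        # partial Jacobians of F with respect to x and t.
        x, t = x0.copy(), t0
        dt = (t1 - t0) / steps
        path = [(t, x.copy())]
        for _ in range(steps):
            # Euler predictor: dx/dt = -Fx^{-1} Ft (implicit function theorem).
            x = x + dt * np.linalg.solve(Fx(x, t), -Ft(x, t))
            t += dt
            # Newton corrector: re-solve F(x, t) = 0 at the new parameter.
            for _ in range(newton_iters):
                x = x - np.linalg.solve(Fx(x, t), F(x, t))
            path.append((t, x.copy()))
        return path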

A Globally Convergent Stabilized SQP Method: Superlinear Convergence

Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (SIAM J. Optim., 23(4):1983–2010, 2013). The method is …
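
For orientation, the classical stabilized SQP subproblem for an equality-constrained problem min f(x) s.t. c(x) = 0 can be written as a min-max problem with a proximal term on the multipliers (this is the standard form from the literature, not necessarily the exact subproblem of the cited paper):

    \min_{d}\ \max_{y}\ \nabla f(x_k)^T d + \tfrac{1}{2} d^T H_k d
        + y^T \bigl( c(x_k) + J(x_k) d \bigr)
        - \tfrac{\mu_k}{2} \lVert y - y_k \rVert^2

Unlike the standard SQP subproblem, this remains well posed even when the constraint Jacobian J(x_k) loses rank.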

An Inequality-Constrained SQP Method for Eigenvalue Optimization

We consider a problem in eigenvalue optimization, in particular finding a value of a parameter that locally minimizes the spectral abscissa, i.e., the largest real part of the spectrum, of a matrix system. This is an important problem for the stabilization of control systems. Many …
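
To fix ideas, the spectral abscissa is cheap to evaluate, and even a derivative-free local search exposes the difficulty, since the objective is nonsmooth and nonconvex in general. The matrices A0 and B below are made up for illustration; this heuristic is not the inequality-constrained SQP method of the paper.

    import numpy as np
    from scipy.optimize import minimize

    def spectral_abscissa(A):
        # Largest real part of the eigenvalues of A.
        return np.max(np.real(np.linalg.eigvals(A)))

    # Hypothetical parametric system: push the rightmost eigenvalue of
    # A(p) = A0 + p*B as far into the left half-plane as possible.
    A0 = np.array([[0.0, 1.0], [-1.0, 0.1]])
    B = np.array([[0.0, 0.0], [0.0, -1.0]])
    res = minimize(lambda p: spectral_abscissa(A0 + p[0] * B),
                   x0=[0.0], method="Nelder-Mead")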

SQP Methods for Parametric Nonlinear Optimization

Sequential quadratic programming (SQP) methods are known to be efficient for solving a series of related nonlinear optimization problems because of desirable hot and warm start properties: a solution for one problem is a good estimate of the solution of the next. However, standard SQP solvers contain elements to enforce global convergence that can interfere …
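
A small illustration of the warm-start effect, using SciPy's SLSQP on a made-up parametric family (note that SLSQP reuses only the primal point; a full warm start would also carry over multipliers and the active set):

    import numpy as np
    from scipy.optimize import minimize

    # Solve min_x (x1 - t)^2 + x2^2  s.t.  x1 + x2 >= 1  for a sweep of t,
    # feeding each solution in as the initial guess for the next problem.
    cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]
    x = np.zeros(2)
    for t in np.linspace(0.0, 2.0, 21):
        res = minimize(lambda x, t=t: (x[0] - t) ** 2 + x[1] ** 2,
                       x, method="SLSQP", constraints=cons)
        x = res.x  # warm start the next solve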

A Regularized SQP Method with Convergence to Second-Order Optimal Points

Regularized and stabilized sequential quadratic programming methods are two classes of sequential quadratic programming (SQP) methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that provides a strong connection between augmented Lagrangian methods and stabilized SQP methods. The method …
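
The connection alluded to above runs through the classical augmented Lagrangian for the equality-constrained problem min f(x) s.t. c(x) = 0,

    \mathcal{L}_A(x, y; \mu) = f(x) - y^T c(x) + \frac{1}{2\mu} \lVert c(x) \rVert^2,

whose primal-dual variants can be shown to generate subproblems closely related to those of stabilized SQP; the precise merit function used in the paper may differ from this standard form.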