We provide an overview of a class of iterative convex approximation methods for nonlinear optimization problems with convex-over-nonlinear substructure. These problems are characterized by outer convexities on the one hand, and nonlinear, generally nonconvex, but differentiable functions on the other. All methods from this class use only first-order derivatives of the nonlinear functions and sequentially solve convex optimization problems. All of them are different generalizations of the classical Gauss-Newton (GN) method. We focus on the smooth constrained case and on three methods to address it: Sequential Convex Programming (SCP), Sequential Convex Quadratic Programming (SCQP), and Sequential Quadratically Constrained Quadratic Programming (SQCQP). While the first two methods were previously known, the last is newly proposed and investigated in this paper. We show under mild assumptions that SCP, SCQP and SQCQP have exactly the same local linear convergence (or divergence) rate. We then discuss the special case in which the solution is fully determined by the active constraints, and show that in this case the KKT conditions are sufficient for local optimality and that SCP, SCQP and SQCQP even converge quadratically. In the context of parameter estimation with symmetric convex loss functions, the possible divergence of the methods can in fact be an advantage that helps them avoid some undesirable local minima: generalizing existing results, we show that the presented methods converge to a local minimum if and only if this local minimum is stable against a mirroring operation applied to the measurement data of the estimation problem. All results are illustrated by numerical experiments on a tutorial example.
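For a concrete picture of the common template behind these methods, the following is a minimal sketch (not code from the paper, and the test problem is hypothetical): the classical Gauss-Newton method, i.e. the special case in which the outer convexity is the squared Euclidean norm, so that each convex subproblem reduces to a linear least-squares problem. SCP, SCQP and SQCQP generalize this subproblem while, like GN, using only first-order derivatives of the nonlinear functions.

```python
import numpy as np

# Illustrative sketch only: classical Gauss-Newton, i.e. minimize phi(F(x)) with
# the outer convexity phi(r) = 0.5 * ||r||^2. SCP, SCQP and SQCQP would replace
# the subproblem below by more general convex programs, but likewise use only
# first-order derivatives of the nonlinear inner function F.

def gauss_newton(F, J, x0, max_iter=50, tol=1e-10):
    """Iterate x+ = argmin_y 0.5 * ||F(x) + J(x) (y - x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)                                       # nonlinear residual
        d, *_ = np.linalg.lstsq(J(x), -r, rcond=None)  # convex (linear LS) subproblem
        x = x + d
        if np.linalg.norm(d) < tol:                    # stop once the step stalls
            break
    return x

# Hypothetical tutorial-style estimation problem: fit y ~ a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.3 * t)
F = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(F, J, x0=[1.5, -1.0]))  # converges locally to (2.0, -1.3)
```

In the constrained setting treated in the paper, the least-squares step above would be replaced by a general convex subproblem handled by a convex programming solver; this sketch is only meant to illustrate the shared iteration structure.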
Citation
ESAIM: Proceedings and Surveys, August 2021, Vol. 71, p. 64-88