First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems

We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first order methods (FOM), and was recently circumvented for convex composite optimization by Bauschke, Bolte and Teboulle, …
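
As a rough illustration of the template behind such methods (the symbols below are generic, not taken verbatim from the paper): for a composite objective \(\Phi = f + g\), with \(f\) proper lower semicontinuous and \(g\) differentiable, the quadratic proximity term of classical proximal gradient is replaced by a Bregman distance \(D_h\) generated by a kernel \(h\), giving the step
\[
x^{k+1} \in \operatorname*{argmin}_{u}\Big\{ f(u) + \langle \nabla g(x^k), u - x^k\rangle + \tfrac{1}{\lambda}\, D_h(u, x^k)\Big\},
\qquad
D_h(u,v) = h(u) - h(v) - \langle \nabla h(v), u - v\rangle,
\]
where admissible step sizes \(\lambda\) are governed by a relative-smoothness condition of the type "\(Lh - g\) convex" rather than by a global Lipschitz constant for \(\nabla g\).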

From error bounds to the complexity of first-order descent methods for convex functions

This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. As a first step toward this goal, we revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex …
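
For orientation, the two notions can be sketched as follows (the notation is illustrative, not the paper's): for a proper lower semicontinuous convex function \(f\) with infimum \(f^*\) and solution set \(S\), an error bound takes the form
\[
\operatorname{dist}(x, S) \le \varphi\big(f(x) - f^*\big) \quad \text{whenever } f^* < f(x) < f^* + r,
\]
while the KL inequality asks for a desingularizing function \(\psi\) satisfying
\[
\psi'\big(f(x) - f^*\big)\, \operatorname{dist}\big(0, \partial f(x)\big) \ge 1
\]
on the same band of values; the equivalence alluded to above ties \(\varphi\) and \(\psi\) to one another.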

Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs

In view of solving nonsmooth and nonconvex problems involving complex constraints (like standard NLP problems), we study general majorization-minimization procedures produced by families of strongly convex sub-problems. Using techniques from semi-algebraic geometry and variational analysis (in particular the Łojasiewicz inequality), we establish the convergence of sequences generated by this type of scheme to critical points. The …
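
In schematic form (placeholder notation, not the paper's), such a scheme minimizes at each step a strongly convex surrogate \(M(\cdot, x^k)\) that majorizes the objective \(f\):
\[
x^{k+1} \in \operatorname*{argmin}_{x} M(x, x^k),
\qquad
M(x, x^k) \ge f(x) \ \text{for all } x, \quad M(x^k, x^k) = f(x^k),
\]
with SQP-type methods fitting this template once their sub-problems (a quadratic model of the objective together with linearized constraints) are viewed as one such family of strongly convex sub-problems.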

Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods

In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka-Łojasiewicz inequality. This assumption allows us to cover a …
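
The two hypotheses just mentioned can be written schematically as follows (constants \(a, b > 0\) are generic placeholders): sufficient decrease,
\[
f(x^{k+1}) + a\,\|x^{k+1} - x^k\|^2 \le f(x^k),
\]
and a relative error tolerance on the subgradient at the new iterate,
\[
\exists\, w^{k+1} \in \partial f(x^{k+1}) \ \text{such that} \ \|w^{k+1}\| \le b\,\|x^{k+1} - x^k\|;
\]
together with the Kurdyka-Łojasiewicz inequality these are the ingredients of the abstract convergence result.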

Asymptotic expansions for interior penalty solutions of control constrained linear-quadratic problems

We consider a quadratic optimal control problem governed by a nonautonomous affine differential equation subject to nonnegativity control constraints. For a general class of interior penalty functions, we show how to compute the principal term of the pointwise expansion of the state and the adjoint state. Our main argument relies on the following fact: If …
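
As a schematic instance of this setting (the penalty class and scaling treated in the paper may be more general), replacing the constraint \(u(t) \ge 0\) by an interior penalty yields, for a small parameter \(\varepsilon > 0\), problems of the form
\[
\min_{u}\ J(u) + \varepsilon \int_0^T \ell\big(u(t)\big)\,dt
\quad \text{subject to} \quad \dot{x}(t) = A(t)x(t) + B(t)u(t) + c(t),
\]
with \(\ell\) an interior penalty for the nonnegativity constraint (the logarithmic barrier \(\ell(u) = -\sum_i \log u_i\) being the classical example); the expansions describe how the corresponding state and adjoint state behave as \(\varepsilon \to 0\).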

Generic identifiability and second-order sufficiency in tame convex optimization

We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, “tame”). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is “partly smooth”, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions …
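
In symbols (illustrative only), the problem class is
\[
\min_{x \in F}\ \langle c, x\rangle,
\qquad F \subset \mathbb{R}^n \ \text{compact, convex and semi-algebraic (or tame)},
\]
and “generically” means: for every cost vector \(c\) outside a negligible exceptional set, the minimizer is unique and lies on a unique manifold along which \(F\) is partly smooth, which is what enables finite identification of that manifold by many active-set-type algorithms.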