On First and Second Order Optimality Conditions for Abs-Normal NLP

Structured nonsmoothness is widely present in practical optimization. A particularly attractive class of nonsmooth problems, from both a theoretical and an algorithmic perspective, consists of optimization problems in so-called abs-normal form as developed by Griewank and Walther. Here we generalize their theory for the unconstrained case to nonsmooth NLPs with equality and inequality constraints in …
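For context, the abs-normal form referred to here is usually written (with notation chosen only for illustration) by introducing a switching vector $z$, so that a piecewise smooth function $y = \psi(x)$ is expressed as
\[
z = F(x, |z|), \qquad y = \varphi(x, |z|),
\]
where $F$ and $\varphi$ are smooth, the partial Jacobian $\partial F / \partial |z|$ is strictly lower triangular (so $z$ can be computed component by component), and the absolute value is the only source of nonsmoothness.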

Minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization problems. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem’s functions and their derivatives to find an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes that provided by [Bellavia, Gurioli, …
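For orientation, in the smooth case an $\epsilon$-approximate first-order stationary point is simply a point $x$ with
\[
\|\nabla f(x)\| \le \epsilon,
\]
while for the composite nonsmooth problems considered here this gradient norm is replaced by a suitable first-order criticality measure; the bound above counts the number of (possibly inexact) evaluations needed to reach such a point.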

Weak subgradient algorithm for solving nonsmooth nonconvex unconstrained optimization problems

This paper presents a weak subgradient-based method for solving nonsmooth nonconvex unconstrained optimization problems. At every iteration, the method uses a weak subgradient of the objective function at the current point to generate the next iterate. The concept of the weak subgradient is based on the idea of using supporting cones to the graph of …
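As a reminder of the underlying notion (following the usual definition, with notation chosen here), a pair $(v, c) \in \mathbb{R}^n \times \mathbb{R}_+$ is a weak subgradient of $f$ at $\bar{x}$ if
\[
f(x) \ge f(\bar{x}) + \langle v, x - \bar{x} \rangle - c\,\|x - \bar{x}\| \quad \text{for all } x \in \mathbb{R}^n,
\]
so the graph of $f$ is supported from below by a conical surface rather than a hyperplane, which is why no convexity of $f$ is required.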

Subdifferentials and SNC property of scalarization functionals with uniform level sets and applications

This paper deals with necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets, obtained by means of a well-known nonlinear scalarization functional with uniform level sets (called Gerstewitz’ functional in the literature). The primary objective of this work is to establish revised formulas for basic and singular subdifferentials of …
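For context, the scalarization functional with uniform level sets is typically defined, for a set $C$ and a direction $k$, by
\[
\varphi_{C,k}(y) := \inf\{\, t \in \mathbb{R} : y \in t\,k - C \,\},
\]
so that, under standard assumptions, its sublevel sets are the translates $t\,k - C$ of a single set, which explains the name.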

Active-set Newton methods and partial smoothness

Diverse optimization algorithms correctly identify, in finite time, intrinsic constraints that must be active at optimality. Analogous behavior extends beyond optimization to systems involving partly smooth operators, and in particular to variational inequalities over partly smooth sets. As in classical nonlinear programming, such active-set structure underlies the design of accelerated local algorithms of Newton type. …
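Roughly speaking (paraphrasing the standard notion of partial smoothness rather than the abstract itself), a function $f$ is partly smooth at $\bar{x}$ relative to a manifold $\mathcal{M} \ni \bar{x}$ if $f$ restricted to $\mathcal{M}$ is smooth near $\bar{x}$, $f$ is regular there, the subdifferential $\partial f$ varies continuously along $\mathcal{M}$, and the sharpness condition
\[
\operatorname{par}\, \partial f(\bar{x}) = N_{\mathcal{M}}(\bar{x})
\]
holds, where $\operatorname{par}$ denotes the subspace parallel to the affine span and $N_{\mathcal{M}}(\bar{x})$ is the normal space; finite-time identification of $\mathcal{M}$ is what the active-set behavior above refers to.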

Generalized subdifferentials of spectral functions over Euclidean Jordan algebras

This paper is devoted to the study of generalized subdifferentials of spectral functions over Euclidean Jordan algebras. Spectral functions often appear in optimization problems, playing the role of a “regularizer”, “barrier”, or “penalty function”, among many others. We provide formulae for the regular, approximate and horizon subdifferentials of spectral functions. In addition, under local lower semicontinuity, we …
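For orientation, a spectral function on a Euclidean Jordan algebra has the form
\[
F(x) = f(\lambda(x)),
\]
where $\lambda(x)$ is the vector of eigenvalues from the spectral decomposition of $x$ and $f$ is a symmetric function; the formulae in question express the subdifferentials of $F$ at $x$ in terms of those of $f$ at $\lambda(x)$.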

Fast Robust Methods for Singular State-Space Models

State-space models are used in a wide range of time series analysis applications. Kalman filtering and smoothing are workhorse algorithms in these settings. While classic algorithms assume Gaussian errors to simplify estimation, recent advances use a broad range of optimization formulations to allow outlier-robust estimation, as well as constraints to capture prior information. Here we …
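As a sketch of the kind of optimization formulation meant here (an illustrative model, not necessarily the authors’ exact one), consider a linear state-space model $x_{t+1} = A x_t + w_t$, $y_t = H x_t + v_t$ and replace the Gaussian negative log-likelihood by a robust loss $\rho$ (for example the Huber function) applied to suitably weighted residuals:
\[
\min_{x_1, \dots, x_N} \; \sum_{t=1}^{N} \rho\big(y_t - H x_t\big) \;+\; \sum_{t=1}^{N-1} \rho\big(x_{t+1} - A x_t\big).
\]
State constraints can be added directly to this problem, and a singular state-space model corresponds to enforcing some of the dynamics equations exactly rather than penalizing them.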

Local minimizers of semi-algebraic functions

Consider a semi-algebraic function $f\colon\mathbb{R}^n \to {\mathbb{R}},$ which is continuous around a point $\bar{x} \in \mathbb{R}^n.$ Using the so-called {\em tangency variety} of $f$ at $\bar{x},$ we first provide necessary and sufficient conditions for $\bar{x}$ to be a local minimizer of $f,$ and then in the case where $\bar{x}$ is an isolated local minimizer of …
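In the smooth case (a simplified version added only for context), the tangency variety of $f$ at $\bar{x}$ can be described as
\[
\Gamma(f) := \{\, x \in \mathbb{R}^n : \exists\, (\lambda, \mu) \neq (0, 0), \ \lambda \nabla f(x) + \mu (x - \bar{x}) = 0 \,\},
\]
the set of points at which $f$ restricted to the sphere centered at $\bar{x}$ passing through $x$ has a critical point; the behavior of $f$ along this variety is what governs local optimality of $\bar{x}$.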

Weak convergence of an extended splitting method for monotone inclusions

In this article, we consider the problem of finding zeros of monotone inclusions involving two operators in real Hilbert spaces, where the second operator is composed with a linear operator. We suggest an extended splitting method: at each iteration, it essentially evaluates one resolvent for each operator. For these two resolvents, the two scaling factors involved can be …
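Concretely (a natural reading of the setup, with notation chosen here), the inclusion is of the form
\[
0 \in A(x) + L^{*} B(L x),
\]
with $A$ and $B$ maximally monotone operators and $L$ a bounded linear operator, and each iteration evaluates one resolvent $J_{\gamma A} = (I + \gamma A)^{-1}$ and one resolvent $J_{\sigma B} = (I + \sigma B)^{-1}$, with $\gamma$ and $\sigma$ playing the role of the two scaling factors mentioned above.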

Partial smoothness of the numerical radius at matrices whose fields of values are disks

Solutions to optimization problems involving the numerical radius often belong to a special class: the set of matrices whose field of values is a disk centered at the origin. After illustrating this phenomenon with some examples, we illuminate it by studying matrices around which this set of “disk matrices” is a manifold with respect to which …
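Recall that the numerical radius of a matrix $X \in \mathbb{C}^{n \times n}$ is
\[
r(X) := \max\{\, |z| : z \in W(X) \,\}, \qquad W(X) := \{\, v^{*} X v : v \in \mathbb{C}^{n}, \ \|v\| = 1 \,\},
\]
where $W(X)$ is the field of values (numerical range); the “disk matrices” referred to above are those for which $W(X)$ is a disk centered at the origin.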