Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization

Asynchronous distributed methods are a popular way to reduce the communication and synchronization costs of large-scale optimization. Yet, for all their success, little is known about their convergence guarantees in the challenging case of general nonsmooth, nonconvex objectives, beyond cases where closed-form proximal operator solutions are available. This is all the more surprising since these …
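
As a rough illustration of the algorithmic pattern (not the authors' method), here is a minimal threaded Python sketch: workers read a possibly stale copy of the shared iterate, compute a stochastic subgradient there, and apply the delayed update. The toy objective, a mean of absolute residuals, is nonsmooth but convex and only serves to make the sketch runnable.

```python
import threading
import numpy as np

# Toy nonsmooth problem (illustration only; the paper's setting is more
# general): f(x) = (1/n) * sum_i |a_i . x - b_i|.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)

x = np.zeros(10)         # shared iterate; workers read and write it concurrently
lock = threading.Lock()  # guards the write; reads are deliberately unguarded

def worker(seed, num_steps=500, step=1e-2):
    global x
    local_rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        x_stale = x.copy()                 # possibly stale snapshot of x
        i = local_rng.integers(len(b))
        r = A[i] @ x_stale - b[i]
        g = np.sign(r) * A[i]              # stochastic subgradient at the snapshot
        with lock:
            x -= step * g                  # delayed update applied to the current x

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("final objective:", np.mean(np.abs(A @ x - b)))
```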

Numerical solution of generalized minimax problems

This contribution describes and investigates four numerical methods for solving generalized minimax problems, which consist in minimizing functions that are compositions of special smooth convex functions with maxima of smooth functions (the most important problem of this type is the sum of maxima of smooth functions). Section 1 is introductory. In …
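
To make the problem class concrete, here is a small Python sketch (with affine inner functions, an assumption made for brevity) that evaluates F(x) = sum_k max_j f_kj(x) and a subgradient assembled from one active piece per inner maximum, then runs plain subgradient descent; the paper's four methods are of course more refined.

```python
import numpy as np

# Toy instance of a generalized minimax objective
#   F(x) = sum_k max_j f_{kj}(x),  with f_{kj} affine for simplicity.
rng = np.random.default_rng(1)
K, J, n = 3, 4, 5
C = rng.standard_normal((K, J, n))   # gradients of the affine pieces
d = rng.standard_normal((K, J))      # offsets

def F_and_subgrad(x):
    """Value of F and one subgradient, via an active piece in each inner max."""
    vals = C @ x + d                            # shape (K, J): f_{kj}(x)
    j_act = vals.argmax(axis=1)                 # an active index per inner max
    F = vals.max(axis=1).sum()
    g = C[np.arange(K), j_act].sum(axis=0)      # sum of active-piece gradients
    return F, g

# Plain subgradient descent on F, just to exercise the oracle
x = np.zeros(n)
for t in range(1, 201):
    F, g = F_and_subgrad(x)
    x -= 0.1 / np.sqrt(t) * g
print("F(x) after 200 steps:", F_and_subgrad(x)[0])
```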

New inertial factors of the Krasnoselskii-Mann iteration

In this article, we consider the Krasnoselskiĭ-Mann iteration for approximating a fixed point of any given nonexpansive operator in real Hilbert spaces, and we study an inertial version recently proposed by Maingé. As a result, we suggest new conditions on the inertial factors to ensure weak convergence. They are free of iterates and depend on …
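
For reference, a minimal Python sketch of the iteration under discussion, with constant inertial factor beta and relaxation parameter alpha chosen purely for illustration (the paper's contribution is precisely the conditions on these factors):

```python
import numpy as np

def inertial_km(T, x0, alpha=0.5, beta=0.3, num_iters=200):
    """Inertial Krasnoselskii-Mann sketch:
         y_k     = x_k + beta * (x_k - x_{k-1})      (inertial extrapolation)
         x_{k+1} = (1 - alpha) * y_k + alpha * T(y_k)
       Constant alpha, beta here, for illustration only."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(num_iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, (1 - alpha) * y + alpha * T(y)
    return x

# Toy nonexpansive operator: metric projection onto the unit ball
T = lambda z: z / max(1.0, np.linalg.norm(z))
print(inertial_km(T, np.array([3.0, -4.0])))   # converges to a fixed point of T
```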

A Proximal Interior Point Algorithm with Applications to Image Processing

In this article, we introduce a new proximal interior point algorithm (PIPA). This algorithm handles convex optimization problems involving various constraints where the objective function is the sum of a Lipschitz-differentiable term and a possibly nonsmooth one. Each iteration of PIPA involves the minimization of a merit function evaluated for decaying …
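
Since the abstract is truncated, the following Python sketch shows only the generic proximal interior-point idea, not PIPA's actual merit-function updates: a log-barrier is added to the smooth term, a proximal (forward-backward) step handles the nonsmooth term, and the barrier parameter decays across iterations.

```python
import numpy as np

# Generic proximal interior-point sketch (not the paper's exact scheme):
#   min f(x) + g(x)  s.t.  x > 0,
# with f Lipschitz-differentiable and g nonsmooth, via the barrier merit
#   Phi_mu(x) = f(x) - mu * sum(log x) + g(x).
lam = 0.1
grad_f = lambda x: x - 2.0                               # f(x) = 0.5*||x - 2||^2
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # g = lam*||.||_1

def pipa_like_step(x, mu, t):
    """Forward step on f - mu*sum(log), backward (proximal) step on g."""
    grad_smooth = grad_f(x) - mu / x
    return prox_g(x - t * grad_smooth, t)

x, mu, t = np.ones(5), 1.0, 0.1
for _ in range(300):
    x = pipa_like_step(x, mu, t)
    mu *= 0.97                       # decaying barrier parameter
print("x:", x.round(3))              # approaches the constrained minimizer
```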

Trust-region methods for the derivative-free optimization of nonsmooth black-box functions

In this paper we study the minimization of a nonsmooth black-box-type function, without assuming any access to derivatives or generalized derivatives and without any knowledge about the analytical origin of the function's nonsmoothness. Directional methods have been derived for such problems, but to our knowledge no model-based method, such as a trust-region one, has yet …
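
As a point of comparison, here is a bare-bones model-based, trust-region-like loop in Python that uses only function values: a linear model is fit by least squares to samples in the region, a Cauchy-type step is taken, and the radius is updated from the actual-versus-predicted decrease. This is a generic sketch, not the method developed in the paper.

```python
import numpy as np

def dfo_tr(f, x0, delta=1.0, num_iters=100, eta=0.1):
    rng = np.random.default_rng(3)
    x = np.asarray(x0, dtype=float)
    n, fx = len(x), f(x)
    for _ in range(num_iters):
        # Fit a linear model f(x + s) ~ fx + g.s from samples in the region
        S = delta * rng.standard_normal((2 * n, n))
        y = np.array([f(x + s) for s in S]) - fx
        g, *_ = np.linalg.lstsq(S, y, rcond=None)
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-12:
            break
        s = -delta * g / gnorm                   # Cauchy-type step for the model
        f_new = f(x + s)
        if fx - f_new > eta * delta * gnorm:     # actual vs. predicted decrease
            x, fx, delta = x + s, f_new, 1.5 * delta   # accept step, expand region
        else:
            delta *= 0.5                               # reject step, shrink region
    return x, fx

# Nonsmooth test function; no derivatives are used anywhere above
f = lambda x: np.max(np.abs(x)) + 0.1 * np.abs(x).sum()
print(dfo_tr(f, np.ones(5)))
```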

Potential-based analyses of first-order methods for constrained and composite optimization

We propose potential-based analyses for first-order algorithms applied to constrained and composite minimization problems. We first propose “idealized” frameworks for algorithms in the strongly and non-strongly convex cases and argue, based on a potential function, that methods following the framework achieve the best possible rate. Then we show that the geometric descent (GD) algorithm by Bubeck …
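
For orientation, a classical instance of the potential-function technique, stated for unconstrained gradient descent on an $L$-smooth convex $f$ (a simpler setting than the constrained and composite ones treated in the paper): the potential
\[
\phi_k \;=\; k\,\bigl(f(x_k) - f(x^\star)\bigr) \;+\; \frac{L}{2}\,\|x_k - x^\star\|^2
\]
is nonincreasing along the iterates $x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k)$, and telescoping it yields the rate $f(x_k) - f(x^\star) \le L\|x_0 - x^\star\|^2/(2k)$.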

On First and Second Order Optimality Conditions for Abs-Normal NLP

Structured nonsmoothness is widely present in practical optimization. A particularly attractive class of nonsmooth problems, both from a theoretical and from an algorithmic perspective, is optimization problems in so-called abs-normal form, as developed by Griewank and Walther. Here we generalize their theory for the unconstrained case to nonsmooth NLPs with equality and inequality constraints in …
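
For readers unfamiliar with the format, a small illustrative example (not taken from the paper): the piecewise smooth function $f(x) = |x_1 - |x_2||$ can be written in abs-normal form as
\[
z_1 = x_2, \qquad z_2 = x_1 - |z_1|, \qquad f(x) = |z_2|,
\]
i.e. $z = F(x, |z|)$ and $f(x) = \varphi(x, |z|)$ with smooth data $F, \varphi$ and a strictly lower triangular dependence of $F$ on $|z|$.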

Minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization. It is shown that this algorithm needs at most $O(|\log(\epsilon)|\,\epsilon^{-2})$ evaluations of the problem's functions and their derivatives for finding an $\epsilon$-approximate first-order stationary point. This complexity bound therefore generalizes that provided by [Bellavia, Gurioli, …
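
Since the abstract is truncated, the following is only the standard setting of this literature, stated here as an assumption: the composite problem typically reads
\[
\min_x \; g(x) + h(c(x)),
\]
with $g$ and $c$ smooth but possibly inexactly evaluated and $h$ convex and nonsmooth; an $\epsilon$-approximate first-order stationary point is then defined through the decrease achievable in a linearized model of the objective rather than through $\|\nabla f\| \le \epsilon$.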

Weak subgradient algorithm for solving nonsmooth nonconvex unconstrained optimization problems

This paper presents a weak-subgradient-based method for solving nonconvex unconstrained optimization problems. The method uses a weak subgradient of the objective function at the current point to generate the next iterate. The concept of the weak subgradient is based on the idea of using supporting cones to the graph of …
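
The underlying definition, standard in the weak-subgradient literature: a pair $(v, c)$ with $c \ge 0$ is a weak subgradient of $f$ at $\bar x$ if
\[
f(x) \;\ge\; f(\bar x) + \langle v, x - \bar x \rangle - c\,\|x - \bar x\| \qquad \text{for all } x.
\]
The right-hand side describes a cone with vertex $(\bar x, f(\bar x))$ lying below the graph of $f$ and touching it there, which is what makes the notion usable without convexity assumptions.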

Subdifferentials and SNC property of scalarization functionals with uniform level sets and applications

This paper deals with necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets, using a well-known nonlinear scalarization functional with uniform level sets (called Gerstewitz' functional in the literature). The primary objective of this work is to establish revised formulas for basic and singular subdifferentials of …
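
The functional in question is commonly given by
\[
\varphi_{C,k}(y) \;=\; \inf\{\, t \in \mathbb{R} \,:\, y \in t\,k - C \,\},
\]
for a fixed direction $k$ and domination set $C$; its level sets are the uniform translates $t\,k - C$, hence the name.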