A globally convergent trust-region LP-Newton method for nonsmooth functions under Hölder metric subregularity

We describe and analyse a globally convergent algorithm to find a possibly nonisolated zero of a piecewise smooth mapping over a polyhedral set; this formulation includes Karush-Kuhn-Tucker (KKT) systems, variational inequality problems, and generalized Nash equilibrium problems. Our algorithm is based on a modification of the fast locally convergent Linear Programming (LP)-Newton method with a …
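A minimal numerical sketch of the LP-Newton subproblem that this trust-region scheme globalizes may help; it follows the constrained-equation formulation above, with F, G (an element of a suitable generalized derivative), and the polyhedral data A, b as illustrative placeholders, and it omits the paper's trust-region safeguards entirely:

```python
import numpy as np
from scipy.optimize import linprog

def lp_newton_step(F, G, x, A, b):
    """One LP-Newton subproblem at the iterate x (a sketch, not the paper's method).

    In the infinity norm, solve the linear program in (s, gamma):
        min gamma
        s.t. ||F(x) + G(x) s||_inf <= gamma * ||F(x)||_inf**2
             ||s||_inf             <= gamma * ||F(x)||_inf
             A (x + s) <= b        (x + s stays in the polyhedral set)
    """
    Fx, Gx = F(x), G(x)
    m, n = Gx.shape
    nrm = np.linalg.norm(Fx, np.inf)
    col = lambda v: v[:, None]
    A_ub = np.vstack([
        np.hstack([ Gx, -nrm**2 * col(np.ones(m))]),      #  F + G s <= gamma ||F||^2
        np.hstack([-Gx, -nrm**2 * col(np.ones(m))]),      # -(F + G s) <= gamma ||F||^2
        np.hstack([ np.eye(n), -nrm * col(np.ones(n))]),  #  s_i <= gamma ||F||
        np.hstack([-np.eye(n), -nrm * col(np.ones(n))]),  # -s_i <= gamma ||F||
        np.hstack([ A, col(np.zeros(A.shape[0]))]),       #  A s <= b - A x
    ])
    b_ub = np.concatenate([-Fx, Fx, np.zeros(2 * n), b - A @ x])
    res = linprog(c=np.r_[np.zeros(n), 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0.0, None)])
    return x + res.x[:n]
```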

Globally convergent Newton-type methods for multiobjective optimization

We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second uses second-order information of the objective functions with ingredients of the steepest descent method. One of the key points of our approaches is to impose …
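Both approaches revolve around a Newton-type direction for several objectives at once. Here is a minimal sketch of the classical min-max Newton subproblem in epigraph form; the solver choice and names are illustrative, and the paper's two variants modify this basic step:

```python
import numpy as np
from scipy.optimize import minimize

def multiobjective_newton_direction(grads, hessians):
    """Solve min_s max_j [g_j^T s + 0.5 s^T H_j s] via its epigraph form:
       min_{(s,t)} t  s.t.  g_j^T s + 0.5 s^T H_j s <= t  for every objective j."""
    n = grads[0].size
    cons = [{"type": "ineq",
             "fun": lambda z, g=g, H=H: z[n] - (g @ z[:n] + 0.5 * z[:n] @ H @ z[:n])}
            for g, H in zip(grads, hessians)]
    res = minimize(lambda z: z[n], np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]  # direction s; a negative value signals joint descent
```

If the optimal value is negative, the direction decreases every objective to first order simultaneously, which is what makes it usable inside a line-search scheme.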

A simple Newton method for local nonsmooth optimization

Superlinear convergence has been an elusive goal for black-box nonsmooth optimization. Even in the convex case, the subgradient method is very slow, and while some cutting plane algorithms, including traditional bundle methods, are popular in practice, local convergence is still sluggish. Faster variants depend either on problem structure or on analyses that elide sequences of …

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality, and Saddle-Points

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on constrained optimization, high-dimensional settings, and saddle-point avoidance. To handle constrained optimization, we first propose generalizations of the conditional gradient algorithm that achieve rates similar to the standard stochastic gradient algorithm using only zeroth-order information. To …
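As a concrete illustration of the constrained, zeroth-order ingredient, here is a minimal conditional-gradient sketch on an l1 ball with a Gaussian-smoothing gradient estimate built from function values only; the step sizes, smoothing radius mu, and batch size are illustrative, not the paper's schedules:

```python
import numpy as np

def zo_conditional_gradient(f, x0, radius=1.0, iters=200, mu=1e-4, batch=20, rng=None):
    """Zeroth-order Frank-Wolfe sketch on the set {x : ||x||_1 <= radius}."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for k in range(iters):
        fx = f(x)
        u = rng.standard_normal((batch, x.size))
        # Gaussian-smoothing estimate: average of (f(x + mu*u) - f(x))/mu * u.
        g = np.mean([(f(x + mu * ui) - fx) / mu * ui for ui in u], axis=0)
        # Linear minimization oracle for the l1 ball: a signed, scaled vertex.
        i = np.argmax(np.abs(g))
        v = np.zeros_like(x)
        v[i] = -radius * np.sign(g[i])
        eta = 2.0 / (k + 2)          # classical Frank-Wolfe step size
        x = x + eta * (v - x)
    return x
```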

Nonmonotone line searches for unconstrained multiobjective optimization problems

In the last two decades, many descent methods for multiobjective optimization problems have been proposed. In particular, the steepest descent and Newton methods have been studied for the unconstrained case. In both methods, the search directions are computed by solving convex subproblems, and the stepsizes are obtained by an Armijo-type line search. As a consequence, the …
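The nonmonotone idea can be illustrated with a max-type (Grippo-Lampariello-Lucidi style) Armijo rule that compares each objective against its maximum over a memory window rather than against its last value; all names and constants here are illustrative, not the paper's exact rules:

```python
import numpy as np

def nonmonotone_armijo(funcs, slopes, x, d, history, sigma=1e-4, beta=0.5):
    """Max-type nonmonotone Armijo backtracking for a multiobjective descent
    direction d. slopes[j] holds grad f_j(x)^T d (negative for descent);
    history is a list of recent objective-value vectors over the memory window."""
    ref = np.max(np.asarray(history), axis=0)   # componentwise max over the window
    alpha = 1.0
    while alpha > 1e-12:
        trial = x + alpha * d
        if all(f(trial) <= ref[j] + sigma * alpha * slopes[j]
               for j, f in enumerate(funcs)):
            return alpha, trial
        alpha *= beta                           # backtrack
    raise RuntimeError("no acceptable step: d may not be a descent direction")
```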

Semi-Smooth Second-order Type Methods for Composite Convex Programs

The goal of this paper is to study approaches to bridge the gap between first-order and second-order type methods for composite convex programs. Our key observations are: i) Many well-known operator splitting methods, such as forward-backward splitting (FBS) and Douglas-Rachford splitting (DRS), actually define a possibly semi-smooth and monotone fixed-point mapping; ii) The optimal solutions …
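To make observation i) concrete, here is a minimal semismooth-Newton sketch on the forward-backward fixed-point residual for a LASSO-type instance, min 0.5*||Ax - b||^2 + lam*||x||_1; it is undamped, carries none of the globalization or DRS variants the paper studies, and the instance and constants are illustrative:

```python
import numpy as np

def semismooth_newton_fbs(A, b, lam, t=None, iters=50, tol=1e-10):
    """Semismooth Newton on the FBS residual F(x) = x - prox_{t*lam*||.||_1}(x - t*grad).

    Here F is piecewise affine, and one generalized Jacobian element is
        J = I - P (I - t A^T A),  P = diag(1{|z_i| > t*lam}).
    """
    H = A.T @ A
    t = t or 1.0 / np.linalg.norm(H, 2)        # step in (0, 1/L]
    n = A.shape[1]
    x = np.zeros(n)
    soft = lambda z, r: np.sign(z) * np.maximum(np.abs(z) - r, 0.0)
    for _ in range(iters):
        z = x - t * (H @ x - A.T @ b)          # forward (gradient) step
        Fx = x - soft(z, t * lam)              # fixed-point residual
        if np.linalg.norm(Fx) < tol:
            break
        P = (np.abs(z) > t * lam).astype(float)
        J = np.eye(n) - P[:, None] * (np.eye(n) - t * H)
        x = x - np.linalg.solve(J, Fx)         # undamped step; assumes J nonsingular
    return x
```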

A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation

Stochastic approximation techniques play an important role in solving many problems encountered in machine learning or adaptive signal processing. In these contexts, the statistics of the data are often unknown a priori, or their direct computation is too intensive, and they thus have to be estimated online from the observed signals. For batch optimization of …
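A stripped-down illustration of the online majorize-minimize idea for penalized least squares, with running second-order statistics and a half-quadratic majorant of a smooth l1-like penalty; the paper's subspace acceleration is omitted, and every name and constant is illustrative:

```python
import numpy as np

def online_mm_penalized_ls(stream, n, lam=0.1, delta=1e-2, rho=0.05, steps=500):
    """Online MM sketch for min_x E[(a^T x - y)^2] + lam * sum_i sqrt(x_i^2 + delta^2).

    Uses the majorant sqrt(u^2 + d^2) <= const + u^2 / (2*sqrt(v^2 + d^2)) at v."""
    R, r = np.zeros((n, n)), np.zeros(n)
    x = np.zeros(n)
    for _ in range(steps):
        a, y = next(stream)                       # one (regressor, observation) pair
        R = (1 - rho) * R + rho * np.outer(a, a)  # running estimate of E[a a^T]
        r = (1 - rho) * r + rho * y * a           # running estimate of E[y a]
        w = lam / (2.0 * np.sqrt(x**2 + delta**2))  # penalty majorant curvature at x
        # MM step: minimize the quadratic majorant tangent at the current x
        # (a small ridge keeps the system solvable while R is rank-deficient).
        x = np.linalg.solve(R + np.diag(w) + 1e-8 * np.eye(n), r)
    return x
```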

Attraction of Newton method to critical Lagrange multipliers: fully quadratic case

All previously known results concerned with the attraction of Newton-type iterations for optimality systems to critical Lagrange multipliers were a posteriori by nature: they showed that, in case of convergence, the dual limit is in a sense unlikely to be noncritical. This paper suggests the first a priori result in this direction, showing that critical …
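For readers outside this literature, a reminder of the notion in the equality-constrained case, recalled here from the literature (following Izmailov and Solodov's usage) rather than from this abstract: given the problem \(\min f(x)\) subject to \(h(x) = 0\) with Lagrangian \(L(x,\lambda) = f(x) + \langle \lambda, h(x) \rangle\), a multiplier \(\bar\lambda\) associated with a stationary point \(\bar x\) is called critical when

\[
\exists\, \xi \in \ker h'(\bar{x}) \setminus \{0\} \quad \text{with} \quad \frac{\partial^2 L}{\partial x^2}(\bar{x}, \bar{\lambda})\, \xi \in \operatorname{im} h'(\bar{x})^{\mathsf T},
\]

and noncritical otherwise; the phenomenon studied here is that Newton-type iterations tend to be attracted to the critical ones.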

An LP-Newton Method: Nonsmooth Equations, KKT Systems, and Nonisolated Solutions

We define a new Newton-type method for the solution of constrained systems of equations and analyze its properties in detail. Under suitable conditions, which include neither differentiability nor local uniqueness of solutions, the method converges locally quadratically to a solution of the system of equations, thus filling an important gap in the existing theory. …
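The subproblem at the heart of the method, as it appears in this line of work (reconstructed from the literature), reads

\[
\min_{s,\,\gamma}\ \gamma \quad \text{s.t.} \quad \|F(x^k) + G(x^k)\, s\| \le \gamma\, \|F(x^k)\|^2, \qquad \|s\| \le \gamma\, \|F(x^k)\|, \qquad x^k + s \in \Omega,
\]

where \(G(x^k)\) is the Jacobian \(F'(x^k)\) in the smooth case or an element of a suitable generalized derivative otherwise; with the infinity norm and a polyhedral \(\Omega\), this is a linear program, hence the name.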

A quadratically convergent Newton method for vector optimization

We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Graña Drummond, and Svaiter for multicriteria optimization, which in turn is an extension of the classical Newton method for scalar optimization. The steplength is chosen by …