The Fermat Rule for Set Optimization Problems with Lipschitzian Set-Valued Mappings

In this paper, we consider set optimization problems with respect to the set approach. Specifically, we deal with the lower less and the upper less set relations. First, we derive properties of convexity and Lipschitzianity of suitable scalarizing functionals, under the same assumption on the set-valued objective mapping. We then obtain upper estimates of the …
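For orientation, the two set relations named here are usually defined as follows, for nonempty sets A, B and a closed convex ordering cone C (one common convention; the paper's exact setting may differ):

```latex
% Lower and upper set less relations (Kuroiwa-type), stated for orientation only.
A \preceq^{l}_{C} B \quad :\Longleftrightarrow \quad B \subseteq A + C,
\qquad
A \preceq^{u}_{C} B \quad :\Longleftrightarrow \quad A \subseteq B - C.
```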

Alternative DC Algorithm for Partial DC programming

In this paper, we introduce an alternative DC algorithm for solving partial DC programs. The proposed algorithm is a natural extension of the standard DC algorithm. Furthermore, we also consider an inexact version of this alternative DC algorithm. The convergence of the proposed algorithms (both the exact and inexact versions) is investigated. The applications to …
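As background, the standard DC algorithm (DCA) that the abstract builds on linearizes the concave part and solves a convex subproblem at each step. Below is a minimal sketch on a toy one-dimensional DC decomposition of our own choosing; it illustrates the baseline scheme only, not the paper's alternative or inexact variants.

```python
import numpy as np

# Toy DC program: minimize f(x) = g(x) - h(x) with g(x) = x**4, h(x) = x**2,
# both convex.  Standard DCA: pick y_k in dh(x_k), then minimize g(x) - y_k*x.

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        y = 2.0 * x              # y_k = h'(x_k) = 2 x_k
        # x_{k+1} = argmin_x x**4 - y*x; the optimality condition 4x^3 = y
        # gives a closed-form update for this toy problem.
        x = np.cbrt(y / 4.0)
    return x

print(dca(0.3))   # converges to a critical point, here about 0.707
```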

The Proximal Alternating Minimization Algorithm for two-block separable convex optimization problems with linear constraints

The Alternating Minimization Algorithm (AMA) has been proposed by Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of AMA does …
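For reference, Tseng's AMA applied to min f(x) + g(z) subject to Ax + Bz = b, with f strongly convex, performs updates of the following standard form (notation is ours, not necessarily the paper's):

```latex
% One AMA iteration with penalty parameter \rho > 0:
x^{k+1} \in \arg\min_{x}\; f(x) + \langle y^{k}, Ax\rangle, \\
z^{k+1} \in \arg\min_{z}\; g(z) + \langle y^{k}, Bz\rangle
          + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - b\|^{2}, \\
y^{k+1} = y^{k} + \rho\,\bigl(Ax^{k+1} + Bz^{k+1} - b\bigr).
```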

Stochastic subgradient method converges on tame functions

This work considers the question: what convergence guarantees does the stochastic subgradient method have in the absence of smoothness and convexity? We prove that the stochastic subgradient method, on any semialgebraic locally Lipschitz function, produces limit points that are all first-order stationary. More generally, our result applies to any function with a Whitney stratifiable graph. …
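The method in question is the classical iteration x_{k+1} = x_k - alpha_k g_k, where g_k is a stochastic estimate of a (Clarke) subgradient and the step sizes are diminishing. A minimal sketch on a simple semialgebraic, nonsmooth and nonconvex function (an illustration under our own toy assumptions; the paper's stratifiable setting is far broader):

```python
import numpy as np

# Stochastic subgradient method on f(x) = | ||x||_2 - 1 | (distance to the
# unit sphere), a semialgebraic, nonsmooth, nonconvex function.

rng = np.random.default_rng(0)
d = 10
x = rng.standard_normal(d)

for k in range(1, 5001):
    r = np.linalg.norm(x)
    g = np.sign(r - 1.0) * (x / r if r > 0 else np.zeros(d))  # a Clarke subgradient
    g_noisy = g + 0.1 * rng.standard_normal(d)                # stochastic estimate
    alpha = 1.0 / k                                           # diminishing steps
    x = x - alpha * g_noisy

print(abs(np.linalg.norm(x) - 1.0))   # small: x approaches a stationary point
```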

The nonsmooth landscape of phase retrieval

We consider a popular nonsmooth formulation of the real phase retrieval problem. We show that under standard statistical assumptions, a simple subgradient method converges linearly when initialized within a constant relative distance of an optimal solution. Seeking to understand the distribution of the stationary points of the problem, we complete the paper by proving that …
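A common way to write the nonsmooth formulation referred to here is min_x (1/m) * sum_i |(a_i^T x)^2 - b_i|. The sketch below pairs it with a plain subgradient method using the Polyak step size (minimal value 0 in the noiseless model); the synthetic data and parameters are our own illustration, not the authors' experimental setup.

```python
import numpy as np

# Nonsmooth phase retrieval objective and a Polyak-step subgradient method.
rng = np.random.default_rng(0)
n, m = 50, 400
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = (A @ x_true) ** 2                      # noiseless measurements

def f(x):
    return np.mean(np.abs((A @ x) ** 2 - b))

x = x_true + 0.1 * rng.standard_normal(n)  # initialization near a solution
for _ in range(200):
    r = (A @ x) ** 2 - b
    g = (2.0 / m) * (A.T @ (np.sign(r) * (A @ x)))    # a subgradient of f
    if np.linalg.norm(g) == 0:
        break
    x = x - f(x) / np.linalg.norm(g) ** 2 * g         # Polyak step (f_min = 0)

print(f(x))   # decreases rapidly from a good initialization
```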

ADMM for monotone operators: convergence analysis and rates

We propose in this paper a unifying scheme for several algorithms from the literature dedicated to solving monotone inclusion problems involving compositions with linear continuous operators in infinite-dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions and also the classical ADMM numerical scheme for convex optimization problems, …
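For reference, the classical ADMM scheme covered by this analysis, applied to min f(x) + g(z) subject to Ax + Bz = c, reads as follows in its standard unscaled form (our notation):

```latex
% One ADMM iteration with penalty parameter \rho > 0 and multiplier y:
x^{k+1} \in \arg\min_{x}\; f(x) + \langle y^{k}, Ax\rangle
          + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c\|^{2}, \\
z^{k+1} \in \arg\min_{z}\; g(z) + \langle y^{k}, Bz\rangle
          + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c\|^{2}, \\
y^{k+1} = y^{k} + \rho\,\bigl(Ax^{k+1} + Bz^{k+1} - c\bigr).
```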

Fixing and extending some recent results on the ADMM algorithm

We first point out several flaws in the recent paper [R. Shefi, M. Teboulle: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM J. Optim. 24, 269–297, 2014] that proposes two ADMM-type algorithms for solving convex optimization problems involving compositions with linear operators and show …

Error bounds, quadratic growth, and linear convergence of proximal methods

We show that the error bound property, postulating that the step lengths of the proximal gradient method linearly bound the distance to the solution set, is equivalent to a standard quadratic growth condition. We exploit this equivalence in an analysis of asymptotic linear convergence of the proximal gradient algorithm for structured problems, which lack …
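In the composite setting f = g + h, with g smooth and h closed convex, the two conditions shown equivalent here can be written as follows (standard formulations up to the constants; the notation is ours):

```latex
% Quadratic growth near the solution set S, with optimal value f^*:
f(x) \;\ge\; f^{*} + \tfrac{\alpha}{2}\,\mathrm{dist}^{2}(x, S),
\\[4pt]
% Error bound: the proximal-gradient step length bounds the distance to S,
% where p_t(x) := \mathrm{prox}_{t h}\bigl(x - t\nabla g(x)\bigr):
\mathrm{dist}(x, S) \;\le\; \kappa\,\|x - p_{t}(x)\|.
```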

Perturbation of error bounds

Our aim in the current article is to extend the developments in Kruger, Ngai & Théra, SIAM J. Optim. 20(6), 3280–3296 (2010) and, more precisely, to characterize, in the Banach space setting, the stability under data perturbations of the local and global error bound property of inequalities determined by proper lower semicontinuous functions. We propose new …
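For the inequality f(x) <= 0 with f proper and lower semicontinuous, the local error bound property studied here is usually stated as follows (standard definition; our notation):

```latex
% Local error bound at \bar{x}, with S := \{x : f(x) \le 0\} and [t]_+ := \max(t,0):
\exists\, \tau > 0,\ \varepsilon > 0:\qquad
\mathrm{dist}(x, S) \;\le\; \tau\,[f(x)]_{+}
\quad \text{for all } x \in B(\bar{x}, \varepsilon).
```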

Variational analysis of spectral functions simplified

Spectral functions of symmetric matrices — those depending on matrices only through their eigenvalues — appear often in optimization. A cornerstone variational analytic tool for studying such functions is a formula relating their subdifferentials to the subdifferentials of their diagonal restrictions. This paper presents a new, short, and revealing derivation of this result. We then …
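The formula in question, due to Lewis, expresses the subdifferential of a spectral function F = f ∘ λ (with f symmetric in its arguments) at a symmetric matrix X through the subdifferential of f at the eigenvalue vector λ(X). Stated here for orientation, in our notation:

```latex
% Subdifferential formula for a spectral function F = f \circ \lambda:
\partial (f\circ\lambda)(X)
  \;=\;
  \bigl\{\, U \,\mathrm{Diag}(\mu)\, U^{\top}
        \;:\; \mu \in \partial f(\lambda(X)),\;
        U \text{ orthogonal},\; X = U\,\mathrm{Diag}(\lambda(X))\,U^{\top} \bigr\}.
```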