An Inexact Spingarn’s Partial Inverse Method with Applications to Operator Splitting and Composite Optimization

We propose and study the iteration-complexity of an inexact version of Spingarn's partial inverse method. Its complexity analysis is performed by viewing it within the framework of the hybrid proximal extragradient (HPE) method, for which pointwise and ergodic iteration-complexity bounds have recently been established by Monteiro and Svaiter. As applications, we propose and analyze the … Read more
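
To make the HPE mechanism concrete, here is a minimal numpy sketch of one possible outer/inner loop for the special case where the operator is the gradient of a smooth convex function: the proximal subproblem is solved inexactly until a relative-error test holds, followed by an extragradient update. The function name, the inner gradient-descent solver, and all tolerances are illustrative assumptions of mine, not the authors' scheme.

```python
import numpy as np

def hpe_prox(grad_f, x0, lam=1.0, sigma=0.5, n_outer=50, n_inner=200, inner_lr=0.1):
    """Illustrative HPE-style loop for solving 0 = grad_f(x) (hypothetical sketch).

    Inner loop: inexactly solve the prox subproblem 0 = lam*grad_f(y) + y - x_k
    until the relative-error test ||lam*v + y - x_k|| <= sigma*||y - x_k|| holds.
    Outer step: extragradient update x_{k+1} = x_k - lam*grad_f(y).
    """
    x = x0.copy()
    for _ in range(n_outer):
        y = x.copy()
        for _ in range(n_inner):
            r = lam * grad_f(y) + (y - x)            # prox-subproblem residual
            if np.linalg.norm(r) <= sigma * np.linalg.norm(y - x) + 1e-12:
                break
            y = y - inner_lr * r                     # inexact inner solve
        x = x - lam * grad_f(y)                      # extragradient step
    return x
```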

A proximal-Newton method for unconstrained convex optimization in Hilbert spaces

We propose and study the iteration-complexity of a proximal-Newton method for finding approximate solutions of the problem of minimizing a twice continuously differentiable convex function on a (possibly infinite-dimensional) Hilbert space. We prove global convergence rates for obtaining approximate solutions in terms of function/gradient values. Our main results follow from an iteration-complexity study of … Read more
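
As a rough finite-dimensional illustration of a proximal-Newton step, the sketch below solves the regularized Newton system of the proximal subproblem; the damping rule mu_k = sqrt(||grad f(x_k)||) is a hypothetical choice for the sketch, not taken from the paper.

```python
import numpy as np

def prox_newton(grad, hess, x0, n_iter=30, tol=1e-8):
    """Hypothetical regularized (proximal) Newton sketch for min f(x).

    Each step minimizes the model <g, d> + 0.5*d'H d + 0.5*mu*||d||^2,
    i.e. solves the linear system (H_k + mu_k*I) d = -grad f(x_k).
    """
    x = x0.copy()
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        mu = np.sqrt(np.linalg.norm(g))              # illustrative damping rule
        x = x + np.linalg.solve(hess(x) + mu * np.eye(x.size), -g)
    return x
```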

On the local convergence analysis of the Gradient Sampling method

The Gradient Sampling method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using only first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This manuscript aims at determining under which circumstances one can expect the same local … Read more
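
For intuition, a bare-bones version of one Gradient Sampling iteration looks as follows: sample gradients near the current point, take the minimum-norm element of their convex hull as a stabilized descent direction, and backtrack. Sampling in a box, the SLSQP subsolver, and all tolerances are simplifications of mine; the actual method samples in an eps-ball and includes careful safeguards.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_in_hull(G):
    """Smallest-norm point in the convex hull of the rows of G (a small QP)."""
    m = G.shape[0]
    res = minimize(lambda w: 0.5 * np.dot(w @ G, w @ G),
                   np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},),
                   method='SLSQP')
    return res.x @ G

def gradient_sampling(f, grad, x0, eps=0.1, n_iter=100, beta=1e-4, rng=None):
    rng = rng or np.random.default_rng(0)
    x, n = x0.copy(), x0.size
    for _ in range(n_iter):
        pts = x + eps * (rng.random((2 * n, n)) * 2 - 1)   # box sampling for simplicity
        g = min_norm_in_hull(np.vstack([grad(x)] + [grad(p) for p in pts]))
        if np.linalg.norm(g) <= 1e-8:
            eps *= 0.5                                     # shrink the sampling radius
            continue
        t = 1.0
        while f(x - t * g) > f(x) - beta * t * (g @ g) and t > 1e-12:
            t *= 0.5                                       # Armijo backtracking
        x = x - t * g
    return x
```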

A general double-proximal gradient algorithm for d.c. programming

The possibilities of exploiting the special structure of d.c. programs, which consist of optimizing the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are … Read more
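
For readers unfamiliar with the DCA, here is a stripped-down proximal variant for min g(x) - h(x): linearize the concave part at the current iterate and take a proximal step on g. This single-prox sketch is my own simplification, not the paper's generalized double-proximal scheme.

```python
import numpy as np

def prox_dca(prox_g, subgrad_h, x0, t=0.5, n_iter=100):
    """Proximal-DCA sketch for min g(x) - h(x), g and h convex (illustrative).

    Update: x_{k+1} = prox_{t g}(x_k + t*y_k) with y_k a subgradient of h at x_k.
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x + t * subgrad_h(x), t)
    return x

# Toy d.c. problem: g(x) = 0.5*||x||^2, h(x) = ||x||_1; critical points have
# entries in {-1, 0, +1}.
prox_g = lambda z, t: z / (1.0 + t)        # prox of 0.5*||x||^2
x = prox_dca(prox_g, np.sign, np.array([3.0, -0.2, 1.5]))
```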

Relatively-Smooth Convex Optimization by First-Order Methods, and Applications

The usual approach to developing and analyzing first-order methods for smooth convex optimization assumes that the gradient of the objective function is uniformly smooth with some Lipschitz constant L. However, in many settings the differentiable convex function f(·) is not uniformly smooth: for example, in D-optimal design, where f(x) := -ln det(HXH^T), or even the univariate … Read more
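
The remedy developed in this line of work replaces the Euclidean quadratic upper model by a Bregman distance D_h relative to a reference function h. Below is a minimal sketch of one such Bregman gradient step, assuming the reference h(x) = -Σ_i log x_i, for which the update has a closed form; the choice of h and the relative-smoothness constant are assumptions of the sketch.

```python
import numpy as np

def bregman_gradient(grad_f, x0, L_rel=1.0, n_iter=200):
    """Bregman gradient sketch with reference h(x) = -sum(log x) (domain x > 0).

    Step: x_{k+1} = argmin_y <grad_f(x_k), y> + L_rel*D_h(y, x_k), which here
    solves grad h(y) = grad h(x_k) - grad f(x_k)/L_rel in closed form:
        y_i = 1 / (1/x_i + g_i/L_rel).
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = 1.0 / (1.0 / x + grad_f(x) / L_rel)   # iterates must stay positive
    return x

# f(x) = sum(x) - sum(log x) is 1-smooth relative to this h, and a single
# step lands exactly on the minimizer x = (1, ..., 1).
x = bregman_gradient(lambda x: 1.0 - 1.0 / x, np.array([5.0, 0.2]), L_rel=1.0)
```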

Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server

This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate and give explicit expressions for step-size choices … Read more
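
A serial simulation of the aggregated-gradient idea may help fix intuition: a table keeps the most recent gradient of every data term, each update refreshes one (possibly stale) entry, and the proximal step uses the aggregate. The sampling scheme, step size, and names below are illustrative; in a parameter server, staleness would come from asynchrony rather than random sampling.

```python
import numpy as np

def piag(grad_i, prox_R, x0, n_samples, step=0.01, n_epochs=20, rng=None):
    """Serial sketch of a proximal incremental aggregated gradient method.

    grad_i(i, x) returns the gradient of the i-th data term at x;
    prox_R(z, t) is the proximal map of the regularizer R with parameter t.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    g_table = np.array([grad_i(i, x) for i in range(n_samples)])
    agg = g_table.sum(axis=0)
    for _ in range(n_epochs * n_samples):
        i = rng.integers(n_samples)
        g_new = grad_i(i, x)
        agg += g_new - g_table[i]          # refresh the aggregate in O(dim)
        g_table[i] = g_new
        x = prox_R(x - step * agg, step)   # proximal step on the regularizer
    return x
```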

Moment methods in energy minimization: New bounds for Riesz minimal energy problems

We use moment methods to construct a converging hierarchy of optimization problems that lower-bound the ground state energy of interacting particle systems. We approximate the infinite-dimensional optimization problems in this hierarchy by block-diagonal semidefinite programs. For this we develop the necessary harmonic analysis for spaces consisting of subsets of another space, and … Read more
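
For concreteness, the ground state problem being lower-bounded is the classical Riesz s-energy minimization; this is the standard formulation, with notation chosen here for illustration:

```latex
% Riesz s-energy of N interacting particles on a compact set S (standard definition)
E_s(N) \;=\; \min_{x_1,\dots,x_N \in S} \;\sum_{1 \le i < j \le N} \frac{1}{\lVert x_i - x_j \rVert^{s}}
```

Each level of the moment hierarchy yields a semidefinite program whose optimal value is a lower bound on E_s(N), and the hierarchy of bounds converges as the level increases.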

Variational Geometric Approach to Generalized Differential and Fenchel Conjugate Calculi in Convex Analysis

This paper develops a geometric approach to variational analysis for the case of convex objects considered in locally convex topological spaces and also in Banach space settings. Besides deriving in this way new results of convex calculus, we present an overview of some known achievements with their unified and simplified proofs based on the developed … Read more
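
As a reminder of the objects involved, the Fenchel conjugate and the sum rule that such calculi establish are standard; the qualification condition is deliberately left unspecified here:

```latex
f^{*}(x^{*}) = \sup_{x \in X} \bigl\{ \langle x^{*}, x \rangle - f(x) \bigr\}, \qquad
(f + g)^{*}(x^{*}) = \min_{y^{*}} \bigl\{ f^{*}(y^{*}) + g^{*}(x^{*} - y^{*}) \bigr\}
```

The second identity (exact infimal convolution of the conjugates) holds under a suitable qualification condition; results of exactly this type are what the geometric approach rederives and extends.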

Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a … Read more
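
A concrete instance of such a Taylor-like model method is the prox-linear/Gauss-Newton scheme for nonlinear least squares sketched below; the step-size-to-slope relationship studied in the paper motivates using the scaled step length as a stopping criterion. Names and parameters here are mine.

```python
import numpy as np

def prox_linear_gn(c, jac, x0, t=1.0, n_iter=50):
    """Prox-linear (Gauss-Newton-type) sketch for min_x 0.5*||c(x)||^2.

    Each step minimizes the Taylor-like model
        0.5*||c(x_k) + J_k (y - x_k)||^2 + (1/(2t))*||y - x_k||^2,
    a regularized linear least-squares problem (Levenberg-Marquardt flavor).
    """
    x = x0.copy()
    for _ in range(n_iter):
        J, r = jac(x), c(x)
        # model minimizer: solve (J'J + I/t) d = -J'r for d = y - x
        d = np.linalg.solve(J.T @ J + np.eye(x.size) / t, -J.T @ r)
        x = x + d
        if np.linalg.norm(d) / t < 1e-8:   # small step <=> small model slope
            break
    return x
```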

Exact and Inexact Subsampled Newton Methods for Optimization

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling. We first consider Newton-like methods that employ these approximations and discuss how to coordinate the accuracy in the gradient and Hessian to yield a superlinear rate of convergence in expectation. The second part of … Read more
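
The exact variant of such a method is easy to sketch: estimate the gradient and Hessian on independent random subsamples and take a Newton step. Batch sizes and the ridge safeguard below are illustrative choices of mine; replacing the direct solve with conjugate gradients gives the inexact variant studied in the second part.

```python
import numpy as np

def subsampled_newton(grad_i, hess_i, x0, n, grad_batch=256, hess_batch=64,
                      n_iter=20, rng=None):
    """Subsampled Newton sketch: grad_i(i, x) and hess_i(i, x) return the
    gradient/Hessian of the i-th term; both are averaged over random subsamples.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(n_iter):
        Sg = rng.choice(n, size=min(grad_batch, n), replace=False)
        Sh = rng.choice(n, size=min(hess_batch, n), replace=False)
        g = np.mean([grad_i(i, x) for i in Sg], axis=0)
        H = np.mean([hess_i(i, x) for i in Sh], axis=0)
        H += 1e-8 * np.eye(x.size)          # guard against a singular subsampled Hessian
        x = x - np.linalg.solve(H, g)       # exact step; CG here would make it inexact
    return x
```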