Moment methods in energy minimization: New bounds for Riesz minimal energy problems

We use moment methods to construct a converging hierarchy of optimization problems to lower bound the ground state energy of interacting particle systems. We approximate the infinite-dimensional optimization problems in this hierarchy by block-diagonal semidefinite programs. For this we develop the necessary harmonic analysis for spaces consisting of subsets of another space, and …
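
As context for what the hierarchy bounds, here is a minimal sketch (Python with NumPy/SciPy assumed) of the primal side of the problem: directly minimizing the Riesz s-energy of N points on the unit sphere. Direct minimization like this only yields upper bounds; the paper's moment hierarchy complements it from below. The function riesz_energy and all parameter choices are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def riesz_energy(flat, n, s):
    # points parametrized in R^3, projected onto the unit sphere
    pts = flat.reshape(n, 3)
    pts = pts / np.linalg.norm(pts, axis=1, keepdims=True)
    # pairwise distances; only the strict upper triangle enters the energy
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return np.sum(d[iu] ** (-s))

n, s = 12, 1.0  # 12 points, Coulomb-type kernel (illustrative choices)
rng = np.random.default_rng(0)
res = minimize(riesz_energy, rng.normal(size=3 * n), args=(n, s), method="L-BFGS-B")
print("upper bound on the minimal s-energy:", res.fun)
```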

Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server

This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish a linear convergence rate and give explicit expressions for step-size choices …
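
A serial, single-process sketch of the proximal incremental aggregated gradient step that such parameter-server algorithms build on, specialized to a lasso problem. The table of stored gradients mimics stale per-worker gradients arriving asynchronously; the name piag_lasso and all constants are hypothetical, and this is not the paper's implementation.

```python
import numpy as np

def prox_l1(x, t):
    # soft-thresholding: prox of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def piag_lasso(A, b, lam, step, iters, rng):
    # one stale per-sample gradient is refreshed per iteration; the
    # aggregate of stored gradients stands in for the full gradient
    n, d = A.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))   # gradient table, zero-initialized for simplicity
    agg = grads.sum(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ x - b[i])   # gradient of 0.5*(a_i'x - b_i)^2
        agg += g_new - grads[i]
        grads[i] = g_new
        x = prox_l1(x - step * agg / n, step * lam)
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
print(piag_lasso(A, b, lam=0.1, step=0.05, iters=2000, rng=rng))
```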

Variational Geometric Approach to Generalized Differential and Fenchel Conjugate Calculi in Convex Analysis

This paper develops a geometric approach to variational analysis for convex objects considered in locally convex topological vector spaces and also in Banach space settings. Besides deriving new results of convex calculus in this way, we present an overview of some known achievements with their unified and simplified proofs based on the developed …
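
For reference, the central objects in such a calculus are the Fenchel conjugate and the conjugate sum rule; the display below records the standard definitions (not the paper's geometric derivation):

```latex
% The Fenchel conjugate of f on a topological vector space X, and the
% sum rule expressing (f+g)^* as an infimal convolution of conjugates.
\[
  f^*(x^*) = \sup_{x \in X} \bigl\{ \langle x^*, x \rangle - f(x) \bigr\},
\]
\[
  (f+g)^*(x^*) = (f^* \,\square\, g^*)(x^*)
    = \inf_{u^*} \bigl\{ f^*(u^*) + g^*(x^* - u^*) \bigr\},
\]
% with the sum rule holding under a standard qualification condition,
% e.g. when f is continuous at some point of the domain of g.
```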

Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a …
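
A concrete instance of this setup: the proximal gradient method minimizes exactly such a Taylor-like model at every step, and the scaled step length is the computable surrogate one relates to the slope. A minimal sketch, assuming an l1-regularized smooth objective; prox_gradient and its defaults are illustrative, not the paper's algorithm.

```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_gradient(grad_f, x0, step, lam, tol=1e-8, max_iter=10000):
    # each iterate minimizes the Taylor-like model
    #   f(x_k) + <grad f(x_k), x - x_k> + ||x - x_k||^2/(2*step) + lam*||x||_1;
    # the scaled step length ||x_{k+1} - x_k||/step is the computable
    # stationarity measure of the kind the paper ties to the true slope
    x = x0
    for _ in range(max_iter):
        x_new = prox_l1(x - step * grad_f(x), step * lam)
        if np.linalg.norm(x_new - x) / step < tol:
            break
        x = x_new
    return x
```

Here the termination test itself illustrates the abstract's point: a small step is meaningful as a stopping criterion precisely because it controls a measure of near-stationarity.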

Exact and Inexact Subsampled Newton Methods for Optimization

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling. We first consider Newton-like methods that employ these approximations and discuss how to coordinate the accuracy in the gradient and Hessian to yield a superlinear rate of convergence in expectation. The second part of …
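
A minimal sketch of the inexact variant, assuming a regularized logistic-regression loss: the gradient is exact, the Hessian is estimated on a random subsample, and the Newton system is solved only approximately by capped conjugate-gradient iterations. Function names and parameters are hypothetical, not the paper's code.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def subsampled_newton(A, y, iters=10, hess_size=20, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ x))
        grad = A.T @ (p - y) / n + reg * x          # full gradient
        S = rng.choice(n, size=hess_size, replace=False)
        w = p[S] * (1.0 - p[S])
        # Hessian-vector products using only the subsample S
        hv = lambda v: A[S].T @ (w * (A[S] @ v)) / hess_size + reg * v
        H = LinearOperator((d, d), matvec=hv)
        step, _ = cg(H, grad, maxiter=25)           # inexact solve via capped CG
        x -= step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
y = (A @ rng.normal(size=5) + 0.1 * rng.normal(size=200) > 0).astype(float)
print(subsampled_newton(A, y))
```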

Positive and Z-operators on closed convex cones

Let K be a closed convex cone with dual cone K* in a finite-dimensional real Hilbert space V. A positive operator on K is a linear operator L on V such that L(K) is a subset of K. Positive operators generalize the nonnegative matrices and are essential to Perron-Frobenius theory. We say that L is …
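
On the nonnegative orthant both classes admit classical matrix characterizations, which the following sketch checks numerically, assuming the usual definition of a Z-operator (⟨L(x), s⟩ ≤ 0 whenever x ∈ K, s ∈ K*, and ⟨x, s⟩ = 0). The paper works with general closed convex cones, where these entrywise tests do not apply.

```python
import numpy as np

def is_positive_on_orthant(L):
    # for K the nonnegative orthant, L(K) ⊆ K iff L has nonnegative entries,
    # i.e. positive operators are exactly the nonnegative matrices
    return bool(np.all(L >= 0))

def is_z_operator_on_orthant(L):
    # on the orthant the Z-operator condition reduces to a Z-matrix:
    # all off-diagonal entries nonpositive (diagonal unconstrained)
    off = L - np.diag(np.diag(L))
    return bool(np.all(off <= 0))

M = np.array([[2.0, -1.0], [-0.5, 3.0]])
print(is_positive_on_orthant(M), is_z_operator_on_orthant(M))  # False True
```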

Max-Norm Optimization for Robust Matrix Recovery

This paper studies the matrix completion problem under arbitrary sampling schemes. We propose a new estimator that incorporates both max-norm and nuclear-norm regularization and enables efficient low-rank matrix recovery from a random subset of entries observed with additive noise under general non-uniform and unknown sampling distributions. This method significantly relaxes the uniform …
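
A sketch of the nuclear-norm half of such an estimator via singular value thresholding; the max-norm term, which is the paper's contribution, is omitted here for simplicity. All names and parameters are illustrative.

```python
import numpy as np

def svt(X, tau):
    # singular value soft-thresholding: prox of tau * (nuclear norm)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau=1.0, step=1.0, iters=200):
    # proximal gradient on 0.5*||mask*(X - M_obs)||_F^2 + tau*||X||_*
    # (step=1 is safe: the masked-residual gradient is 1-Lipschitz)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (X - M_obs)
        X = svt(X - step * G, step * tau)
    return X
```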

A recursive semi-smooth Newton method for linear complementarity problems

A primal feasible active set method is presented for finding the unique solution of a Linear Complementarity Problem (LCP) with a P-matrix; it extends the globally convergent active set method for strictly convex quadratic problems with simple bounds proposed by [P. Hungerlaender and F. Rendl. A feasible active set method for strictly convex problems with …
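
For context, the min-function reformulation F(x) = min(x, Mx + q) = 0 of the LCP admits a basic semi-smooth Newton iteration, and each step can be read as an active set update. This is a textbook sketch under that reformulation, not the paper's recursive method.

```python
import numpy as np

def lcp_semismooth_newton(M, q, max_iter=50, tol=1e-10):
    n = len(q)
    x = np.zeros(n)
    for _ in range(max_iter):
        w = M @ x + q
        F = np.minimum(x, w)
        if np.linalg.norm(F) < tol:
            break
        # generalized Jacobian: identity rows where min picks x_i,
        # rows of M where it picks w_i (this split is the active set)
        active = x <= w
        J = np.where(active[:, None], np.eye(n), M)
        x = x - np.linalg.solve(J, F)
    return x

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
M = B @ B.T + np.eye(5)            # symmetric positive definite, hence a P-matrix
q = rng.normal(size=5)
x = lcp_semismooth_newton(M, q)
print(np.minimum(x, M @ x + q))    # ~ 0 at the solution
```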

An Infeasible Active Set Method with Combinatorial Line Search for Convex Quadratic Problems with Bound Constraints

The minimization of a convex quadratic function under bound constraints is a fundamental building block for more complicated optimization problems. The active-set method introduced by [M. Bergounioux, K. Ito, and K. Kunisch. Primal-Dual Strategy for Constrained Optimal Control Problems. SIAM Journal on Control and Optimization, 37:1176–1194, 1999.] and [M. Bergounioux, M. Haddou, M. Hintermüller, and …
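
As a baseline for the problem class, here is a projected gradient sketch for a bound-constrained convex QP; the paper's infeasible active set method with combinatorial line search is a different (and typically much faster) algorithm, so this only fixes the problem format.

```python
import numpy as np

def projected_gradient_qp(Q, c, lo, hi, iters=500):
    # minimize 0.5*x'Qx + c'x subject to lo <= x <= hi,
    # with step 1/L where L = ||Q||_2 is the gradient's Lipschitz constant
    step = 1.0 / np.linalg.norm(Q, 2)
    x = np.clip(np.zeros_like(c), lo, hi)   # feasible starting point
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), lo, hi)
    return x
```

The projection onto box constraints is just a componentwise clip, which is what makes bound-constrained QPs such a convenient building block.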

A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning

Machine learning theory typically assumes that training data is unbiased and not adversarially generated. When real training data deviates from these assumptions, trained models make erroneous predictions, sometimes with disastrous effects. Robust losses, such as the Huber norm, are designed to mitigate the effects of such contaminated data, but they are limited to the regression …
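
A small illustration of why such losses help, assuming a one-dimensional regression with a single gross outlier: the Huber objective is quadratic for small residuals and linear for large ones, so the outlier's influence is capped. All names and data here are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    # quadratic near zero, linear in the tails
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * t + 0.05 * rng.normal(size=30)
y[5] += 10.0                                  # one contaminated point
obj = lambda w: huber(y - w[0] * t - w[1]).sum()
print(minimize(obj, np.zeros(2)).x)           # slope ~ 2, intercept ~ 0
```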