Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation

We prove new convergence rates for a generalized version of stochastic Nesterov acceleration under interpolation conditions. Unlike previous analyses, our approach accelerates any stochastic gradient method which makes sufficient progress in expectation. The proof, which proceeds using the estimating sequences framework, applies to both convex and strongly convex functions and is easily specialized to accelerated … Read more
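As a rough illustration of the idea of wrapping an arbitrary stochastic inner step in an accelerated outer loop, the sketch below runs a Nesterov-style momentum loop around a plain single-sample SGD step on an interpolating least-squares problem. It is not the paper's estimating-sequences construction; the test problem, step size, and momentum value are heuristic choices made here for illustration.

```python
import numpy as np

# Toy illustration: Nesterov-style extrapolation wrapped around a generic
# stochastic inner step, on a least-squares problem satisfying interpolation
# (every component loss is zero at the solution). Step size and momentum are
# heuristic, not the constants from the paper's analysis.

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                                  # interpolation: A x* = b exactly

L_max = np.max(np.sum(A * A, axis=1))           # per-sample smoothness bound
mu = np.linalg.eigvalsh(A.T @ A / n).min()      # strong convexity of the average
beta = (np.sqrt(L_max / mu) - 1) / (np.sqrt(L_max / mu) + 1)

def inner_step(y, eta=1.0 / L_max):
    """Any stochastic update making sufficient expected progress; here plain SGD."""
    i = rng.integers(n)
    return y - eta * (A[i] @ y - b[i]) * A[i]

x_prev = x = np.zeros(d)
for _ in range(5000):
    y = x + beta * (x - x_prev)                 # accelerated extrapolation point
    x_prev, x = x, inner_step(y)                # stochastic progress step at y

print("squared distance to solution:", np.linalg.norm(x - x_star) ** 2)
```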

Model Construction for Convex-Constrained Derivative-Free Optimization

We develop a new approximation theory for linear and quadratic interpolation models, suitable for use in convex-constrained derivative-free optimization (DFO). Most existing model-based DFO methods for constrained problems assume the ability to construct sufficiently accurate approximations via interpolation, but the standard notions of accuracy (designed for unconstrained problems) may not be achievable by only sampling … Read more
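For background on what such interpolation models look like, the sketch below builds a linear model from d+1 feasible sample points by solving the interpolation conditions. The objective, the box feasible set, and the coordinate-step sample set are ad-hoc choices for illustration, and the sketch does not reproduce the paper's accuracy analysis; it only shows how feasibility can shorten sample steps.

```python
import numpy as np

# Background sketch: a linear interpolation model m(y) = c + g @ (y - x0)
# built from d+1 sample points, the basic object in model-based DFO.
# The sample set (coordinate steps clipped to a box) is an ad-hoc choice;
# note how projection onto the feasible set shortens some steps, which is
# precisely what can degrade the usual (unconstrained) accuracy guarantees.

def f(y):                                       # example black-box objective
    return float(np.sum((y - 0.7) ** 2))

def project(y, lo=0.0, hi=1.0):                 # feasible set: the unit box
    return np.clip(y, lo, hi)

d, radius = 3, 0.2
x0 = project(np.full(d, 0.9))

# Sample x0 plus coordinate steps, kept feasible by projection.
Y = np.vstack([x0] + [project(x0 + radius * e) for e in np.eye(d)])
fvals = np.array([f(y) for y in Y])

# Interpolation conditions: c + g @ (y_i - x0) = f(y_i) for every sample.
M = np.hstack([np.ones((d + 1, 1)), Y - x0])
coef = np.linalg.solve(M, fvals)
c, g = coef[0], coef[1:]

print("model gradient:", g)
print("true gradient :", 2 * (x0 - 0.7))
```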

The limitation of neural nets for approximation and optimization

We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems. While neural networks are widely used for machine learning tasks such as classification and regression, their application in solving optimization problems has been limited. Our study begins by determining the best activation function … Read more
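To make the surrogate idea concrete, here is a minimal sketch: fit a small network to sampled objective values, then minimize the surrogate over its input. The architecture, ReLU activation, optimizer, sample budget, and toy objective are all illustrative assumptions, not the configurations studied in the paper.

```python
import torch

# Minimal surrogate sketch: fit a small MLP to samples of an objective,
# then minimize the fitted surrogate over its input. All choices below
# (architecture, activation, optimizer, sample size) are illustrative.

torch.manual_seed(0)

def objective(x):                               # toy objective, minimizer at 0.3
    return (x - 0.3) ** 2

x_train = torch.rand(200, 1)                    # sampled inputs in [0, 1]
y_train = objective(x_train)

surrogate = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(2000):                           # fit the surrogate (least squares)
    opt.zero_grad()
    torch.mean((surrogate(x_train) - y_train) ** 2).backward()
    opt.step()

for p in surrogate.parameters():                # freeze the fitted surrogate
    p.requires_grad_(False)

x = torch.tensor([[0.9]], requires_grad=True)   # now minimize over the input x
x_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(1000):
    x_opt.zero_grad()
    surrogate(x).sum().backward()
    x_opt.step()
    with torch.no_grad():                       # stay in the sampled region
        x.clamp_(0.0, 1.0)

print("surrogate minimizer:", x.item(), "(true minimizer: 0.3)")
```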

Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron

Modern machine learning focuses on highly expressive models that are able to fit or interpolate the data completely, resulting in zero training loss. For such models, we show that the stochastic gradients of common loss functions satisfy a strong growth condition. Under this condition, we prove that constant step-size stochastic gradient descent (SGD) with Nesterov … Read more
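For reference, the strong growth condition in question is usually stated, for a finite-sum objective $f = \frac{1}{n}\sum_i f_i$, as
\[
\mathbb{E}_i\big[\|\nabla f_i(x)\|^2\big] \;\le\; \rho\,\|\nabla f(x)\|^2 \qquad \text{for all } x,
\]
for some constant $\rho \ge 1$. Interpolation makes such a bound plausible because every $\nabla f_i$ vanishes at a minimizer of $f$.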

Basis Pursuit Denoise with Nonsmooth Constraints

Level-set optimization formulations with data-driven constraints minimize a regularization functional subject to matching observations to a given error level. These formulations are widely used, particularly for matrix completion and sparsity promotion in data interpolation and denoising. The misfit level is typically measured in the l2 norm, or other smooth metrics. In this paper, we present … Read more
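For context, the classical basis pursuit denoise problem is the level-set formulation
\[
\min_{x} \;\|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_2 \le \sigma,
\]
where $\sigma$ is the prescribed error level; the setting described above replaces this smooth misfit with nonsmooth alternatives.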

Optimization of univariate functions on bounded intervals by interpolation and semidefinite programming

We consider the problem of minimizing a univariate, real-valued function f on an interval [a,b]. When f is a polynomial, we review how this problem may be reformulated as a semidefinite programming (SDP) problem and how to extract all global minimizers from its solution. For general f, we approximate the … Read more
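To recall the idea in the polynomial case: minimizing a polynomial $f$ of even degree $2d$ on $[a,b]$ amounts to the SDP
\[
\max_{t}\; t \quad \text{subject to} \quad f(x) - t = \sigma_0(x) + (x-a)(b-x)\,\sigma_1(x),
\]
where $\sigma_0$ and $\sigma_1$ are sums of squares of degrees at most $2d$ and $2d-2$; the sum-of-squares constraints translate into semidefinite constraints on coefficient matrices. This is one standard reformulation, and the paper's precise construction may differ.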

Non-Convergence Result for Conformal Approximation of Variational Problems Subject to a Convexity Constraint

In this article, we are interested in the minimization of functionals in the set of convex functions. We investigate the discretization of the convexity through various numerical methods and find a geometrical obstruction confirmed by numerical simulations. We prove that there exist some convex functions that cannot be the limit of any conformal $P_1$ Finite … Read more