Convergence analysis of a Lasserre hierarchy of upper bounds for polynomial minimization on the sphere

We study the convergence rate of a hierarchy of upper bounds for polynomial minimization problems, proposed by Lasserre [SIAM J. Optim. 21(3) (2011), pp. 864-885], for the special case when the feasible set is the unit (hyper)sphere. The upper bound at level r of the hierarchy is defined as the minimal expected value of the polynomial over …
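
As a rough sketch of the construction being analyzed (degree conventions vary across papers; on the sphere the reference measure is the surface measure):

```latex
% Measure-based upper bound of Lasserre, sketched in assumed notation:
% K is the feasible set (here the unit sphere), mu a reference measure on K,
% and Sigma[x]_{2r} the cone of sum-of-squares polynomials of degree <= 2r.
\[
  f^{(r)} \;:=\; \min_{\sigma \in \Sigma[x]_{2r}}
    \left\{ \int_{K} f\,\sigma \,\mathrm{d}\mu
    \;:\; \int_{K} \sigma \,\mathrm{d}\mu = 1 \right\}.
\]
% Each sigma dmu is a probability density on K, so f^{(r)} is the smallest
% expected value of f over such densities, and f_min <= f^{(r+1)} <= f^{(r)}.
```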

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

We propose a new stochastic first-order algorithmic framework to solve stochastic composite nonconvex optimization problems that covers both finite-sum and expectation settings. Our algorithms rely on the SARAH estimator introduced in (Nguyen et al., 2017a) and consist of two steps: a proximal-gradient step and an averaging step, making them different from existing nonconvex proximal-type algorithms. …
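
A minimal sketch of one such inner loop, assuming a finite-sum objective with a per-sample gradient oracle and an l1 nonsmooth term; the oracle name, step parameters, and single-sample variant are illustrative, not the authors' exact scheme:

```python
import numpy as np

def prox_l1(z, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_sarah_epoch(x, grad_i, n, eta=0.1, gamma=0.5, lam=0.01, rng=None):
    """One inner loop of a ProxSARAH-style method (illustrative sketch).

    grad_i(i, x) is an assumed oracle returning the gradient of the i-th
    component f_i at x; eta, gamma, lam are hypothetical step, averaging,
    and l1 parameters.
    """
    rng = rng or np.random.default_rng()
    # Full gradient at the snapshot point initializes the SARAH estimator.
    v = np.mean([grad_i(i, x) for i in range(n)], axis=0)
    for _ in range(n):
        # Proximal-gradient step followed by an averaging step.
        x_hat = prox_l1(x - eta * v, eta * lam)
        x_new = (1.0 - gamma) * x + gamma * x_hat
        # SARAH recursive update of the gradient estimator on a random sample.
        i = rng.integers(n)
        v = grad_i(i, x_new) - grad_i(i, x) + v
        x = x_new
    return x
```

The averaging step is what distinguishes this template from plain proximal SGD/SVRG-type updates: the prox output is blended with the current iterate rather than taken directly.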

Self-Concordance and Matrix Monotonicity with Applications to Quantum Entanglement Problems

Let $V$ be a Euclidean Jordan algebra and $\Omega$ be the cone of invertible squares in $V$. Suppose that $g:\mathbb{R}_{+} \to \mathbb{R}$ is a matrix monotone function on the positive semiaxis, which naturally induces a function $\tilde{g}: \Omega \to V$. We show that $-\tilde{g}$ is compatible (in the sense of Nesterov-Nemirovski) with the standard self-concordant …
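
For context, a sketch of the compatibility notion being invoked, under one common normalization (constants differ across texts):

```latex
% F(x) = -ln det(x) is the standard self-concordant barrier for Omega, and
% g-tilde acts spectrally: if x = sum_i lambda_i c_i is a spectral
% decomposition, then g-tilde(x) = sum_i g(lambda_i) c_i.
% beta-compatibility of -g-tilde with F (Nesterov--Nemirovski) reads:
\[
  \bigl| D^3(-\tilde g)(x)[h,h,h] \bigr|
  \;\le\; 3\beta \, D^2(-\tilde g)(x)[h,h]\,
          \bigl( D^2 F(x)[h,h] \bigr)^{1/2},
  \qquad x \in \Omega,\; h \in V.
\]
% Compatibility lets one add -g-tilde to the barrier setup while retaining
% self-concordance, which is what makes the resulting problems tractable
% for interior-point methods.
```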

RaBVItG: An Algorithm for Solving a Class of Multi-Players Feedback Nash Differential Games

In this work, we introduce a novel numerical algorithm, called RaBVItG (Radial Basis Value Iteration Game), to approximate feedback Nash equilibria for deterministic differential games. More precisely, RaBVItG is an algorithm based on value iteration schemes in a meshfree context. It is used to approximate optimal feedback Nash policies for multiple players, aiming to tackle the dimensionality …
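
To fix ideas, here is a minimal single-player, one-dimensional sketch of value iteration with radial basis interpolation on scattered nodes; the multi-player game coupling that RaBVItG actually handles is not reproduced here, and all names and parameters are illustrative:

```python
import numpy as np

def gaussian_rbf(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

def fit_rbf(nodes, values, eps=1.0):
    """Interpolation weights for a Gaussian RBF surrogate on scattered nodes."""
    A = gaussian_rbf(np.abs(nodes[:, None] - nodes[None, :]), eps)
    return np.linalg.solve(A, values)

def eval_rbf(x, nodes, w, eps=1.0):
    return gaussian_rbf(np.abs(x - nodes), eps) @ w

def value_iteration_rbf(nodes, dynamics, cost, controls, dt=0.1,
                        discount=0.95, iters=200, eps=1.0):
    """Meshfree value iteration (single-player sketch, not the full game)."""
    V = np.zeros(len(nodes))
    for _ in range(iters):
        w = fit_rbf(nodes, V, eps)
        V_new = np.empty_like(V)
        for k, x in enumerate(nodes):
            # Bellman backup: running cost plus the interpolated value at an
            # Euler step of the dynamics, minimized over a control grid.
            q = [cost(x, u) * dt
                 + discount * eval_rbf(x + dynamics(x, u) * dt, nodes, w, eps)
                 for u in controls]
            V_new[k] = min(q)
        V = V_new
    return V

# Usage sketch: V = value_iteration_rbf(np.linspace(-1, 1, 25),
#     dynamics=lambda x, u: u, cost=lambda x, u: x**2 + 0.1 * u**2,
#     controls=np.linspace(-1, 1, 11))
```

The meshfree aspect is that the value function lives on scattered interpolation nodes rather than a tensor grid, which is the usual route to mitigating the curse of dimensionality.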

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

We provide non-asymptotic convergence rates of the Polyak-Ruppert averaged stochastic gradient descent (SGD) to a normal random vector for a class of twice-differentiable test functions. A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, …
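
A minimal sketch of the averaged iterate the result concerns, with an assumed step-size schedule (the exponent 0.75 is one standard choice in the averaging regime, not necessarily the paper's):

```python
import numpy as np

def polyak_ruppert_sgd(grad, x0, n_steps,
                       step=lambda t: 1.0 / (t + 1) ** 0.75, rng=None):
    """SGD with Polyak-Ruppert iterate averaging (illustrative sketch).

    grad(x, rng) is an assumed oracle returning an unbiased stochastic
    gradient at x.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float).copy()
    x_bar = x.copy()
    for t in range(n_steps):
        x = x - step(t) * grad(x, rng)
        # Running average of the iterates; CLT-type results concern
        # sqrt(n) * (x_bar - x_star) being approximately normal.
        x_bar += (x - x_bar) / (t + 2)
    return x_bar

# Usage: noisy gradient of f(x) = 0.5 * ||x||^2, minimizer x_star = 0.
# x_hat = polyak_ruppert_sgd(
#     lambda x, r: x + 0.1 * r.standard_normal(x.shape), np.ones(5), 10_000)
```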

A Log-Barrier Newton-CG Method for Bound Constrained Optimization with Complexity Guarantees

We describe an algorithm based on a logarithmic barrier function, Newton’s method, and linear conjugate gradients that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of …
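
A schematic of the three ingredients (barrier, Newton system, CG), assuming gradient and Hessian-vector oracles; the step-size and stopping rules here are illustrative, not the paper's exact algorithm:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def log_barrier_newton_cg(f_grad, f_hess_vec, x0, mu0=1.0, shrink=0.2,
                          outer=10):
    """Log-barrier method with Newton-CG inner steps on {x >= 0} (sketch).

    f_grad(x) and f_hess_vec(x, v) are assumed gradient and Hessian-vector
    oracles for the smooth objective f; mu is the barrier parameter.
    """
    x, mu = np.asarray(x0, dtype=float).copy(), mu0
    n = x.size
    for _ in range(outer):
        for _ in range(50):  # Newton iterations on f(x) - mu * sum(log(x))
            g = f_grad(x) - mu / x
            if np.linalg.norm(g) <= mu:  # crude inner stopping rule
                break
            # The barrier contributes a diagonal term mu / x^2 to the Hessian.
            H = LinearOperator(
                (n, n), matvec=lambda v: f_hess_vec(x, v) + (mu / x**2) * v)
            d, _ = cg(H, -g, maxiter=200)  # (truncated) CG on Newton system
            # Fraction-to-the-boundary rule keeps x strictly positive.
            neg = d < 0
            alpha = 1.0 if not neg.any() else \
                min(1.0, 0.99 * np.min(-x[neg] / d[neg]))
            x = x + alpha * d
        mu *= shrink
    return x
```

Using CG for the Newton systems means only Hessian-vector products are needed, which is what makes per-iteration cost comparable to a few gradient evaluations.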

Derivative-free optimization methods

In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black-box or simulation oracle that does not provide derivative information. Such settings necessitate the use of methods for derivative-free, or zeroth-order, optimization. We provide a review and perspectives on developments in …
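
As a toy illustration of one zeroth-order building block such reviews cover, here is a Gaussian-smoothing gradient estimate computed from function values alone (all parameters illustrative):

```python
import numpy as np

def zo_gradient(f, x, n_dirs=20, h=1e-2, rng=None):
    """Gaussian-smoothing gradient estimate from function values only.

    Averages (f(x + h*u) - f(x)) / h * u over random directions u, a
    standard building block in derivative-free / zeroth-order methods.
    """
    rng = rng or np.random.default_rng()
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + h * u) - fx) / h * u
    return g / n_dirs

# Zeroth-order descent on a black-box quadratic (illustrative only):
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(4)
for _ in range(500):
    x -= 0.05 * zo_gradient(f, x)
```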

On monotonic estimates of the norm of the minimizers of regularized quadratic functions in Krylov spaces

We show that the minimizers of regularized quadratic functions restricted to their natural Krylov spaces increase in Euclidean norm as the spaces expand. Citation: Technical Report RAL-TR-2019-005, STFC-Rutherford Appleton Laboratory, Oxfordshire, England, 5 April 2019.
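
In assumed notation (not necessarily the report's exact statement), the result can be sketched as:

```latex
% q(x) = g^T x + (1/2) x^T H x, regularizer (sigma/p) ||x||^p with
% sigma > 0, p > 2, and Krylov spaces K_k = span{g, Hg, ..., H^{k-1} g}.
\[
  x_k \;\in\; \operatorname*{arg\,min}_{x \in \mathcal{K}_k}
      \; q(x) + \tfrac{\sigma}{p}\,\|x\|^p
  \quad \Longrightarrow \quad
  \|x_k\| \;\le\; \|x_{k+1}\| \quad \text{for all } k,
\]
% i.e., the norms of the Krylov-restricted minimizers grow monotonically as
% the subspaces expand, which is relevant to trust-region and
% adaptive-regularization subproblem solvers.
```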

Error estimates for iterative algorithms for minimizing regularized quadratic subproblems

We derive bounds for the objective errors and gradient residuals when finding approximations to the solution of common regularized quadratic optimization problems within evolving Krylov spaces. These provide upper bounds on the number of iterations required to achieve a stated accuracy. We illustrate the quality of our bounds on given test examples. Citation: Technical …
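
The quantities being bounded can be sketched in the same assumed notation as the companion report above:

```latex
% m(x) = g^T x + (1/2) x^T H x + (sigma/p) ||x||^p, x_* a global minimizer
% of m, and x_k the minimizer of m over the Krylov space K_k.
\[
  e_k \;=\; m(x_k) - m(x_*)
  \qquad \text{and} \qquad
  r_k \;=\; \|\nabla m(x_k)\|.
\]
% Upper bounds on e_k and r_k as functions of k translate directly into a
% bound on the number of Krylov iterations needed for a target accuracy.
```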

A Delayed Weighted Gradient Method for Strictly Convex Quadratic Minimization

This paper develops an accelerated version of the steepest descent method via a two-step iteration. The new algorithm uses delayed information to define the iterations. Specifically, in the first step, a prediction of the new test point is calculated by using the gradient method with the exact minimal-gradient steplength, and then a correction …
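
A schematic predict/correct iteration for the strictly convex quadratic f(x) = 0.5 x^T A x - b^T x, using the exact minimal-gradient steplength in the prediction; the correction below is an exact line search from the delayed iterate, an illustrative choice rather than the paper's exact scheme:

```python
import numpy as np

def delayed_weighted_gradient(A, b, x0, tol=1e-10, max_iter=500):
    """Two-step (predict/correct) gradient sketch for f(x) = 0.5 x^T A x - b^T x.

    The prediction uses the minimal-gradient steplength; the correction is an
    exact line search from the delayed iterate x_{k-1} toward the prediction.
    """
    x_prev = x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = A @ x - b
        if np.linalg.norm(g) <= tol:
            break
        # Prediction: steplength minimizing the gradient norm at y.
        w = A @ g
        s = (g @ w) / (w @ w)
        y = x - s * g
        # Correction: exact line search from the delayed point x_{k-1}
        # along d = y - x_{k-1} (closed form for a quadratic).
        d = y - x_prev
        t = ((b - A @ x_prev) @ d) / (d @ (A @ d))
        x_prev, x = x, x_prev + t * d
    return x
```

On the first pass x_{k-1} = x_k, so the correction reduces to an exact line search along the prediction direction; the delay only takes effect from the second iteration on.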