Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives

In this paper we study $p$th-order methods for the unconstrained minimization of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We propose tensor schemes with and without acceleration. For the schemes without acceleration, we establish iteration complexity bounds of $\mathcal{O}\left(\epsilon^{-1/(p+\nu-1)}\right)$ for reducing the functional residual below a given $\epsilon\in (0,1)$. Assuming that $\nu$ … Read more
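For context, the smoothness assumption behind such bounds is the usual Hölder condition on the $p$th derivative; the sketch below follows the standard convention for the constant $H_{\nu}$ and the regularization coefficient, and is not taken verbatim from this abstract:

$$\|D^{p}f(x)-D^{p}f(y)\| \;\le\; H_{\nu}\,\|x-y\|^{\nu}, \qquad \nu\in(0,1],$$

and a (non-accelerated) $p$th-order tensor step typically minimizes the regularized Taylor model

$$x^{+}\in\arg\min_{y}\;\Big\{\sum_{k=0}^{p}\tfrac{1}{k!}\,D^{k}f(x)[y-x]^{k}\;+\;\tfrac{M}{p+\nu}\,\|y-x\|^{p+\nu}\Big\},\qquad M\ge H_{\nu}.$$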

Stability Analysis for a Class of Sparse Optimization Problems

Sparse optimization problems arise in many areas of science and engineering, such as compressed sensing, image processing, and statistical and machine learning. The $\ell_{0}$-minimization problem is one such problem and is typically used for signal recovery. The $\ell_{1}$-minimization method is a plausible approach to solving $\ell_{0}$-minimization problems, and … Read more
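As a minimal sketch of the $\ell_{1}$ surrogate (basis pursuit), not of the paper's stability analysis, the snippet below solves $\min \|x\|_{1}$ s.t. $Ax=b$ via the standard linear-programming reformulation $x=u-v$, $u,v\ge 0$; the toy data are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. A x = b via the standard LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                 # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: recover a 1-sparse vector from a few random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 12))
x_true = np.zeros(12)
x_true[3] = 2.0
x_hat = basis_pursuit(A, A @ x_true)
print(np.round(x_hat, 3))
```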

Scalable Preconditioning of Block-Structured Linear Algebra Systems using ADMM

We study the solution of block-structured linear algebra systems arising in optimization using iterative solution techniques. These systems are the core computational bottleneck of many problems of interest, such as parameter estimation, optimal control, network optimization, and stochastic programming. Our approach uses a Krylov solver (GMRES) that is preconditioned with an alternating method of … Read more
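The sketch below only illustrates the general pattern of preconditioned GMRES on a block-structured system via SciPy's LinearOperator interface; the preconditioner here is a simple block-Jacobi stand-in, not the ADMM-based preconditioner developed in the paper, and the test matrix is synthetic.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, LinearOperator, splu

# Synthetic block-structured system: two diagonal blocks plus a weak coupling block.
rng = np.random.default_rng(0)
n = 50
A1 = sp.random(n, n, density=0.1, random_state=0) + 5 * sp.eye(n)
A2 = sp.random(n, n, density=0.1, random_state=1) + 5 * sp.eye(n)
C = sp.random(n, n, density=0.02, random_state=2)
K = sp.bmat([[A1, C], [C.T, A2]]).tocsc()
b = rng.standard_normal(2 * n)

# Block-Jacobi preconditioner: apply the inverse of each diagonal block to the residual.
lu1, lu2 = splu(A1.tocsc()), splu(A2.tocsc())
def apply_prec(r):
    return np.concatenate([lu1.solve(r[:n]), lu2.solve(r[n:])])
M = LinearOperator(K.shape, matvec=apply_prec)

x, info = gmres(K, b, M=M)
print("converged" if info == 0 else f"info={info}", np.linalg.norm(K @ x - b))
```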

Trust-region methods for the derivative-free optimization of nonsmooth black-box functions

In this paper we study the minimization of a nonsmooth black-box function, without assuming any access to derivatives or generalized derivatives and without any knowledge of the analytical origin of the function's nonsmoothness. Directional methods have been derived for such problems, but to our knowledge no model-based method, such as a trust-region one, has yet … Read more
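For orientation only, here is a generic derivative-free trust-region step (sample, fit a simple model, minimize it in the ball, accept or shrink); it is a hypothetical textbook-style sketch, not the method proposed in this paper, and the sampling, model, and update constants are illustrative.

```python
import numpy as np

def tr_dfo_step(f, x, delta, rng):
    """One generic derivative-free trust-region step on a black-box function f."""
    n = len(x)
    Y = x + delta * rng.standard_normal((2 * n, n))      # sample points in the region
    F = np.array([f(y) for y in Y])
    S = Y - x
    G = np.hstack([np.ones((2 * n, 1)), S])              # fit m(x+s) ~ c + g.s by least squares
    coef, *_ = np.linalg.lstsq(G, F, rcond=None)
    g = coef[1:]
    s = -delta * g / (np.linalg.norm(g) + 1e-12)         # model minimizer on the ball boundary
    pred = -g @ s                                        # predicted decrease
    ared = f(x) - f(x + s)                               # actual decrease
    rho = ared / max(pred, 1e-12)
    if rho >= 0.1:                                       # accept step, possibly expand radius
        return x + s, (2 * delta if rho >= 0.75 else delta)
    return x, 0.5 * delta                                # reject step, shrink radius

# Usage on a simple nonsmooth function.
rng = np.random.default_rng(0)
f = lambda z: np.abs(z[0]) + np.abs(z[1] - 1.0)
x, delta = np.array([2.0, -1.0]), 1.0
for _ in range(50):
    x, delta = tr_dfo_step(f, x, delta, rng)
print(x)
```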

A Class of Stochastic Variance Reduced Methods with an Adaptive Stepsize

Stochastic variance reduced methods have recently surged into prominence for solving large-scale optimization problems in machine learning. Tan, Ma, Dai et al. first proposed combining the stochastic variance reduced gradient (SVRG) method with the Barzilai-Borwein (BB) method to compute step sizes automatically, an approach that performs well in practice. On this basis, … Read more
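The snippet below is a sketch of the SVRG-BB idea on a least-squares toy problem: a BB step size is recomputed once per epoch from the snapshot differences, then used for the variance-reduced inner loop. Epoch length, initial step size, and the problem data are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

full_grad = lambda x: A.T @ (A @ x - b) / n         # gradient of (1/2n) ||Ax - b||^2
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])      # per-sample gradient

def svrg_bb(x0, epochs=20, m=2 * n, eta0=1e-3):
    """SVRG with a Barzilai-Borwein step size recomputed at each snapshot."""
    x_tilde, eta = x0.copy(), eta0
    x_prev, g_prev = None, None
    for _ in range(epochs):
        g = full_grad(x_tilde)
        if g_prev is not None:                       # BB step size from snapshot differences
            dx, dg = x_tilde - x_prev, g - g_prev
            eta = (dx @ dx) / (m * abs(dx @ dg) + 1e-16)
        x_prev, g_prev = x_tilde.copy(), g.copy()
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            v = grad_i(x, i) - grad_i(x_tilde, i) + g   # variance-reduced gradient
            x -= eta * v
        x_tilde = x                                  # last iterate becomes the next snapshot
    return x_tilde

x_hat = svrg_bb(np.zeros(d))
print(np.linalg.norm(full_grad(x_hat)))
```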

Convergence analysis of a Lasserre hierarchy of upper bounds for polynomial minimization on the sphere

We study the convergence rate of a hierarchy of upper bounds for polynomial minimization problems, proposed by Lasserre [SIAM J. Optim. 21(3) (2011), pp. 864-885], for the special case when the feasible set is the unit (hyper)sphere. The upper bound at level $r$ of the hierarchy is defined as the minimal expected value of the polynomial over … Read more
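Concretely, with $\mu$ a reference measure on the unit sphere $S^{n-1}$ and $\Sigma_{r}$ the cone of sum-of-squares polynomials of degree at most $2r$, the level-$r$ upper bound takes the following standard form (a sketch, up to normalization conventions, not copied from this abstract):

$$f^{(r)} \;=\; \min_{\sigma\in\Sigma_{r}} \int_{S^{n-1}} f(x)\,\sigma(x)\,d\mu(x) \quad \text{s.t.} \quad \int_{S^{n-1}} \sigma(x)\,d\mu(x)=1,$$

so that $f_{\min}\le f^{(r)}$, and the quality of the hierarchy is measured by how quickly the gap $f^{(r)}-f_{\min}$ shrinks as $r$ grows.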

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

We propose a new stochastic first-order algorithmic framework to solve stochastic composite nonconvex optimization problems that covers both the finite-sum and expectation settings. Our algorithms rely on the SARAH estimator introduced in (Nguyen et al., 2017a) and consist of two steps: a proximal-gradient step and an averaging step, which makes them different from existing nonconvex proximal-type algorithms. … Read more
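A minimal sketch of a ProxSARAH-style loop on an $\ell_{1}$-regularized least-squares toy problem is given below: a SARAH recursive gradient estimator, a proximal-gradient step, and an averaging step. The step sizes eta and gamma and the data are illustrative, not the theoretically tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 20, 0.01
A = rng.standard_normal((n, d))
b = A @ (rng.standard_normal(d) * (rng.random(d) < 0.3)) + 0.01 * rng.standard_normal(n)

grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])        # per-sample gradient of the smooth part
prox_l1 = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_sarah(x0, epochs=15, m=n, eta=0.02, gamma=0.5):
    """Sketch of a ProxSARAH-style loop: SARAH estimator + prox step + averaging step."""
    x = x0.copy()
    for _ in range(epochs):
        v = A.T @ (A @ x - b) / n                     # full gradient at the snapshot
        for _ in range(m):
            y = prox_l1(x - eta * v, eta * lam)       # proximal-gradient step
            x_new = (1 - gamma) * x + gamma * y       # averaging step
            i = rng.integers(n)
            v = grad_i(x_new, i) - grad_i(x, i) + v   # SARAH recursive estimator
            x = x_new
    return x

print(np.round(prox_sarah(np.zeros(d)), 3))
```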

Self-Concordance and Matrix Monotonicity with Applications to Quantum Entanglement Problems

Let $V$ be a Euclidean Jordan algebra and $\Omega$ be the cone of invertible squares in $V$. Suppose that $g:\mathbb{R}_{+} \to \mathbb{R}$ is a matrix monotone function on the positive semiaxis, which naturally induces a function $\tilde{g}: \Omega \to V$. We show that $-\tilde{g}$ is compatible (in the sense of Nesterov-Nemirovski) with the standard self-concordant … Read more
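For orientation, the induced map is the usual spectral (Löwner-type) extension, sketched here under the standard conventions rather than taken from this abstract: if $x\in\Omega$ has spectral decomposition $x=\sum_{i=1}^{r}\lambda_{i}c_{i}$ with respect to a Jordan frame $\{c_{i}\}$ and eigenvalues $\lambda_{i}>0$, then

$$\tilde{g}(x) \;=\; \sum_{i=1}^{r} g(\lambda_{i})\,c_{i},$$

with familiar matrix monotone examples on the positive semiaxis including $g(t)=t^{\alpha}$ for $\alpha\in[0,1]$, $g(t)=\log t$, and $g(t)=-1/t$.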

RaBVItG: An Algorithm for Solving a Class of Multi-Players Feedback Nash Differential Games

In this work, we introduce a novel numerical algorithm, called RaBVItG (Radial Basis Value Iteration Game), to approximate feedback-Nash equilibria for deterministic differential games. More precisely, RaBVItG is an algorithm based on value iteration schemes in a meshfree context. It is used to approximate optimal feedback Nash policies for multiple players while trying to tackle the dimensionality … Read more
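The snippet below only illustrates the meshfree ingredient, i.e. value iteration with a radial-basis interpolant of the value function on scattered states; it is a drastically simplified single-player, one-dimensional stand-in, not the multi-player game iteration of RaBVItG, and the dynamics, cost, and kernel are made-up illustrative choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Meshfree value iteration on a toy 1-D discounted control problem.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(60, 1))              # scattered collocation states
U = np.linspace(-1.0, 1.0, 21)                        # discretized controls
dt, beta = 0.1, np.exp(-0.1)                          # time step and discount factor

cost = lambda x, u: dt * (x ** 2 + 0.1 * u ** 2)      # running cost
step = lambda x, u: np.clip(x + dt * u, -2.0, 2.0)    # explicit Euler step of the dynamics

V = np.zeros(len(X))
for _ in range(200):                                  # value iteration
    interp = RBFInterpolator(X, V, kernel="thin_plate_spline")
    Q = np.stack([cost(X[:, 0], u) + beta * interp(step(X, u)) for u in U])
    V_new = Q.min(axis=0)                             # Bellman update at each collocation point
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

print(RBFInterpolator(X, V, kernel="thin_plate_spline")(np.array([[0.5]]))[0])
```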

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

We provide non-asymptotic convergence rates of the Polyak-Ruppert averaged stochastic gradient descent (SGD) to a normal random vector for a class of twice-differentiable test functions. A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, … Read more
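As a minimal sketch of Polyak-Ruppert averaging itself (on a quadratic with additive gradient noise, with illustrative step sizes; the paper's contribution is the non-asymptotic normal approximation for the averaged iterates, which this snippet does not quantify):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
x_star = rng.standard_normal(d)

def stoch_grad(x):
    """Noisy gradient of f(x) = 0.5 * ||x - x_star||^2."""
    return (x - x_star) + rng.standard_normal(d)

def averaged_sgd(T=50_000, alpha=0.51):
    """SGD with step sizes t^(-alpha) and Polyak-Ruppert (running) averaging of the iterates."""
    x = np.zeros(d)
    x_bar = np.zeros(d)
    for t in range(1, T + 1):
        x -= t ** (-alpha) * stoch_grad(x)
        x_bar += (x - x_bar) / t                      # running average of the iterates
    return x, x_bar

x_last, x_avg = averaged_sgd()
print(np.linalg.norm(x_last - x_star), np.linalg.norm(x_avg - x_star))
```

On this toy example the averaged iterate is typically much closer to the minimizer than the last iterate, which is the behavior the CLT-type results describe at the $\sqrt{T}$ scale.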