Faster stochastic cubic regularized Newton methods with momentum

Cubic regularized Newton (CRN) methods have attracted significant research interest because they offer stronger solution guarantees and lower iteration complexity than first-order methods. In the big-data era, there is growing interest in developing stochastic cubic regularized Newton (SCRN) methods that do not require exact gradient and Hessian evaluations. In this paper, we propose faster SCRN … Read more
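
For context, a generic cubic regularized Newton step minimizes a second-order model with a cubic penalty; in the stochastic variant, the exact gradient and Hessian are replaced by estimates \(g_k \approx \nabla f(x_k)\) and \(H_k \approx \nabla^2 f(x_k)\). The template below is standard and shown only for orientation, not as the paper's exact update:
\[
x_{k+1} \in \arg\min_{x}\; \langle g_k, x - x_k\rangle + \tfrac{1}{2}(x - x_k)^{\top} H_k (x - x_k) + \tfrac{\sigma_k}{3}\,\|x - x_k\|^{3},
\]
where \(\sigma_k > 0\) is the cubic regularization parameter.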

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, at which the expected constraint violation and the expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more
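
As a rough formalization (notation assumed here, not taken from the full paper), for \(\min_x \{ f(x) : c(x) \le 0 \}\) an \(\epsilon\)-expectedly feasible stochastic optimal solution \(x\) typically satisfies
\[
\mathbb{E}\big[\|[c(x)]_+\|\big] \le \epsilon \qquad\text{and}\qquad \mathbb{E}\big[f(x) - f^{\star}\big] \le \epsilon,
\]
whereas the deterministic-constraint setting asks that the constraints hold up to a tolerance with certainty, not merely in expectation.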

Some Unified Theory for Variance Reduced Prox-Linear Methods

This work considers the nonconvex, nonsmooth problem of minimizing a composite objective of the form $f(g(x))+h(x)$, where the inner mapping $g$ is a smooth finite sum or expectation amenable to variance reduction. In such settings, prox-linear methods can enjoy variance-reduced speed-ups despite the presence of nonsmoothness. We provide a unified convergence theory applicable to a … Read more
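
For reference, the basic prox-linear step for \(\min_x f(g(x)) + h(x)\) linearizes only the smooth inner map \(g\); variance-reduced variants replace \(g(x_k)\) and its Jacobian \(\nabla g(x_k)\) with cheaper stochastic estimates. This is the standard template (with proximal parameter \(t > 0\)), not a statement of the paper's specific scheme:
\[
x_{k+1} \in \arg\min_{x}\; f\big(g(x_k) + \nabla g(x_k)(x - x_k)\big) + h(x) + \tfrac{1}{2t}\,\|x - x_k\|^{2}.
\]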

Variance-reduced first-order methods for deterministically constrained stochastic nonconvex optimization with strong convergence guarantees

In this paper, we study a class of deterministically constrained stochastic optimization problems. Existing methods typically aim to find an \(\epsilon\)-stochastic stationary point, where the expected violations of both constraints and first-order stationarity are within a prescribed accuracy \(\epsilon\). However, in many practical applications, it is crucial that the constraints be nearly satisfied with certainty, … Read more
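
To make the contrast concrete under assumed notation (equality constraints \(c(x) = 0\) and a first-order stationarity residual \(\mathcal{R}(x)\)), an \(\epsilon\)-stochastic stationary point only requires
\[
\mathbb{E}\big[\|c(x)\|\big] \le \epsilon \qquad\text{and}\qquad \mathbb{E}\big[\mathcal{R}(x)\big] \le \epsilon,
\]
so a realization may still violate the constraints substantially; the stronger guarantees pursued here instead keep the constraint violation small with certainty.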

Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

The structured saddle-point problem involving the infimal convolution in real Hilbert spaces arises in many areas of applied mathematics. To solve this generic problem, we develop a stochastic primal-dual splitting algorithm with loopless variance reduction. We first prove the weak almost sure convergence of the iterates. We then demonstrate that our algorithm achieves … Read more
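
One common template for such problems (assumed here for illustration, following standard primal-dual splitting conventions rather than the paper's exact setting) is the primal model \(\min_x f(x) + h(x) + (g \,\square\, l)(Lx)\), where \(\square\) denotes the infimal convolution and \(L\) is a bounded linear operator; the associated saddle-point formulation reads
\[
\min_{x}\,\max_{v}\; f(x) + h(x) + \langle Lx, v\rangle - g^{*}(v) - l^{*}(v),
\]
using \((g \,\square\, l)^{*} = g^{*} + l^{*}\). A loopless variance-reduced scheme refreshes its full-gradient reference at random times instead of in a nested outer loop.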

Adaptive Importance Sampling Based Surrogation Methods for Bayesian Hierarchical Models, via Logarithmic Integral Optimization

We explore maximum a posteriori (MAP) inference for Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these challenges, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the … Read more
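
For intuition on the sampling component only (an illustrative importance-sampling estimate under assumed notation, not the paper's construction): if a factor of the posterior carries an intractable normalizer \(Z(\theta) = \int \tilde{p}(z; \theta)\, dz\), then draws \(z_1, \dots, z_N\) from a proposal \(q\) give
\[
\widehat{Z}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \frac{\tilde{p}(z_i; \theta)}{q(z_i)},
\]
and adaptive importance sampling updates the proposal \(q\) across iterations to keep the weights well behaved as the surrogation proceeds.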

Compromise Policy for Multi-stage Stochastic Linear Programming: Variance and Bias Reduction

This paper focuses on algorithms for multi-stage stochastic linear programming (MSLP). We propose an ensemble method named the “compromise policy”, which not only reduces the variance of the function approximation but also reduces the bias of the estimated optimal value. It provides a tight lower bound estimate with a confidence interval. By exploiting parallel computing, … Read more
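
A rough sketch of the ensemble idea under assumed notation (not the paper's precise construction): if \(M\) independent replications produce first-stage value-function approximations \(\hat{f}_1, \dots, \hat{f}_M\) and optimal-value estimates \(\hat{v}_1, \dots, \hat{v}_M\), a compromise decision solves
\[
\hat{x}^{\,c} \in \arg\min_{x \in X}\; \frac{1}{M} \sum_{m=1}^{M} \hat{f}_m(x),
\]
while the replication values yield a lower-bound estimate with confidence interval \(\bar{v} \pm t_{M-1,\alpha/2}\, s / \sqrt{M}\), where \(\bar{v}\) and \(s\) are their sample mean and standard deviation.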

Accelerating Stochastic Sequential Quadratic Programming for Equality Constrained Optimization using Predictive Variance Reduction

In this paper, we propose a stochastic variance reduction method for solving equality constrained optimization problems. Specifically, we develop a method based on the sequential quadratic programming paradigm that utilizes gradient approximations via predictive variance reduction techniques. Under reasonable assumptions, we prove that a measure of first-order stationarity evaluated at the iterates generated by our … Read more
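
For background, a predictive variance-reduction (SVRG-type) gradient estimator of the kind commonly used in such schemes keeps a reference point \(\tilde{x}\) with full gradient \(\nabla f(\tilde{x})\) and, at iterate \(x_k\), samples a component \(i_k\) to form
\[
g_k = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x}) + \nabla f(\tilde{x}),
\]
an unbiased estimate of \(\nabla f(x_k)\) whose variance shrinks as the iterates approach the reference point; this is the standard construction, not necessarily the exact estimator analyzed in the paper, and here it replaces the exact objective gradient when forming the SQP step.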

Training Structured Neural Networks Through Manifold Identification and Variance Reduction

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term that promotes desired structures. RMDA incurs no additional computation beyond that of proximal SGD with momentum, yet achieves variance reduction without requiring the objective function to be of finite-sum form. Through the tool of manifold identification from nonlinear optimization, we … Read more
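
For a baseline comparison, a generic proximal SGD-with-momentum update, against whose per-iteration cost RMDA is measured, can be written as (template with regularizer \(\psi\), step size \(\eta_t\), momentum parameter \(\beta\); this is not the RMDA recursion itself)
\[
m_t = \beta\, m_{t-1} + \nabla f_{i_t}(w_t), \qquad w_{t+1} = \mathrm{prox}_{\eta_t \psi}\big(w_t - \eta_t\, m_t\big),
\]
where \(\mathrm{prox}_{\eta\psi}\) is the proximal mapping of the structure-promoting regularizer (for instance, soft-thresholding when \(\psi\) is an \(\ell_1\) penalty).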

Stochastic Variance-Reduced Prox-Linear Algorithms for Nonconvex Composite Optimization

We consider the problem of minimizing composite functions of the form $f(g(x))+h(x)$, where $f$ and $h$ are convex functions (which can be nonsmooth) and $g$ is a smooth vector mapping. In addition, we assume that $g$ is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose … Read more
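
Concretely, the structural assumption on the inner map is (illustrative notation)
\[
g(x) = \frac{1}{n} \sum_{i=1}^{n} g_i(x) \qquad\text{or}\qquad g(x) = \mathbb{E}_{\xi}\big[g_{\xi}(x)\big],
\]
so each prox-linear-type step only needs variance-reduced estimates of \(g(x_k)\) and its Jacobian rather than a full pass over all component mappings.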