Doubly stochastic primal dual splitting algorithm with variance reduction for saddle point problems

The (structured) saddle-point problem involving the infimal convolution in real Hilbert spaces arises in many areas of applied mathematics. We develop a stochastic primal-dual splitting (PDS) algorithm with loopless variance reduction (VR) for solving this generic problem. A PDS algorithm aims to overcome the well-known shortcomings of common splitting methods by solving the … Read more
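
As a rough illustration of the loopless variance-reduction idea mentioned above (not the authors' algorithm), the sketch below plugs a loopless-SVRG gradient estimator into a Condat-Vũ-style primal-dual splitting step for a synthetic fused-lasso-type problem; the problem data, step sizes, and refresh probability are assumptions made purely for illustration.

```python
# A minimal sketch, assuming a fused-lasso-type toy problem: a loopless-SVRG
# gradient estimator inside a Condat-Vu-style primal-dual splitting step.
# This is NOT the authors' algorithm; data, step sizes, and the refresh
# probability p are placeholders chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
K = np.diff(np.eye(d), axis=0)          # discrete difference operator
lam = 0.1                               # weight of the ||Kx||_1 term

tau, sigma, p = 1e-3, 1e-3, 1.0 / n     # primal/dual steps, refresh prob.
x = np.zeros(d)
y = np.zeros(K.shape[0])                # dual variable for lam*||.||_1
w = x.copy()                            # gradient snapshot point
full_grad = A.T @ (A @ w - b) / n       # full gradient at the snapshot

for it in range(5000):
    i = rng.integers(n)
    # loopless variance-reduced estimator of the smooth data-fit gradient
    gi_x = A[i] * (A[i] @ x - b[i])
    gi_w = A[i] * (A[i] @ w - b[i])
    grad_est = gi_x - gi_w + full_grad
    # primal step (no extra nonsmooth primal term, so its prox is identity)
    x_new = x - tau * (grad_est + K.T @ y)
    # dual step: prox of (lam*||.||_1)^* is projection onto [-lam, lam]
    y = np.clip(y + sigma * (K @ (2 * x_new - x)), -lam, lam)
    x = x_new
    # "loopless" part: refresh the snapshot with probability p
    if rng.random() < p:
        w = x.copy()
        full_grad = A.T @ (A @ w - b) / n
```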

Adaptive Importance Sampling Based Surrogation Methods for Bayesian Hierarchical Models, via Logarithmic Integral Optimization

We explore Maximum a Posteriori inference of Bayesian Hierarchical Models (BHMs) with intractable normalizers, which are increasingly prevalent in contemporary applications and pose computational challenges when combined with nonconvexity and nondifferentiability. To address these challenges, we propose the Adaptive Importance Sampling-based Surrogation method, which efficiently handles nonconvexity and nondifferentiability while improving the sampling approximation of the … Read more
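
To make the importance-sampling ingredient concrete, here is a minimal sketch (not the proposed surrogation method) of self-normalized importance sampling for a one-dimensional intractable normalizer, with a simple moment-matching adaptation of the Gaussian proposal; the target `f`, the adaptation rule, and all constants are hypothetical placeholders.

```python
# A minimal sketch, not the proposed AIS-based surrogation method:
# self-normalized importance sampling for a 1-D intractable normalizer
# Z(theta) = integral of exp(f(x; theta)) dx, with a simple moment-matching
# adaptation of the Gaussian proposal.  f, theta, and all constants are
# hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)

def f(x, theta):
    # hypothetical unnormalized log-density defining the normalizer
    return -0.5 * theta * x ** 2 + np.sin(x)

theta = 1.5
mu, sd = 0.0, 2.0                       # initial Gaussian proposal N(mu, sd^2)
for _ in range(10):
    xs = rng.normal(mu, sd, size=5000)
    log_q = -0.5 * ((xs - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
    log_w = f(xs, theta) - log_q        # importance log-weights
    w = np.exp(log_w - log_w.max())
    Z_hat = w.mean() * np.exp(log_w.max())   # estimate of the normalizer
    w_norm = w / w.sum()
    # adapt the proposal toward the weighted sample moments
    mu = float(np.sum(w_norm * xs))
    sd = float(np.sqrt(np.sum(w_norm * (xs - mu) ** 2)) + 1e-6)

print(f"estimated normalizer: {Z_hat:.4f}")
```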

Compromise Policy for Multi-stage Stochastic Linear Programming: Variance and Bias Reduction

This paper focuses on algorithms for multi-stage stochastic linear programming (MSLP). We propose an ensemble method named the “compromise policy”, which not only reduces the variance of the function approximation but also reduces the bias of the estimated optimal value. It provides a tight lower bound estimate with a confidence interval. By exploiting parallel computing, … Read more
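
As a toy illustration of the compromise idea (on a one-stage problem, not the paper's multi-stage algorithm), the sketch below solves several independent sample-average replications of a newsvendor-style problem, averages the replication optima into a lower-bound estimate with a confidence interval, and averages the decisions into a compromise decision; all data and parameters are synthetic.

```python
# A toy sketch of the "compromise" idea on a one-stage newsvendor problem,
# NOT the paper's multi-stage algorithm: solve M independent sample-average
# replications, average the optima into a lower-bound estimate with a
# confidence interval, and average the decisions into a compromise decision.
# All data and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(2)
c, p = 1.0, 3.0                   # unit order cost and shortage penalty (p > c)
M, N = 20, 200                    # replications and samples per replication

def saa_solve(demand):
    # closed-form SAA solution: the critical-fractile quantile of the sample
    x = np.quantile(demand, (p - c) / p)
    obj = c * x + p * np.mean(np.maximum(demand - x, 0.0))
    return x, obj

xs, objs = zip(*(saa_solve(rng.exponential(10.0, N)) for _ in range(M)))
xs, objs = np.array(xs), np.array(objs)

x_compromise = xs.mean()                            # compromise decision
lb, se = objs.mean(), objs.std(ddof=1) / np.sqrt(M)
print(f"compromise decision: {x_compromise:.2f}")
print(f"lower-bound estimate: {lb:.2f} (95% CI +/- {1.96 * se:.2f})")
```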

Accelerating Stochastic Sequential Quadratic Programming for Equality Constrained Optimization using Predictive Variance Reduction

In this paper, we propose a stochastic variance reduction method for solving equality constrained optimization problems. Specifically, we develop a method based on the sequential quadratic programming paradigm that utilizes gradient approximations via predictive variance reduction techniques. Under reasonable assumptions, we prove that a measure of first-order stationarity evaluated at the iterates generated by our … Read more
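
For orientation only (generic notation, not necessarily the paper's), an SQP step driven by an SVRG-style predictive variance-reduced gradient can be sketched as

\[
g_k = \nabla f_{i_k}(x_k) - \nabla f_{i_k}(w_k) + \nabla f(w_k),
\qquad
d_k \in \arg\min_{d}\; g_k^\top d + \tfrac{1}{2} d^\top H_k d
\quad \text{s.t.}\quad c(x_k) + \nabla c(x_k)^\top d = 0,
\]

followed by $x_{k+1} = x_k + \alpha_k d_k$; the reference point $w_k$, and hence the full gradient $\nabla f(w_k)$, is refreshed only occasionally, which is what keeps the per-iteration cost low.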

Training Structured Neural Networks Through Manifold Identification and Variance Reduction

This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term that promotes desired structures. RMDA incurs no additional computation beyond that of proximal SGD with momentum, yet achieves variance reduction without requiring the objective function to be of finite-sum form. Through the tool of manifold identification from nonlinear optimization, we … Read more
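
For contrast, here is a minimal sketch of the proximal-SGD-with-momentum baseline mentioned in the abstract (not RMDA itself), applied to a tiny L1-regularized logistic model: the prox step enforces the desired structure (here, sparsity), and manifold identification refers to the iterates eventually settling on the correct zero pattern. All data and hyperparameters are invented for illustration.

```python
# A minimal sketch of the proximal-SGD-with-momentum baseline mentioned in
# the abstract, NOT RMDA itself, on a tiny L1-regularized logistic model.
# The prox step enforces the desired structure (here, sparsity); all data
# and hyperparameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam, lr, beta = 500, 30, 0.05, 0.1, 0.9
X = rng.standard_normal((n, d))
w_true = np.concatenate([rng.standard_normal(5), np.zeros(d - 5)])
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w, m = np.zeros(d), np.zeros(d)
for epoch in range(50):
    for i in rng.permutation(n):
        prob = 1.0 / (1.0 + np.exp(-X[i] @ w))
        g = (prob - y[i]) * X[i]     # stochastic logistic-loss gradient
        m = beta * m + g             # momentum buffer
        w = soft_threshold(w - lr * m, lr * lam)

# "manifold identification" would mean this support stabilizes over time
print("nonzero coordinates:", np.flatnonzero(w))
```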

Stochastic Variance-Reduced Prox-Linear Algorithms for Nonconvex Composite Optimization

We consider the problem of minimizing composite functions of the form $f(g(x))+h(x)$, where $f$ and $h$ are convex functions (which can be nonsmooth) and $g$ is a smooth vector mapping. In addition, we assume that $g$ is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose … Read more
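
For context, the deterministic prox-linear step for this composite model linearizes the inner mapping and solves a convex subproblem (generic form, our notation):

\[
x_{k+1} \in \arg\min_{y}\; f\bigl(g(x_k) + g'(x_k)(y - x_k)\bigr) + h(y) + \frac{1}{2\eta}\|y - x_k\|^2 .
\]

In the stochastic setting considered here, $g(x_k)$ and its Jacobian $g'(x_k)$ would be replaced by variance-reduced estimates assembled from the sampled component mappings.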

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increase of the volume of data and the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to … Read more

Variance Reduction of Stochastic Gradients Without Full Gradient Evaluation

A standard approach to reducing the variance of stochastic gradient approximations is based on occasional full gradient evaluations. In this paper, an approach is considered that, while approximating a local minimizer of a sum of functions, also generates approximations of the gradient and the function values without relying on full … Read more
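
For context, a familiar way to obtain variance reduction without periodic full-gradient passes is a SAGA-style table of stored component gradients; the paper's construction is different (it also produces function-value approximations), but the sketch below shows the general flavor on a synthetic least-squares problem.

```python
# A minimal sketch of a SAGA-style estimator, given for context only; the
# paper's scheme is different (it also maintains function-value
# approximations).  The point illustrated: variance reduction from a table
# of stored component gradients, with no periodic full-gradient pass after
# initialization.  Problem data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n, d, lr = 100, 20, 0.005
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # gradient of the i-th component 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
table = np.array([grad_i(x, i) for i in range(n)])   # stored gradients
table_avg = table.mean(axis=0)

for it in range(20000):
    i = rng.integers(n)
    g_new = grad_i(x, i)
    # SAGA estimator: unbiased, with variance shrinking as the table updates
    g = g_new - table[i] + table_avg
    x -= lr * g
    table_avg += (g_new - table[i]) / n              # keep the average current
    table[i] = g_new
```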

Inexact proximal stochastic second-order methods for nonconvex composite optimization

In this paper, we propose a framework of Inexact Proximal Stochastic Second-order (IPSS) methods for solving nonconvex optimization problems whose objective function consists of an average of finitely many, possibly weakly smooth, functions and a convex but possibly nonsmooth function. At each iteration, IPSS inexactly solves a proximal subproblem constructed by using some positive … Read more
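
For orientation (generic notation, not necessarily the paper's), a proximal stochastic second-order subproblem of this kind typically has the form

\[
\min_{y}\; \langle v_k,\, y - x_k\rangle + \tfrac{1}{2}(y - x_k)^\top B_k (y - x_k) + h(y),
\]

where $v_k$ is a stochastic (possibly variance-reduced) gradient estimate, $B_k$ is a matrix carrying (approximate) curvature information, and $h$ is the convex nonsmooth term; "inexact" refers to solving this subproblem only to a prescribed accuracy.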