Global non-asymptotic super-linear convergence rates of regularized proximal quasi-Newton methods on non-smooth composite problems

In this paper, we propose two regularized proximal quasi-Newton methods with a symmetric rank-1 update of the metric (SR1 quasi-Newton) for solving non-smooth convex additive composite problems. Both algorithms avoid line-search and trust-region strategies. For each of them, we prove a super-linear convergence rate that is independent of the initialization of the … Read more
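The building block the abstract refers to is the SR1 update of the Hessian approximation. A minimal sketch (function name and the standard skip safeguard are illustrative, not taken from the paper):

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-1 (SR1) update of a Hessian approximation B.

    s = x_{k+1} - x_k is the step, y = grad f(x_{k+1}) - grad f(x_k)
    the gradient difference. The update is skipped when the denominator
    is too small, the usual safeguard against numerical blow-up.
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # skip the update this iteration
    return B + np.outer(r, r) / denom
```

By construction the updated matrix satisfies the secant equation `B_new @ s == y`; the regularized proximal methods in the paper combine such a metric with a proximal step on the non-smooth term.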

A Quasi-Newton Algorithm for Optimal Discretization of Markov Processes

In stochastic programming and stochastic-dynamic programming, discretization of random model parameters is often unavoidable. We propose a quasi-Newton learning algorithm that discretizes multi-dimensional, continuous-state, discrete-time Markov processes into scenario lattices by minimizing the Wasserstein distance between the unconditional distributions of the process and of the lattice. Scenario lattices enable accurate discretization of the conditional distributions of Markov processes … Read more

A Limited Memory Subspace Minimization Conjugate Gradient Algorithm for Unconstrained Optimization

Subspace minimization conjugate gradient (SMCG) methods are a class of high-potential iterative methods for unconstrained optimization. Orthogonality of the gradients is an important property of the linear conjugate gradient method. In practice, however, this orthogonality is often lost, which usually slows the convergence of the conjugate gradient method. … Read more
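The orthogonality property the abstract invokes is easy to observe numerically: in exact arithmetic, the residuals (negative gradients) produced by linear CG on an SPD system are mutually orthogonal. A small sketch that collects the residuals so this can be checked (the function is illustrative, not from the paper):

```python
import numpy as np

def cg_residuals(A, b, iters):
    """Run linear CG on the SPD system A x = b and return the residual
    vectors r_k = b - A x_k, which are mutually orthogonal in exact
    arithmetic (the property referred to in the abstract)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    residuals = [r.copy()]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)       # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        p = r_new + beta * p
        r = r_new
        residuals.append(r.copy())
    return np.array(residuals)
```

For nonlinear problems, inexact line searches and nonquadratic curvature destroy this orthogonality, which motivates the limited-memory SMCG construction the paper studies.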