Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization

In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal ${\cal O}(1/\epsilon^4)$ rate of convergence in terms … Read more
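As a hedged illustration of the single-stage building block behind such methods, the sketch below runs a classical projected stochastic subgradient (stochastic approximation) loop; the callables `grad_sample` and `project` and all parameter names are illustrative placeholders, and the paper's DSA algorithm additionally couples such steps across stages, which is not shown here.

```python
import numpy as np

def stochastic_approximation(grad_sample, project, x0, steps=10000, gamma0=1.0):
    """Classical single-stage SA sketch (not the paper's multi-stage DSA).

    grad_sample(x) -- returns a stochastic subgradient of the objective at x
    project(x)     -- Euclidean projection onto the feasible set
    """
    x = np.array(x0, dtype=float)
    avg = x.copy()
    for k in range(1, steps + 1):
        gamma = gamma0 / np.sqrt(k)              # diminishing stepsize
        x = project(x - gamma * grad_sample(x))  # stochastic subgradient step
        avg += (x - avg) / k                     # ergodic (averaged) iterate
    return avg
```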

Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search … Read more
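To make the model-function idea concrete, here is a minimal Python sketch of one step, assuming the simplest convex model, the linearization $f(x) + \langle \nabla f(x), d\rangle$, and a user-supplied Bregman proximal map; the function names and Armijo constants are illustrative choices of ours, not the paper's.

```python
import numpy as np

def model_step(f, grad_f, bregman_prox, x, t=1.0, beta=0.5, c=1e-4):
    """One model-function step with an Armijo-like backtracking search.

    bregman_prox(x, g, t) should return argmin_z <g, z - x> + D_h(z, x) / t
    for a chosen Bregman distance D_h (Euclidean case: a plain gradient step).
    """
    g = grad_f(x)
    z = bregman_prox(x, g, t)   # (approximate) Bregman proximal point of the model
    d = z - x                   # descent direction from the model minimizer
    tau, fx = 1.0, f(x)
    while f(x + tau * d) > fx + c * tau * float(g.dot(d)):  # Armijo-like test
        tau *= beta
    return x + tau * d
```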

Improved proximal ADMM with partially parallel splitting for multi-block separable convex programming

For a type of multi-block separable convex programming arising in machine learning and statistical inference, we propose a proximal alternating direction method of multipliers (PADMM) with partially parallel splitting, which has the following nice properties: (1) to reduce the weight of the proximal terms, the restrictions imposed on the proximal parameters are relaxed substantially; (2) … Read more
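For readers new to ADMM-type splittings, the following sketch runs plain two-block ADMM on a lasso instance; it illustrates only the alternating subproblem/multiplier structure and does not reproduce the paper's partially parallel multi-block scheme or its relaxed proximal parameters.

```python
import numpy as np

def admm_lasso(A, b, mu, beta=1.0, iters=500):
    """Two-block ADMM sketch on min_x 0.5*||A x - b||^2 + mu*||y||_1
    subject to x - y = 0."""
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    lam = np.zeros(n)                       # Lagrange multiplier for x - y = 0
    AtA, Atb = A.T @ A, A.T @ b
    M = AtA + beta * np.eye(n)              # x-step normal-equations matrix
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + lam + beta * y)              # x-subproblem
        v = x - lam / beta
        y = np.sign(v) * np.maximum(np.abs(v) - mu / beta, 0.0)   # soft-threshold
        lam = lam - beta * (x - y)                                # dual update
    return x
```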

Iteration complexity on the Generalized Peaceman-Rachford splitting method for separable convex programming

Recently, a generalized version of the Peaceman-Rachford splitting method (GPRSM) for solving a convex minimization model with a general separable structure was proposed by \textbf{Sun} et al., and its global convergence was proved. In this paper, we further study theoretical aspects of the generalized Peaceman-Rachford splitting method. We first establish the worst-case $\mathcal{O}(1/t)$ convergence … Read more
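For orientation, a commonly used form of the generalized PRSM iteration for $\min\{f(x)+g(y): Ax+By=b\}$, with augmented Lagrangian $\mathcal{L}_\beta$ and relaxation factors $\alpha,\gamma$, reads as follows (our reconstruction from the standard splitting literature, not a quotation of the paper's scheme):

$$\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_x \ \mathcal{L}_\beta(x, y^k, \lambda^k),\\
\lambda^{k+\frac12} &= \lambda^k - \alpha\beta\,(Ax^{k+1} + By^k - b),\\
y^{k+1} &= \operatorname*{arg\,min}_y \ \mathcal{L}_\beta(x^{k+1}, y, \lambda^{k+\frac12}),\\
\lambda^{k+1} &= \lambda^{k+\frac12} - \gamma\beta\,(Ax^{k+1} + By^{k+1} - b).
\end{aligned}$$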

Inexact cuts for Deterministic and Stochastic Dual Dynamic Programming applied to convex nonlinear optimization problems

We introduce an extension of Dual Dynamic Programming (DDP) to solve convex nonlinear dynamic programming equations. We call this extension Inexact DDP (IDDP); it applies to situations where some or all primal and dual subproblems to be solved along the iterations of the method are solved with a bounded error. We show that any accumulation … Read more
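As a hedged illustration of what an inexact cut can look like: if, at a trial point $\hat x$, a stage subproblem with convex value function $\mathcal{Q}$ is solved only to within $\delta \ge 0$, returning a value estimate $\hat\theta$ and an approximate subgradient $\hat\beta$, a typical validity requirement is that the cut remain a lower bound after being shifted down by the error,

$$\mathcal{Q}(x) \;\ge\; \hat\theta + \hat\beta^{\top}(x-\hat x) - \delta \qquad \text{for all feasible } x,$$

so that exact cuts ($\delta = 0$) are recovered as a special case; the paper's precise error model and cut formulas may differ.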

A faster dual algorithm for the Euclidean minimum covering ball problem

Dearing and Zeck presented a dual algorithm for the minimum covering ball problem in $\mathbb{R}^n$. Each iteration of their algorithm has a computational complexity of at least $\mathcal O(n^3)$. In this paper we propose a modification to their algorithm that, together with an implementation that uses updates to the QR factorization of a … Read more
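The linear-algebra ingredient is the kind of factorization update shown below: inserting a column into an existing QR factorization costs $\mathcal O(n^2)$ with SciPy's `qr_insert`, versus $\mathcal O(n^3)$ for refactorizing from scratch. This sketch shows the tool only, not the covering-ball algorithm itself, and the example data is ours.

```python
import numpy as np
from scipy.linalg import qr, qr_insert

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
Q, R = qr(A)                                  # full QR of the current working set
u = rng.standard_normal(6)                    # column for a point entering the set
Q1, R1 = qr_insert(Q, R, u, 3, which='col')   # O(n^2) update, no refactorization
assert np.allclose(Q1 @ R1, np.column_stack([A, u]))
```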

On efficiently solving the subproblems of a level-set method for fused lasso problems

In applying the level-set method developed in [Van den Berg and Friedlander, SIAM J. on Scientific Computing, 31 (2008), pp.~890–912 and SIAM J. on Optimization, 21 (2011), pp.~1201–1229] to solve fused lasso problems, one needs to solve a sequence of regularized least squares subproblems. In order to make the level-set method practical, we develop … Read more
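For concreteness, the subproblems in question take the generic regularized least squares form with the standard fused lasso regularizer (the paper's exact parameterization may differ):

$$\min_{x \in \mathbb{R}^n}\ \tfrac12\|Ax-b\|^2 + \lambda_1\|x\|_1 + \lambda_2\sum_{i=1}^{n-1}|x_{i+1}-x_i|.$$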

Chambolle-Pock and Tseng’s methods: relationship and extension to bilevel optimization

In the first part of the paper we focus on two problems: (a) regularized least squares and (b) nonsmooth minimization over an affine subspace. For these problems we establish the connection between the primal-dual method of Chambolle-Pock and Tseng’s proximal gradient method. For problem (a), this connection allows us to derive a nonergodic $O(1/k^2)$ convergence rate … Read more
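For reference, the standard Chambolle-Pock iteration for $\min_x g(x) + f(Kx)$, with step sizes $\tau,\sigma > 0$ satisfying $\tau\sigma\|K\|^2 < 1$ and extrapolation parameter $\theta \in [0,1]$, reads:

$$\begin{aligned}
y^{k+1} &= \operatorname{prox}_{\sigma f^*}\!\bigl(y^k + \sigma K\bar x^k\bigr),\\
x^{k+1} &= \operatorname{prox}_{\tau g}\!\bigl(x^k - \tau K^{*}y^{k+1}\bigr),\\
\bar x^{k+1} &= x^{k+1} + \theta\,(x^{k+1}-x^k).
\end{aligned}$$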

Proximal ADMM with larger step size for two-block separable convex programs

The alternating direction method of multipliers (ADMM) is a benchmark for solving two-block separable convex programs, and it continues to find new applications in many areas. However, like other first-order methods, ADMM can suffer from slow convergence. In this paper, to accelerate the convergence of ADMM, we relax the restriction region of the Fortin and … Read more
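For context, two-block ADMM for $\min\{f(x)+g(y): Ax+By=b\}$ applies a dual step size $\gamma$ in its multiplier update, with classical convergence region $\gamma \in (0, (1+\sqrt 5)/2)$; the display below is only this classical baseline, not the paper's relaxed scheme.

$$\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_x\ \mathcal{L}_\beta(x,y^k,\lambda^k),\\
y^{k+1} &= \operatorname*{arg\,min}_y\ \mathcal{L}_\beta(x^{k+1},y,\lambda^k),\\
\lambda^{k+1} &= \lambda^k - \gamma\beta\,(Ax^{k+1}+By^{k+1}-b),\qquad \gamma\in\Bigl(0,\tfrac{1+\sqrt 5}{2}\Bigr).
\end{aligned}$$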

First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems

We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first order methods (FOM), and was recently circumvented for convex composite optimization by Bauschke, Bolte and Teboulle, … Read more
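The replacement for Lipschitz gradient continuity referred to here can be stated as a generalized descent lemma: for a reference convex function $h$ with Bregman distance $D_h(x,y) = h(x) - h(y) - \langle\nabla h(y), x-y\rangle$, one asks for an $L > 0$ such that $Lh - f$ is convex, which yields

$$f(x)\ \le\ f(y) + \langle \nabla f(y),\, x-y\rangle + L\,D_h(x,y)\qquad \text{for all } x,\,y,$$

recovering the usual quadratic upper bound when $h = \tfrac12\|\cdot\|^2$.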